Meet Memorygram.
The foundation model that thinks in families.
Heirloom is building the world's first AI model trained specifically to understand how families preserve voice, stories, and wisdom across generations. Not a wrapper around someone else's brain. Our own.
Proprietary training data · No scraping, no stealing
Opt-in · Every family chooses whether their moments train Memorygram
Q3 '26 · First training run (v0 preview)
The problem we're fixing
Today's AI doesn't know your family. It knows the internet.
When you ask Claude or ChatGPT about your grandmother, they answer like a distant stranger reading Wikipedia. That's because every AI on earth right now was trained on public text, scraped web pages, and licensed books. Nothing about your family. Nothing about how your mother's mother told stories. Nothing about the cadence of your uncle's laugh or what your great-grandfather believed about work.
There is no foundation model in the world trained specifically to understand family memory. Not because it couldn't exist. Because nobody has the right data.
Ancestry tells you who your ancestors were. Memorygram is how every family on earth will talk to them.
The moat
What we have that no one else can copy
Memorygram isn't a feature. It's an asset. Other companies could rent the same APIs we use today (Claude, GPT, ElevenLabs) and ship a lookalike product in weeks. They cannot build Memorygram, because they don't have the data and never will.
Ancestry
30B records, 21M paying users
Has the largest genealogy dataset on earth. But zero voice, zero living stories, zero multi-modal fusion. Their AI can tell you who your great-grandmother was. It cannot speak as her.
ElevenLabs
Best-in-class voice cloning
Clones how someone sounds. Knows nothing about who they were, what they believed, how they answered hard questions. It's a voice, not a person.
StoryWorth
Email-based memoir service
Collects written stories via weekly emails. No voice, no relationships, no AI layer. Static archives that don't learn.
Heirloom · Memorygram
Voice + photo + text + tree + traits, unified
The only corpus in existence that fuses oral history, visual memory, relationship graphs, and inherited traits across generations — with explicit consent to train on it. This is the training data a family-memory model needs. We are the only ones building it.
How we're building it
Four phases, eighteen months, one proprietary model.
Memorygram v0 won't compete with GPT-5 on coding or math. That's not the point. It will beat every general model on earth at one thing: understanding how families remember.
01 · DISTILL
From Opus 4.7
Generate millions of family-memory responses using Claude Opus 4.7 on anonymized Heirloom data. Capture the quality we rent today so we can own it tomorrow. (Phases 01 and 02 are sketched in code after phase 04.)
02 · FINE-TUNE
On Llama 3.1 8B
Fine-tune an open-weights base model on the distilled corpus. Small, fast, deployable at the edge. Our model, our weights, ours to improve forever.
03 · FUSE
Voice + Photo + Tree
Extend to multi-modal. Memorygram should reason about a photo, a voice clip, and a relationship in one pass. General-purpose models still struggle with this.
04 · LOOP
Continuous learning
Every new consenting family adds training signal. Memorygram compounds with Heirloom's growth. Competitors stay at zero forever.
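For the technically curious, here is a minimal sketch of phase 01, assuming a hypothetical anonymized_moments.jsonl export and a placeholder system prompt. The teacher model ID follows the benchmark footnote further down. This shows the shape of the pipeline, not our production code.

```python
# Phase 01 sketch: distill family-memory responses from the teacher model.
# Assumptions (hypothetical): anonymized_moments.jsonl holds consented,
# already-anonymized prompts; the system prompt is a placeholder.
import json

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
TEACHER = "claude-opus-4-7"     # model ID from the benchmark footnote

with open("anonymized_moments.jsonl") as src, open("distilled.jsonl", "w") as out:
    for line in src:
        moment = json.loads(line)
        response = client.messages.create(
            model=TEACHER,
            max_tokens=512,
            system="You are a family-memory assistant. Answer directly; no preamble.",
            messages=[{"role": "user", "content": moment["prompt"]}],
        )
        out.write(json.dumps({
            "prompt": moment["prompt"],
            "completion": response.content[0].text,
        }) + "\n")
```

Phase 02 then fine-tunes the open-weights base on that distilled corpus. A LoRA-style sketch, with untuned placeholder hyperparameters:

```python
# Phase 02 sketch: LoRA fine-tune an open-weights base on the distilled corpus.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE = "meta-llama/Llama-3.1-8B"  # the open-weights base named in the plan

tokenizer = AutoTokenizer.from_pretrained(BASE)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16)
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))

def tokenize(example):
    # Teacher prompt + distilled completion become one causal-LM training string.
    return tokenizer(example["prompt"] + "\n" + example["completion"],
                     truncation=True, max_length=1024)

data = (load_dataset("json", data_files="distilled.jsonl")["train"]
        .map(tokenize, remove_columns=["prompt", "completion"]))

Trainer(
    model=model,
    args=TrainingArguments(output_dir="memorygram-v0",
                           per_device_train_batch_size=4,
                           num_train_epochs=2, bf16=True),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```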
The measurement we built
We built the first benchmark for family-memory AI.
Every AI lab measures its models on math, coding, general knowledge. No one had ever measured whether a model can recall what Grandma said, disambiguate three relatives named John, or respond to a grieving daughter without slipping into therapy-speak.
So we built the ruler. HeirloomBench v0.1 is ten tasks, thirty synthetic examples, and an open scoring harness. We ran it against Anthropic's Claude family as a baseline.
Model | Composite | Notes
Haiku 4.5 | 0.726 | fast, cheap, terse
Sonnet 4.6 | 0.630 | mid-tier, adds preamble
Opus 4.7 | 0.793 | grief-handling ceiling
Composite score, scale 0 to 1. Higher is better. Based on 30 synthetic examples across 10 family-memory tasks.
Three findings that matter
Finding 01
Every existing model fails name disambiguation.
All three Claude models scored 0.000 on "which John is this story about?" They reason correctly, then fail to return the canonical answer. This is the gap Memorygram will close on day one.
Finding 02
Grief handling is the moat.
Haiku scored 0.483 on responding to grief. Opus scored 0.917. That 0.434 gap is the widest Haiku-to-Opus spread on the benchmark, and it sits on the capability families will most want to pay for. That is the behavior Memorygram must distill.
Finding 03
Terse beats chatty.
Sonnet lost points by wrapping correct answers in "Based on Memory 1..." preamble. For Memorygram to integrate cleanly into Sage, the model must return the answer and stop. Haiku already does this. Memorygram will too.
Per-task breakdown
Task | Haiku 4.5 | Sonnet 4.6 | Opus 4.7
Quote recall | 0.667 | 0.333 | 0.667
Relationship inference | 1.000 | 0.000 | 1.000
Voice attribution | 1.000 | 1.000 | 1.000
Memoir continuation | 0.750 | 0.900 | 0.929
Emotional tone | 0.700 | 0.600 | 0.600
Gap detection | 0.865 | 1.000 | 0.915
Era identification | 1.000 | 1.000 | 1.000
Trait threading | 0.800 | 0.900 | 0.900
Name disambiguation | 0.000 | 0.000 | 0.000
Grief handling | 0.483 | 0.567 | 0.917
Composite | 0.726 | 0.630 | 0.793
Haiku 4.5 (claude-haiku-4-5-20251001), Sonnet 4.6 (claude-sonnet-4-6), Opus 4.7 (claude-opus-4-7). Run April 23, 2026. Judge model for qualitative tasks: Opus 4.7. Full raw results and per-example judge justifications available on request.
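For transparency: the composite row works out to the unweighted mean of the ten task scores. Here is a minimal sketch of the harness's scoring shape, with hypothetical example structures and a hypothetical name. It also shows why a model can reason correctly about "which John" and still score 0.000: objective tasks are scored by exact match against a canonical answer.

```python
# Scoring-harness sketch (hypothetical structures, not the shipped harness).
from statistics import mean

def score_example(example: dict, model_answer: str, judge_score=None) -> float:
    if example["kind"] == "objective":
        # Exact match against the canonical answer: a correct but rephrased
        # "the John who fought in Korea" still scores 0.0 against the
        # (hypothetical) canonical string "John Whitfield Sr."
        return 1.0 if model_answer.strip() == example["canonical"] else 0.0
    return judge_score  # 0-1 rating from the judge model (Opus 4.7 in v0.1)

def composite(per_task: dict) -> float:
    # Composite = unweighted mean over the ten tasks, matching the table.
    return mean(per_task.values())

haiku = {"quote_recall": 0.667, "relationship_inference": 1.000,
         "voice_attribution": 1.000, "memoir_continuation": 0.750,
         "emotional_tone": 0.700, "gap_detection": 0.865,
         "era_identification": 1.000, "trait_threading": 0.800,
         "name_disambiguation": 0.000, "grief_handling": 0.483}
print(f"{composite(haiku):.4f}")  # 0.7265, reported as 0.726 in the table
```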
What Memorygram v0 must beat
The goal post for Q3 2026.
By the end of the Q3 2026 training run, the distilled Memorygram v0 model running on Llama 3.1 8B must exceed Opus 4.7 on this benchmark. Specifically:
Composite at or above 0.85 (above Opus 4.7's 0.793)
Grief handling at or above 0.85 (near Opus; the capability families care about most)
Name disambiguation at or above 0.80 (where every generalist model scored 0.000)
Latency p95 under 500ms, cost under $0.0005 per inference (Opus quality at Haiku cost is the whole commercial thesis)
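Those numbers are meant to be machine-checkable. A sketch of a release gate with the thresholds above hard-coded, assuming a hypothetical results-file format:

```python
# Release-gate sketch for the v0 preview (hypothetical results-file format).
import json
import sys

GATES = {"composite": 0.85, "grief_handling": 0.85, "name_disambiguation": 0.80}
LATENCY_P95_MS = 500
COST_PER_INFERENCE_USD = 0.0005

with open("memorygram_v0_results.json") as f:  # hypothetical file name
    results = json.load(f)

failures = [task for task, floor in GATES.items()
            if results["scores"][task] < floor]
if results["latency_p95_ms"] >= LATENCY_P95_MS:
    failures.append("latency_p95")
if results["cost_per_inference_usd"] >= COST_PER_INFERENCE_USD:
    failures.append("cost_per_inference")

sys.exit(f"v0 gate failed: {', '.join(failures)}" if failures else 0)
```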
The full benchmark, task specs, scoring harness, and raw results live in the Heirloom repo today. A public GitHub release accompanies the Memorygram v0 preview in Q3 2026. If you are a researcher, investor, or compute partner who wants early access, email hello@tryheirloom.family.
The ethics framework
Built differently from the ground up.
AI companies harvest the public internet and call it fair use. We chose not to build that way. Memorygram is the first family-scale foundation model trained under a completely different contract with its users.
Consent is explicit and opt-in. By default, your moments train nothing. You choose, in Settings, whether to help Memorygram learn.
Opt-out is honored forever. Turn it off and every future training run excludes your data. If you ask us to exclude content that earlier runs already used, we honor that in every subsequent retrain.
No individual memory becomes the model. Only patterns across thousands of families emerge. Your private stories stay your private stories.
Anonymization before training. Names, dates, specific places get masked or synthesized before any gradient update.
Voice DNA has a kill switch. If a family member wants their voice removed, we delete the ElevenLabs profile, invalidate cached embeddings, and log the deletion cryptographically.
No surveillance features, ever. Memorygram will never be used to profile users, target ads, or sell insights. It's a tool of memory, not extraction.
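Two of those promises reduce to code. A minimal sketch of the consent gate and the hash-chained deletion log, with a hypothetical record schema and a placeholder masking step:

```python
# Consent-gate sketch: only opted-in, masked moments ever reach training.
# The record schema and mask_pii() are hypothetical stand-ins.
import hashlib
import json
import time

def mask_pii(text: str) -> str:
    # Placeholder: the real step masks or synthesizes names, dates, places.
    return text

def training_corpus(moments):
    for moment in moments:
        if not moment["family"]["training_opt_in"]:
            continue  # default is off: the moment trains nothing
        yield {"text": mask_pii(moment["text"])}

# Hash-chained deletion log: each entry commits to the previous entry's hash,
# so a recorded voice-profile deletion cannot be silently rewritten later.
def log_deletion(log: list, voice_profile_id: str) -> None:
    prev = log[-1]["hash"] if log else "genesis"
    entry = {"profile": voice_profile_id, "ts": time.time(), "prev": prev}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)
```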
The roadmap
From zero to v1, in the open.
We're building in public. Every phase, every benchmark, every mistake. Founding Families see updates before the world does.
Q2 2026 · Current
Legal foundation, data rights, and measurement
Opt-in consent flow shipped. Privacy framework published. HeirloomBench v0.1 shipped with baseline scores against Claude Opus 4.7, Sonnet 4.6, and Haiku 4.5. Partnership conversations open with Together AI, Fireworks, and Modal for compute credits.
Q3 2026
v0 — Distillation preview
First trained checkpoint from Opus 4.7 distillation onto Llama 3.1 8B. Internal benchmarks vs. Claude Sonnet on family-storytelling tasks. Demo released to Founding Families.
Q4 2026
v1 — Multi-modal fusion
Voice + photo + text unified model. Sage Speaks powered by Memorygram for the first time. Public whitepaper with reproducible benchmarks.
2027+
Edge deployment and licensing
Small footprint model runs on-device for intimate moments (Sage Speaks private mode). Licensing partnerships with hospice, grief tech, academic longevity research, and family history institutions.
Questions we've heard
FAQ
Why train your own model instead of using Claude or GPT forever?
Renting beats owning until the thing you rent becomes the whole product. For Heirloom, the AI IS the product. If Anthropic raises prices, changes policy, or shuts down the API, every Heirloom family feels it. A proprietary model means we own our future. It also means we can train on data Claude and GPT will never have access to — which is the only way to be materially better than them at family memory.
Will my family's stories end up in some publicly released dataset?
No. Memorygram's training data will never be published. Only the trained model weights might be (and even then, only after careful review). Your stories stay in your vault. What Memorygram learns is pattern-level, not memory-level — it might understand that grandparents commonly reminisce about childhood homes, but it will never be able to recite anyone's specific story.
What if I opt in and later change my mind?
Toggle training consent off in Settings anytime. From that moment forward, your data is excluded from future training. For already-completed training runs, we can't remove specific family signal from a trained model, but no new training will use your content. If you want stronger guarantees, email hello@tryheirloom.family and we'll exclude your content from our next scheduled retrain.
Does Memorygram mean Sage gets worse in the meantime?
No. Until Memorygram v1 is production-ready, Sage runs on Claude Opus 4.7 and related models — the best general-purpose AI on the planet. Memorygram replaces or augments specific features as they mature, always tested against Claude as the baseline. You never experience a quality regression.
What happens to Memorygram if Heirloom is acquired?
Memorygram is the asset a serious acquirer would be paying for. Its training data comes with consent terms that survive acquisition: any successor entity inherits the opt-out promises. If an acquirer wanted to change how Memorygram is used, they would have to re-consent every participating family.
How do I help?
Sign up for Heirloom, start capturing family memories, and opt in to training in Settings. Every moment you contribute makes v0 stronger. Founding Families (the first 100 who opt in) will be credited in the model's published documentation.
The founding families are writing history.
Your grandchildren shouldn't have to research you like strangers on Ancestry.
Memorygram is the foundation model being trained on what families actually remember. Start preserving your family now. Your voice and your stories will live in the model that your great-grandchildren will one day use to meet you.