Lamarckian memory evolution — acquired knowledge inherited across cycles

Your knowledge evolves with every cycle.

The Dream Engine processes stored knowledge autonomously. It finds connections, resolves contradictions, builds frameworks, and creates structured insights that compound with every cycle. One API call to schedule. Results in your webhook.

What 847 raw memories become

Before the Dream Engine, your knowledge is a pile of disconnected notes. After, it's a structured knowledge graph with insights you never wrote yourself.

Day 1 — Raw Data
847 memories stored
"Met with Acme Corp, they need enterprise SSO"
"Competitor X raised $24M Series A"
"User complained about onboarding friction"
"Q2 revenue target is $180K ARR"
+ 843 more disconnected entries...
Day 30 — After Dream Engine
23 insights discovered
Pattern: 4 of your last 6 churned customers mentioned "no SSO" in exit surveys. Acme Corp deal depends on it.
Framework: Enterprise readiness checklist derived from 12 lost deals: SSO, SOC 2, data residency, SLA.
Principle: Your highest-value customers evaluate security before features. Lead with compliance, not demo.
Forecast: At current close rate, $180K ARR requires 8 more enterprise deals. Pipeline has 5. Gap = 3.

Five levels. Each cycle goes deeper.

The Dream Engine never repeats itself. Run it 5 times and you get 5 levels of understanding, not 5 copies of the same surface analysis.

L0
Extraction
Raw data becomes structured facts. Text is parsed into atomic units with metadata, entities, and timestamps.
L1
Pattern Recognition
Facts become themes. The engine identifies what keeps coming up, what contradicts, and what's trending.
L2
Insight Generation
Patterns become connections. Cross-domain links surface relationships spanning topics you'd never compare manually.
L3
Framework Building
Insights become mental models. Clusters of connected insights organize into reusable decision frameworks.
L4
Principle Crystallization
Frameworks become strategic truths. The highest-confidence conclusions distill into principles that guide all future decisions.

Different users, different insights.

The Dream Engine auto-detects what kind of knowledge you're storing and adapts its analysis. Or set a persona explicitly.

</>
Developers
Architecture decisions, debugging patterns, tech debt clusters, recurring bugs. The engine tracks how your codebase thinking evolves.
persona: "developer"
Founders
Competitive intel, deal patterns, strategy evolution, market signals. Connects what your customers say with what your metrics show.
persona: "founder"
📚
Researchers
Paper synthesis, methodology contradictions, cross-study connections. Finds what the literature says vs. what your data shows.
persona: "researcher"
🎓
Students
Course connections, knowledge gaps, exam prep. Links concepts across subjects to build deeper understanding.
persona: "student"
👥
Teams
Meeting decisions, project context, institutional knowledge. The collective memory of your entire organization.
persona: "team"
💡
Personal
Life experiences, goals, relationships, habits. Tracks your growth over time and surfaces patterns in your life.
persona: "personal"

Five ways to trigger a dream cycle.

Overnight is the default. But the Dream Engine runs whenever it makes sense.

Real-time
On every memory write. Dedup, categorize, and connect automatically.
📈
Threshold
After 10+ new memories accumulate. Auto-triggered.
📅
Scheduled
Hourly, daily, or weekly. Set it and forget it.
🔗
Event-driven
On import, webhook, or integration event.
On-demand
One click in the console. Results in 30-60 seconds.

Nine autonomous strategies. One knowledge evolution engine.

Each strategy is a distinct cognitive operation. Run individually, chain in sequence, or let the engine auto-select based on your knowledge state. Every output feeds the next.

1
Synthesize
Merge multiple memories into unified, higher-order knowledge. Eliminates duplication while preserving nuance.
In: 6 notes about auth bugs
Out: "Auth failures cluster around token refresh in multi-tab sessions"
2
Pattern Extract
Surface recurring themes, statistical regularities, and frequency signals across your entire knowledge base.
In: 200 meeting notes
Out: "3 of 4 churned accounts mentioned pricing in week 2"
3
Insight Generate
Discover non-obvious cross-domain connections. Links concepts you stored months apart in different contexts.
In: Hiring notes + bug reports
Out: "Senior hires reduced P1 bugs by 40% within 90 days"
4
Compress
Merge overlapping memories and collapse redundancy. Your knowledge base gets smaller and denser, never bloated.
In: 847 raw memories
Out: 612 memories (28% reduction, zero information loss)
5
Associate
Build bidirectional knowledge graph links. Every memory gets connected to its semantic neighbors.
In: Isolated API design notes
Out: 18 new cross-links to performance, auth, and DX memories
6
Validate
Flag contradictions and verify internal consistency. Catches when your stored beliefs conflict with newer evidence.
In: "Redis handles our load fine" + latest perf data
Out: Contradiction flagged: p99 latency up 3x since January
7
Evolve
Update beliefs with new evidence using Bayesian reasoning. Confidence scores shift as data accumulates.
In: "GraphQL is better for our API" (0.7 confidence)
Out: Confidence raised to 0.91 after 4 corroborating data points
8
Forecast
Project trends and predict likely outcomes based on historical patterns in your knowledge.
In: 6 months of sprint velocity data
Out: "At current rate, v2.0 ships March 18 +/- 12 days"
9
Reflect
Meta-analysis: identify knowledge gaps, blind spots, and areas where your understanding is thin or stale.
In: Full knowledge base scan
Out: "No data on competitor pricing since Q3. 4 assumptions unvalidated."
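To make strategy 7 (Evolve) concrete, here is a minimal sketch of a Bayesian-style confidence update using an odds-ratio formulation. The function name, the formula, and the likelihood ratios are illustrative assumptions, not the engine's actual internals.

```typescript
// Sketch of strategy 7 (Evolve): Bayesian-style confidence update.
// Assumption: each corroborating data point multiplies the belief's
// odds by a likelihood ratio > 1. Names and formula are illustrative.
function updateConfidence(prior: number, likelihoodRatio: number): number {
  const priorOdds = prior / (1 - prior);
  const posteriorOdds = priorOdds * likelihoodRatio;
  return posteriorOdds / (1 + posteriorOdds);
}

// "GraphQL is better for our API" starts at 0.7 confidence;
// four corroborating data points push it toward certainty.
let confidence = 0.7;
for (let i = 0; i < 4; i++) {
  confidence = updateConfidence(confidence, 1.5);
}
// confidence is now ~0.92, in the same ballpark as the 0.91 example
```

The key property: confidence moves smoothly with evidence rather than flipping binary, so a single contrary data point dents a belief without erasing it.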

Every insight survives an adversarial tournament.

Inspired by AutoReason. Three candidates compete, a blind judge picks the winner. Knowledge only changes when the change is genuinely better. Here is an actual tournament round.

Step 1 — Candidate A
Original knowledge
The current insight as stored. This is the baseline to beat. No change unless something measurably better exists.
Step 2 — Candidate B
Adversarial challenge
A fresh model pass that attacks the original. Reframes assumptions, introduces counter-evidence, proposes alternatives.
Step 3 — Candidate AB
Synthesis
A third pass merges the strongest elements of A and B into a unified, higher-fidelity result. Often the winner.
Live example — watch a tournament round
A · Original
"Python is slower than JavaScript."
B · Challenge
"Python has faster numeric computing via NumPy; JS V8 is faster for I/O."
AB · Synthesis
"Python excels at numeric computing, JavaScript at I/O-bound tasks."
Judge
Blind judge selects AB (confidence 0.94)
Reason: AB captures domain-specific nuance that neither A nor B provides alone.
Blind Borda judging
No model grades its own work. Judges see A, B, AB in randomized order with no labels.
Borda count scoring across multiple judges eliminates self-critique bias. "No change needed" is a valid outcome — the original wins when it is already the best answer.
Why this matters
Every other memory system stores what you tell it, forever. The Dream Engine actively improves it.
Over 30 dream cycles, a single memory can be refined dozens of times. Naive claims get replaced with nuanced, battle-tested knowledge. Automatically.
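The Borda scoring described above is simple enough to sketch in a few lines. This is an illustrative reimplementation of the counting rule, not the engine's code: each judge ranks the unlabeled candidates best to worst, a candidate earns (n − 1) points for first place down to 0 for last, and the highest total wins.

```typescript
// Sketch of blind Borda judging. Each judge submits a ranking of the
// candidates (seen in randomized order with no labels); position 0 is
// best. Points per ranking: first place gets n-1, last place gets 0.
type Candidate = 'A' | 'B' | 'AB';

function bordaWinner(rankings: Candidate[][]): Candidate {
  const scores: Record<Candidate, number> = { A: 0, B: 0, AB: 0 };
  for (const ranking of rankings) {
    ranking.forEach((candidate, position) => {
      scores[candidate] += ranking.length - 1 - position;
    });
  }
  // Highest total score wins.
  return (Object.keys(scores) as Candidate[]).reduce((best, c) =>
    scores[c] > scores[best] ? c : best
  );
}

// Three judges, each ranking best-to-worst:
const winner = bordaWinner([
  ['AB', 'B', 'A'],
  ['AB', 'A', 'B'],
  ['B', 'AB', 'A'],
]);
// AB totals 2+2+1 = 5, B totals 1+0+2 = 3, A totals 0+1+0 = 1
```

Because scores aggregate across multiple judges, no single judge (and no self-grading model) can force an outcome; a unanimous original (A) simply wins its own tournament, which is the "no change needed" case.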

How the Dream Engine compares.

Other memory providers store and retrieve. The Dream Engine is the only system that autonomously improves what it stores.

| Provider | Consolidation | Strategies | Tournament Refinement | Neuroscience Grounding |
|---|---|---|---|---|
| Mem0 | None: memories are static after storage | 0 | None | None |
| Zep | Temporal knowledge graphs: structure, no synthesis | 0 | None | None |
| Membase | Knowledge graph: no consolidation or dream cycle | 0 | None | None |
| Hindsight (closest) | Observation consolidation: basic automatic synthesis | 1 (consolidate) | None | None |
| REM Labs | Dream Engine: 9 strategies, 5 depth levels, autonomous cycle scheduling | 9 | A/B/AB tournament with blind Borda judging | REM sleep, synaptic homeostasis, memory replay |

Lamarckian inheritance. No fine-tuning required.

In biology, Darwinian evolution requires genetic mutation and selection across generations. Lamarckian inheritance means acquired traits pass directly to offspring. The Dream Engine works the same way — through memory, not model weights.

Darwinian approach (fine-tuning)
Train a new model version
Requires GPU clusters, training data curation, evaluation suites, and deployment pipelines. Takes days to weeks. Costs thousands. Knowledge is frozen into weights and cannot be inspected or corrected.
 vs 
Lamarckian approach (Dream Engine)
Evolve the memory layer directly
Refined knowledge is written back to the memory store immediately. The next cycle inherits the improvement. No retraining, no GPUs, no deployment. Knowledge is always inspectable, editable, and reversible.
Cycle N
Tournament refines "Python is slower" into "Python excels at numeric computing, JS at I/O"
Cycle N+1
Next cycle starts with the refined version. Connects it to GPU benchmarking data stored last week.
Cycle N+2
Knowledge compounds: "For ML pipelines, Python + CUDA > JS. For edge inference, JS + WASM wins."

What happens while you sleep.

A scheduled dream cycle on a developer's knowledge base with 847 stored memories. Total wall time: 15 minutes. Zero human intervention.

11:00 PM
Cycle begins Scan
847 memories loaded. Embedding similarity matrix computed. Stale and duplicate candidates identified.
11:02 PM
Clustering complete Associate
23 semantic clusters identified. Largest cluster: "API design decisions" (47 memories). Smallest: "Office logistics" (3 memories).
11:05 PM
Redundancy eliminated Synthesize + Compress
12 redundant entries merged. "Added rate limiting to /users endpoint" appeared 4 times across different dates — collapsed into one with full timeline.
11:08 PM
Recurring themes surfaced Pattern Extract
3 themes found: (1) Auth token issues recur monthly, (2) deploys on Fridays correlate with weekend incidents, (3) customer complaints cluster around onboarding flow.
11:12 PM
Contradictions flagged Validate
2 contradictions detected: (1) "Redis handles our load" vs. p99 latency data showing 3x increase, (2) "We don't need SSO" vs. 4 lost enterprise deals citing SSO.
11:15 PM
Tournament refinement Tournament
5 insights enter A/B/AB tournament. 3 syntheses win, 1 original retained (already optimal), 1 adversarial reframing wins. Blind judge confidence range: 0.82–0.96.
6:00 AM
Morning Brief ready Complete
18 new cross-links established. 5 refined insights. 2 contradictions flagged for review. 12 redundancies eliminated. Knowledge base is now 835 memories — smaller, denser, smarter.

Honest numbers. No cherry-picking.

We measure against the hardest public benchmark for long-term memory systems. We do not claim to be number one on retrieval. We do claim the deepest consolidation pipeline in production.

Hindsight
94.6%
LongMemEval accuracy
Consolidation strategies: 1 (TEMPR consolidate)
Tournament refinement: None
Neuroscience grounding: None
Backed by: Nous Research partnership
REM Labs
90%
LongMemEval accuracy
Consolidation strategies: 9 (full pipeline)
Tournament refinement: A/B/AB + blind Borda judge
Neuroscience grounding: REM sleep, synaptic homeostasis
Knowledge evolution: Lamarckian inheritance
Why the 4.6% gap does not tell the full story. LongMemEval measures single-turn retrieval accuracy — can you find fact X in your memory store? Hindsight wins that race today. But retrieval is step one. The Dream Engine runs 9 consolidation strategies versus Hindsight's 1. Over time, a memory system that actively evolves its knowledge produces qualitatively different outputs: frameworks, cross-domain insights, contradiction detection, and forecasts that retrieval-only systems cannot generate. We are closing the retrieval gap while building capabilities no benchmark yet measures.

Smart enough to know when to stop.

The Dream Engine detects when it's producing diminishing returns and automatically advances to deeper analysis instead of generating slop.

>0.6
Slop detection threshold
Every output is compared against existing insights. If similarity exceeds 60%, the entry is filtered out before storage.
20/day
Rate limit per namespace
Quality degrades with over-processing. The engine enforces a ceiling and recommends waiting when value is low.
Auto
Diminishing returns detection
If 3 consecutive runs produce minimal new insights, the engine auto-advances to the next depth level or switches strategy.
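The slop filter can be sketched as an embedding-similarity check. This is assumed logic for illustration, not the engine's actual code: a candidate insight is discarded when its cosine similarity to any already-stored insight exceeds the 0.6 threshold.

```typescript
// Illustrative sketch of slop detection: filter out a candidate
// insight if it is >60% similar to anything already stored.
// Names and threshold application are assumptions for illustration.
const SLOP_THRESHOLD = 0.6;

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Keep a candidate only if no existing embedding is too similar.
function passesSlopFilter(candidate: number[], existing: number[][]): boolean {
  return existing.every(e => cosineSimilarity(candidate, e) <= SLOP_THRESHOLD);
}
```

The effect compounds with the rate limit: near-duplicates never enter the store, so later cycles never waste their 20-per-day budget re-refining them.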

One API call to start a dream cycle.

Trigger programmatically. Poll for results. Use the quality report to decide what to run next.

// Start a dream cycle
const res = await fetch('/v1/memory/dream/start', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${apiKey}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    strategy: 'synthesize',    // or 'full_cycle' for all 9
    persona: 'developer',      // auto-detected if omitted
    namespace: 'default'
  })
});
const dream = await res.json();

// Poll for completion
const result = await pollDreamStatus(dream.id);

// Result includes quality report
console.log(result.quality_report);
// { slop_filtered: 1, quality_score: 0.8, diminishing: false }
console.log(result.should_continue);      // true — more memories to process
console.log(result.suggested_wait_hours); // 0 — run again now
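The pollDreamStatus helper is left to you. A minimal sketch, assuming a status endpoint that eventually reports completion; the type shape, parameters, and injectable fetcher are hypothetical, not part of the official API:

```typescript
// Hypothetical sketch of a pollDreamStatus helper. The status shape
// and the injectable fetchStatus function are assumptions; inject the
// real HTTP call (or a stub in tests) as fetchStatus.
type DreamStatus = {
  status: 'running' | 'complete';
  quality_report?: unknown;
};
type StatusFetcher = (id: string) => Promise<DreamStatus>;

async function pollDreamStatus(
  id: string,
  fetchStatus: StatusFetcher,
  intervalMs = 2000,
  maxAttempts = 30
): Promise<DreamStatus> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const result = await fetchStatus(id);
    if (result.status === 'complete') return result;
    // Wait before the next poll.
    await new Promise(resolve => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Dream ${id} did not complete after ${maxAttempts} polls`);
}
```

Cycles typically finish in well under a minute, so a 2-second interval with a 30-attempt ceiling is a reasonable starting point.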

Add autonomous consolidation to your memory pipeline.

The Dream Engine turns raw data into structured understanding. One API call to schedule. Free tier included.

Get Your API Key See pricing