Lamarckian memory evolution — acquired knowledge inherited across cycles
The Dream Engine processes stored knowledge autonomously. It finds connections, resolves contradictions, builds frameworks, and creates structured insights that compound with every cycle. One API call to schedule. Results in your webhook.
Before the Dream Engine, your knowledge is a pile of disconnected notes. After, it's a structured knowledge graph with insights you never wrote yourself.
The Dream Engine never repeats itself. Run it 5 times and you get 5 levels of understanding, not 5 copies of the same surface analysis.
The Dream Engine auto-detects what kind of knowledge you're storing and adapts its analysis. Or set a persona explicitly.
Overnight is the default. But the Dream Engine runs whenever it makes sense.
Each strategy is a distinct cognitive operation. Run individually, chain in sequence, or let the engine auto-select based on your knowledge state. Every output feeds the next.
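The "every output feeds the next" chaining can be sketched as a simple runner. This is illustrative only: the strategy names and the `runStrategy` callback signature are hypothetical, not the documented API.

```javascript
// Illustrative chain runner (strategy names and the runStrategy signature are
// hypothetical, not the documented API). Each strategy receives the previous
// strategy's output as its starting context, so results compound down the chain.
async function runChain(strategies, runStrategy, initialContext = null) {
  let context = initialContext;
  const outputs = [];
  for (const strategy of strategies) {
    context = await runStrategy(strategy, context); // feed previous output forward
    outputs.push(context);
  }
  return outputs;
}

// Stub runner that just records how context flows through the chain
runChain(
  ['connect', 'resolve', 'synthesize'],
  async (strategy, ctx) => (ctx ? `${ctx} -> ${strategy}` : strategy)
).then((outputs) => {
  console.log(outputs[2]); // 'connect -> resolve -> synthesize'
});
```

The same shape works whether `runStrategy` wraps a real API call or a local stub, which keeps chaining logic testable.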
Inspired by AutoReason. Three candidates compete, a blind judge picks the winner. Knowledge only changes when the change is genuinely better. Here is an actual tournament round.
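The Borda mechanics behind the judging can be sketched as follows. This is an illustration of blind Borda counting in general, not the engine's internal code; the candidate labels and function are made up for the example.

```javascript
// Sketch of one blind Borda round (illustrative, not the engine's internal
// implementation). Candidates are anonymized rewrites (e.g. A, B, and a merged
// AB); each judge ranks them best-first without knowing which strategy produced
// which. A rank of r among n candidates earns n - 1 - r points; highest total wins.
function bordaWinner(candidateIds, judgeRankings) {
  const points = Object.fromEntries(candidateIds.map((id) => [id, 0]));
  for (const ranking of judgeRankings) {
    ranking.forEach((id, rank) => {
      points[id] += candidateIds.length - 1 - rank;
    });
  }
  return candidateIds.reduce((best, id) => (points[id] > points[best] ? id : best));
}

// Two judges both prefer the merged candidate: AB totals 4, A and B total 1 each
const winner = bordaWinner(['A', 'B', 'AB'], [
  ['AB', 'A', 'B'],
  ['AB', 'B', 'A'],
]);
console.log(winner); // 'AB'
```

Because judges only ever see anonymized candidates, the scoring cannot favor the incumbent version; the knowledge base changes only when a challenger actually out-scores it.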
Other memory providers store and retrieve. The Dream Engine goes further: it autonomously improves what it stores.
| Provider | Consolidation | Strategies | Tournament Refinement | Neuroscience Grounding |
|---|---|---|---|---|
| Mem0 | None — memories are static after storage | — | — | — |
| Zep | Temporal knowledge graphs — structure, no synthesis | — | — | — |
| Membase | Knowledge graph — no consolidation or dream cycle | — | — | — |
| Hindsight (closest competitor) | Observation consolidation — basic automatic synthesis | 1 (consolidate) | — | — |
| REM Labs Dream Engine | 9 strategies, 5 depth levels, autonomous cycle scheduling | 9 | A/B/AB tournament with blind Borda judging | REM sleep, synaptic homeostasis, memory replay |
In biology, Darwinian evolution requires genetic mutation and selection across generations, while Lamarckian inheritance passes acquired traits directly to offspring. The Dream Engine follows the Lamarckian model: insights acquired in one cycle are inherited directly by the next, through memory rather than model weights.
A scheduled dream cycle on a developer's knowledge base with 847 stored memories. Total wall time: 15 minutes. Zero human intervention.
We measure against the hardest public benchmark for long-term memory systems. We do not claim to be number one on retrieval. We do claim the deepest consolidation pipeline in production.
The Dream Engine detects when it's producing diminishing returns and automatically advances to deeper analysis instead of generating slop.
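One way such a check could work, sketched with made-up numbers. The threshold and function below are illustrative assumptions, not the engine's actual values; only the 5-level depth cap comes from the comparison table above.

```javascript
// Hypothetical diminishing-returns check (the minGain threshold is an
// illustrative default, not the engine's actual value; the 5-level cap matches
// the depth levels described above). If the marginal quality gain between the
// two most recent cycles drops below a floor, advance to the next depth level
// instead of re-running the current one.
function nextDepth(currentDepth, recentScores, { minGain = 0.05, maxDepth = 5 } = {}) {
  if (recentScores.length < 2) return currentDepth; // not enough history to judge
  const n = recentScores.length;
  const gain = recentScores[n - 1] - recentScores[n - 2];
  return gain < minGain ? Math.min(currentDepth + 1, maxDepth) : currentDepth;
}

console.log(nextDepth(2, [0.70, 0.72])); // 3: a 0.02 gain is diminishing, go deeper
console.log(nextDepth(2, [0.50, 0.70])); // 2: still improving, stay at this level
```

The key design point is that "run again" and "run deeper" are different decisions, driven by measured quality rather than a fixed schedule.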
Trigger programmatically. Poll for results. Use the quality report to decide what to run next.
```javascript
// Start a dream cycle
const dream = await fetch('/v1/memory/dream/start', {
  method: 'POST',
  headers: { 'Authorization': `Bearer ${apiKey}` },
  body: JSON.stringify({
    strategy: 'synthesize',  // or 'full_cycle' for all 9
    persona: 'developer',    // auto-detected if omitted
    namespace: 'default'
  })
});

// Poll for completion
const result = await pollDreamStatus(dream.id);

// Result includes a quality report
console.log(result.quality_report);
// { slop_filtered: 1, quality_score: 0.8, diminishing: false }
console.log(result.should_continue);      // true — more memories to process
console.log(result.suggested_wait_hours); // 0 — run again now
```
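The snippet above relies on a `pollDreamStatus` helper. Here is a minimal sketch of one, assuming a GET status endpoint at `/v1/memory/dream/{id}` and `status` values of `'completed'` and `'failed'` in the response; both are assumptions, not documented API. The fetch function is injectable so the loop can be tested offline.

```javascript
// Minimal polling helper for the snippet above. The status endpoint path and
// the response fields (status values 'completed'/'failed') are assumptions,
// not documented API; fetchFn is injectable so the loop can be tested offline.
async function pollDreamStatus(dreamId, {
  fetchFn = fetch,
  intervalMs = 2000,
  maxAttempts = 150,
} = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const res = await fetchFn(`/v1/memory/dream/${dreamId}`);
    const body = await res.json();
    if (body.status === 'completed') return body; // done; carries the quality report
    if (body.status === 'failed') throw new Error(`Dream ${dreamId} failed`);
    await new Promise((resolve) => setTimeout(resolve, intervalMs)); // still running
  }
  throw new Error(`Dream ${dreamId} did not finish within ${maxAttempts} polls`);
}
```

Adjust the path, field names, and interval to the real response shape; the loop structure (poll, branch on status, back off, cap attempts) stays the same.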
The Dream Engine turns raw data into structured understanding. One API call to schedule. Free tier included.