Mem0 is a developer-focused memory API. REM Labs is the continuity layer for intelligence — nine consolidation strategies, federation across models, 80+ first-class integrations, unlimited free-tier memories, and 94.6% on LongMemEval (vs Mem0's 66.9%). Here are fourteen dimensions, side by side.
Mem0 is a first-mover. REM is the answer to what came next — deeper consolidation, better retrieval, broader protocol surface.
Mem0 stores memories and retrieves them. REM persists, evolves, federates, and reacts — the four pillars of a continuity layer. The evolution happens via nine overnight consolidation strategies we call the Dream Engine.
Mem0 is a storage API — one thing, done well. REM is infrastructure: protocol-native (REST + webhooks + channels + A2A agent card), model-agnostic, self-hostable, and federated across every LLM vendor you use. You bring the models; REM keeps continuity.
Every row links back to a published artifact — docs, repo, or benchmark. If any entry is wrong, we'll fix it within 48h — email hey@remlabs.ai.
| Dimension | REM Labs | Mem0 |
|---|---|---|
| Category | Continuity layer for intelligence | Conversational memory API |
| LongMemEval (500q) | 94.6% · byte-exact upstream GPT-4o judge | 66.9% (third-party eval) |
| Consolidation strategies | 9 (Dream Engine: synthesize, pattern, contradiction, compress, associate, validate, evolve, forecast, reflect) | 1 (extract & store) |
| Model-agnostic | Yes — OpenAI, Anthropic, Gemini, Grok, Llama, local | Yes (LLM adapter) |
| Self-hostable | Yes — Docker + K8s + bare metal, one command, ~90s, unlimited everything | Yes (OSS edition, no Dream Engine) |
| Open source | Apache 2.0 core — SDKs + self-host + extractors | Yes (Apache 2.0) |
| GDPR / forget API | Yes — per-memory + per-namespace + right-to-explanation | Yes (delete endpoint) |
| Federation across agents | Yes — shared namespaces + A2A agent card | No — single-user focus |
| Webhooks / reactivity | Yes — memory.created, dream.completed, contradiction.detected | No native; poll API |
| MCP / A2A protocol | Yes — /.well-known/mcp.json, A2A agent card | No native MCP endpoint |
| Multi-agent / hive | Yes — DreamHive, shared memory across agents | Partial — per-user namespaces |
| Pricing start | Free (unlimited memories, 500 dreams/mo) → $19 Pro | Free (unlimited memory, rate-limited retrieval) |
| Integrations | 80+ first-class (typed, tested, maintained by REM) | Community ports (varied quality) |
| Retrieval modes | 8 (verbatim, semantic, graph, temporal, hybrid, neural-rerank, creative-leap, honest-abstention) | 1 (semantic search) |
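The webhook events named in the table (memory.created, dream.completed, contradiction.detected) can be routed with a small dispatcher. A minimal sketch, assuming a JSON delivery with `type` and `data` fields — only the event names come from the table; the payload shape and field names are hypothetical illustrations, not the published schema.

```python
import json

# Hypothetical payload fields; only the event names appear in the table above.
def handle_rem_event(raw: bytes) -> str:
    event = json.loads(raw)
    kind = event.get("type")
    if kind == "memory.created":
        return f"indexed memory {event['data']['memory_id']}"
    if kind == "dream.completed":
        return f"dream run finished: {event['data']['strategy']}"
    if kind == "contradiction.detected":
        return f"conflict between {event['data']['memory_ids']}"
    return "ignored"

# Example: a memory.created delivery
payload = json.dumps({"type": "memory.created",
                      "data": {"memory_id": "mem_123"}}).encode()
print(handle_rem_event(payload))  # indexed memory mem_123
```

With events pushed to you, there is no polling loop to maintain — the dispatcher is the whole integration point.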
LongMemEval methodology: /benchmarks
The two dimensions Mem0 markets hardest — and REM's actual numbers on each.
REM ships unlimited memories and 500 dreams/month on free — the Dream Engine is included, not held back. Self-host is unlimited everything. No caps on the thing that matters.
REM ships 80+ first-class integrations maintained by REM — typed SDKs, tested, versioned, not community ports. CrewAI, LangGraph, LlamaIndex, AutoGen, Mastra, Claude Code, Cursor, Zapier, n8n, Obsidian, MCP.
Run Mem0 as extract-and-retrieve with REM as the continuity layer underneath. REM ingests Mem0-format payloads directly — import docs at /import. We'd rather you use both than switch cold.
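An import boils down to a field mapping. A minimal sketch of what that mapping could look like — neither schema is published on this page, so both the Mem0 keys and the REM keys below are assumptions; the /import docs are the source of truth.

```python
# Hypothetical field mapping for illustration only; see /import for the
# actual schemas. Both dict shapes below are assumptions.
def mem0_to_rem(mem0_record: dict, namespace: str = "default") -> dict:
    return {
        "namespace": namespace,
        "content": mem0_record.get("memory", ""),
        "metadata": {
            "source": "mem0-import",
            "original_id": mem0_record.get("id"),
            "created_at": mem0_record.get("created_at"),
        },
    }

record = {"id": "m-42", "memory": "User prefers dark mode",
          "created_at": "2024-11-01T12:00:00Z"}
print(mem0_to_rem(record)["content"])  # User prefers dark mode
```

Keeping the original ID and timestamp in metadata means nothing is lost in the move, and provenance stays queryable.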
No credit card. Dream Engine included. Drop-in SDK in Python and Node.
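To give a feel for "drop-in," here is a sketch of what a thin REST client can look like. The class name, endpoint path, and method names below are illustrative assumptions, not the published SDK surface; the sketch builds the request it would send rather than sending it, so it stays transport-agnostic.

```python
import json

class RemClient:
    """Hypothetical client sketch; names and endpoints are assumptions."""
    def __init__(self, api_key: str, base_url: str = "https://api.remlabs.ai"):
        self.api_key = api_key
        self.base_url = base_url

    def _request(self, path: str, payload: dict) -> dict:
        # Returns the request that would be sent; wiring this to an HTTP
        # library is the only remaining step.
        return {
            "url": f"{self.base_url}{path}",
            "headers": {"Authorization": f"Bearer {self.api_key}",
                        "Content-Type": "application/json"},
            "body": json.dumps(payload),
        }

    def remember(self, content: str, namespace: str = "default") -> dict:
        return self._request("/v1/memories",
                             {"content": content, "namespace": namespace})

client = RemClient(api_key="rem_test_key")
req = client.remember("User ships on Fridays")
print(req["url"])  # https://api.remlabs.ai/v1/memories
```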