Real problems. Real fixes.

Every use case below is a frustration you have already felt. See the before, the after, the code, and the API response.

97.2%
Recall accuracy
< 50ms
p95 latency
3
Methods total
Chatbot Memory
"My chatbot keeps asking users to repeat themselves"
Before
User says "I prefer metric units" in conversation 1. Conversation 2 starts fresh. The bot asks again. And again. Every session is amnesia. Users leave.
After
Store user preferences once. Recall them forever. Your bot opens every session already knowing who it is talking to. Zero repeat questions.
Honest impact
Eliminates "what was your name again?" moments. Support bots handling 500+ users stop repeating the same questions every session.
// Conversation 1 -- user states a preference
await rem.remember({
  content: "User prefers metric units, dark mode, concise answers",
  namespace: "user_8291"
})

// Conversation 2 -- bot recalls without asking
const prefs = await rem.recall({
  query: "user preferences",
  namespace: "user_8291"
})
API Response
{
  "memories": [
    {
      "content": "User prefers metric units, dark mode, concise answers",
      "score": 0.96,
      "created_at": "2026-03-15T14:22:00Z",
      "namespace": "user_8291"
    }
  ],
  "latency_ms": 23
}
import { REM } from "@remlabs/sdk";

const rem = new REM({ apiKey: process.env.REM_KEY });

async function handleMessage(userId, message) {
  // 1. Recall everything relevant about this user
  const context = await rem.recall({
    query: message,
    namespace: `user_${userId}`,
    limit: 5
  });

  // 2. Inject memories into your LLM prompt
  const response = await llm.chat({
    system: `You know: ${context.memories.map(m => m.content).join('; ')}`,
    user: message
  });

  // 3. Extract and store any new preferences from the user's message
  if (containsPreference(message)) {
    await rem.remember({
      content: extractPreference(message),
      namespace: `user_${userId}`
    });
  }

  return response;
}
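The `containsPreference` and `extractPreference` helpers above are yours to define, not part of the REM SDK. A minimal keyword-based sketch (the pattern and sentence splitting are assumptions; in practice you might ask the LLM itself to extract preferences):

```javascript
// Hypothetical helpers -- a naive keyword heuristic, not the REM SDK
const PREF_PATTERN = /\bI (?:prefer|like|want|always use)\b/i;

function containsPreference(text) {
  return PREF_PATTERN.test(text);
}

function extractPreference(text) {
  // Keep only the sentence that states the preference
  const sentence = text
    .split(/(?<=[.!?])\s+/)
    .find(s => PREF_PATTERN.test(s));
  return sentence ? sentence.trim() : text.trim();
}
```

Anything these helpers return gets stored verbatim, so tighter extraction here means cleaner memories later.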
History Import
"I exported 3,000 ChatGPT conversations and they're just sitting there"
Before
You downloaded the export. It is a massive JSON blob. No way to search it. No way to extract the decisions, preferences, and ideas buried inside 3,000 threads.
After
One command turns dead history into living memory. REM extracts structure from chaos -- preferences, decisions, people, topics -- all instantly searchable and queryable.
Honest impact
Years of accumulated knowledge become instantly searchable. Typical import: 3,000 conversations indexed in under 2 minutes.
# Import your entire ChatGPT history
rem import chatgpt conversations.json
# => "Indexed 3,041 conversations. Found 847 preferences,
#     234 decisions, 91 people mentioned."

# Now ask it anything
rem recall "what did I decide about the database?"
API Response
{
  "memories": [
    {
      "content": "Decided on Postgres over MongoDB for the main DB. Reasons: better JSON support than expected, ACID compliance, team familiarity.",
      "score": 0.93,
      "source": "chatgpt_conv_1847",
      "created_at": "2026-03-12T09:15:00Z"
    }
  ],
  "latency_ms": 31
}
import { REM } from "@remlabs/sdk";
import fs from "fs";

const rem = new REM({ apiKey: process.env.REM_KEY });

// Load the ChatGPT export
const conversations = JSON.parse(
  fs.readFileSync("conversations.json", "utf-8")
);

// Batch import -- REM handles chunking and extraction
const result = await rem.import({
  source: "chatgpt",
  data: conversations,
  namespace: "my-history",
  extract: ["preferences", "decisions", "people"]
});

console.log(result.indexed);   // 3041
console.log(result.extracted); // { preferences: 847, decisions: 234, people: 91 }
Agent Memory
"My AI agent forgets what it learned yesterday"
Before
Your research agent spends 20 minutes finding 50 papers. Next run, it starts from scratch. Every execution is Groundhog Day. You pay for the same API calls over and over.
After
Agents store findings after each run. Next execution picks up where the last left off. Knowledge compounds instead of resetting. API costs drop proportionally.
Honest impact
Research agents that run 10x daily save ~40-60% on LLM API costs by avoiding redundant lookups. Findings compound over weeks.
from remlabs import REM

rem = REM(api_key="your-key", namespace="research-agent")

# End of run -- persist what was learned
rem.remember(
    content="Paper A contradicts Paper B on dosage efficacy. Paper C (2024) confirms A with n=2000 sample."
)

# Next run -- pick up where you left off
context = rem.recall(query="what do we know about dosage?")
API Response
{
  "memories": [
    {
      "content": "Paper A contradicts Paper B on dosage efficacy. Paper C (2024) confirms A with n=2000 sample.",
      "score": 0.97,
      "created_at": "2026-04-10T16:30:00Z"
    },
    {
      "content": "Meta-analysis by Chen et al. found dose-response curve flattens above 200mg. Aligns with Paper A findings.",
      "score": 0.91,
      "created_at": "2026-04-09T11:45:00Z"
    }
  ],
  "latency_ms": 18
}
from remlabs import REM

rem = REM(api_key="your-key", namespace="research-agent")

# Before the agent runs, load prior knowledge
prior = rem.recall(
    query="summary of all research findings",
    limit=20
)

# Inject into agent context
agent = create_agent(
    tools=[search_pubmed, read_pdf],
    context=f"Prior findings:\n{prior.text()}"
)

# After run, persist new discoveries
for finding in agent.new_findings:
    rem.remember(content=finding)

# Agent gets smarter every single run
Team Knowledge
"Our team keeps answering the same customer questions"
Before
Monday: a rep solves a tricky billing issue. Thursday: a different rep gets the exact same question. No shared context. The fix dies in a Slack thread nobody will find.
After
Shared namespace, role-based access. Every resolution goes into team memory. Any rep can recall how the team already handled something in seconds.
Honest impact
Support teams of 5+ typically resolve recurring issues 3-5x faster once fixes are stored in shared memory instead of lost in Slack.
// Rep A resolves a ticket
await rem.remember({
  content: "Billing error 4012: caused by timezone mismatch in payment processor. Fix: update locale in Settings > Billing > Region.",
  namespace: "support-team"
})

// Rep B gets the same question next week
const fix = await rem.recall({
  query: "billing error 4012",
  namespace: "support-team"
})
API Response
{
  "memories": [
    {
      "content": "Billing error 4012: caused by timezone mismatch in payment processor. Fix: update locale in Settings > Billing > Region.",
      "score": 0.98,
      "namespace": "support-team",
      "created_at": "2026-04-01T10:00:00Z"
    }
  ],
  "latency_ms": 14
}
import { REM } from "@remlabs/sdk";

const rem = new REM({ apiKey: process.env.REM_KEY });

async function onTicketResolved(ticket) {
  // Auto-store the resolution in team memory
  await rem.remember({
    content: `${ticket.error_code}: ${ticket.resolution}`,
    namespace: "support-team",
    metadata: {
      resolved_by: ticket.agent_id,
      category: ticket.category,
      customer_tier: ticket.tier
    }
  });
}

async function suggestFix(newTicket) {
  // Check if the team has seen this before
  const existing = await rem.recall({
    query: newTicket.description,
    namespace: "support-team",
    limit: 3
  });
  return existing.memories;
}
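Recall can return loosely related memories alongside exact hits, so you may want a score cutoff before surfacing suggestions to a rep. A sketch, assuming memories carry a `score` field as in the sample responses above (the 0.85 threshold is an arbitrary assumption, not an SDK default):

```javascript
// Hypothetical post-processing -- keep only high-confidence matches,
// best first, before showing them as suggested fixes
function confidentMatches(memories, minScore = 0.85) {
  return memories
    .filter(m => m.score >= minScore)
    .sort((a, b) => b.score - a.score);
}
```

Tune the threshold to your team's tolerance: too low and reps see noise, too high and near-miss resolutions never surface.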
Temporal Awareness
"I need my AI to know what changed, not just what was said"
Before
User said "I live in Seattle" six months ago. Last week they said "I just moved to Portland." Most memory systems treat both as equally valid. Your AI still thinks they are in Seattle.
After
REM tracks knowledge over time. When facts conflict, the latest one wins automatically. Your AI always has the current truth, not a pile of contradictions.
97.2% temporal accuracy
Scored 97.2% on LongMemEval's temporal knowledge update tests (ICLR 2025 benchmark, GPT-4o judge). Next best: 66.9%.
// Six months ago
await rem.remember({
  content: "User lives in Seattle",
  namespace: "user_42"
})

// Last week -- contradicts the old fact
await rem.remember({
  content: "User just moved to Portland",
  namespace: "user_42"
})

// REM resolves the conflict automatically
const result = await rem.recall({
  query: "where does user live?",
  namespace: "user_42"
})
API Response
{
  "memories": [
    {
      "content": "User just moved to Portland",
      "score": 0.95,
      "created_at": "2026-04-05T08:00:00Z",
      "supersedes": "User lives in Seattle"
    }
  ],
  "temporal_resolution": true,
  "latency_ms": 22
}
import { REM } from "@remlabs/sdk";

const rem = new REM({ apiKey: process.env.REM_KEY });

// Your chatbot handler
async function handleUserUpdate(userId, message) {
  // Store the new fact -- REM handles conflict resolution
  await rem.remember({
    content: message,
    namespace: `user_${userId}`
  });

  // Later recall always returns the latest truth.
  // No manual deduplication needed.
  // No "which version is correct?" logic.
  // REM's temporal scoring handles it automatically.
  const current = await rem.recall({
    query: "user location and address",
    namespace: `user_${userId}`
  });
  // Always returns Portland, never Seattle
}
Second Brain
"I want my Obsidian vault to power my AI"
Before
2,000 notes. Years of thinking. Your AI assistant cannot access any of it. You have a second brain that is completely disconnected from the tools you actually use.
After
Sync your vault. Every note becomes searchable memory. [[Wikilinks]] become knowledge graph edges. Ask a question, get a synthesized answer sourced from your own writing.
Honest impact
Turns a passive archive into an active knowledge layer. Wikilinks become queryable relationships. Typical vault: 2,000 notes indexed in under 90 seconds.
# Sync your vault into REM
rem import obsidian ~/Documents/vault
# => "Indexed 2,041 notes. 847 wikilinks mapped."

# Ask across your entire body of notes
rem recall "what are my notes on pricing strategy?"
API Response
{
  "memories": [
    {
      "content": "Pricing strategy: value-based preferred over cost-plus. Key insight from SaaS pricing research -- anchor on outcomes not features.",
      "score": 0.94,
      "source": "obsidian://Pricing Strategy.md",
      "linked_notes": ["SaaS Metrics", "Revenue Model", "Competitor Pricing"]
    }
  ],
  "total_sources": 12,
  "latency_ms": 27
}
import { REM } from "@remlabs/sdk";
import { readVault } from "@remlabs/obsidian";

const rem = new REM({ apiKey: process.env.REM_KEY });

// Import with wikilink extraction
const vault = await readVault("~/Documents/vault");

for (const note of vault.notes) {
  await rem.remember({
    content: note.content,
    namespace: "my-vault",
    metadata: {
      title: note.title,
      tags: note.tags,
      links: note.wikilinks // become graph edges
    }
  });
}

// Query with graph-aware retrieval
const answer = await rem.recall({
  query: "pricing strategy notes",
  namespace: "my-vault",
  mode: "graph" // follows wikilinks
});
Knowledge Updates
"My AI doesn't know what changed"
Before
User updates their shipping address. Two weeks later, the AI sends a package to the old address. User changes their project deadline. The AI still plans around the old date. Stale data causes real problems.
After
REM tracks knowledge updates with temporal resolution. rem.recall("user address") always returns the latest. Old versions are kept in history but never surface as current truth.
97.2% temporal accuracy
This is what our 97.2% LongMemEval score measures -- the ability to correctly resolve conflicting information across time. The next best system scores 66.9%.
// January: user sets their address
await rem.remember({
  content: "Shipping address: 123 Pine St, Seattle, WA 98101",
  namespace: "user_42"
})

// March: user moves
await rem.remember({
  content: "New shipping address: 456 Oak Ave, Portland, OR 97201",
  namespace: "user_42"
})

// Any time after: always returns the latest
const addr = await rem.recall({
  query: "user shipping address",
  namespace: "user_42"
})
API Response
{
  "memories": [
    {
      "content": "New shipping address: 456 Oak Ave, Portland, OR 97201",
      "score": 0.97,
      "created_at": "2026-03-15T12:00:00Z",
      "supersedes": "123 Pine St, Seattle, WA 98101",
      "temporal_resolution": true
    }
  ],
  "latency_ms": 19
}
import { REM } from "@remlabs/sdk";

const rem = new REM({ apiKey: process.env.REM_KEY });

// E-commerce checkout flow
async function getShippingAddress(userId) {
  const result = await rem.recall({
    query: "shipping address",
    namespace: `user_${userId}`
  });
  // Always returns the LATEST address.
  // Old addresses are kept in history, never surfaced as current.
  return result.memories[0].content;
}

// User profile update handler
async function onProfileUpdate(userId, field, value) {
  await rem.remember({
    content: `${field}: ${value}`,
    namespace: `user_${userId}`
  });
  // REM automatically handles temporal superseding --
  // no manual "delete old, insert new" needed
}

Your use case is next.

One line to give any AI persistent memory. Free to start.

npx @remlabs/memory
Start building

Or see how we score 97.2% on LongMemEval

Give your AI agent memory that persists. Import your ChatGPT history, connect any tool, and use one memory API across chatbots and platforms.
