Cookbook

Eight recipes, one afternoon.

Drop-in patterns for the most common memory-enabled builds. Each recipe is under 30 lines.

The 8 Recipes

Every recipe. Copy, paste, ship.

Replace rem_... with your real key. Replace sk-... or other provider keys with yours. Everything else works as written.

01
Memory-enabled chatbot (OpenAI + REM)
Python · 22 lines

Every user message gets remembered. Before each response, retrieve relevant context. Over time, the bot learns the user.

Python
import openai, requests
openai.api_key = "sk-..."   # your OpenAI key
REM_KEY = "rem_..."

def chat(user_id, msg):
    ctx = requests.post("https://remlabs.ai/v1/memory/ask",
        headers={"Authorization": f"Bearer {REM_KEY}"},
        json={"question": msg, "namespace": user_id}).json().get("answer", "")
    resp = openai.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": f"Context: {ctx}"},
            {"role": "user", "content": msg}
        ]).choices[0].message.content
    requests.post("https://remlabs.ai/v1/memory-set",
        headers={"Authorization": f"Bearer {REM_KEY}"},
        json={"key": f"turn-{hash(msg)}", "value": f"{msg} → {resp}", "namespace": user_id})
    return resp
Why this works: per-user namespace isolates memories; /memory/ask synthesizes with +15.33pp SWE-bench Lite lift (n=150, p<0.05); /memory-set stores the turn for next time.
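One caveat on the recipe above: it keys each turn with Python's built-in hash(), which is salted per interpreter run for strings, so the same message maps to a different key after every restart. A minimal sketch of a stable alternative using hashlib (the turn-… key format is this sketch's own convention, not part of the REM API):

```python
import hashlib

def turn_key(user_id: str, msg: str) -> str:
    # Stable across processes, unlike built-in hash() on strings,
    # which is randomized per interpreter run (PYTHONHASHSEED).
    digest = hashlib.sha256(f"{user_id}:{msg}".encode()).hexdigest()[:16]
    return f"turn-{digest}"
```

Deterministic keys mean a retried write overwrites the same memory instead of creating a duplicate.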
02
Agent with persistent context (Anthropic + REM)
TypeScript · 21 lines

A research agent that remembers every finding. Stop re-researching.

TypeScript
import Anthropic from "@anthropic-ai/sdk";
import { REM } from "@remlabs/sdk";

const claude = new Anthropic({ apiKey: process.env.ANTHROPIC_KEY });
const rem = new REM({ apiKey: process.env.REM_KEY });

async function research(topic: string) {
  const prior = await rem.memory.ask({ question: topic, namespace: "research" });
  const msg = await claude.messages.create({
    model: "claude-sonnet-4-6",
    max_tokens: 1024,
    messages: [{ role: "user",
      content: `Prior findings: ${prior.answer}\n\nDeeper dive on: ${topic}` }]
  });
  const finding = msg.content[0].type === "text" ? msg.content[0].text : "";
  await rem.memory.set({
    key: `research-${Date.now()}`,
    value: finding,
    namespace: "research",
    metadata: { topic }
  });
  return finding;
}
Why this works: each finding stacks onto the graph; Dream Engine consolidates overnight.
03
Note-taking app backend (cURL)
cURL · 2 calls

Accept any text blob and store it structured. Full-text search returns ranked hits.

cURL · write
# Write
curl -X POST https://remlabs.ai/v1/memory-set \
  -H "Authorization: Bearer rem_..." \
  -H "Content-Type: application/json" \
  -d '{"key":"note-'"$(date +%s)"'","value":"Your note body","namespace":"user-42"}'
cURL · search
# Search
curl -X POST https://remlabs.ai/v1/memory-search \
  -H "Authorization: Bearer rem_..." \
  -H "Content-Type: application/json" \
  -d '{"query":"what did I write about Q2 OKRs","namespace":"user-42","limit":5}'
Why this works: FTS5 AND-first + vector rerank gives note-style retrieval out of the box.
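The search call above returns ranked hits, but the response shape isn't shown. Assuming it looks like a results list with key, value, and score fields (an assumption — verify against the API reference), a minimal sketch of consuming it client-side:

```python
# Sample payload in the shape we assume /v1/memory-search returns
# (ranked hits with a relevance score) — verify against the API reference.
sample = {
    "results": [
        {"key": "note-1714", "value": "Q2 OKRs: ship memory search", "score": 0.91},
        {"key": "note-1691", "value": "Grocery list", "score": 0.12},
    ]
}

def top_hits(resp, min_score=0.5):
    # Keep only confident matches, preserving rank order.
    return [r["value"] for r in resp["results"] if r["score"] >= min_score]

print(top_hits(sample))  # ['Q2 OKRs: ship memory search']
```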
04
Real-time webhook listener
Python (Flask) · register + handle

Get pinged the moment a memory enters or changes.

Python (Flask)
from flask import Flask, request
app = Flask(__name__)

@app.post("/rem-webhook")
def on_event():
    ev = request.get_json()
    # notify_slack / email_briefing: your own alerting helpers
    if ev["event"] == "memory.contradiction":
        notify_slack(f"Contradiction: {ev['memory_a']} vs {ev['memory_b']}")
    elif ev["event"] == "dream.complete":
        email_briefing(ev["summary"])
    return {"ok": True}
Register the webhook
curl -X POST https://remlabs.ai/v1/webhooks \
  -H "Authorization: Bearer rem_..." \
  -d '{"url":"https://yourapp.com/rem-webhook","events":["memory.set","dream.complete","memory.contradiction"]}'
Why this works: 4 event types; Dream Engine contradictions become Slack alerts.
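As the handler grows past two event types, an if/elif chain gets unwieldy. A framework-free sketch of the same dispatch as a lookup table (only the event names shown above are known; anything else is an assumption):

```python
def on_contradiction(ev):
    return f"Contradiction: {ev['memory_a']} vs {ev['memory_b']}"

def on_dream_complete(ev):
    return f"Dream done: {ev['summary']}"

# Map event names to handlers; unknown events fall through to a no-op,
# so new event types never crash the webhook endpoint.
HANDLERS = {
    "memory.contradiction": on_contradiction,
    "dream.complete": on_dream_complete,
}

def dispatch(ev):
    handler = HANDLERS.get(ev["event"], lambda e: None)
    return handler(ev)
```

Adding a handler for another event type is then one line in HANDLERS.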
05
Multi-agent federation
Python · shared namespace

Three agents share one namespace. All can read, each can write to its own scope.

Python
import requests
REM_KEY = "rem_..."
def agent_write(agent_id, key, value):
    requests.post("https://remlabs.ai/v1/memory-set",
        headers={"Authorization": f"Bearer {REM_KEY}"},
        json={"key": key, "value": value,
              "namespace": "team-alpha",
              "metadata": {"agent": agent_id}})

agent_write("researcher", "f-1", "Found: Q3 churn up 8%.")
agent_write("analyst", "a-1", "Cause: onboarding dropoff at step 3.")
agent_write("writer", "w-1", "Draft post: 'Why step 3 broke.'")

# Any agent asks:
ans = requests.post("https://remlabs.ai/v1/memory/ask",
    headers={"Authorization": f"Bearer {REM_KEY}"},
    json={"question": "What's our Q3 churn story?", "namespace": "team-alpha"}).json()
Why this works: shared namespace + agent-scoped metadata = team hive without custom plumbing.
06
Graph query for relationships
cURL · /v1/memory/graph-query

Pull the sub-graph around a concept.

cURL
curl -X POST https://remlabs.ai/v1/memory/graph-query \
  -H "Authorization: Bearer rem_..." \
  -H "Content-Type: application/json" \
  -d '{"seed":"Q3 churn","depth":2,"namespace":"team-alpha"}'

Returns nodes + edges. Render with your favorite graph viz (Cytoscape, D3, vis-network).

Why this works: REM maintains a knowledge graph under every namespace — you never have to build one.
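Before handing the sub-graph to a renderer, it usually helps to fold the nodes + edges payload into an adjacency list. A sketch, assuming a response shaped like {"nodes": [...], "edges": [{"source": ..., "target": ...}]} (the field names are an assumption — check the API reference):

```python
from collections import defaultdict

# Assumed response shape for /v1/memory/graph-query — verify field
# names against the API reference before relying on them.
sample = {
    "nodes": [{"id": "Q3 churn"}, {"id": "onboarding"}, {"id": "step 3"}],
    "edges": [
        {"source": "Q3 churn", "target": "onboarding"},
        {"source": "onboarding", "target": "step 3"},
    ],
}

def adjacency(graph):
    # Fold the edge list into source -> [targets] for traversal or viz.
    adj = defaultdict(list)
    for e in graph["edges"]:
        adj[e["source"]].append(e["target"])
    return dict(adj)

print(adjacency(sample))
# {'Q3 churn': ['onboarding'], 'onboarding': ['step 3']}
```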
07
Dream Engine on-demand
TypeScript · start + poll

Trigger consolidation manually. Pick specific strategies.

TypeScript
import { REM } from "@remlabs/sdk";
const rem = new REM({ apiKey: process.env.REM_KEY });

const job = await rem.dream.start({
  namespace: "team-alpha",
  strategies: ["contradict", "synthesize", "forecast"],
  task: "Why is our Q3 churn up?"
});
// Poll
let status = job;
while (status.status !== "complete") {
  await new Promise(r => setTimeout(r, 2000));
  status = await rem.dream.status(job.id);
}
console.log(`${status.insights} insights emitted.`);
Why this works: directed dreaming = "ask the graph to think about X, emit insights." Three strategies in <60s on typical namespaces.
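The poll loop above spins every 2s with no exit hatch; since cycles take anywhere from 2 to 60s, a deadline is cheap insurance. A Python sketch of the same poll-with-timeout pattern, with the status call stubbed out so it runs offline (in real use, get_status would wrap the dream status request):

```python
import time

def wait_for_dream(get_status, timeout=90, interval=2):
    # get_status: callable returning a dict like {"status": ..., "insights": N}.
    # Polls until status == "complete" or the deadline passes.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        st = get_status()
        if st["status"] == "complete":
            return st
        time.sleep(interval)
    raise TimeoutError("dream cycle did not complete in time")

# Stubbed status source standing in for the real HTTP call:
states = iter([{"status": "running"}, {"status": "complete", "insights": 3}])
result = wait_for_dream(lambda: next(states), timeout=10, interval=0)
print(result["insights"])  # 3
```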
08
Bulk import (Obsidian vault / Notion export)
Python · store-batch

Ingest a folder of markdown into REM in one call.

Python
import glob, requests
REM_KEY = "rem_..."
notes = []
for path in glob.glob("./vault/**/*.md", recursive=True):
    with open(path, encoding="utf-8") as f:
        notes.append({
            "value": f.read(),
            "namespace": "obsidian",
            "metadata": {"source": "obsidian", "path": path}
        })

# Batch-send
for chunk in [notes[i:i+100] for i in range(0, len(notes), 100)]:
    r = requests.post("https://remlabs.ai/v1/memory/store-batch",
        headers={"Authorization": f"Bearer {REM_KEY}"},
        json={"items": chunk}).json()
    print(f"stored: {r.get('count', 0)}")
Why this works: store-batch accepts up to 1,000 items per call, and the nightly Dream Engine consolidates the entire vault. Critical: each item must carry its payload in the value field — not content or text. Items keyed any other way are silently dropped by the backend.
Common Mistakes

Three traps everyone hits on day one.

None of these fail loudly — the first even returns HTTP 200 while dropping your data. Watch for them.

Using content instead of value in store-batch — items silently drop.
Always use value. The server accepts content and text as payload keys with a 200 response, but the items never hit the store. Swap the key name and retry.
Bot-signup keys hit /memory-set before email confirm — returns 403 scope_denied.
Confirm email first, or use /auth/signup (password) which returns a full-scope key immediately. Programmatic bot-signup keys stay provisional until the confirmation link is clicked.
Expecting sub-second Dream Engine — strategies take 2–60s.
Poll /v1/memory/dream/status/{id} instead of blocking. Cycle duration scales with graph size and strategy count. Typical namespace + 3 strategies = 20–45s.
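Mistake #1 is cheap to guard against client-side: normalize payload keys before calling store-batch. A minimal sketch (pure client-side fix-up; the server behavior it works around is as described above):

```python
def normalize_items(items):
    # Rewrite legacy "content"/"text" keys to the required "value" key
    # so items are not silently dropped by store-batch.
    fixed = []
    for item in items:
        item = dict(item)  # don't mutate the caller's dicts
        for alias in ("content", "text"):
            if "value" not in item and alias in item:
                item["value"] = item.pop(alias)
        if "value" not in item:
            raise ValueError(f"item has no payload: {item}")
        fixed.append(item)
    return fixed

items = [{"content": "note A"}, {"value": "note B"}]
print(normalize_items(items))
# [{'value': 'note A'}, {'value': 'note B'}]
```

Run it in front of every batch write and the silent-drop failure mode becomes a loud ValueError instead.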
That's the whole kitchen.
Start with a Quickstart, then use this page as a reference. Full endpoint coverage in the API reference.