REM Labs · CrewAI

Memory that persists across every CrewAI crew

CrewAI crews pass context between roles during a run — but that context evaporates when the crew disbands. REM gives every crew member a shared, persistent memory substrate: researchers write notes, writers read them, reviewers see the full history, and next month's run picks up where last month's left off.

Free tier · No credit card · Works with any CrewAI process

Why CrewAI + REM

Three problems multi-agent crews hit that short-term memory can't solve.

Continuity across sessions

The same crew, same namespace, resumes next week with everything last week's researcher found — no hand-off doc to write.

9 Dream Engine strategies

Between runs, REM summarizes crew output, dedupes conflicting facts, and scores salience. The crew's institutional memory compounds instead of bloating.

Cross-tool portability

The notes your Researcher agent takes today are queryable from a LangChain reviewer, an AutoGen editor, and your Cursor MCP — same REST endpoint.

Install in 60 seconds

CrewAI agents already take tools=[...]. REM fits in there.

The Python SDK is in private beta. The requests pattern below is production-ready today. Request SDK access in Discord.

1. Get your API key

Sign up at remlabs.ai/console. Copy the sk-rem-... key and export it.

pip install crewai crewai-tools requests
export REM_API_KEY="sk-rem-..."
2. Build two shared tools

recall hits /v1/memory-search-semantic; remember hits /v1/memory-set. Both accept a namespace so you can isolate crews.

3. Hand them to every Agent

Each role in the crew gets tools=[recall, remember]. One namespace across all agents. Done.

Common patterns

Three shapes: per-role tools, shared crew notepad, and inter-run continuity.

1. Memory tools every crew member gets

Python

Two BaseTool classes. Every Agent accepts them. The namespace scopes the crew's brain.

import os, requests
from crewai import Agent, Task, Crew, Process
from crewai.tools import BaseTool

REM = "https://remlabs.ai/v1"
H = {
    "Authorization": f"Bearer {os.environ['REM_API_KEY']}",
    "Content-Type": "application/json",
}
NS = "market-research-crew"


class Recall(BaseTool):
    name: str = "recall"
    description: str = "Search the crew's shared memory for prior findings."

    def _run(self, query: str) -> str:
        r = requests.post(
            f"{REM}/memory-search-semantic",
            headers=H,
            json={"namespace": NS, "query": query, "limit": 6},
        ).json()
        return "\n".join(f"- {m['value'][:240]}" for m in r.get("results", [])) or "(empty)"


class Remember(BaseTool):
    name: str = "remember"
    description: str = "Store an important finding for the rest of the crew and future runs."

    def _run(self, fact: str) -> str:
        requests.post(
            f"{REM}/memory-set",
            headers=H,
            json={"namespace": NS, "key": fact[:40], "value": fact},
        )
        return "stored"


recall, remember = Recall(), Remember()

2. Researcher → Writer → Reviewer with shared memory

Python

Three roles, one namespace. The writer reads what the researcher wrote — the reviewer sees both.

researcher = Agent(
    role="Senior Market Researcher",
    goal="Gather evidence on the AI memory market.",
    backstory="Sharp, skeptical, cites primary sources.",
    tools=[recall, remember],
    verbose=True,
)
writer = Agent(
    role="Analyst",
    goal="Turn findings into a 1-page brief.",
    backstory="Writes for busy execs. Always checks the shared memory first.",
    tools=[recall, remember],
    verbose=True,
)
reviewer = Agent(
    role="Editor",
    goal="Catch contradictions between research and draft.",
    backstory="Reads everything in the crew's memory before approving.",
    tools=[recall, remember],
    verbose=True,
)

crew = Crew(
    agents=[researcher, writer, reviewer],
    tasks=[
        Task(description="Find 5 recent AI memory products.", agent=researcher, expected_output="A list."),
        Task(description="Draft a 300-word brief.", agent=writer, expected_output="The brief."),
        Task(description="Approve or list corrections.", agent=reviewer, expected_output="Approved / corrections."),
    ],
    process=Process.sequential,
)
crew.kickoff()

3. Cross-run continuity with namespace handoff

Python

Same namespace next Monday = same institutional memory. No checkpoint files, no re-briefing.

# Monday's run
crew.kickoff()  # agents write to namespace "market-research-crew"

# ...a week later, same namespace, fresh process...
# The Dream Engine has summarized, deduped, and re-weighted overnight.
# The researcher starts by calling `recall("everything we learned last week")`
# and instantly has a digest — not a raw dump.

# Optional: list what the crew remembers
curl_cmd = f"""curl -X POST https://remlabs.ai/v1/memory-list \\
  -H 'Authorization: Bearer {os.environ["REM_API_KEY"]}' \\
  -H 'Content-Type: application/json' \\
  -d '{{"namespace":"{NS}","limit":100}}'"""
print(curl_cmd)

What you get

Everything CrewAI's default short-term memory leaves on the table.

Cross-session memory

One namespace per crew. Survives process exits, restarts, and new hires on the crew roster.

Semantic search under 180ms

Every crew member's queries hit a vector + FTS5 fusion index. Fast enough to call inside a tool without slowing the run.
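The 180 ms figure above is the page's; if you want to verify it inside your own run, one way is a small framework-agnostic timing wrapper around whatever recall call your tool makes. The helper below is illustrative, not part of the REM API.

```python
import time


def timed(fn, *args, **kwargs):
    """Call fn(*args, **kwargs) and return (result, elapsed milliseconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return result, elapsed_ms


# Usage inside a tool, assuming the Recall tool from the install steps:
#   hits, ms = timed(recall._run, "recent AI memory products")
#   print(f"recall took {ms:.1f} ms")
```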

Overnight consolidation

Between runs, Dream Engine summarizes the crew's raw output into a clean brief. Your next kickoff starts with wisdom, not transcripts.

Shared across all your agents

CrewAI writes, LangChain reads, AutoGen reviews. The same namespace works from every framework and from raw curl.

Give your CrewAI crew an institutional memory

Free tier, no credit card. Ship a crew that remembers across runs in under a minute.

Get API key · Read full docs