Continue.dev + REM Labs Memory Integration

Continue.dev is the leading open-source AI code assistant for VS Code and JetBrains. This guide shows how to add REM Labs as a custom context provider, giving Continue persistent memory that works with any LLM backend -- OpenAI, Anthropic, Ollama, or your own.

Why Continue Needs External Memory

Continue.dev supports multiple models and custom context providers, but it does not ship with persistent memory. Each session starts clean. By adding REM Labs as a context provider, every conversation benefits from accumulated project knowledge -- regardless of which LLM you are using behind the scenes.

Step 1: Get an API Key

Sign up at remlabs.ai/console for a free API key. The free tier includes 1,000 memories and 60 requests per minute.
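If you expect bursts of requests, it can help to throttle client-side to stay under the 60 requests/minute limit. A minimal sketch of the timing math (illustrative helper names, not part of any REM Labs SDK):

```typescript
// Free tier allows 60 requests per minute -> at most one per second.
const MIN_INTERVAL_MS = 60_000 / 60;

// Pure helper: given when the previous request was sent, return how
// many milliseconds to wait before sending the next one.
function delayBeforeNextRequest(lastRequestMs: number, nowMs: number): number {
  const elapsed = nowMs - lastRequestMs;
  return elapsed >= MIN_INTERVAL_MS ? 0 : MIN_INTERVAL_MS - elapsed;
}
```

Callers would `setTimeout` (or `await` a sleep) for the returned delay before issuing the next API call.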

Step 2: Add REM Labs as a Context Provider

Edit your Continue configuration file (~/.continue/config.json):

{
  "contextProviders": [
    {
      "name": "remlabs",
      "params": {
        "apiKey": "your-api-key",
        "apiBase": "https://api.remlabs.ai/v1",
        "namespace": "my-project",
        "maxResults": 5
      }
    }
  ]
}

Step 3: Use the @remlabs Context Provider

In any Continue conversation, use the @remlabs context provider to inject memories:

# Type in Continue chat:
@remlabs database setup

# Continue fetches relevant memories and includes
# them as context for the current conversation

Custom Context Provider (Advanced)

For deeper integration, create a custom context provider that automatically injects relevant memories based on the current file and conversation:

// ~/.continue/config.ts
import { ContextProviderWithParams } from "@continuedev/core";

const remlabsProvider: ContextProviderWithParams = {
  title: "remlabs",
  displayTitle: "REM Labs Memory",
  description: "Recall memories from REM Labs",
  getContextItems: async (query, extras) => {
    const resp = await fetch("https://api.remlabs.ai/v1/recall", {
      method: "POST",
      headers: {
        "Authorization": "Bearer your-api-key",
        "Content-Type": "application/json"
      },
      body: JSON.stringify({
        query: query || extras.fullInput,
        namespace: "my-project",
        limit: 5
      })
    });
    const data = await resp.json();
    return data.memories.map(m => ({
      name: m.tags?.[0] || "memory",
      description: m.content.slice(0, 100),
      content: m.content
    }));
  }
};
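The response-mapping step in the provider can be factored into a pure helper and exercised without hitting the network. This sketch assumes the same `/v1/recall` response shape used above (`memories` with `content` and optional `tags`); the type and function names are illustrative:

```typescript
// Shape assumed from the /v1/recall response used in the provider above.
interface Memory {
  content: string;
  tags?: string[];
}

interface ContextItem {
  name: string;
  description: string;
  content: string;
}

// Same mapping as in getContextItems, isolated so it can be unit-tested:
// first tag becomes the item name, the first 100 chars the description.
function memoriesToContextItems(memories: Memory[]): ContextItem[] {
  return memories.map((m) => ({
    name: m.tags?.[0] || "memory",
    description: m.content.slice(0, 100),
    content: m.content,
  }));
}
```

Keeping the mapping separate from the `fetch` call makes it easy to adjust how memories are labeled without touching the network code.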

Storing Memories from Continue

Use Continue's slash commands to store memories. Because custom run functions cannot be expressed in JSON, define the command in ~/.continue/config.ts:

// ~/.continue/config.ts (functions cannot live in config.json)
const rememberCommand = {
  name: "remember",
  description: "Store a memory in REM Labs",
  run: async function* (sdk) {
    const input = sdk.input;
    await fetch("https://api.remlabs.ai/v1/remember", {
      method: "POST",
      headers: {
        "Authorization": "Bearer your-api-key",
        "Content-Type": "application/json"
      },
      body: JSON.stringify({
        content: input,
        namespace: "my-project"
      })
    });
    yield "Memory stored successfully.";
  }
};
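If you want to reuse the storing logic outside the slash command, the request body can be built by a small pure helper. The payload fields mirror the `/v1/remember` call above; the helper name and the empty-input guard are illustrative, not part of a REM Labs SDK:

```typescript
// Build the JSON body for POST /v1/remember, mirroring the slash
// command above. Rejects blank input so empty memories are never stored.
function buildRememberBody(content: string, namespace: string): string {
  if (!content.trim()) {
    throw new Error("Refusing to store an empty memory");
  }
  return JSON.stringify({ content, namespace });
}
```

The same body string can then be passed to `fetch` from any extension point that has user input available.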

Works with any model: Because REM Labs is a context provider, it works regardless of whether you use GPT-4, Claude, Llama, or any other model with Continue. The memory layer is model-agnostic.

Give Continue a memory layer

Free tier. Open source friendly. Any LLM backend.

Get started free →