AI Memory Nodes for n8n Workflows

n8n's visual workflow builder is powerful for automation, but its AI agent nodes have no persistent memory between executions. This guide uses REM Labs HTTP Request nodes to store and retrieve context so your n8n AI workflows remember past interactions and deliver smarter responses.

The Problem: Stateless AI Agents in n8n

n8n's AI Agent node lets you build conversational workflows with tools, but each execution starts fresh. The agent has no idea what the user said yesterday or what it decided last week. For customer support bots, sales assistants, or any recurring workflow, this is a dealbreaker.

REM Labs exposes a plain REST API -- which means you can wire it into n8n using standard HTTP Request nodes. No custom code, no npm packages. Just two HTTP calls: one to search, one to store.

Step 1: Get Your API Key

Sign up at remlabs.ai/console and copy your API key. In n8n, create a new credential of type Header Auth with the header name Authorization and the value Bearer <your API key>.

Step 2: Memory Search Node

Add an HTTP Request node before your AI Agent node with these settings:

Method: POST
URL: https://api.remlabs.ai/v1/memory/search
Headers:
  Authorization: Bearer sk-slop-...
  Content-Type: application/json
Body (JSON):
{
  "query": "{{ $json.chatInput }}",
  "namespace": "n8n-support-bot",
  "limit": 5
}

This searches REM for memories relevant to whatever the user just typed. The response contains an array of memory objects with value and score fields.
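If you want to test the search call outside n8n first, here is a minimal Python sketch of what the node does. It assumes the base URL, body shape, and `data` response field described above; the key is the same placeholder used in the node settings.

```python
import json
import urllib.request

# Assumed base URL and placeholder key, matching the node settings above.
REM_API = "https://api.remlabs.ai/v1"
API_KEY = "sk-slop-..."

def build_search_body(query, namespace="n8n-support-bot", limit=5):
    """The JSON body the HTTP Request (Search) node sends."""
    return {"query": query, "namespace": namespace, "limit": limit}

def search_memories(query):
    """POST the body and return the data array of {value, score} objects."""
    req = urllib.request.Request(
        f"{REM_API}/memory/search",
        data=json.dumps(build_search_body(query)).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)["data"]
```

Once this returns results from the command line, you know the credential and namespace are correct before you debug anything inside n8n.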

Step 3: Feed Context to the AI Agent

Connect the HTTP Request output to a Set node that formats the context, then pipe it into the AI Agent's system message:

// Set node expression for system message:
You are a support assistant. Here is context from previous interactions:
{{ $json.data.map(m => m.value).join('\n') }}
Use this context to give personalized, informed responses.

The AI Agent now receives relevant history from past conversations as part of its system prompt. It can reference previous tickets, known preferences, and established facts.
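The Set node expression above is just string formatting; a Python equivalent makes the transformation explicit (the memory objects' value field is taken from the search response described earlier):

```python
def build_system_message(memories):
    """Python equivalent of the Set node expression:
    join each memory's value into the agent's system prompt."""
    context = "\n".join(m["value"] for m in memories)
    return (
        "You are a support assistant. "
        "Here is context from previous interactions:\n"
        f"{context}\n"
        "Use this context to give personalized, informed responses."
    )
```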

Step 4: Memory Store Node

Add another HTTP Request node after the AI Agent to persist the exchange:

Method: POST
URL: https://api.remlabs.ai/v1/memory/store
Headers:
  Authorization: Bearer sk-slop-...
  Content-Type: application/json
Body (JSON):
{
  "value": "User: {{ $('AI Agent').first().json.chatInput }}\nAssistant: {{ $('AI Agent').first().json.output }}",
  "namespace": "n8n-support-bot",
  "tags": ["support", "session"]
}

Every exchange is now stored persistently. The next time the workflow runs, the search node retrieves relevant past context automatically.
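The store body mirrors the search body, with the user turn and assistant turn folded into a single value string. A small sketch of the payload construction, assuming the field names shown in the node settings above:

```python
def build_store_body(chat_input, output, namespace="n8n-support-bot"):
    """The JSON body the HTTP Request (Store) node sends:
    one memory per exchange, tagged for later filtering."""
    return {
        "value": f"User: {chat_input}\nAssistant: {output}",
        "namespace": namespace,
        "tags": ["support", "session"],
    }
```

Keeping both turns in one memory means a later search can match on either what the user asked or what the assistant promised.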

Example: Customer Support Workflow

A complete memory-enhanced support workflow in n8n looks like:

  1. Webhook Trigger -- receives chat input
  2. HTTP Request (Search) -- retrieves relevant memories from REM
  3. Set Node -- formats context for the system prompt
  4. AI Agent -- generates a response with context
  5. HTTP Request (Store) -- persists the exchange to REM
  6. Respond to Webhook -- returns the response

Six nodes. No code. Your support bot now remembers every customer interaction across sessions.
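To see how the six nodes compose, here is the whole loop as plain Python, with in-memory stubs standing in for REM and the AI Agent (the naive keyword search and the canned agent reply are illustrations only, not REM's actual behavior):

```python
# In-memory stand-ins for REM and the AI Agent, to show node order only.
MEMORIES = []

def search(query, limit=5):
    # Naive keyword match standing in for REM's semantic search.
    words = query.lower().split()
    return [m for m in MEMORIES
            if any(w in m["value"].lower() for w in words)][:limit]

def store(value):
    MEMORIES.append({"value": value})

def ai_agent(system, user):
    # Stub for the AI Agent node.
    return f"(reply to: {user})"

def handle_message(chat_input):
    """One webhook execution: search -> format -> agent -> store -> respond."""
    context = "\n".join(m["value"] for m in search(chat_input))
    system = "You are a support assistant. Context:\n" + context
    output = ai_agent(system, chat_input)
    store(f"User: {chat_input}\nAssistant: {output}")
    return output
```

Each call to handle_message corresponds to one workflow execution: the second call already sees the first exchange in its context, which is exactly the behavior the search and store nodes give your n8n workflow.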

Works with n8n Cloud and self-hosted: Since REM is called via standard HTTP, it works identically on n8n Cloud and self-hosted instances. No custom node packages to install.

Give your n8n workflows a memory

Free tier. No credit card. Just add an HTTP Request node.

Get started free →