Integration
Tutorial
April 13, 2026
Add Persistent Memory to Flowise Chatbots
Flowise makes building LLM chatflows visual and fast, but its memory options are limited to the current session. This guide uses REM Labs as an external memory backend so your Flowise chatbots remember users across sessions, recall past conversations, and personalize every response.
Why Flowise Needs External Memory
Flowise ships with Buffer Memory, Window Memory, and Conversation Summary Memory. All of them store data in-process: restart Flowise, move to a new container, or scale horizontally, and the memory is gone. For production chatbots that handle real users, you need a persistent memory layer with semantic search.
Step 1: Get Your API Key
Sign up at remlabs.ai/console and copy your API key. You will use this in Flowise's Custom Tool nodes.
Step 2: Add a Memory Search Tool
In your Flowise chatflow, add a Custom Tool node connected to your Agent. Configure it as follows:
Tool Name: search_memory
Tool Description: Search past conversations and stored facts about the user.
Input Schema:
{
  "type": "object",
  "properties": {
    "query": {
      "type": "string",
      "description": "What to search for in memory"
    }
  },
  "required": ["query"]
}
JavaScript Function:
const response = await fetch('https://api.remlabs.ai/v1/memory/search', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer sk-slop-...', // your REM Labs API key
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    query: $query, // Flowise injects the tool's "query" input here
    namespace: 'flowise-chatbot',
    limit: 5
  })
});
if (!response.ok) {
  return 'Memory search failed: ' + response.status;
}
const data = await response.json();
return data.map(m => m.value).join('\n');
The agent will automatically call this tool when it decides it needs context from previous conversations.
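The tool function above assumes the search endpoint returns a bare array of objects with a `value` field, and will throw if the response is empty or shaped differently. If you want the tool to degrade gracefully, a small formatting helper (the name and fallback message are illustrative, not part of the REM API) keeps the agent's context clean:

```javascript
// Hypothetical helper: normalize the search response before returning it
// to the agent. Assumes memories are { value: string } objects, matching
// the tool function above.
function formatMemories(data) {
  if (!Array.isArray(data) || data.length === 0) {
    return 'No relevant memories found.';
  }
  return data
    .map(m => (m && typeof m.value === 'string' ? m.value.trim() : ''))
    .filter(v => v.length > 0)
    .join('\n');
}
```

Returning a plain "no results" string instead of an empty reply gives the agent an explicit signal that it already checked memory, which helps avoid repeated tool calls.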
Step 3: Add a Memory Store Tool
Add a second Custom Tool node connected to the same Agent and configure it as follows:
Tool Name: store_memory
Tool Description: Save an important fact or observation about the user for future reference.
Input Schema:
{
  "type": "object",
  "properties": {
    "value": {
      "type": "string",
      "description": "The fact or observation to remember"
    }
  },
  "required": ["value"]
}
JavaScript Function:
const response = await fetch('https://api.remlabs.ai/v1/memory/store', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer sk-slop-...', // your REM Labs API key
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    value: $value, // Flowise injects the tool's "value" input here
    namespace: 'flowise-chatbot',
    tags: ['user-fact']
  })
});
if (!response.ok) {
  return 'Failed to store memory: ' + response.status;
}
return 'Memory stored successfully.';
Now the agent can proactively store facts it learns about users: preferences, names, past issues, purchase history. The tool description alone is often enough to trigger this, and the system prompt in the next step makes it reliable.
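Agents sometimes emit empty, whitespace-padded, or very long observations. A small guard before the store call keeps the namespace clean; this is a sketch, and the function name and length cap are mine, not part of the REM API:

```javascript
// Hypothetical pre-store guard: collapse whitespace, drop empty facts,
// and cap very long observations. The 500-character limit is illustrative.
function normalizeFact(value, maxLength = 500) {
  if (typeof value !== 'string') return null;
  const trimmed = value.trim().replace(/\s+/g, ' ');
  if (trimmed.length === 0) return null; // nothing worth storing
  return trimmed.length > maxLength ? trimmed.slice(0, maxLength) : trimmed;
}
```

In the store tool you would call `normalizeFact($value)` first and skip the fetch when it returns null.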
Step 4: System Prompt Configuration
Update the Agent's system prompt to encourage memory usage:
You are a helpful customer assistant with access to memory tools.
At the start of each conversation, use search_memory to check
for relevant past context about this user.
When you learn something important about the user (name, preferences,
past issues), use store_memory to save it for future conversations.
Always provide personalized responses based on what you remember.
Alternative: API-Level Integration
If you prefer to call REM outside of agent tools, you can use Flowise's API to inject context before the chatflow runs:
// Before calling Flowise, search REM for context
const memories = await fetch('https://api.remlabs.ai/v1/memory/search', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer sk-slop-...', // your REM Labs API key
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    query: userMessage,
    namespace: 'flowise-chatbot',
    limit: 5
  })
}).then(r => r.json());
// Pass context as an override to Flowise
const result = await fetch('http://localhost:3000/api/v1/prediction/your-chatflow-id', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    question: userMessage,
    overrideConfig: {
      systemMessage: `Context from memory:\n${memories.map(m => m.value).join('\n')}`
    }
  })
});
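Separating the payload construction from the network calls makes this glue code easy to unit-test. Here is a sketch of a pure builder for the Flowise prediction request body; the function name is mine, and it assumes memories are `{ value: string }` objects as in the search code above:

```javascript
// Hypothetical helper: build the Flowise prediction payload from the user
// message and the memories returned by REM search. Pure function, no I/O.
function buildPredictionPayload(userMessage, memories) {
  const context = (memories || [])
    .map(m => m && m.value)
    .filter(Boolean)
    .join('\n');
  return {
    question: userMessage,
    overrideConfig: {
      systemMessage: `Context from memory:\n${context}`
    }
  };
}
```

You would then pass `JSON.stringify(buildPredictionPayload(userMessage, memories))` as the body of the prediction request.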
Triple-indexed: Every memory is indexed with vector embeddings, full-text search, and entity graph extraction. Multi-signal fusion retrieval reaches 90% accuracy on LongMemEval, far beyond what in-process buffer memory achieves.
Give your Flowise chatbot a memory
Free tier. No credit card. Just add a Custom Tool node.
Get started free →