Memory Layer for Dify AI Applications

Dify makes building AI apps visual and fast, but its conversation memory resets between sessions. This guide connects REM Labs to Dify using HTTP request nodes and custom tools, giving your Dify chatbots and agent apps persistent memory with multi-signal retrieval.

The Memory Gap in Dify

Dify provides conversation variables and a built-in knowledge base for document retrieval. But it has no concept of persistent conversational memory -- facts learned from one user session are not available in the next. For AI apps that need to remember user preferences, past decisions, and established context, you need an external memory layer.

Step 1: Create an API Credential in Dify

Get your REM Labs API key at remlabs.ai/console. In Dify, you will supply this key as a Bearer token in HTTP request blocks and custom tool headers.

Step 2: Add a Memory Search Tool

In your Dify app's tool configuration, create a custom API tool:

Tool Name: search_memory
Description: Search for relevant information from past conversations
API Endpoint: POST https://api.remlabs.ai/v1/memory/search
Headers:
  Authorization: Bearer sk-slop-...
  Content-Type: application/json
Request Body:
  {
    "query": "{{query}}",
    "namespace": "dify-app",
    "limit": 5
  }
Response Mapping:
  result = data.map(item => item.value).join("\n")

This tool becomes available to your Dify Agent node. When the agent determines it needs past context, it calls search_memory with the relevant query and receives matching memories.
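For reference, the request the tool issues can be sketched outside Dify. This is a minimal TypeScript sketch assuming the endpoint, field names, and response shape shown in the tool configuration above; the response is assumed to arrive as a `data` array of `{ value }` items, matching the response mapping.

```typescript
// Sketch of the search_memory tool's request, assuming the REM Labs
// endpoint and field names from the tool configuration above.
const SEARCH_URL = "https://api.remlabs.ai/v1/memory/search";

interface SearchBody {
  query: string;
  namespace: string;
  limit: number;
}

// Build the JSON body Dify sends when the agent invokes search_memory.
function buildSearchBody(query: string, namespace = "dify-app", limit = 5): SearchBody {
  return { query, namespace, limit };
}

// Mirror of the tool's response mapping: join memory values into one string.
function mapSearchResponse(data: Array<{ value: string }>): string {
  return data.map((item) => item.value).join("\n");
}

// The actual HTTP call (requires a real API key and network access).
async function searchMemory(apiKey: string, query: string): Promise<string> {
  const res = await fetch(SEARCH_URL, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(buildSearchBody(query)),
  });
  const { data } = await res.json();
  return mapSearchResponse(data);
}
```

The defaults (`dify-app` namespace, limit of 5) match the tool body above, so the agent only needs to pass a query.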

Step 3: Add a Memory Store Tool

Tool Name: store_memory
Description: Store an important fact or user preference for future reference
API Endpoint: POST https://api.remlabs.ai/v1/memory/store
Headers:
  Authorization: Bearer sk-slop-...
  Content-Type: application/json
Request Body:
  {
    "value": "{{value}}",
    "namespace": "dify-app",
    "tags": ["user-fact"]
  }

The agent can now proactively remember things. When a user mentions their name, role, company, or preferences, the agent stores it for future sessions.
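The store side has the same shape. A minimal sketch, again assuming the endpoint and field names from the tool configuration above:

```typescript
// Sketch of the store_memory tool's request, assuming the endpoint and
// field names from the tool configuration above.
const STORE_URL = "https://api.remlabs.ai/v1/memory/store";

interface StoreBody {
  value: string;
  namespace: string;
  tags: string[];
}

// Build the JSON body Dify sends when the agent invokes store_memory.
function buildStoreBody(value: string, namespace = "dify-app", tags = ["user-fact"]): StoreBody {
  return { value, namespace, tags };
}

// The actual HTTP call (requires a real API key and network access).
async function storeMemory(apiKey: string, value: string): Promise<void> {
  await fetch(STORE_URL, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(buildStoreBody(value)),
  });
}
```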

Step 4: Workflow-Based Integration

For Dify Workflow apps (not just chatbots), you can add HTTP request nodes directly in the workflow graph:

Workflow flow:
  1. Start -> HTTP Request (search REM) -> LLM Node -> HTTP Request (store to REM) -> End

HTTP Request Node (Search):
  Method: POST
  URL: https://api.remlabs.ai/v1/memory/search
  Body:
    {
      "query": "{{#start.input#}}",
      "namespace": "dify-workflow",
      "limit": 5
    }

LLM Node:
  System Prompt: "You are a helpful assistant. Previous context:\n{{#http_search.body.data#}}"
  User Input: "{{#start.input#}}"

HTTP Request Node (Store):
  Method: POST
  URL: https://api.remlabs.ai/v1/memory/store
  Body:
    {
      "value": "Q: {{#start.input#}}\nA: {{#llm.output#}}",
      "namespace": "dify-workflow"
    }

The workflow searches memory before generating a response, then stores the exchange afterward. Each run builds on previous context.
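The search-then-store loop can be sketched as one function. In this sketch, `searchFn`, `llmFn`, and `storeFn` are hypothetical stand-ins for the two HTTP request nodes and the LLM node, injected so the REM Labs calls and the model stay out of scope:

```typescript
// End-to-end sketch of the workflow above: search memory, generate a
// response with prior context, then store the Q/A exchange.
// searchFn, llmFn, and storeFn are injected stand-ins for the workflow nodes.

// Format the stored exchange exactly as the Store node's body template does.
function buildExchangeValue(question: string, answer: string): string {
  return `Q: ${question}\nA: ${answer}`;
}

async function runWorkflow(
  input: string,
  searchFn: (query: string) => Promise<string>,            // HTTP Request (search REM)
  llmFn: (system: string, user: string) => Promise<string>, // LLM node
  storeFn: (value: string) => Promise<void>,               // HTTP Request (store to REM)
): Promise<string> {
  const context = await searchFn(input);
  const system = `You are a helpful assistant. Previous context:\n${context}`;
  const answer = await llmFn(system, input);
  await storeFn(buildExchangeValue(input, answer));
  return answer;
}
```

Because each run stores its own exchange, the next run's search step retrieves it, which is what makes successive runs build on previous context.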

Per-User Memory Isolation

Use Dify's conversation variables to create per-user namespaces:

// In the search body, use the user ID as namespace
{
  "query": "{{query}}",
  "namespace": "dify-user-{{user_id}}",
  "limit": 5
}

Each user gets their own isolated memory store. Memories from one user never leak into another's retrieval results.
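Outside the template syntax, the same derivation is a one-line helper. A sketch mirroring the `dify-user-{{user_id}}` pattern above (the function names are illustrative):

```typescript
// Derive a per-user namespace, mirroring the "dify-user-{{user_id}}"
// template above. Every search and store for this user reuses it, which
// is what keeps one user's memories out of another's retrieval results.
function userNamespace(userId: string): string {
  return `dify-user-${userId}`;
}

// Example: a search body scoped to a single user's memory store.
function buildUserSearchBody(userId: string, query: string) {
  return { query, namespace: userNamespace(userId), limit: 5 };
}
```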

No Dify plugins needed: REM Labs is a standard REST API, so any Dify app can call it using the built-in HTTP request nodes. It works identically on Dify Cloud and self-hosted instances.

Give your Dify apps a memory

Free tier. No credit card. Connect via HTTP request nodes.

Get started free →