Logseq + AI Memory: Graph-Based Knowledge Sync

Logseq stores your knowledge as local Markdown files in a graph structure. REM Labs makes that graph searchable by AI with semantic retrieval, entity extraction, and temporal awareness. This guide shows how to sync your Logseq graph to REM Labs by reading the Markdown files directly.

Why Logseq + REM Labs

Logseq is local-first and privacy-respecting, but its built-in search is limited to full-text matching. REM Labs adds semantic understanding: ask "What did I learn about database indexing?" and it finds relevant blocks even when they never mention the word "indexing" directly. Your data stays in your Logseq folder; REM is an additional layer for AI-powered retrieval.

Step 1: Locate Your Logseq Graph

```bash
# Logseq stores pages as Markdown files
# Default locations:
#   macOS:   ~/Documents/logseq-graph/pages/
#   Linux:   ~/logseq-graph/pages/
#   Windows: C:\Users\you\logseq-graph\pages\
ls ~/logseq-graph/pages/
# Database Design.md
# Meeting Notes.md
# Project Alpha.md
# ...
```

Step 2: Parse and Import

```javascript
import { RemClient } from "@remlabs/sdk";
import fs from "fs";
import os from "os";
import path from "path";

const rem = new RemClient({ apiKey: process.env.REMLABS_API_KEY });

// Node does not expand "~", so resolve the default path explicitly
const graphPath =
  process.env.LOGSEQ_GRAPH_PATH ||
  path.join(os.homedir(), "logseq-graph", "pages");

// [[Page Links]] become tags and metadata so related pages stay connected
function extractLinks(content) {
  const links = content.match(/\[\[([^\]]+)\]\]/g) || [];
  return links.map((l) => l.slice(2, -2));
}

// #hashtags become tags for filtered recall
function extractTags(content) {
  const tags = content.match(/#[\w-]+/g) || [];
  return tags.map((t) => t.slice(1));
}

const files = fs.readdirSync(graphPath).filter((f) => f.endsWith(".md"));
let count = 0;

for (const file of files) {
  const content = fs.readFileSync(path.join(graphPath, file), "utf-8");
  if (content.trim().length < 20) continue; // skip empty stub pages

  const title = path.basename(file, ".md");
  const links = extractLinks(content);
  const tags = [...extractTags(content), ...links];

  await rem.remember({
    content: `# ${title}\n\n${content}`,
    namespace: "logseq",
    tags: [...new Set(tags)],
    metadata: { title, source: "logseq", file, linked_pages: links },
  });
  count++;
}

console.log(`Imported ${count} Logseq pages`);
```

Step 3: Watch for Changes

Use file watching to sync changes as you write in Logseq:

```javascript
import chokidar from "chokidar";

const watcher = chokidar.watch(graphPath, {
  ignored: /(^|[\/\\])\../, // skip dotfiles like .logseq/
  persistent: true,
});

watcher.on("change", async (filePath) => {
  if (!filePath.endsWith(".md")) return;

  const content = fs.readFileSync(filePath, "utf-8");
  const title = path.basename(filePath, ".md");
  const links = extractLinks(content);

  await rem.remember({
    content: `# ${title}\n\n${content}`,
    namespace: "logseq",
    tags: [...new Set([...links, ...extractTags(content)])],
    metadata: {
      title,
      source: "logseq",
      file: path.basename(filePath),
      linked_pages: links,
    },
  });
  console.log(`Synced: ${title}`);
});
```
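Logseq auto-saves while you type, so a single editing session can fire many change events for the same file. A small per-file debounce batches them into one sync call; this is a sketch of one way to do it, not something the SDK provides:

```javascript
// Debounce callbacks per key (here: per file path). Each new event for the
// same key resets its timer, so fn runs once after the writes settle.
function debouncePerKey(fn, delayMs) {
  const timers = new Map();
  return (key, ...args) => {
    clearTimeout(timers.get(key));
    timers.set(
      key,
      setTimeout(() => {
        timers.delete(key);
        fn(key, ...args);
      }, delayMs)
    );
  };
}

// Usage: wrap your sync logic (syncFile is a hypothetical function holding
// the rem.remember call from the watcher above) and wait ~2s after the
// last save before re-embedding.
// const syncDebounced = debouncePerKey(syncFile, 2000);
// watcher.on("change", (p) => syncDebounced(p));
```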

Step 4: Query Your Graph

```javascript
// Ask questions about your knowledge in natural language
const results = await rem.recall({
  query: "How does our deployment pipeline work?",
  namespace: "logseq",
  limit: 5,
});

results.forEach((m) => {
  console.log(`[${m.score.toFixed(2)}] ${m.metadata.title}`);
  // linked_pages may be absent on memories synced without it
  console.log(`  Linked: ${(m.metadata.linked_pages || []).join(", ")}`);
});
```

Journal pages: Logseq journal pages (stored in `journals/`) are great for temporal queries. Import them into a separate namespace like `logseq:journals` so you can ask "What was I thinking about last Tuesday?"
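Logseq names journal files by date (`YYYY_MM_DD.md` with the default date format). Parsing the filename gives you a date you can attach as metadata when importing into the journals namespace; the `date` metadata key here is our own convention, not a documented SDK field:

```javascript
// Convert a default-format Logseq journal filename into an ISO 8601 date.
// Returns null for regular (non-journal) pages.
function journalDate(filename) {
  const m = filename.match(/^(\d{4})_(\d{2})_(\d{2})\.md$/);
  if (!m) return null;
  return `${m[1]}-${m[2]}-${m[3]}`;
}

// journalDate("2024_03_15.md") → "2024-03-15"
// Then, when importing files from journals/:
//   await rem.remember({
//     content,
//     namespace: "logseq:journals",
//     metadata: { date: journalDate(file), source: "logseq" },
//   });
```

If you have changed Logseq's journal date format in settings, adjust the regex to match.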

Add AI search to your Logseq graph

Free tier. Local file sync. Semantic search across your knowledge.

Get Started