Migrate from Pinecone to REM Labs

Pinecone is a vector database. REM Labs is an AI memory system. This guide walks through exporting your Pinecone vectors and importing them into REM Labs, where they gain full-text search, entity extraction, temporal decay, and neural reranking on top of vector similarity.

Why Migrate

Pinecone returns results by vector similarity alone. REM Labs layers full-text search, entity extraction, temporal decay, and neural reranking on top of vector similarity, and it generates embeddings for you, so you no longer maintain an embedding pipeline or call a separate embeddings API on every query.

Step 1: Export from Pinecone

Use the Pinecone client to fetch all vectors with their metadata:

```typescript
import { Pinecone } from "@pinecone-database/pinecone";
import fs from "fs";

const pc = new Pinecone({ apiKey: process.env.PINECONE_API_KEY });
const index = pc.index("your-index");

// List all vector IDs, following the pagination token until exhausted
// (listPaginated returns one page at a time, not an async iterator)
const allIds = [];
let paginationToken;
do {
  const page = await index.listPaginated({ paginationToken });
  allIds.push(...(page.vectors ?? []).map((v) => v.id));
  paginationToken = page.pagination?.next;
} while (paginationToken);

// Fetch vectors in batches of 100 and keep their text + metadata
const exported = [];
for (let i = 0; i < allIds.length; i += 100) {
  const batch = allIds.slice(i, i + 100);
  const response = await index.fetch(batch);
  for (const [id, vec] of Object.entries(response.records)) {
    exported.push({
      id,
      text: vec.metadata?.text || "",
      metadata: vec.metadata || {},
    });
  }
}

fs.writeFileSync("pinecone-export.json", JSON.stringify(exported, null, 2));
console.log(`Exported ${exported.length} vectors`);
```
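The index-slicing loop above can be factored into a small reusable helper. A minimal sketch (the helper name is ours, not part of either SDK):

```typescript
// Split an array into fixed-size batches. Used here to group vector IDs
// before fetching; the SDK call itself is omitted so the sketch stays
// self-contained.
function toBatches<T>(items: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}
```

With this helper, the fetch loop becomes `for (const batch of toBatches(allIds, 100)) { ... }`, which keeps the batch size in one place.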

Step 2: Import into REM Labs

Read the export and push each record to REM Labs. You send raw text; REM handles embedding automatically:

```typescript
import { RemClient } from "@remlabs/sdk";
import fs from "fs";

const rem = new RemClient({ apiKey: process.env.REMLABS_API_KEY });
const data = JSON.parse(fs.readFileSync("pinecone-export.json", "utf-8"));

let imported = 0;
for (const record of data) {
  // Skip records that have no text payload to embed
  if (!record.text) continue;

  await rem.remember({
    content: record.text,
    namespace: "pinecone-import",
    metadata: { pinecone_id: record.id, ...record.metadata },
    tags: record.metadata?.tags || [],
  });

  imported++;
  if (imported % 100 === 0) {
    console.log(`Imported ${imported}/${data.length}`);
  }
}

console.log(`Migration complete: ${imported} memories imported`);
```
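Large imports can hit transient failures or rate limits. One option is to wrap each write in a retry with exponential backoff; this is a generic sketch (the attempt counts and delays are assumptions, not documented REM Labs limits):

```typescript
// Retry an async operation with exponential backoff. Each failure
// doubles the wait before the next attempt; the last error is rethrown
// once attempts are exhausted.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 200
): Promise<T> {
  let lastErr: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
    }
  }
  throw lastErr;
}
```

In the import loop you would then call `await withRetry(() => rem.remember({ ... }))` instead of calling `rem.remember` directly.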

Step 3: Verify the Migration

Run a few test queries to compare results between Pinecone and REM:

```typescript
// Test query against REM Labs
const results = await rem.recall({
  query: "What is our refund policy?",
  namespace: "pinecone-import",
  limit: 5,
});

results.forEach((m, i) => {
  console.log(`${i + 1}. [${m.score.toFixed(3)}] ${m.content.slice(0, 100)}...`);
});
```
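For a quick parity check, you can run the same query against both systems and measure how many of Pinecone's top results also come back from REM (matching on the `pinecone_id` stored in metadata during import). A minimal sketch of the overlap metric (the helper name is ours):

```typescript
// Fraction of Pinecone's top-k result IDs that also appear in REM's
// results. 1.0 means full overlap; lower values are expected since REM
// blends additional ranking signals beyond vector similarity.
function overlapAtK(pineconeIds: string[], remIds: string[]): number {
  const remSet = new Set(remIds);
  const hits = pineconeIds.filter((id) => remSet.has(id)).length;
  return pineconeIds.length ? hits / pineconeIds.length : 0;
}
```

Some divergence is normal: REM's reranking can surface results that pure cosine similarity ranks lower.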

Step 4: Update Your Application Code

Replace Pinecone query calls with REM Labs recall. The key difference: you send natural language queries instead of pre-computed embeddings.

```typescript
// Before (Pinecone)
const embedding = await openai.embeddings.create({
  model: "text-embedding-3-small",
  input: userQuery,
});
const results = await index.query({
  vector: embedding.data[0].embedding,
  topK: 5,
  includeMetadata: true,
});
```

```typescript
// After (REM Labs)
const results = await rem.recall({
  query: userQuery,
  namespace: "pinecone-import",
  limit: 5,
});
```

No embedding calls needed: REM Labs generates embeddings internally. This eliminates the OpenAI embeddings API call from your query path, reducing both latency and cost.
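If you are migrating call sites gradually, a small adapter that translates Pinecone-style options into recall arguments can reduce churn. A sketch under stated assumptions: the recall fields (`query`, `namespace`, `limit`) follow this guide's examples, while the adapter itself is hypothetical:

```typescript
// Shape of the old Pinecone-style call options we still pass around
interface PineconeQueryOptions {
  topK: number;
  includeMetadata?: boolean;
}

// Shape of the recall arguments used elsewhere in this guide
interface RecallOptions {
  query: string;
  namespace: string;
  limit: number;
}

// Map old-style options onto recall arguments. The raw user query
// replaces the pre-computed embedding; topK becomes limit.
function toRecallOptions(
  userQuery: string,
  opts: PineconeQueryOptions,
  namespace = "pinecone-import"
): RecallOptions {
  return { query: userQuery, namespace, limit: opts.topK };
}
```

Call sites then change from `index.query({...})` to `rem.recall(toRecallOptions(userQuery, oldOpts))` without restructuring the surrounding code.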

Move beyond vector-only search

Free tier. No embedding management. Multi-signal retrieval out of the box.

Get Started