# Agent Integration Guide

A complete guide to integrating Raven with your AI agents.

## Overview

This guide walks through integrating Raven memory into your AI agent's workflow. By the end, your agent will be able to remember user preferences, conversation context, and learned patterns across sessions.
## Integration Architecture

```text
User Message
      ↓
┌─────────────────────────────────┐
│           Your Agent            │
│                                 │
│  1. Receive user message        │
│  2. Query Raven for context ←───┼── Raven API
│  3. Build prompt with context   │
│  4. Call LLM                    │
│  5. Store interaction ──────────┼─→ Raven API
│  6. Return response             │
│                                 │
└─────────────────────────────────┘
      ↓
Agent Response
```

## Initial Setup
### 1. Register Your Tenant

First, register your application to get an API key:

```bash
curl -X POST http://localhost:3000/api/v1/tenants \
  -H "Content-Type: application/json" \
  -d '{"name": "My AI Agent", "email": "dev@example.com"}'
```

### 2. Create a Client Wrapper

Create a simple client to interact with Raven:

```typescript
interface RavenConfig {
  apiKey: string;
  baseUrl: string;
}

export interface MemoryContext {
  context: Array<{
    type: 'episodic' | 'semantic';
    content: any;
    relevance_score: number;
  }>;
  facts: string[];
}

export class RavenClient {
  private apiKey: string;
  private baseUrl: string;

  constructor(config: RavenConfig) {
    this.apiKey = config.apiKey;
    this.baseUrl = config.baseUrl;
  }

  private async request(endpoint: string, options: RequestInit = {}) {
    const response = await fetch(`${this.baseUrl}${endpoint}`, {
      ...options,
      headers: {
        'Authorization': `Bearer ${this.apiKey}`,
        'Content-Type': 'application/json',
        ...options.headers,
      },
    });
    if (!response.ok) {
      throw new Error(`Raven API error: ${response.status}`);
    }
    return response.json();
  }

  async createUser(externalRef: string, displayName?: string) {
    return this.request('/api/v1/users', {
      method: 'POST',
      body: JSON.stringify({ external_ref: externalRef, display_name: displayName }),
    });
  }

  async getOrCreateUser(externalRef: string) {
    // Try to find an existing user first
    const users = await this.request(
      `/api/v1/users?external_ref=${encodeURIComponent(externalRef)}`
    );
    if (users.users?.length > 0) {
      return users.users[0];
    }
    // Fall back to creating a new user
    return this.createUser(externalRef);
  }

  async createConversation(userId: string, title?: string) {
    return this.request('/api/v1/conversations', {
      method: 'POST',
      body: JSON.stringify({ user_id: userId, title }),
    });
  }

  async queryMemory(userId: string, conversationId: string, query: string): Promise<MemoryContext> {
    return this.request('/api/v1/memory/query', {
      method: 'POST',
      body: JSON.stringify({
        user_id: userId,
        conversation_id: conversationId,
        query,
        include_facts: true,
      }),
    });
  }

  async ingestMemory(userId: string, conversationId: string, userMessage: string, agentResponse: string) {
    return this.request('/api/v1/memory/ingest', {
      method: 'POST',
      body: JSON.stringify({
        user_id: userId,
        conversation_id: conversationId,
        user_message: userMessage,
        agent_response: agentResponse,
      }),
    });
  }

  async flushMemory(userId: string, conversationId: string) {
    return this.request('/api/v1/memory/flush', {
      method: 'POST',
      body: JSON.stringify({ user_id: userId, conversation_id: conversationId }),
    });
  }
}
```

## Agent Implementation
### 3. Integrate Memory into Your Agent

```typescript
import { RavenClient } from './raven-client';
import OpenAI from 'openai';

const raven = new RavenClient({
  apiKey: process.env.RAVEN_API_KEY!,
  baseUrl: process.env.RAVEN_URL || 'http://localhost:3000',
});

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

interface ConversationState {
  userId: string;
  conversationId: string;
}

export async function initializeConversation(externalUserId: string): Promise<ConversationState> {
  // Get or create the user
  const user = await raven.getOrCreateUser(externalUserId);

  // Create a new conversation
  const conversation = await raven.createConversation(
    user.user_id,
    `Session ${new Date().toISOString()}`
  );

  return {
    userId: user.user_id,
    conversationId: conversation.conversation_id,
  };
}

export async function handleMessage(
  state: ConversationState,
  userMessage: string
): Promise<string> {
  // Step 1: Query Raven for relevant context
  const memoryContext = await raven.queryMemory(
    state.userId,
    state.conversationId,
    userMessage
  );

  // Step 2: Build the prompt with memory context
  const systemPrompt = buildSystemPrompt(memoryContext);

  // Step 3: Call the LLM
  const completion = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: [
      { role: 'system', content: systemPrompt },
      { role: 'user', content: userMessage },
    ],
  });
  const agentResponse = completion.choices[0].message.content || '';

  // Step 4: Store the interaction in memory
  await raven.ingestMemory(
    state.userId,
    state.conversationId,
    userMessage,
    agentResponse
  );

  return agentResponse;
}

function buildSystemPrompt(context: any): string {
  let prompt = `You are a helpful AI assistant with persistent memory.

## Known Facts About This User
`;

  // Add extracted facts
  if (context.facts?.length > 0) {
    context.facts.forEach((fact: string) => {
      prompt += `- ${fact}\n`;
    });
  } else {
    prompt += `- No specific facts known yet\n`;
  }

  // Add relevant episodic memory
  if (context.context?.length > 0) {
    prompt += `\n## Relevant Past Conversations\n`;
    context.context.slice(0, 5).forEach((memory: any) => {
      if (memory.type === 'episodic' && memory.content) {
        prompt += `User: ${memory.content.user_message}\n`;
        prompt += `Assistant: ${memory.content.agent_response}\n\n`;
      }
    });
  }

  prompt += `\n## Instructions
- Use the context above to provide personalized responses
- Remember user preferences mentioned in past conversations
- Be consistent with previous interactions
`;

  return prompt;
}

export async function endConversation(state: ConversationState) {
  // Flush any remaining buffered memory
  await raven.flushMemory(state.userId, state.conversationId);
}
```

## Usage Example
```typescript
import { initializeConversation, handleMessage, endConversation } from './agent';

async function main() {
  // Initialize a conversation for the user
  const state = await initializeConversation('user-123');
  console.log('Started conversation:', state.conversationId);

  // First interaction
  const response1 = await handleMessage(
    state,
    "Hi! I'm working on a TypeScript project and prefer functional programming."
  );
  console.log('Agent:', response1);

  // Second interaction - the agent should remember the stated preferences
  const response2 = await handleMessage(
    state,
    "Can you show me how to implement a map function?"
  );
  console.log('Agent:', response2);
  // The agent should answer in TypeScript, in a functional style

  // End the conversation and flush memory
  await endConversation(state);
}

main().catch(console.error);
```

## Best Practices
### Query Before Responding

Always query memory before generating a response. Context retrieval is fast and dramatically improves response quality.
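If you are concerned about the added latency on the critical path, one option is to bound the memory lookup with a timeout and fall back to an empty context. This is a sketch; `withTimeout` is a hypothetical helper, not part of the Raven client:

```typescript
// Race a promise against a timer; resolve to a fallback value on timeout.
// withTimeout is a hypothetical helper, not part of the Raven client API.
async function withTimeout<T>(promise: Promise<T>, ms: number, fallback: T): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<T>((resolve) => {
    timer = setTimeout(() => resolve(fallback), ms);
  });
  try {
    return await Promise.race([promise, timeout]);
  } finally {
    if (timer !== undefined) clearTimeout(timer);
  }
}

// Example: cap the memory query at 500 ms, falling back to no context.
// const memoryContext = await withTimeout(
//   raven.queryMemory(state.userId, state.conversationId, userMessage),
//   500,
//   { context: [], facts: [] }
// );
```

The 500 ms budget is an arbitrary choice; tune it to your latency requirements.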
### Store Every Interaction

Ingest all interactions, not just "important" ones. Raven's semantic analysis will extract what matters.
### Flush on Session End

Call the flush endpoint when a conversation ends to ensure all buffered memory is persisted.
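One way to guarantee the flush runs even when a handler throws is a try/finally wrapper. This is a sketch; `withConversation` is a hypothetical helper whose `init`/`end` parameters mirror `initializeConversation`/`endConversation`:

```typescript
// Run a conversation body and always run the end/flush step afterwards,
// whether the body succeeds or throws. withConversation is hypothetical.
interface SessionState {
  userId: string;
  conversationId: string;
}

async function withConversation<T>(
  init: () => Promise<SessionState>,
  end: (state: SessionState) => Promise<void>,
  body: (state: SessionState) => Promise<T>
): Promise<T> {
  const state = await init();
  try {
    return await body(state);
  } finally {
    await end(state); // the flush runs on success and on error alike
  }
}
```

Usage would pass `initializeConversation` and `endConversation` as `init` and `end`, keeping the flush out of every call site.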
### Use Meaningful Queries

Query with the actual user message, or with a summary of the context you need. Semantic search will find relevant memories.
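For very long user messages, you may want to trim the query text before sending it to semantic search. The sketch below truncates at a word boundary; the 500-character cap is an arbitrary assumption, not a documented Raven limit:

```typescript
// Truncate long messages at a word boundary so the query stays focused.
// The 500-character default is an arbitrary choice, not a Raven limit.
function buildQueryText(userMessage: string, maxChars = 500): string {
  const trimmed = userMessage.trim();
  if (trimmed.length <= maxChars) return trimmed;
  const cut = trimmed.slice(0, maxChars);
  const lastSpace = cut.lastIndexOf(' ');
  return lastSpace > 0 ? cut.slice(0, lastSpace) : cut;
}
```

You could then call `raven.queryMemory(userId, conversationId, buildQueryText(userMessage))` instead of passing the raw message.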
### Leverage Metadata

Use conversation metadata for categorization (project, topic, priority) to help with organization.
### Secure Your API Key

Never expose your API key in client-side code. Always call Raven from your backend.
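On the backend, it also helps to fail fast at startup if the key is missing, rather than sending unauthenticated requests later. A minimal sketch; `requireEnv` is a hypothetical helper:

```typescript
// Read a required environment variable, throwing at startup if it is absent.
// requireEnv is a hypothetical helper, not part of the Raven client.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// const raven = new RavenClient({
//   apiKey: requireEnv('RAVEN_API_KEY'),
//   baseUrl: process.env.RAVEN_URL || 'http://localhost:3000',
// });
```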
## Error Handling

Wrap memory calls so that a Raven outage degrades the experience gracefully instead of failing the whole request:

```typescript
export async function handleMessageSafely(
  state: ConversationState,
  userMessage: string
): Promise<string> {
  let memoryContext: { context: any[]; facts: string[] } = { context: [], facts: [] };

  // Query memory, falling back to an empty context on failure
  try {
    memoryContext = await raven.queryMemory(
      state.userId,
      state.conversationId,
      userMessage
    );
  } catch (error) {
    console.warn('Memory query failed, continuing without context:', error);
    // Continue without memory - the agent still works, just less personalized
  }

  // Generate the response (generateResponse wraps the LLM call, as in handleMessage)
  const response = await generateResponse(userMessage, memoryContext);

  // Store the interaction, tolerating failures
  try {
    await raven.ingestMemory(
      state.userId,
      state.conversationId,
      userMessage,
      response
    );
  } catch (error) {
    console.warn('Memory ingestion failed:', error);
    // Queue for retry or log for manual intervention
  }

  return response;
}
```
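The "queue for retry" comment above can be fleshed out with a small exponential-backoff helper. This is a sketch; `retryWithBackoff` is hypothetical and the attempt counts and delays are arbitrary defaults:

```typescript
// Retry an async operation with exponential backoff before giving up.
// Delays between attempts: baseMs, baseMs*2, baseMs*4, ... (arbitrary defaults).
async function retryWithBackoff<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseMs = 200
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (i < attempts - 1) {
        await new Promise((resolve) => setTimeout(resolve, baseMs * 2 ** i));
      }
    }
  }
  throw lastError; // rethrow the final error once all attempts are exhausted
}

// Example: retry memory ingestion a few times before logging a failure.
// await retryWithBackoff(() =>
//   raven.ingestMemory(state.userId, state.conversationId, userMessage, response)
// );
```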