AI Chatbots
Build chatbots with persistent memory and context awareness using Memphora
Create conversational AI applications that remember user preferences, maintain context across sessions, and provide personalized responses. Memphora's semantic search and automatic memory extraction make it easy to build intelligent chatbots that learn from every conversation.
Key Features
🧠 Persistent Memory
Remember user preferences, context, and conversation history across sessions and devices
🔍 Semantic Search
Find relevant memories using natural language queries, not just keyword matching
💬 Conversation Extraction
Automatically extract key memories from conversations without manual intervention
⚡ Fast Retrieval
3-layer caching system (L1 hot cache, L2 warm cache, L3 persistent cache) for instant results
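The 3-layer fallthrough described above can be sketched as follows. This is a minimal illustrative model, not Memphora's internal implementation: the class name, layer sizes, and promotion policy are all assumptions made for the example.

```python
# Hypothetical sketch of a 3-layer cache fallthrough (illustrative only,
# not Memphora's internal API): check the small L1 hot cache first, then
# the larger L2 warm cache, then the L3 persistent store, promoting hits
# back up so repeated lookups get progressively faster.
class LayeredCache:
    def __init__(self, l1_size=8, l2_size=64):
        self.l1, self.l2, self.l3 = {}, {}, {}
        self.l1_size, self.l2_size = l1_size, l2_size

    def put(self, key, value):
        # New entries land in the persistent layer; reads promote them.
        self.l3[key] = value

    def get(self, key):
        if key in self.l1:                      # L1 hot cache hit
            return self.l1[key]
        if key in self.l2:                      # L2 warm cache hit
            value = self.l2[key]
            self._promote(self.l1, self.l1_size, key, value)
            return value
        if key in self.l3:                      # L3 persistent hit
            value = self.l3[key]
            self._promote(self.l2, self.l2_size, key, value)
            return value
        return None

    @staticmethod
    def _promote(layer, limit, key, value):
        if len(layer) >= limit:                 # evict oldest entry
            layer.pop(next(iter(layer)))
        layer[key] = value

cache = LayeredCache()
cache.put("user:42:theme", "dark mode")
cache.get("user:42:theme")   # served from L3, promoted to L2
cache.get("user:42:theme")   # served from L2, promoted to L1
```

The design point is that each successful read warms the faster layers, so memories a chatbot touches often stay close at hand.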
API Code Examples with Metadata Types
💡 Important: Memphora is flexible - you decide the metadata types. The examples below are common patterns for chatbots, but you can define your own custom metadata types based on your application's needs. Memphora doesn't enforce any specific metadata structure.
The simple Python examples below show how to store different types of chatbot memories using the Memphora SDK. Each example demonstrates a recommended metadata structure for a different memory type.
Metadata Types
Common Chatbot Metadata Types (Examples)
- type: "preference": User preferences, likes, dislikes, and settings. Examples: "User prefers dark mode UI", "Favorite programming language: Python"
- type: "fact": Factual information about the user. Examples: "User works at Google as a software engineer", "User lives in San Francisco"
- type: "conversation": Full conversation records with context; stores complete conversation history for reference and analysis
- type: "context": Contextual information for the current session; temporary context that helps maintain conversation flow
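As a concrete illustration of the four types above, here is a minimal sketch. The `store_memory` helper and the plain in-memory list are hypothetical stand-ins, not the Memphora SDK API; a real integration would call the SDK's store method with the same payload shape (consult the SDK reference for the actual call).

```python
# Hypothetical helper illustrating the four common metadata types.
# The payload shape (content plus metadata.type) is the point; the
# storage backend here is just a list, not the Memphora SDK.
def store_memory(store, user_id, content, mtype):
    if mtype not in {"preference", "fact", "conversation", "context"}:
        raise ValueError(f"unexpected metadata type: {mtype}")
    store.append({
        "user_id": user_id,
        "content": content,
        "metadata": {"type": mtype},
    })

store = []
store_memory(store, "user-42", "User prefers dark mode UI", "preference")
store_memory(store, "user-42", "User works at Google as a software engineer", "fact")
store_memory(store, "user-42", "Full transcript of the onboarding chat", "conversation")
store_memory(store, "user-42", "Currently troubleshooting a failed deploy", "context")

# Session-scoped "context" memories can be dropped when a session ends,
# while preferences, facts, and conversations persist:
persistent = [m for m in store if m["metadata"]["type"] != "context"]
```

Because Memphora doesn't enforce a metadata schema, you could extend this with your own types (e.g. "goal" or "task") without changing anything else.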
Common Use Cases
✅ Tested & Verified Features
- ✓ Conversation memory extraction
- ✓ User preference tracking
- ✓ Context-aware responses
- ✓ Multi-turn conversation handling
- ✓ Semantic search for relevant memories
- ✓ Memory deduplication and merging
- ✓ Context compression for efficient LLM usage
- ✓ Graph relationships between memories
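Context compression, in practice, means fitting the most relevant memories into a fixed token budget before prompting the LLM. A minimal greedy sketch follows; the relevance scores and the whitespace token heuristic are illustrative assumptions, not Memphora's implementation (a real system would use the target model's tokenizer).

```python
def compress_context(scored_memories, budget_tokens):
    """Greedily pack the highest-scoring memories into a token budget.

    scored_memories: list of (relevance_score, text) pairs.
    Token counts are approximated by whitespace word count here;
    swap in the target model's tokenizer for accurate budgeting.
    """
    selected, used = [], 0
    for score, text in sorted(scored_memories, key=lambda p: -p[0]):
        cost = len(text.split())
        if used + cost <= budget_tokens:
            selected.append(text)
            used += cost
    return selected

memories = [
    (0.92, "User prefers dark mode UI"),
    (0.85, "User works at Google as a software engineer"),
    (0.40, "User mentioned liking espresso once in passing"),
]
compress_context(memories, budget_tokens=12)
```

Greedy packing is simple but can skip a high-scoring memory that doesn't fit while admitting a lower-scoring one that does; if that matters for your application, a summarization step over the overflow is a common refinement.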