# Quick Start
Get Memori running in 5 minutes.
## 1. Install
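The install command did not survive extraction. A typical install would pull in Memori plus LiteLLM for the demo below; note that `memorisdk` as the PyPI distribution name is an assumption (the import name is `memori`) and may differ:

```bash
# Distribution name assumed; the Python import name is `memori`
pip install memorisdk litellm
```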
## 2. Set API Key
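The demo calls `gpt-4o-mini` through LiteLLM, which reads the standard OpenAI environment variable:

```bash
export OPENAI_API_KEY="sk-..."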
## 3. Basic Usage
Create `demo.py`:
```python
from memori import Memori
from litellm import completion

# Initialize memory
memori = Memori(conscious_ingest=True)
memori.enable()

# First conversation - establish context
response1 = completion(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "I'm working on a Python FastAPI project"
    }]
)
print("Assistant:", response1.choices[0].message.content)

# Second conversation - memory provides context
response2 = completion(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "Help me add user authentication"
    }]
)
print("Assistant:", response2.choices[0].message.content)
```
## 4. Run
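Run the script you just created:

```bash
python demo.py
```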
## 5. See Results
- First response: General FastAPI help
- Second response: Contextual authentication help (knows about your FastAPI project!)
- Database created: `memori.db` with your conversation memories
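If you want to confirm the database on disk, you can inspect it with Python's built-in `sqlite3` module. This is just a verification sketch; the table names are Memori internals and may vary by version:

```python
import sqlite3

# Open the database Memori created in the working directory
conn = sqlite3.connect("memori.db")

# List the tables Memori uses to store memories (names vary by version)
tables = conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'"
).fetchall()
print("Tables:", [name for (name,) in tables])

conn.close()
```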
## What Happened?
- **Universal Recording**: `memori.enable()` automatically captures all LLM conversations
- **Intelligent Processing**: Extracts entities (Python, FastAPI, projects) and categorizes memories
- **Context Injection**: The second conversation automatically includes relevant memories
- **Persistent Storage**: All memories are stored in a SQLite database for future sessions
## Next Steps
- Basic Usage - Learn core concepts
- Configuration - Customize for your needs
- Examples - Real-world use cases
**Pro Tip:** Try asking the same questions in a new session - Memori will remember your project context!
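As a sketch of what that new session might look like, using only the API shown above (the earlier conversation is recalled from `memori.db`, so no project details need to be repeated):

```python
from memori import Memori
from litellm import completion

# A brand-new process/session: enabling Memori picks up context
# persisted in memori.db from previous runs
memori = Memori(conscious_ingest=True)
memori.enable()

# No project details given here - Memori injects them from stored memories
response = completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What framework am I using?"}]
)
print("Assistant:", response.choices[0].message.content)
```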