Offline Setup
Run Fully Local
Choose a local stack you can demo reliably. Both options keep stored data in a local vector database; only Option 2 also removes the cloud inference dependency.
Option 1: Local DB + Azure OpenAI
Use a local ChromaDB vector database with Azure OpenAI for model inference.
```bash
# Python backend
pip install -r requirements.txt

# Start local ChromaDB and other services
docker compose up -d

# Run backend
uvicorn secondcortex-backend.main:app --reload --port 8000
```
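To make the wiring concrete, here is a minimal sketch of Option 1: retrieval against the local ChromaDB container, inference through Azure OpenAI. The collection name, deployment name, and environment variable names are illustrative placeholders, not the project's actual configuration.

```python
# Sketch only: local vector search + Azure OpenAI inference.
# Collection, deployment, and env var names are illustrative placeholders.
import os

import chromadb
from openai import AzureOpenAI

# Local ChromaDB started by docker compose (port taken from the docs above)
chroma = chromadb.HttpClient(host="localhost", port=8000)
collection = chroma.get_or_create_collection("demo_notes")

# Azure OpenAI performs inference; stored vectors never leave the machine
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

hits = collection.query(query_texts=["what did I save about demos?"], n_results=3)
context = "\n".join(doc for docs in hits["documents"] for doc in docs)
reply = client.chat.completions.create(
    model="my-gpt4o-deployment",  # placeholder: your Azure deployment name
    messages=[{"role": "user", "content": f"Context:\n{context}\n\nSummarize."}],
)
print(reply.choices[0].message.content)
```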
Installation Instructions
ChromaDB Setup: ChromaDB runs automatically via Docker Compose. Verify it is running at http://localhost:8000
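A quick way to confirm the store is reachable is the Chroma client's heartbeat call, sketched below. Note that the backend command above also binds port 8000, so adjust one of the ports if your compose file maps them differently.

```python
# Liveness check for the local ChromaDB container.
# Port assumed from the docs above; adjust if your compose file remaps it
# (the uvicorn backend example also binds 8000, so they cannot share a port).
import chromadb

client = chromadb.HttpClient(host="localhost", port=8000)
print("ChromaDB heartbeat (ns):", client.heartbeat())  # raises if unreachable
```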
Option 2: Fully Local (ChromaDB + Ollama)
Run both storage and model inference locally for a fully offline demo path with zero cloud dependencies.
```bash
# Step 1: Install Ollama (see ollama.com), then pull the model (first time only)
ollama pull llama3.1

# Step 2: Start the Ollama server (runs on http://localhost:11434)
ollama serve

# Step 3: In a new terminal, start local services
docker compose up -d

# Step 4: Run the backend with the local model provider env variable
# Windows (PowerShell)
$env:SECOND_CORTEX_MODEL_PROVIDER = 'ollama'
uvicorn secondcortex-backend.main:app --reload --port 8000

# macOS/Linux
export SECOND_CORTEX_MODEL_PROVIDER=ollama
uvicorn secondcortex-backend.main:app --reload --port 8000
```
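With the provider switched to Ollama, the backend's model requests stay on localhost. For reference, a minimal call against Ollama's documented /api/generate route looks like this (the prompt is just an example):

```python
# Minimal request to the local Ollama REST API (documented /api/generate route).
# Nothing leaves the machine; the model was pulled in Step 1.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.1",
        "prompt": "Say hello in one sentence.",
        "stream": False,  # one JSON object instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```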
Installation Instructions
Ollama Setup:
- Download and install Ollama from ollama.com
- Run `ollama pull llama3.1` to download the model (first time only)
- Start the server with `ollama serve`
- Verify it is running at http://localhost:11434 (a quick check is sketched below)
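As referenced in the last step, a small script can confirm both that the server answers and that the model was pulled, using Ollama's documented /api/tags listing:

```python
# Checks that the Ollama server is reachable and that llama3.1 is present.
# /api/tags is Ollama's documented endpoint for listing local models.
import requests

tags = requests.get("http://localhost:11434/api/tags", timeout=5)
tags.raise_for_status()
models = [m["name"] for m in tags.json().get("models", [])]
print("Ollama is up; local models:", models)
if not any(name.startswith("llama3.1") for name in models):
    print("llama3.1 not found; run: ollama pull llama3.1")
```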
ChromaDB Setup: ChromaDB runs automatically via Docker Compose. Verify it is running at http://localhost:8000