Run Fully Local

Choose a local stack for demos. Both options keep your data in a local ChromaDB instance; Option 1 still calls Azure OpenAI for model inference, while Option 2 runs entirely offline with no cloud dependency.

Option 1: Local DB + Azure OpenAI

Use a local ChromaDB vector database with Azure OpenAI for model inference.
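
The wiring looks roughly like the sketch below. The collection name and environment variable names are illustrative assumptions, not the repo's actual identifiers:

import os

import chromadb
from openai import AzureOpenAI

# Vector store: the ChromaDB container started by Docker Compose (default port 8000)
chroma = chromadb.HttpClient(host="localhost", port=8000)
collection = chroma.get_or_create_collection("notes")  # hypothetical collection name

# Model inference: Azure OpenAI (these env var names are assumptions)
llm = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)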

# Install Python backend dependencies
pip install -r requirements.txt

# Start local ChromaDB and other services
docker compose up -d

# Run backend on port 8001 (ChromaDB occupies 8000)
uvicorn secondcortex_backend.main:app --reload --port 8001

Installation Instructions

ChromaDB Setup: ChromaDB runs automatically via Docker Compose. Verify it is running at http://localhost:8000 (Chroma's default port).
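
A quick scripted check, assuming the chromadb Python package from requirements.txt is available:

import chromadb

# heartbeat() returns a nanosecond timestamp when the server is reachable
client = chromadb.HttpClient(host="localhost", port=8000)
print(client.heartbeat())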

Option 2: Fully Local (ChromaDB + Ollama)

Run both storage and model inference locally for a fully offline demo path with zero cloud dependencies.

# Step 1: Install Ollama, then start the server (runs on http://localhost:11434)
ollama serve

# Step 2: In a new terminal, pull the model (first time only)
ollama pull llama3.1

# Step 3: In a new terminal, start local services
docker compose up -d

# Step 4: Run backend with the local model provider environment variable set
# Windows (PowerShell)
$env:SECOND_CORTEX_MODEL_PROVIDER = 'ollama'
uvicorn secondcortex_backend.main:app --reload --port 8001

# macOS/Linux
export SECOND_CORTEX_MODEL_PROVIDER=ollama
uvicorn secondcortex_backend.main:app --reload --port 8001
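
Under the hood, a provider switch like this usually amounts to pointing an OpenAI-compatible client at Ollama's local /v1 endpoint. The factory below is a sketch of that pattern, not the backend's actual code:

import os

from openai import AzureOpenAI, OpenAI

def make_llm_client():
    # Hypothetical factory mirroring the SECOND_CORTEX_MODEL_PROVIDER switch
    if os.getenv("SECOND_CORTEX_MODEL_PROVIDER") == "ollama":
        # Ollama serves an OpenAI-compatible API under /v1; the api_key
        # is required by the client but ignored by Ollama
        return OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
    return AzureOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-02-01",
    )

With a client built this way, the rest of the backend can call client.chat.completions.create(...) the same way in both modes, passing the model name for Ollama or the deployment name for Azure.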

Installation Instructions

Ollama Setup:

  1. Download and install Ollama from ollama.com
  2. Start the server with ollama serve (the desktop app may already run it in the background)
  3. In a new terminal, run ollama pull llama3.1 to download the model (first time only)
  4. Verify it is running at http://localhost:11434 (see the check below)
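
One way to script that verification; /api/tags is Ollama's standard model-listing endpoint (assumes the requests package is installed):

import requests

# Lists the locally available models; raises if the server is not up
resp = requests.get("http://localhost:11434/api/tags", timeout=5)
resp.raise_for_status()
print([m["name"] for m in resp.json()["models"]])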

ChromaDB Setup: ChromaDB runs automatically via Docker Compose. Verify it is running at http://localhost:8000, e.g. with the same heartbeat check shown under Option 1.