Neither You Nor Your AI Agents Will Lose Context.

Securely captured, intelligently stored, instantly retrievable for you and your AI.

Semantic Firewall · PowerSync Realtime Sync · ChromaDB + Azure AI Search · Decision Archaeology · Incident Reconstruction · MCP SSE Server · GPT-4o Planner · Groq Executor · Simulator Safety Report · VS Code Extension · Semantic Retrieval · Team Context Graph · Azure App Service
Quick Access
Recently shipped feature shortcuts.

Live Context Graph
Open realtime context graph with timeline and team activity overlays.
Open ->
Team Cortex
Track team progress, incident timelines, and compressed daily or weekly summaries from the PM surface.
Open ->
Decision Archaeology
Hover over any function to see the full decision history behind it.
Open ->
SecondCortex Thesis
Read the core thesis and product principles behind SecondCortex.
Open ->
Install Extension
Install SecondCortex extension from VS Code Marketplace.
Open ->
GitHub Repository
Explore source code, releases, and implementation details.
Open ->
How it works
From keystroke to
memory in milliseconds.

IDE events are captured locally, passed through a privacy-first Semantic Firewall, embedded into vector memory, and exposed to any AI agent via MCP, so your tools finally know why your code looks the way it does.

01
Capture
The VS Code extension monitors every IDE event: open tabs, active files, terminal commands, git state, code comments, diagnostics, function signatures, and debug sessions, all captured through a debounced snapshot system.
eventCapture.ts
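The debounce logic above can be sketched in Python (the real eventCapture.ts is TypeScript; the class and method names here are hypothetical, and the 30-second quiet window is taken from the architecture section):

```python
import time

class SnapshotDebouncer:
    """Collapses bursts of IDE events into one snapshot per quiet period."""

    def __init__(self, threshold_s=30.0):
        self.threshold_s = threshold_s
        self.pending = []          # events waiting to be snapshotted
        self.last_event_at = None  # timestamp of the most recent event

    def record(self, event, now=None):
        """Buffer an IDE event and reset the quiet-period clock."""
        now = time.monotonic() if now is None else now
        self.pending.append(event)
        self.last_event_at = now

    def flush_if_quiet(self, now=None):
        """Return one snapshot once no events arrived for threshold_s."""
        now = time.monotonic() if now is None else now
        if self.pending and now - self.last_event_at >= self.threshold_s:
            snapshot, self.pending = {"events": self.pending}, []
            return snapshot
        return None
```

Debouncing means a flurry of tab switches and saves produces a single coherent snapshot rather than dozens of fragments.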
02
Firewall
Every snapshot passes through Semantic Firewall before upload. AST-level analysis with TypeScript Compiler API detects tokens and credentials; regex fallback catches format-matched secrets.
semanticFirewall.ts
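The regex fallback layer might look like the following sketch (the patterns are illustrative approximations, not the exact rules in semanticFirewall.ts, and the AST pass is omitted):

```python
import re

# Hypothetical format-matching patterns for the regex fallback layer.
SECRET_PATTERNS = {
    "openai_key": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "stripe_key": re.compile(r"sk_live_[A-Za-z0-9]{16,}"),
    "jwt":        re.compile(r"eyJ[\w-]+\.[\w-]+\.[\w-]+"),
    "bearer":     re.compile(r"Bearer\s+[A-Za-z0-9._~+/-]{20,}"),
}

def sanitize(text: str) -> str:
    """Replace any format-matched secret with a [REDACTED:<kind>] marker."""
    for kind, pattern in SECRET_PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{kind}]", text)
    return text
```

The key property: sanitization runs locally, so a leaked token never leaves the machine in the first place.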
03
Embed + Sync
Sanitized snapshots are vectorized with text-embedding-3-small (1536d) and stored in personal memory. PowerSync syncs snapshots to the team backend in real time, with offline queue and replay support.
vector_db.py
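The offline queue + replay behavior can be illustrated with a toy Python queue (a hypothetical API; PowerSync itself handles this via SQLite-to-Postgres replication):

```python
import collections

class OfflineSyncQueue:
    """Sketch of offline queue + replay: buffer snapshots while
    disconnected, then drain them in order once connectivity returns."""

    def __init__(self, upload):
        self.upload = upload              # callable that pushes one snapshot
        self.queue = collections.deque()  # snapshots awaiting sync

    def push(self, snapshot, online):
        """Upload immediately when online; otherwise queue for replay."""
        if online:
            try:
                self.upload(snapshot)
                return True
            except ConnectionError:
                pass  # network dropped mid-push: fall through and queue it
        self.queue.append(snapshot)
        return False

    def replay(self):
        """Drain queued snapshots in capture order; returns count sent."""
        sent = 0
        while self.queue:
            self.upload(self.queue.popleft())
            sent += 1
        return sent
```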
04
Retrieve
When you ask a question or trigger a restore, the Retriever searches by semantic similarity across personal history and team context, returning not just code but the decisions and failed branches behind it.
retriever.py
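Semantic retrieval reduces to ranking stored vectors by cosine similarity. A minimal sketch, using 3-dimensional toy vectors in place of 1536-d embeddings (function names are illustrative, not the retriever.py API):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, memory, k=3):
    """memory: list of (vector, payload) pairs.
    Returns payloads ranked by similarity to the query vector."""
    ranked = sorted(memory, key=lambda item: cosine(query_vec, item[0]),
                    reverse=True)
    return [payload for _, payload in ranked[:k]]
```

In production the same ranking is delegated to ChromaDB or Azure AI Search rather than computed in a Python loop.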
05
Execute
After confirmation, the Executor applies the plan to your workspace: opening files, switching branches, and running commands. The Simulator runs a pre-flight safety check before anything is touched.
executor.py
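The simulate-confirm-apply flow can be sketched as follows (all function names here are hypothetical stand-ins, not the real executor.py interface):

```python
def run_plan(plan, simulate, confirm, apply_step):
    """Run the Executor flow: pre-flight simulate, require user
    confirmation, then apply each step of the approved plan."""
    report = simulate(plan)
    if report["conflicts"]:
        # Hard block: never touch the workspace with unresolved risks.
        return {"status": "blocked", "conflicts": report["conflicts"]}
    if not confirm(plan):
        return {"status": "cancelled"}
    for step in plan["steps"]:
        apply_step(step)
    return {"status": "applied", "steps": len(plan["steps"])}
```

The ordering is the point of the design: the safety check and the human sign-off both happen before the first side effect.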
The agents
Four agents.
One pipeline.

A focused multi-agent architecture where each component has a distinct role. Every agent has a circuit breaker (`max_steps=3`) to prevent infinite loops.
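The circuit breaker can be sketched as a simple step counter (a minimal illustration of the `max_steps=3` guard; the class name is hypothetical):

```python
class CircuitBreaker:
    """Caps agent iterations so no agent can loop forever."""

    def __init__(self, max_steps=3):
        self.max_steps = max_steps
        self.steps = 0

    def step(self):
        """Count one agent iteration; raise once the budget is exceeded."""
        self.steps += 1
        if self.steps > self.max_steps:
            raise RuntimeError(
                f"circuit breaker tripped after {self.max_steps} steps")
```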

PLN
Planner
Task Decomposition
Takes a natural language request and breaks it into a structured action plan. Interprets developer intent, creates parallel search tasks, and routes retrieval scope (personal, team, cross-repo).
LLM: GPT-4o via Azure OpenAI
Output: Structured action plan with search tasks
Requires explicit user confirmation
RTV
Retriever
Semantic Memory Search
Searches vector memory via cosine similarity to surface relevant history: files, branches, decisions, terminal commands, comments, and incidents. Exposed as MCP tools for any compatible AI agent.
Store: ChromaDB or Azure AI Search (team mode)
Embeddings: text-embedding-3-small (1536d)
Exposed via MCP SSE endpoint with 15+ tools
EXC
Executor
Workspace Restoration
Applies approved action plan to your VS Code workspace: opening files, switching branches, restoring terminal context. Runs the Simulator first to check for unstashed changes and conflicts.
LLM: Groq Llama-3.1-8b (fast inference)
Sub-agent: Simulator (git pre-flight safety checks)
PowerShell + bash compatible
SIM
Simulator
Pre-Flight Safety
Runs before every Executor action. Detects unstashed files, branch conflicts, uncommitted changes, and running processes that would be interrupted. Blocks execution on unresolved conflicts.
Input: Proposed plan + current git state
Output: Safety report with conflict detection
Hard block on unresolved risks
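A pre-flight check of this shape might look like the following (field and function names are illustrative, not simulator.py's actual interface):

```python
def preflight(plan, git_state):
    """Compare a proposed plan against current git state and produce a
    safety report; any conflict means a hard block on execution."""
    conflicts = []
    if git_state.get("unstashed_files"):
        conflicts.append(
            "unstashed changes: " + ", ".join(git_state["unstashed_files"]))
    target = plan.get("branch")
    if target and target != git_state.get("branch") \
            and git_state.get("uncommitted"):
        conflicts.append(f"uncommitted work would block checkout of {target}")
    for proc in git_state.get("running_processes", []):
        conflicts.append(f"would interrupt running process: {proc}")
    return {"safe": not conflicts, "conflicts": conflicts}
```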
Live Memory
Query your past work
and your team's.
ChromaDB / Azure AI Search - Team Namespace - LIVE - 2 members - 24 snapshots indexed
src/payment/processor.ts
Reduced retry limit from 5 to 3. Decision context: cascade failure risk at higher values under concurrent load.
feat/payment-v2 - 2h ago
src/auth/tokenRefresh.ts
Implemented token queue pattern to resolve race condition when two requests hit expiry simultaneously.
feat/auth-fix - 5h ago
semanticFirewall.ts
Added AST-level detection for TypeScript Compiler API variable-name patterns with regex fallback.
feat/security - 1d ago
agents/simulator.py
Pre-flight simulator generates conflict safety reports from git diff before Executor runs.
feat/simulator - 2d ago
services/vector_db.py
Team retrieval supports ChromaDB + Azure AI Search with scoped semantic search.
main - 3d ago
Ask your second cortex anything about your codebase.
Natural language semantic search across your entire development history and your team's. Not just grep, but decisions.
Try asking:
why was retry limit reduced
token refresh race condition fix
what is Prateek working on
what caused yesterday's incident
where are secrets handled
Retriever - Awaiting Query
Click a memory entry or type a question to retrieve decision history from personal and team memory.
Security
Your secrets stay
yours.

Your secrets stay yours by architecture, not policy. Every snapshot is sanitized locally before upload and every execution path requires confirmation.

Semantic Firewall (Local)
AST-level analysis + regex fallback detect API keys, JWTs, bearer tokens, Stripe keys, OpenAI keys, and private key formats before data leaves your machine.
Local-First Storage
Snapshots write to local SQLite first via PowerSync. If SecondCortex shuts down, your memory still lives locally in a standard database.
Per-User + Per-Team Isolation
Personal memory uses per-user namespace isolation. Team memory uses team_id-scoped filtering in Azure AI Search to prevent cross-team leakage.
Confirmation Before Execution
Executor never runs without your sign-off. Simulator runs a safety check first and destructive operations always require explicit confirmation.
Offline Mode Available
Full local mode uses LanceDB + Nomic Embed for zero cloud dependency and zero network calls in regulated or air-gapped environments.
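The team_id-scoped isolation can be illustrated with a toy in-memory index (the real filter is an Azure AI Search query clause; this sketch only demonstrates the isolation property):

```python
def scoped_search(index, query_team_id, match):
    """Return only documents whose team_id matches the querying team.
    Documents from other teams are invisible regardless of the query."""
    return [doc for doc in index
            if doc["team_id"] == query_team_id and match(doc)]
```

Because the team_id predicate is applied on every query path, cross-team leakage is prevented by construction rather than by convention.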
Decision Archaeology
Every function remembers
why it was written.
handleTokenExpiry() - Decision History
Last changed by Prateek - March 14 - "fix: resolve concurrent refresh race"
Why this approach: Token queue pattern selected over mutex lock to avoid latency penalty under concurrent requests.
Branches tried: feat/mutex-lock -> feat/token-queue (current)
Key commands: jest --watch auth.test.ts, ab -n 1000 -c 50 /auth/refresh
Context confidence: 94% | Evidence: git commit a3f4b2c, snapshot snap_2891
Hover over any function in VS Code.
SecondCortex surfaces branches tried, approaches abandoned, and reasoning behind each architectural choice via git blame + vector memory + GPT-4o synthesis.
Incident Reconstruction
40 minutes of incident reconstruction.
Done in 30 seconds.
cortex investigate --incident --window 48h
Team: SecondCortex Labs - 2 developers
14:32 Prateek - feat/payment-v2 - processor.ts +47 -12
14:33 Prateek - Comment: reducing retry from 5 to 3, cascade risk
14:38 Saketh - docker push to staging deploy
14:41 Tests - 3 failing: payment.test.ts
14:43 Prateek - commit: fix: reduce retry limit 5 to 3
14:43 Monitor - ALERT: staging response time >5000ms
14:44 Monitor - ALERT: staging down
Root cause: Retry reduction triggered backoff storm under concurrent load. Edge case was not covered in test suite.
Architecture
Production-grade
from day one.
System Architecture Overview
Capture Layer: VS Code Extension (TypeScript)
->
Event Capture + Debouncer (30s threshold)
->
Semantic Firewall (AST + Regex, local)
Intelligence Layer: FastAPI Backend
->
4-Operation Router (ADD/UPDATE/DELETE/NOOP)
->
4-Agent Pipeline (Planner -> Retriever -> Executor -> Simulator)
Memory Layer: ChromaDB + Azure AI Search + LanceDB
->
Sync: PowerSync (SQLite <-> Azure Postgres)
->
Embeddings: text-embedding-3-small (1536d)
Integration Layer: MCP SSE Server (/mcp)
->
15+ tools: search_memory, get_context_for_task, get_function_context, get_raw_snapshots
Presentation Layer: Next.js 15 Frontend
+
Deployment: GitHub Actions + GHCR Docker + Azure App Service
10 production deployments shipped
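The 4-Operation Router in the intelligence layer can be sketched as a pure decision function (the heuristics below are illustrative assumptions, not the router's actual rules):

```python
def route_operation(snapshot, existing):
    """Decide how an incoming snapshot changes stored memory:
    ADD new entries, UPDATE changed ones, DELETE removed ones,
    and NOOP when nothing meaningful changed."""
    prior = existing.get(snapshot["key"])
    if snapshot.get("deleted"):
        return "DELETE" if prior else "NOOP"
    if prior is None:
        return "ADD"
    if prior["content"] != snapshot["content"]:
        return "UPDATE"
    return "NOOP"
```

Routing to NOOP is what keeps the vector store from filling with near-duplicate snapshots of unchanged files.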
MCP Integration
Every AI agent gets
your memory.
MCP Server Config
{
  "tools": {
    "mcp": {
      "servers": {
        "secondcortex": {
          "url": "https://sc-backend-suhaan.azurewebsites.net/mcp",
          "transport": "sse"
        }
      }
    }
  }
}
Claude Code, Cursor, GitHub Copilot, PicoClaw.
Any MCP-compatible agent can query your codebase decisions, debugging history, and team institutional context without you manually pasting context.
Open MCP Endpoint
Team Memory
Institutional knowledge
that survives.

Every decision from every developer becomes queryable forever. New engineer onboarding, leave coverage, and 2am incident triage all benefit from persistent shared context.

Call To Action

Build with context.
Ship with confidence.

Install the VS Code extension. SecondCortex starts building your memory immediately. Selected Top 55 of 10,000+ in Microsoft AI Unlocked.

Install Extension - Free
View on GitHub