AI layer overview
The staff7 AI layer is an intelligence interface built directly on top of the platform's live data. It lets users query their staffing, financial, and operational data in plain language — and act on it directly from the console.
AI Console
The console is a terminal-style chat interface accessible from the sidebar under Agents. It maintains conversation history within the session and streams responses in real time.
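That streaming path can be sketched as a small transform from Ollama's NDJSON lines to SSE frames. The function and payload shape here are illustrative, not the actual staff7 route:

```typescript
// Convert a chunk of NDJSON (one JSON object per line) into SSE frames.
// Hypothetical helper: the real route streams these frames to the browser.
function ndjsonToSSE(chunk: string): string {
  return chunk
    .split('\n')
    .filter((line) => line.trim().length > 0)
    .map((line) => {
      const obj = JSON.parse(line);
      // Ollama chat chunks carry the token under message.content
      const token: string = obj?.message?.content ?? '';
      return `data: ${JSON.stringify({ token, done: obj?.done ?? false })}\n\n`;
    })
    .join('');
}

// Example: two NDJSON chunks from the model
const frames = ndjsonToSSE(
  '{"message":{"content":"Hel"},"done":false}\n' +
  '{"message":{"content":"lo"},"done":true}\n'
);
```

Each `data:` line followed by a blank line is one SSE event, which the console can consume with a standard `EventSource`-style reader.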
/api/ai proxies requests to Ollama and reformats the NDJSON chunks as SSE events. Actions use a separate PUT /api/ai endpoint.
Commands
Commands pre-load specific Supabase context before sending your message. Without a command, the agent loads a general staff + profitability summary. With a command, it fetches the most relevant data for your question.
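One plausible shape for that command-to-context mapping (the registry, loader type, and parsing helper below are assumptions, not the actual implementation):

```typescript
// Hypothetical command registry. The sub-label after the dot
// (e.g. /staff.bench) selects the same context but refines the prompt hint.
type ContextLoader = { tables: string[]; hint?: string };

const COMMAND_CONTEXT: Record<string, ContextLoader> = {
  staff:  { tables: ['consultant_occupancy', 'leave_requests', 'consultant_profitability'] },
  fin:    { tables: ['project_financials', 'consultant_profitability'] },
  profit: { tables: ['consultant_profitability', 'consultant_occupancy'] },
};

// "/staff.bench" parses to base command "staff" with hint "bench"
function parseCommand(input: string): ContextLoader | null {
  const m = input.match(/^\/(\w+)(?:\.(\w+))?/);
  if (!m) return null;
  const base = COMMAND_CONTEXT[m[1]];
  return base ? { ...base, hint: m[2] } : null;
}

const ctx = parseCommand('/staff.bench');
```

A message with no leading slash parses to null, in which case the route falls back to the general staff + profitability summary.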
/staff.bench and /staff.all both load the /staff context. The sub-label after the dot is passed to the agent as a hint to focus its response.
Agentic actions
The action agent uses the model's native tool calling (supported by kimi-k2.5) to detect write intents and execute them behind a confirmation step. No action is executed without explicit user confirmation.
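A tool definition in the Ollama/OpenAI tool-calling format might look like this; the tool name, parameters, and confirmation gate are illustrative, not the actual staff7 schema:

```typescript
// Illustrative tool schema (Ollama / OpenAI-style tool calling).
// The model returns a tool_call with these arguments; the UI then
// renders the CONFIRM / CANCEL step before anything is executed.
const approveLeaveTool = {
  type: 'function',
  function: {
    name: 'approve_leave_request',
    description: 'Approve a pending leave request for a consultant',
    parameters: {
      type: 'object',
      properties: {
        consultant_name: { type: 'string', description: 'Full name of the consultant' },
      },
      required: ['consultant_name'],
    },
  },
};

// Gate: nothing runs until the user explicitly confirms the proposed action.
function shouldExecute(userReply: string): boolean {
  return userReply.trim().toUpperCase() === 'CONFIRM';
}
```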
```
// Example flow
User   > "approve Clara's leave"
ACTION > ⚠ Approve leave request
         consultant_name: Clara Kim
         [ CONFIRM ] [ CANCEL ]
User   > CONFIRM
STAFF7 > ✓ Leave request approved for Clara Kim —
         CP from 2026-03-14 to 2026-03-18 (5 days).
```
Live context injection
Every request to /api/ai fetches fresh data from Supabase and injects it into the system prompt. The agent always sees current data — no stale cache, no manual sync.
| Command | Tables / views queried | Key fields |
|---|---|---|
| /staff | consultant_occupancy, leave_requests, consultant_profitability | status, contract_type, actual_cost, target_rate, occupancy_rate |
| /fin | project_financials, consultant_profitability | sold_rate, margin_pct, revenue, gross_margin |
| /profit | consultant_profitability, consultant_occupancy | Full profitability + pre-computed summary aggregates |
| /timesheet | consultant_occupancy, timesheets | date, value, status — current week only |
| /leave | leave_requests, consultant_occupancy | pending only, limit 30 |
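Put together, each request amounts to: fetch fresh rows for the command's tables, serialize them, and splice them into the system prompt. A minimal sketch, with assumed helper names and a stubbed Supabase call:

```typescript
// Hypothetical helpers sketching the per-request injection. fetchRows
// stands in for a Supabase REST call made with the user's JWT.
function fetchRows(table: string): unknown[] {
  return []; // placeholder: the real route queries Supabase here
}

function buildSystemPrompt(basePrompt: string, tables: string[]): string {
  const data: Record<string, unknown[]> = {};
  for (const t of tables) data[t] = fetchRows(t); // fetched fresh on every request
  return [basePrompt, '--- LIVE DATA ---', JSON.stringify(data), '--- END ---'].join('\n');
}

const prompt = buildSystemPrompt('SYSTEM_PROMPT', ['consultant_occupancy']);
```

Because the data is rebuilt per request rather than cached, the prompt the agent sees always reflects the current database state.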
```
// System prompt structure (every request)
SYSTEM_PROMPT                // role + rules + field glossary
--- LIVE DATA ---
{ consultants: [...], profitability_summary: {...} }
--- END ---
[conversation history]
[new user message]
```
Profitability AI
The /profit.analysis command loads the richest context — the full consultant_profitability view plus pre-computed aggregates — enabling questions that span finance, staffing, and contract type.
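Assuming "below target" means tjm_cout < tjm_cible, as in the injected summary, the aggregates could be derived with a sketch like this (field names are assumptions mirroring the summary, not necessarily the view's real columns):

```typescript
// Assumed row shape; the real consultant_profitability view may differ.
type ProfitRow = {
  name: string;
  contract: 'employee' | 'freelance';
  ca: number;        // revenue
  marge: number;     // gross margin
  tjm_cout: number;  // actual daily rate
  tjm_cible: number; // target daily rate
};

function summarize(rows: ProfitRow[]) {
  const below = rows.filter((r) => r.tjm_cout < r.tjm_cible);
  const total_ca = rows.reduce((s, r) => s + r.ca, 0);
  const total_marge = rows.reduce((s, r) => s + r.marge, 0);
  return {
    total_consultants: rows.length,
    employees: rows.filter((r) => r.contract === 'employee').length,
    freelances: rows.filter((r) => r.contract === 'freelance').length,
    total_ca,
    total_marge,
    avg_marge_pct: total_ca ? ((total_marge / total_ca) * 100).toFixed(1) : '0.0',
    below_target_count: below.length,
    below_target: below.map(({ name, tjm_cout, tjm_cible }) => ({ name, tjm_cout, tjm_cible })),
  };
}

const summary = summarize([
  { name: 'A', contract: 'employee',  ca: 100, marge: 40, tjm_cout: 700, tjm_cible: 700 },
  { name: 'B', contract: 'freelance', ca: 200, marge: 60, tjm_cout: 450, tjm_cible: 460 },
]);
```

Pre-computing these aggregates server-side keeps the injected context small while still letting small local models answer portfolio-level questions.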
```
// Pre-computed summary injected into /profit context
{
  total_consultants: 11,
  employees: 8,
  freelances: 3,
  total_ca: 312000,
  total_marge: 142000,
  avg_marge_pct: "34.2",
  below_target_count: 2,
  below_target: [
    { name: "David Mora", tjm_cout: 680, tjm_cible: 700 },
    { name: "Lucas Martin", tjm_cout: 450, tjm_cible: 460 }
  ]
}
```
Privacy & model choice
staff7 follows a privacy-by-design approach: you choose where your data goes — local model or cloud API.
| Mode | Data leaves your infra? | Setup |
|---|---|---|
| Local Ollama | Never — model runs on your server | Install Ollama, set OLLAMA_HOST=http://localhost:11434 |
| Ollama Cloud | Yes — sent to Ollama Cloud API | Set OLLAMA_HOST + OLLAMA_API_KEY |
| Any OpenAI-compat. | Yes — sent to third-party API | Point OLLAMA_HOST to any compatible endpoint |
```
# .env.local
OLLAMA_HOST=http://localhost:11434   # local model
OLLAMA_MODEL=llama3.2:3b             # or mistral, phi3, etc.
OLLAMA_API_KEY=                      # empty for local

# or cloud
OLLAMA_HOST=https://ollama.com
OLLAMA_MODEL=kimi-k2.5:cloud
OLLAMA_API_KEY=sk-...
```
RLS & tenant isolation
The AI query route does not use the Supabase service role key for reads. It uses the authenticated user's JWT — so RLS policies apply to every query the agent makes. Write operations (actions) use the service role key server-side, after role verification.
```
// /api/ai — JWT extraction (query agent)
const authHeader = req.headers.get('Authorization') ?? ''
const userToken = authHeader.replace('Bearer ', '').trim()

// Supabase query headers — RLS applied
const h = {
  'apikey': anonKey,
  'Authorization': `Bearer ${userToken}`,
}

// Action agent — service role after role check
if (role !== 'admin' && role !== 'super_admin') {
  return new Response('Forbidden', { status: 403 })
}
// → executes with the service role key
```
Console requests carry Authorization: Bearer <session.access_token>. If the session has expired, the route returns an SSE error: "Session expired — please sign in again."
MCP integration (roadmap)
The Model Context Protocol (MCP) is planned as the next evolution of the AI layer — standardising tool calls across models and enabling external integrations.