AI layer overview
The staff7 AI layer is an intelligence interface built directly on top of the platform's live data. It lets users query their staffing, financial, and operational data in plain language — and act on it directly from the console.
AI Console
The console is a terminal-style chat interface accessible from the sidebar under Agents. It maintains conversation history within the session and streams responses in real time.
/api/ai proxies to Ollama and reformats NDJSON chunks as SSE events. Actions use a separate PUT /api/ai endpoint.
Commands
Commands pre-load specific Supabase context before sending your message. Without a command, the agent loads a general staff + profitability summary.
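Concretely, a command like /staff.bench can be split into a context selector and a focus hint with a small parser. A minimal sketch (function and type names here are assumptions, not the real route code):

```typescript
// Hypothetical sketch of command parsing: "/staff.bench" -> command + hint.
// The command selects which Supabase context to pre-load; the hint only
// steers the model's focus.
type ParsedCommand = { command: string | null; hint: string | null };

function parseCommand(message: string): ParsedCommand {
  const match = message.match(/^\/(\w+)(?:\.(\w+))?/);
  // No leading slash: fall back to the general summary context.
  if (!match) return { command: null, hint: null };
  return { command: match[1], hint: match[2] ?? null };
}

// parseCommand("/staff.bench") -> { command: "staff", hint: "bench" }
// parseCommand("how are margins?") -> { command: null, hint: null }
```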
/staff.bench and /staff.all both load the /staff context. The sub-label after the dot is passed as a hint to focus the response.
Agentic actions
The action agent uses native tool calling (supported by kimi-k2.5) to detect write intentions, and executes them only after a confirmation step.
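As a sketch, the leave-approval action could be exposed to the model as a tool definition like the following. The tool name and parameter schema are assumptions; Ollama's tool calling accepts the OpenAI-style function format:

```typescript
// Hypothetical tool declaration for the action agent. The real tool
// names and schema in staff7 may differ.
const approveLeaveTool = {
  type: "function",
  function: {
    name: "approve_leave",
    description:
      "Approve a pending leave request. Never executed without user confirmation.",
    parameters: {
      type: "object",
      properties: {
        consultant_name: {
          type: "string",
          description: "Full name of the consultant, e.g. 'Clara Kim'",
        },
      },
      required: ["consultant_name"],
    },
  },
};
```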
```
// Example flow
User   > "approve Clara's leave"
ACTION > ⚠ Approve leave request
         consultant_name: Clara Kim
         [ CONFIRM ] [ CANCEL ]
User   > CONFIRM
STAFF7 > ✓ Leave request approved for Clara Kim —
         CP from 2026-03-14 to 2026-03-18 (5 days).
```
Live context injection
Every request to /api/ai fetches fresh data from Supabase and injects it into the system prompt. The agent always sees current data — no stale cache.
| Command | Tables / views queried | Key fields |
|---|---|---|
| /staff | consultant_occupancy, leave_requests, consultant_profitability | status, contract_type, actual_cost, target_rate, occupancy_rate |
| /fin | project_financials, consultant_profitability | sold_rate, margin_pct, revenue, gross_margin |
| /profit | consultant_profitability, consultant_occupancy | Full profitability + pre-computed summary aggregates |
| /timesheet | consultant_occupancy, timesheets | date, value, status — current week only |
| /leave | leave_requests, consultant_occupancy | pending only, limit 30 |
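The fetch-then-inject step can be sketched as a pure prompt builder. A minimal sketch, where the row shape and prompt wording are assumptions; the rows themselves would come from the /leave query above (pending only, limit 30):

```typescript
// Hypothetical sketch of context injection: freshly fetched rows are
// serialised straight into the system prompt on every request (no cache).
// Row shape and prompt wording are assumptions.
type LeaveRow = { consultant_name: string; start_date: string; end_date: string };

function buildSystemPrompt(pendingLeave: LeaveRow[]): string {
  return [
    "You are the staff7 assistant. Answer using only the data below.",
    "Pending leave requests (live, fetched this request):",
    JSON.stringify(pendingLeave, null, 2),
  ].join("\n");
}
```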
Profitability AI
The /profit.analysis command loads the richest context — the full consultant_profitability view plus pre-computed aggregates.
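As an illustration, aggregates of this shape could be derived from the view's rows like so. A minimal sketch: the row fields mirror the injected summary, but the real computation lives server-side and may differ:

```typescript
// Hypothetical aggregation over consultant_profitability rows.
// Field names mirror the injected summary; the real view may differ.
type ProfitRow = { name: string; tjm_cout: number; tjm_cible: number; marge_pct: number };

function summarize(rows: ProfitRow[]) {
  const below = rows.filter((r) => r.tjm_cout < r.tjm_cible);
  const avg = rows.reduce((sum, r) => sum + r.marge_pct, 0) / rows.length;
  return {
    total_consultants: rows.length,
    avg_marge_pct: avg.toFixed(1),
    below_target_count: below.length,
    below_target: below.map(({ name, tjm_cout, tjm_cible }) => ({ name, tjm_cout, tjm_cible })),
  };
}
```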
```
// Pre-computed summary injected into /profit context
{
  total_consultants: 11,
  employees: 8, freelances: 3,
  total_ca: 312000, total_marge: 142000,
  avg_marge_pct: "34.2",
  below_target_count: 2,
  below_target: [
    { name: "David Mora", tjm_cout: 680, tjm_cible: 700 },
    { name: "Lucas Martin", tjm_cout: 450, tjm_cible: 460 }
  ]
}
```
Privacy & model choice
staff7 is designed around privacy-by-design. You choose where your data goes — local model or cloud API.
| Mode | Data leaves your infra? | Setup |
|---|---|---|
| Local Ollama | Never — model runs on your server | Install Ollama, set OLLAMA_HOST=http://localhost:11434 |
| Ollama Cloud | Yes — sent to Ollama Cloud API | Set OLLAMA_HOST + OLLAMA_API_KEY |
| Any OpenAI-compat. | Yes — sent to third-party API | Point OLLAMA_HOST to any compatible endpoint |
```
# .env.local — local model (privacy-first)
OLLAMA_HOST=http://localhost:11434
OLLAMA_MODEL=llama3.2:3b
OLLAMA_API_KEY=

# or cloud
OLLAMA_HOST=https://ollama.com
OLLAMA_MODEL=kimi-k2.5:cloud
OLLAMA_API_KEY=sk-...
```
RLS & tenant isolation
The AI query route does not use the service role key for reads. It uses the authenticated user's JWT — RLS applies to every agent query.
```
// /api/ai — JWT passthrough (query agent)
const userToken = req.headers.get('Authorization')?.replace('Bearer ', '')

// Supabase query — RLS applied via the user's token
const headers = {
  'apikey': anonKey,
  'Authorization': `Bearer ${userToken}`,
}

// Action agent — service role only AFTER the role check
if (role !== 'admin' && role !== 'super_admin') {
  return new Response('Forbidden', { status: 403 })
}
// → subsequent writes execute with the service role key
```
MCP integration (roadmap)
The Model Context Protocol (MCP) is planned as the next evolution — standardising tool calls and enabling external integrations.