> Version: 2026.1.104 | Last Updated: 2026-03-12
Interpretos Local is an AI copilot for enterprise applications. It connects to Oracle E-Business Suite, PeopleSoft, and IBM Maximo and lets users query their systems using natural language.
This guide covers everything an administrator needs to install, configure, operate, and extend their Interpretos Local instance. It also serves as the knowledge base for the built-in Admin Assistant ("Need help?" button).
On first launch, a 6-step wizard guides you through initial configuration. After completing the AI Model Provider step (Step 2), a "Need help?" button appears — click it to chat with an AI assistant that can answer questions about any setup step in real time.
Choose one of three paths:
Create the first administrator account.
Choose the LLM that powers your chat experience. This is the fast model — used for simple queries like lookups and detail views.
| Provider | API Key Required | Recommended Model | Notes |
|---|---|---|---|
| Interpretos Cloud | No | Automatic | Free tier, 100 queries/day, no configuration |
| Google Gemini | Yes | gemini-3-flash-preview | Get key at aistudio.google.com/apikey |
| OpenAI | Yes | gpt-4.1-mini | Get key at platform.openai.com/api-keys |
| Anthropic | Yes | claude-sonnet-4-6 | Get key at console.anthropic.com |
| Custom / Self-hosted | Yes | Varies | Any OpenAI-compatible endpoint (Ollama, vLLM, LiteLLM) |
For self-hosted models, provide the base URL (e.g., http://localhost:11434/v1 for Ollama).
Click Test Connection to verify before proceeding.
For BYOK providers (Google, OpenAI, Anthropic, Custom), an "Add a Smart Model" panel appears below the fast model configuration.
What it does: Simple queries (lookups, details) use the fast model above. Complex queries (analytics, dashboards, multi-table joins) need a frontier model for accurate results. The system automatically routes each query to the right model based on complexity.
Configuration:
Recommended smart models: claude-opus-4-6, gpt-4.1, gemini-3-pro-preview.
How auto-routing works: The RAG (pattern selection) step assesses query complexity. Simple entity lookups stay on the fast model. Analytical, multi-entity, or aggregation queries are automatically upgraded to the smart model. Users see this as seamless — the right model is chosen per query.
Smart model quota: To control costs, smart model usage is limited to 30 queries per user per day (configurable in Admin Panel > Settings). When the quota is exhausted, queries fall back to the fast model.
Interpretos Cloud users get automatic fast/smart routing — no configuration needed.
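The routing-plus-quota behavior described above can be sketched as follows (illustrative only: the complexity flag and quota accounting shown here are assumptions, not the product's internal implementation):

```python
def choose_model(is_complex: bool, used_today: int, daily_limit: int = 30) -> str:
    """Route a query per the rules above: complex queries go to the smart
    model while daily quota remains; everything else uses the fast model."""
    if is_complex and used_today < daily_limit:
        return "smart"
    return "fast"  # simple lookups, or smart quota exhausted

print(choose_model(True, 5))    # complex, quota available -> smart
print(choose_model(True, 30))   # quota exhausted, falls back -> fast
print(choose_model(False, 0))   # simple lookup -> fast
```

The key point for administrators: quota exhaustion degrades gracefully to the fast model rather than failing the query.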
Connect to your enterprise systems. You can configure one or more integrations.
Direct Database Mode (recommended — 1-3 second queries):
SSH Tunnel Mode (11-17 second queries):
The SSH private key must be in PEM format (beginning with -----BEGIN RSA PRIVATE KEY-----), and the Oracle Home path on the database server is required (e.g., /u01/app/oracle/product/12.1.0).
Common issues:
- Key not in PEM format: convert it with ssh-keygen -p -m PEM -f your_key
- Wrong Oracle Home: run echo $ORACLE_HOME on the database server to find it
- Database port unreachable: test with telnet host port
- Wrong service name: check $ORACLE_HOME/network/admin/tnsnames.ora on the DB server
PeopleSoft connects via SSH tunnel to Oracle database (no direct mode).
Oracle Home example: /opt/oracle/psft/db/oracle-server/19.3.0.0
PeopleSoft uses CDB/PDB architecture. The PDB name is critical — it's set via ALTER SESSION SET CONTAINER before any query. If you're unsure, ask your DBA for the PDB name or run SELECT name FROM v$pdbs on the CDB.
Maximo connects via REST API (OSLC interface).
The API base URL must end with /maximo (e.g., https://maximo.example.com/maximo).
Common issues:
- The URL path must be /maximo, not /maximo/api or just the hostname
Enable anonymous aggregate metrics to help improve the product.
Telemetry transmissions are logged to /app/data/telemetry_audit.jsonl for your audit.
The final step shows a summary of all settings. Click Complete Setup to finish. A security recommendations panel reminds you to enable TLS and set a secret key for production.
Access the admin panel by clicking the gear icon in the header bar. The "Need help?" button in the top-right corner opens the AI-powered Admin Assistant — it can answer questions about any panel, setting, or configuration task.
Manage system users and their roles.
Adding users:
User actions:
Roles:
| Role | Chat | Own Preferences | Admin Panel | Manage Users | EBS Login Bar |
|---|---|---|---|---|---|
| user | Yes | Yes | No | No | Shown (RBAC) |
| admin | Yes | Yes | Yes | Yes | Hidden (uses global credentials) |
Admin users skip the per-user RBAC login prompt and query using the global admin credentials configured during setup.
Map each user to their identity in each enterprise system. This controls what data they see through role-based access control (RBAC).
How RBAC works: When user "jsmith" sends a chat query, Interpretos uses jsmith's personal credentials (not the admin's) to query the enterprise system. If jsmith's Oracle account is restricted to the BEDFORD site, they'll only see BEDFORD data. Admin users bypass this — they always use the global admin credentials and see all data.
Setting credentials:
If no per-user credentials are set, the user inherits the global admin connection — meaning they see ALL data. For security, configure per-user credentials for restricted users.
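The resolution order just described can be illustrated with a small sketch (the dict shapes and field names here are hypothetical, not the product's internal API):

```python
def resolve_credentials(user: dict, per_user: dict, global_admin: dict) -> dict:
    """Pick the credentials a chat query runs under, per the RBAC rules above."""
    if user.get("role") == "admin":
        return global_admin  # admins always use the global credentials
    # Regular users fall back to the global connection if nothing is set,
    # which means they would see ALL data.
    return per_user.get(user["username"], global_admin)

jsmith = {"username": "jsmith", "role": "user"}
creds = resolve_credentials(
    jsmith,
    {"jsmith": {"EBS_DB_USER": "JSMITH"}},   # per-user credentials
    {"EBS_DB_USER": "APPS"},                 # global admin connection
)
print(creds["EBS_DB_USER"])  # JSMITH: restricted to what jsmith's account can see
```

This is why the warning above matters: delete the per-user entry and the same user silently inherits the unrestricted global connection.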
Credential fields by integration:
| Integration | Key Fields |
|---|---|
| EBS | SSH host/user/key, DB user/password/service, Oracle Home, RBAC username |
| PeopleSoft | SSH host/user/key, DB user/password/service, PDB name, RBAC username |
| Maximo | Auth type, API base URL, API key or username/password |
Generate programmatic API keys for automation, bots, and integrations.
Creating a key:
Using API keys:
# Via X-API-Key header
curl -X POST http://localhost:8080/api/chat \
-H "Content-Type: application/json" \
-H "X-API-Key: ik_your_key_here" \
-d '{"message": "How many open work orders are there?"}'
# Via Authorization header
curl -X POST http://localhost:8080/api/chat \
-H "Content-Type: application/json" \
-H "Authorization: Bearer ik_your_key_here" \
-d '{"message": "Show me overdue POs"}'
Full API documentation is available at /api/docs (interactive Swagger UI).
Change the primary (fast) LLM provider after setup. Same options as setup step 2. Always test before saving.
Extra Parameters: Optional JSON object passed directly to the chat LLM API call. Use this for provider-specific tuning:
| Provider | Example | Effect |
|---|---|---|
| Google Gemini | {"thinking_budget": 0} | Disables thinking mode for faster responses |
| Google Gemini | {"thinking_level": "minimal"} | Minimal thinking |
| OpenAI | {"reasoning": {"effort": "low"}} | Low reasoning effort for GPT-5.x |
| Ollama | {"num_ctx": 8192} | Sets context window size |
Configure model presets that users can switch between during chat. Up to 3 presets can be defined.
Preset roles:
How it works:
Smart model quota: Limits smart model usage to a configurable number of queries per user per day (default: 30). When exhausted, queries fall back to the fast model. Configure the limit via PUT /api/admin/smart-quota or the Admin Panel.
Click the "Model Options for Users" section header to expand, configure presets, test each one, and save.
Configure an optional separate, faster LLM for pattern selection (the step that picks which knowledge pattern to use before answering).
Click the "Pattern Selection LLM" section header to expand, enable the checkbox, configure, test, and save.
Extra Parameters: Optional JSON object passed directly to the LLM API call. Use this for provider-specific tuning without code changes:
| Provider | Example | Effect |
|---|---|---|
| Google Gemini | {"thinking_budget": 0} | Disables thinking mode (prevents 26s+ delays on Gemini 3 Flash) |
| Google Gemini | {"thinking_level": "minimal"} | Minimal thinking (alternative to budget=0) |
| OpenAI | {"reasoning": {"effort": "none"}} | Disables reasoning for GPT-5.x models |
| Ollama | {"num_ctx": 4096} | Sets context window size |
Pattern selection doesn't need reasoning/thinking — it's a simple pattern-matching task. If using a model with built-in thinking (Gemini 3 Flash, GPT-5.4 Thinking), disable it here for dramatically faster responses (26s to 2s in testing).
Update EBS, PeopleSoft, or Maximo connection settings. Same fields as setup step 3.
Set admin-wide default context for all users on a specific integration.
Examples:
Select the integration from the dropdown, write instructions (max 1000 characters), and save. Users can also set their own personal instructions via the Preferences button in the chat header.
Re-runs the guided setup. Preserves your license and user accounts but lets you reconfigure AI provider and integration connections.
Create custom patterns to teach the chatbot queries specific to your environment. Custom patterns extend the built-in knowledge without affecting it. Custom patterns are encrypted at rest on the persistent data volume.
Click + New Pattern and fill in:
Click Save Pattern. The chatbot immediately starts using the new pattern — no restart needed. Pattern IDs are auto-generated from the description.
Click Test on any pattern, enter a sample question, and click Run Test. The system sends the query through the full chatbot pipeline and shows:
| Action | Description |
|---|---|
| Enable/Disable | Toggle a pattern on or off without deleting it. Disabled patterns are ignored by the chatbot. |
| Edit | Modify any field. Test status resets after edits. |
| Clone | Duplicate a pattern — useful for creating variations (e.g., same query for a different entity). |
| Export | Download a single pattern as JSON for backup or sharing with another instance. |
| Delete | Permanently remove a pattern and its data. |
Custom patterns are stored on the persistent data volume (/app/data/custom_patterns/) and survive container restarts and upgrades.
Check for and install updates.
View and manage your product license.
Health monitoring, debugging, and audit trail for compliance.
Click Test All Connections to validate every configured service (LLM provider, databases, APIs). Green/red indicators show what's working.
Toggle between INFO (normal) and DEBUG (verbose) log levels. Enable DEBUG when troubleshooting specific issues, then disable it — verbose logs consume more disk space.
Record and replay LLM API calls for debugging chatbot behavior.
Enable capture: Toggle LLM call recording on. Every LLM request/response is saved to disk.
View captured calls: Browse recent calls filtered by type (chat, RAG, etc.). Each captured call shows the full request (messages, model, parameters) and response.
Replay a call: Re-send a captured request to the LLM and compare the new response with the original. Useful for debugging non-deterministic behavior or testing model changes.
Configuration: Set maximum file size and number of captured files to manage disk usage.
Endpoints: GET /api/admin/llm-capture/status, POST /api/admin/llm-capture/toggle, PUT /api/admin/llm-capture/configure, GET /api/admin/llm-capture/calls, GET /api/admin/llm-capture/calls/{id}, POST /api/admin/llm-capture/replay/{id}
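Driving these endpoints from a script might look like the following sketch (the base URL and key are placeholders, and the shape of the returned JSON is not guaranteed):

```python
import requests

BASE = "http://localhost:8080"                # adjust to your deployment
HEADERS = {"X-API-Key": "ik_your_admin_key"}  # placeholder admin key

def capture_status() -> dict:
    """Fetch the current capture state via GET /api/admin/llm-capture/status."""
    resp = requests.get(f"{BASE}/api/admin/llm-capture/status", headers=HEADERS)
    resp.raise_for_status()
    return resp.json()

def replay_call(call_id: str) -> dict:
    """Replay a captured call so the new response can be compared to the original."""
    resp = requests.post(f"{BASE}/api/admin/llm-capture/replay/{call_id}",
                         headers=HEADERS)
    resp.raise_for_status()
    return resp.json()
```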
View recent server errors with timestamps, details, and stack traces. Useful for diagnosing connection failures or LLM errors.
Every query sent to connected enterprise systems is logged here — SQL statements for EBS/PeopleSoft, API calls for Maximo. Stored at /app/data/query_audit.jsonl.
Use this for: security audits, compliance reviews, debugging incorrect query results, and understanding what the AI is actually asking your systems.
Download a diagnostic package to attach to support tickets. This bundle is privacy-safe: it excludes credentials, SQL text, conversation content, and usernames. It contains system info, connectivity results, error summaries, and anonymized metrics.
These features are available to all authenticated users (not just admins).
Users can switch between model presets configured by the admin:
The active preset persists across sessions. Switch via the model selector in the chat header.
Users can set their own custom instructions via the Preferences button in the chat header. These are appended to the admin-wide custom instructions. Max 1000 characters.
Example: "I work in the BEDFORD site, always filter results to BEDFORD unless I say otherwise."
For EBS and PeopleSoft integrations with per-user credentials configured, users see a login bar at the top of the chat. They enter their enterprise username and password to establish their RBAC context. This ensures queries run with their credentials, not the admin's.
Admin users do not see this login bar — they always use the global credentials.
Users can change their own password via the user menu in the header.
When DEMO_MODE=true is set as an environment variable, the instance operates in demo mode for public evaluation.
Demo registration flow:
Demo account limits:
Automatic cleanup: a background thread removes expired demo users every 60 seconds, preserving analytics data (email, organization, query count, conversations used).
Environment variables:
- DEMO_MODE=true — enable demo mode
- CDDI_DEMO_SKIP_VERIFICATION=true — bypass email verification (internal testing only)
Interpretos Local exposes a full REST API. Interactive documentation is available at /api/docs (Swagger UI) and the OpenAPI spec at /api/docs/openapi.yaml.
Two methods:
- Session token: POST /api/auth/login with {"username": "...", "password": "..."} — returns a token. Use it as Authorization: Bearer <token>.
- API key: X-API-Key: ik_xxx or Authorization: Bearer ik_xxx.
| Endpoint | Method | Description |
|---|---|---|
/api/chat | POST | Send a message, get full response |
/api/chat/stream | POST | Send a message, get SSE streaming response |
/api/chat/cancel | POST | Cancel an in-progress query |
/api/reset | POST | Clear conversation state |
/api/conversations | GET | List all conversations |
/api/conversations | POST | Create new conversation |
/api/conversations/{id} | GET | Get conversation history |
/api/conversations/{id} | DELETE | Delete a conversation |
/api/conversations/{id}/title | PUT | Rename a conversation |
/api/status | GET | Session status (model, integrations, RAG mode) |
/api/health | GET | Health check (200 OK) |
/api/version | GET | Container version |
/api/integrations | GET | List available integrations |
/api/integrations | POST | Set active integrations |
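The session-token flow can be sketched in Python as follows (the "token" field name in the login response is an assumption; adjust to the actual payload):

```python
import requests

BASE = "http://localhost:8080"  # adjust to your deployment

def login(username: str, password: str) -> str:
    """POST /api/auth/login and return the session token."""
    resp = requests.post(f"{BASE}/api/auth/login",
                         json={"username": username, "password": password})
    resp.raise_for_status()
    return resp.json()["token"]  # field name assumed

def auth_headers(token: str) -> dict:
    """Build the Authorization header the API expects."""
    return {"Authorization": f"Bearer {token}",
            "Content-Type": "application/json"}

# Usage (requires a running instance):
#   token = login("admin", "password")
#   requests.post(f"{BASE}/api/chat", headers=auth_headers(token),
#                 json={"message": "How many open work orders are there?"})
```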
Interpretos Local exposes an OpenAI-compatible API, allowing any tool or platform that supports OpenAI (such as [Sana](https://sana.ai), LangChain, or custom scripts) to use Interpretos as a drop-in AI provider.
from openai import OpenAI
client = OpenAI(
base_url="https://your-interpretos-host:8080/v1",
api_key="ik_...", # Your Interpretos API key
)
response = client.chat.completions.create(
model="interpretos-auto",
messages=[{"role": "user", "content": "What are the top 5 assets by cost?"}],
)
print(response.choices[0].message.content)
| Endpoint | Method | Description |
|---|---|---|
/v1/chat/completions | POST | Chat completions (OpenAI format) |
/v1/models | GET | List available models |
| Model ID | Description |
|---|---|
interpretos-auto | Automatic model selection (default) |
interpretos-fast | Fast model for simple lookups |
interpretos-smart | Smart model for complex analytical queries |
Use your Interpretos API key (created in Admin Panel > API Keys) as the Bearer token. The OpenAI SDK sends this automatically when you set api_key.
Streaming is supported (stream: true), but the chatbot processes the full query before responding: the complete response arrives as a single chunk, followed by [DONE].
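Assuming the standard OpenAI SSE wire format (data: lines terminated by [DONE]), a minimal stream consumer might look like this; the sample payload below is illustrative, not captured output:

```python
import json

def collect_sse_content(lines) -> str:
    """Accumulate choices[0].delta.content from OpenAI-style SSE data lines."""
    parts = []
    for line in lines:
        if not line.startswith("data: "):
            continue
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        delta = json.loads(payload)["choices"][0].get("delta", {})
        if delta.get("content"):
            parts.append(delta["content"])
    return "".join(parts)

# Per the note above, Interpretos sends the whole answer as one chunk:
sample = [
    'data: {"choices": [{"delta": {"content": "There are 42 open work orders."}}]}',
    "data: [DONE]",
]
print(collect_sse_content(sample))  # There are 42 open work orders.
```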
When connecting Interpretos to platforms like Sana, configure:
| Setting | Value |
|---|---|
| Base URL | https://your-interpretos-host:8080/v1 |
| API Key | Your ik_... key |
| Model | interpretos-auto |
temperature, max_tokens, and top_p are accepted but ignored — the chatbot manages its own LLM settings.
| Endpoint | Method | Description |
|---|---|---|
/api/auth/me | GET | Get current user info |
/api/auth/logout | POST | Logout |
/api/auth/change-password | POST | Change own password |
/api/model-presets | GET | List available model presets |
/api/model-preset | POST | Switch active model preset (fast/smart/auto) |
/api/rag-mode | POST | Change RAG mode |
/api/rag-modes | GET | List available RAG modes |
/api/ebs-login | POST | EBS user login (establish RBAC context) |
/api/ebs-logout | POST | EBS logout |
/api/ebs-status | GET | Check EBS login status |
/api/psft-login | POST | PeopleSoft user login |
/api/psft-logout | POST | PeopleSoft logout |
/api/psft-status | GET | Check PeopleSoft login status |
/api/users/{username}/custom-instructions | GET | Get personal custom instructions |
/api/users/{username}/custom-instructions | PUT | Set personal custom instructions |
/api/credentials/test | POST | Test credential connectivity |
| Endpoint | Method | Description |
|---|---|---|
/api/admin/users | GET | List all users |
/api/admin/users | POST | Create user |
/api/admin/users/{username} | DELETE | Delete user |
/api/admin/users/{username}/password | PUT | Reset password |
/api/admin/users/bulk | POST | Bulk create/update users (max 500) |
/api/users/{username}/credentials/{integration} | PUT | Set per-user credentials |
/api/users/{username}/credentials/{integration} | GET | Get per-user credentials |
/api/users/{username}/credentials/{integration} | DELETE | Delete per-user credentials |
/api/admin/credentials/bulk | POST | Bulk set credentials (max 500) |
/api/admin/api-keys | GET | List API keys |
/api/admin/api-keys | POST | Create API key |
/api/admin/api-keys/{id} | DELETE | Revoke API key |
/api/admin/settings | GET | Get all settings |
/api/admin/settings/llm | PUT | Update LLM provider |
/api/admin/settings/llm/test | POST | Test LLM connection |
/api/admin/settings/rag-llm | PUT | Configure pattern selection LLM |
/api/admin/settings/rag-llm/test | POST | Test pattern selection LLM |
/api/admin/settings/ebs | PUT | Update EBS connection |
/api/admin/settings/ebs/test | POST | Test EBS connection |
/api/admin/settings/peoplesoft | PUT | Update PeopleSoft connection |
/api/admin/settings/peoplesoft/test | POST | Test PeopleSoft connection |
/api/admin/settings/model-presets | PUT | Save model presets (up to 3) |
/api/admin/settings/model-presets/test | POST | Test a model preset |
/api/admin/settings/custom-instructions/{integration} | GET | Get admin-wide instructions |
/api/admin/settings/custom-instructions/{integration} | PUT | Set admin-wide instructions |
/api/admin/settings/ui | PUT | Control UI element visibility |
/api/admin/smart-quota | GET | View smart model quota and usage |
/api/admin/smart-quota | PUT | Set daily smart model limit |
/api/admin/license | GET | View license status |
/api/admin/license | POST | Upload/install license |
/api/admin/restart-setup | POST | Restart setup wizard |
/api/admin/patterns/custom | GET | List custom patterns |
/api/admin/patterns/custom | POST | Create custom pattern |
/api/admin/patterns/custom/{id} | GET | Get pattern details |
/api/admin/patterns/custom/{id} | PUT | Update pattern |
/api/admin/patterns/custom/{id} | DELETE | Delete pattern |
/api/admin/patterns/custom/{id}/test | POST | Test pattern with a query |
/api/admin/patterns/custom/{id}/toggle | PUT | Enable/disable pattern |
/api/admin/patterns/custom/{id}/duplicate | POST | Clone pattern |
/api/admin/patterns/custom/{id}/export | GET | Export single pattern (JSON) |
/api/admin/patterns/custom/export | GET | Export all patterns (JSON bundle) |
/api/admin/patterns/custom/import | POST | Import patterns from JSON |
/api/admin/diagnostics/test-connections | POST | Test all connections |
/api/admin/diagnostics/health | GET | System health dashboard |
/api/admin/diagnostics/errors | GET | Recent errors |
/api/admin/diagnostics/log-level | GET | Current log level |
/api/admin/diagnostics/log-level | PUT | Set log level (DEBUG/INFO/WARNING) |
/api/admin/diagnostics/export | GET | Download support bundle |
/api/admin/audit/queries | GET | Query audit log |
/api/admin/audit/telemetry | GET | Telemetry transmission log |
/api/admin/audit/telemetry/preview | GET | Preview next telemetry payload |
/api/admin/audit/telemetry/setting | PUT | Toggle telemetry on/off |
/api/admin/llm-capture/status | GET | LLM capture status |
/api/admin/llm-capture/toggle | POST | Enable/disable LLM capture |
/api/admin/llm-capture/configure | PUT | Configure capture settings |
/api/admin/llm-capture/calls | GET | List captured LLM calls |
/api/admin/llm-capture/calls/{id} | GET | Get captured call details |
/api/admin/llm-capture/replay/{id} | POST | Replay a captured LLM call |
/api/admin/updates/check | GET | Check for updates |
/api/admin/updates/install | POST | Install pattern updates |
/api/admin/updates/status | GET | Last update check result |
/api/admin/updates/installed | GET | List installed pattern packs |
| Endpoint | Method | Description |
|---|---|---|
/api/setup/assistant/chat | POST | Chat with the admin assistant (no auth during setup, admin-only after) |
/api/assistant/conversations | GET | List assistant conversations |
/api/assistant/conversations | POST | Create new assistant conversation |
/api/assistant/conversations/{id} | GET | Load assistant conversation |
/api/assistant/conversations/{id} | DELETE | Delete assistant conversation |
/api/assistant/conversations/{id}/title | PUT | Rename assistant conversation |
| Endpoint | Method | Description |
|---|---|---|
/api/demo/config | GET | Public demo configuration |
/api/demo/start | POST | Start demo registration (email verification) |
/api/demo/verify | POST | Verify email code and create demo account |
/api/demo/status | GET | Current demo session status |
For enterprise deployments, use the bulk endpoints to create users and assign credentials in batch.
Bulk user creation (POST /api/admin/users/bulk):
{
"users": [
{
"username": "jsmith",
"password": "Welcome123!",
"display_name": "Jane Smith",
"role": "user",
"credentials": {
"oracle_ebs": {
"EBS_DB_USER": "JSMITH",
"EBS_DB_PASSWORD": "...",
"EBS_DB_SERVICE": "VIS"
}
}
}
],
"update_existing": false
}
Response includes created, updated, skipped, and errors counts. Max 500 users per request.
Bulk credential assignment (POST /api/admin/credentials/bulk):
{
"assignments": [
{
"username": "jsmith",
"integration_id": "oracle_ebs",
"credentials": {
"EBS_DB_USER": "JSMITH",
"EBS_DB_PASSWORD": "...",
"EBS_DB_SERVICE": "VIS"
}
}
]
}
Max 500 assignments per request.
These environment variables can be set on the Docker container to override settings.
| Variable | Description | Default |
|---|---|---|
LLM_PROVIDER | Active provider (google, openai, anthropic, custom) | From setup |
LLM_MODEL | Active model name | From setup |
LLM_BASE_URL | Custom endpoint URL | — |
LLM_EXTRA_PARAMS | JSON for provider-specific params | {} |
GOOGLE_API_KEY | Google Gemini API key | — |
OPENAI_API_KEY | OpenAI API key | — |
OPENAI_BASE_URL | OpenAI-compatible base URL | — |
ANTHROPIC_API_KEY | Anthropic API key | — |
| Variable | Description |
|---|---|
RAG_LLM_PROVIDER | Dedicated RAG LLM provider |
RAG_LLM_MODEL | Dedicated RAG LLM model |
RAG_LLM_API_KEY | Dedicated RAG LLM API key |
RAG_LLM_BASE_URL | Dedicated RAG LLM endpoint |
RAG_LLM_EXTRA_PARAMS | JSON params for pattern selection |
| Variable | Description |
|---|---|
DB_CONNECTION_MODE | ssh or direct |
DB_HOST / EBS_DB_HOST | Direct DB hostname |
DB_PORT / EBS_DB_PORT | Direct DB port (default 1521) |
DB_USER / EBS_DB_USER | Database username |
DB_PASSWORD / EBS_DB_PASSWORD | Database password |
DB_SERVICE / EBS_DB_SERVICE | TNS service name |
SSH_HOST / EBS_SSH_HOST | SSH server hostname |
SSH_USER / EBS_SSH_USER | SSH username |
SSH_KEY_PATH / EBS_SSH_KEY_PATH | Path to SSH private key |
ORACLE_HOME / EBS_ORACLE_HOME | Oracle home directory |
EBS_TNS_ADMIN | TNS admin path (auto-derived from Oracle Home if not set) |
| Variable | Description |
|---|---|
PSFT_SSH_HOST | SSH server |
PSFT_SSH_USER | SSH username |
PSFT_SSH_KEY_PATH | SSH key path |
PSFT_ORACLE_HOME | Oracle home |
PSFT_DB_USER | DB username (default: SYSADM) |
PSFT_DB_PASSWORD | DB password |
PSFT_DB_SERVICE | DB service |
PSFT_PDB_NAME | PDB container name |
PSFT_TNS_ADMIN | TNS admin path |
| Variable | Description |
|---|---|
API_BASE | Maximo REST API base URL |
API_KEY | Maximo API key |
MAXIMO_AUTH_TYPE | maxauth or api_key |
| Variable | Description | Default |
|---|---|---|
CDDI_DATA_DIR | Data storage path | /app/data |
CDDI_SECRET_KEY | Flask secret key (set for production) | Auto-generated |
CDDI_LOG_LEVEL | Logging level | INFO |
CDDI_TELEMETRY | Enable telemetry | false |
CDDI_LLM_CAPTURE | Enable LLM call recording | false |
DEMO_MODE | Enable demo mode | false |
Override precedence: Environment variable > UI/setup wizard setting > default value.
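As an example of wiring a few of these overrides into a container, a Compose service might look like this (the image name and values are hypothetical; adjust to your registry and environment):

```yaml
services:
  interpretos:
    image: interpretos/local:latest   # hypothetical image name
    ports:
      - "8080:8080"
    environment:
      LLM_PROVIDER: anthropic
      ANTHROPIC_API_KEY: ${ANTHROPIC_API_KEY}
      CDDI_SECRET_KEY: ${CDDI_SECRET_KEY}   # set explicitly for production
      CDDI_LOG_LEVEL: INFO
    volumes:
      - ./data:/app/data                    # persistent data volume
```

Because environment variables take highest precedence, anything set here wins over values configured in the setup wizard.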
Advanced RAG behavior can be tuned via /app/config/rag_config.yaml. Most admins will not need to modify this file — the defaults work well for all supported integrations.
Key settings:
- retrieval.top_k: Number of patterns to retrieve (default: 3)
- retrieval.min_score_threshold: Minimum relevance score (default: 0.5)
- retrieval.scoring_weights: Weight distribution — semantic (0.40), entity (0.25), aspect (0.20), table (0.10), intent (0.05)
- boosting.rules: Boost specific patterns for specific keywords
- agentic_rag.enabled: Use LLM-powered pattern selection (default: false, enabled via UI)
- token_optimization.enable_compression: Compress prompts for token efficiency (default: true)
| Endpoint | Purpose | Format |
|---|---|---|
/app/data/setup_config.json | All setup configuration (encrypted) | JSON |
/app/config/rag_config.yaml | RAG behavior tuning | YAML |
/app/data/smart_quota.json | Daily smart model quota per user | JSON |
/app/data/custom_patterns/ | User-created patterns (encrypted) | JSON files |
/app/data/telemetry_audit.jsonl | Telemetry transmission audit | JSONL |
/app/data/query_audit.jsonl | SQL/API query audit log | JSONL |
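Putting the documented RAG tuning keys together, a rag_config.yaml might look like this (a sketch built from the documented defaults, not the shipped file):

```yaml
retrieval:
  top_k: 3                     # patterns retrieved per query
  min_score_threshold: 0.5     # drop matches below this relevance score
  scoring_weights:
    semantic: 0.40
    entity: 0.25
    aspect: 0.20
    table: 0.10
    intent: 0.05
boosting:
  rules: []                    # keyword -> pattern boosts
agentic_rag:
  enabled: false               # LLM-powered pattern selection (toggle via UI)
token_optimization:
  enable_compression: true     # compress prompts for token efficiency
```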
These recipes show how to use the REST API to automate common administrative tasks. All examples use Python with the requests library.
import csv
import requests
BASE = "http://localhost:8080"
HEADERS = {"X-API-Key": "ik_your_admin_key", "Content-Type": "application/json"}
with open("users.csv") as f:
for row in csv.DictReader(f): # columns: username, display_name, password, role
resp = requests.post(f"{BASE}/api/admin/users", headers=HEADERS, json={
"username": row["username"],
"display_name": row["display_name"],
"password": row["password"],
"role": row.get("role", "user")
})
print(f"{row['username']}: {resp.json().get('status', resp.status_code)}")
CSV format:
username,display_name,password,role
jsmith,Jane Smith,Welcome123!,user
mbrown,Mike Brown,Welcome123!,user
admin2,Senior Admin,SecurePass!,admin
For large deployments (50+ users), use the bulk endpoint instead:
import csv
import requests
BASE = "http://localhost:8080"
HEADERS = {"X-API-Key": "ik_your_admin_key", "Content-Type": "application/json"}
users = []
with open("users.csv") as f:
for row in csv.DictReader(f):
users.append({
"username": row["username"],
"display_name": row["display_name"],
"password": row["password"],
"role": row.get("role", "user")
})
resp = requests.post(f"{BASE}/api/admin/users/bulk", headers=HEADERS, json={
"users": users,
"update_existing": False
})
result = resp.json()
print(f"Created: {result['created']}, Skipped: {result['skipped']}, Errors: {len(result['errors'])}")
import csv
import requests
BASE = "http://localhost:8080"
HEADERS = {"X-API-Key": "ik_your_admin_key", "Content-Type": "application/json"}
with open("credentials.csv") as f:
for row in csv.DictReader(f): # columns: username, db_user, db_password, ...
resp = requests.put(
f"{BASE}/api/users/{row['username']}/credentials/oracle_ebs",
headers=HEADERS,
json={
"EBS_DB_USER": row["db_user"],
"EBS_DB_PASSWORD": row["db_password"],
"EBS_DB_SERVICE": row.get("db_service", "VIS"),
"username": row.get("ebs_username", row["username"])
}
)
print(f"{row['username']}: {resp.json().get('status', resp.status_code)}")
import requests
import smtplib
from email.mime.text import MIMEText
BASE = "http://localhost:8080"
HEADERS = {"X-API-Key": "ik_your_admin_key"}
# Check health
health = requests.get(f"{BASE}/api/health")
if health.status_code != 200:
# Send alert email
msg = MIMEText(f"Interpretos health check failed: {health.status_code}")
msg["Subject"] = "ALERT: Interpretos Down"
msg["From"] = "monitoring@company.com"
msg["To"] = "admin@company.com"
with smtplib.SMTP("smtp.company.com") as s:
s.send_message(msg)
# Test all connections
conns = requests.post(f"{BASE}/api/admin/diagnostics/test-connections", headers=HEADERS)
for result in conns.json().get("results", []):
if result.get("status") != "ok":
print(f"FAILED: {result['name']} - {result.get('error', 'unknown')}")
import requests
BASE = "http://localhost:8080"
HEADERS = {"X-API-Key": "ik_reports_user_key", "Content-Type": "application/json"}
questions = [
"How many purchase orders were approved this week?",
"Show me the top 5 vendors by PO spend this month",
"Are there any overdue work orders?"
]
for q in questions:
resp = requests.post(f"{BASE}/api/chat", headers=HEADERS, json={"message": q})
data = resp.json()
print(f"\nQ: {q}")
print(f"A: {data.get('response', data.get('message', 'No response'))}")
# Reset conversation for next run
requests.post(f"{BASE}/api/reset", headers=HEADERS)
import requests
BASE = "http://localhost:8080"
HEADERS = {"X-API-Key": "ik_your_admin_key"}
username = "departing_user"
resp = requests.delete(f"{BASE}/api/admin/users/{username}", headers=HEADERS)
print(f"Deleted {username}: {resp.json()}")
# This removes the user account, all their credentials, API keys, and chat history
import requests
BASE = "http://localhost:8080"
HEADERS = {"X-API-Key": "ik_your_admin_key", "Content-Type": "application/json"}
# Create a custom pattern for EBS employee lookup
resp = requests.post(f"{BASE}/api/admin/patterns/custom", headers=HEADERS, json={
"integration": "oracle_ebs",
"description": "Employee lookup by employee number",
"query_type": "sql",
"query": "SELECT employee_number, full_name, email_address FROM per_all_people_f WHERE employee_number = :emp_num AND SYSDATE BETWEEN effective_start_date AND effective_end_date",
"example_questions": ["Who is employee 12345?", "Show employee details for 67890"],
"entity_types": ["employee", "person", "hr"],
"tables": ["PER_ALL_PEOPLE_F"]
})
print(f"Created: {resp.json()['pattern']['pattern_id']}")
# Export all patterns for backup
resp = requests.get(f"{BASE}/api/admin/patterns/custom/export", headers=HEADERS)
with open("patterns_backup.json", "w") as f:
f.write(resp.text)
print(f"Exported {resp.json()['count']} patterns")
import requests
import json
BASE = "http://localhost:8080"
HEADERS = {"X-API-Key": "ik_your_admin_key"}
# Get query audit log
audit = requests.get(f"{BASE}/api/admin/audit/queries", headers=HEADERS)
entries = audit.json().get("entries", [])
# Write to file for compliance archive
with open("query_audit_export.jsonl", "w") as f:
for entry in entries:
f.write(json.dumps(entry) + "\n")
print(f"Exported {len(entries)} audit entries")
import requests
BASE = "http://localhost:8080"
HEADERS = {"X-API-Key": "ik_your_admin_key", "Content-Type": "application/json"}
# Check current quota and usage
resp = requests.get(f"{BASE}/api/admin/smart-quota", headers=HEADERS)
data = resp.json()
print(f"Daily limit: {data['daily_limit']}")
for user, count in data.get('usage', {}).items():
print(f" {user}: {count}/{data['daily_limit']} smart queries today")
# Increase the limit
requests.put(f"{BASE}/api/admin/smart-quota", headers=HEADERS, json={"daily_limit": 50})
Security notes:
- SSH private keys must be in PEM format (beginning with -----BEGIN RSA PRIVATE KEY-----); convert with ssh-keygen -p -m PEM -f your_key.
- For production, set the CDDI_SECRET_KEY environment variable rather than relying on the auto-generated file.
- Credentials in setup_config.json are encrypted at rest using Fernet symmetric encryption.
Troubleshooting checklist:
- SSH server unreachable: test with telnet <host> 22.
- SSH key rejected: confirm it begins with -----BEGIN RSA PRIVATE KEY----- and test manually with ssh -i key.pem user@host.
- Wrong Oracle Home: run echo $ORACLE_HOME on the server.
- Wrong service name: check tnsnames.ora on the DB server or ask your DBA.
- Locked database account: check SELECT account_status FROM dba_users WHERE username = 'YOUR_USER'.
- PDB access denied (PeopleSoft): GRANT SET CONTAINER TO your_user.
- Unknown PDB name: run SELECT name FROM v$pdbs (run as SYSDBA on the CDB).
- Maximo URL wrong: the base URL must end with /maximo (not /maximo/api).
- Maximo unreachable: test with telnet <host> 443 (or 80 for HTTP).
- Slow Gemini responses: set {"thinking_budget": 0} — thinking mode is on by default and adds 20-50s of unnecessary delay.
- Smart model quota changes: use PUT /api/admin/smart-quota.
Related documentation: /docs/terms, /docs/privacy, /docs/security, /docs/admin-guide, /api/docs (interactive Swagger UI), /api/docs/openapi.yaml.

*Interpretos Local is developed by Code Development Limited. For support, visit interpretos.ai.*