AI Assistant

The AI Assistant provides intelligent network analysis powered by LLM providers, with an in-browser Python playground for advanced computations.

Overview

The AI Assistant combines pre-computed network metrics with large language model capabilities to answer questions about your organizational network. It can identify key influencers, detect burnout risk, analyze department connectivity, and generate custom visualizations — all through natural language conversation.

Setup

Configuring a Provider

Navigate to Settings > AI Assistant to configure your API provider:

Cloud Providers

| Provider | Models | API Key Required |
|---|---|---|
| Google Gemini | gemini-2.0-flash, gemini-2.0-pro | Yes |
| OpenAI | gpt-4o, gpt-4o-mini | Yes |
| Groq | llama-3.3-70b, mixtral-8x7b | Yes |

Enter your API key and select a model. The AI chat button will appear in the header.

Edge Hosted Providers

| Provider | Default Model | API Key Required |
|---|---|---|
| Ollama (Local) | llama3.2 | No |

Edge hosted providers run models locally on your machine. No data leaves your computer, making this ideal for sensitive organizational data. See the Ollama Setup Guide below for detailed instructions.

Limited context awareness

Local models (7B–8B parameters) have significantly less context awareness than cloud providers. They may struggle with long system prompts, miss network data details, or give generic answers to open-ended questions like "What am I looking at?" For best results with Ollama, ask specific questions (e.g., "Who has the highest betweenness centrality?") rather than broad ones. Cloud providers (Gemini, OpenAI, Groq) use much larger models that handle the full network context reliably.

Opening the Chat

  • Click the chat icon in the header, or
  • Use the keyboard shortcut (configured in Settings)

Pre-Computed Network Metrics

When a graph is loaded, the AI receives rich context about your network, including:

  • Degree centrality (in/out/total) — how connected each node is
  • Betweenness centrality — nodes that bridge different groups
  • Closeness centrality — how quickly a node can reach all others
  • Eigenvector centrality — connection to other well-connected nodes
  • PageRank — importance based on incoming connections
  • Clustering coefficient — how interconnected a node's neighbors are
  • k-core number — core vs. periphery position
  • Burt's constraint — structural holes and information access
  • Bridging score — cross-department boundary spanning
  • Structural roles — hub, broker, bridge, or peripheral classification
  • Global efficiency — how efficiently information flows across the network (average inverse shortest path, 0–1 scale)
  • Clique analysis — maximal fully-connected subgroups, largest clique size, and per-node clique membership counts
  • Suggested connections — top 10 pairs of unconnected nodes that share many mutual contacts but lack a direct link (Adamic-Adar link prediction)

Additional context includes department breakdowns, cross-department connection counts, and network topology statistics (density, reciprocity, path length, diameter, global efficiency).
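Several of these metrics can be reproduced in the Python playground with NetworkX. A minimal sketch on a toy graph (in the playground, G is pre-loaded with your real network):

```python
import networkx as nx

# Toy directed graph standing in for an organizational network
G = nx.DiGraph()
G.add_edges_from([("ana", "bob"), ("bob", "ana"), ("bob", "cho"),
                  ("cho", "dia"), ("dia", "bob")])

betweenness = nx.betweenness_centrality(G)            # bridging between groups
pagerank = nx.pagerank(G)                             # importance via incoming links
efficiency = nx.global_efficiency(G.to_undirected())  # 0-1 flow efficiency

# "bob" sits on the most shortest paths between other nodes
print(max(betweenness, key=betweenness.get))
```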

Business Analysis Templates

The AI is trained to apply ONA-specific analysis patterns when you ask about:

Burnout Risk

Identifies overloaded brokers: nodes with high betweenness AND high degree AND low reciprocity — people who channel information but receive little support back.
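A hypothetical sketch of that filter, runnable in the playground. The percentile cutoffs and the 0.5 reciprocity threshold are illustrative choices, not the dashboard's actual parameters:

```python
import networkx as nx

# Random stand-in network; in the playground, G is pre-loaded
G = nx.gnp_random_graph(30, 0.15, directed=True, seed=7)

betweenness = nx.betweenness_centrality(G)
degree = dict(G.degree())

def reciprocity(node):
    """Fraction of this node's outgoing ties that are returned."""
    out = set(G.successors(node))
    if not out:
        return 0.0
    return len(out & set(G.predecessors(node))) / len(out)

# Overloaded brokers: top-20% betweenness AND degree, low reciprocity
b_cut = sorted(betweenness.values())[int(0.8 * len(G))]
d_cut = sorted(degree.values())[int(0.8 * len(G))]
at_risk = [n for n in G
           if betweenness[n] >= b_cut and degree[n] >= d_cut
           and reciprocity(n) < 0.5]
print(at_risk)
```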

Churn/Attrition Risk

Detects peripheral nodes with low degree, low closeness, and high constraint. Declining connectivity over time signals disengagement.
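The same pattern can be sketched for churn risk (thresholds again illustrative, shown on NetworkX's bundled Les Misérables co-occurrence graph rather than your data):

```python
import networkx as nx

G = nx.les_miserables_graph()  # undirected stand-in network

closeness = nx.closeness_centrality(G)
constraint = nx.constraint(G)  # Burt's constraint; high = few structural holes
degree = dict(G.degree())

# Peripheral nodes: bottom-quartile degree and closeness, high constraint
d_cut = sorted(degree.values())[len(G) // 4]
c_cut = sorted(closeness.values())[len(G) // 4]
at_risk = [n for n in G
           if degree[n] <= d_cut and closeness[n] <= c_cut
           and constraint[n] > 0.5]
print(sorted(at_risk))
```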

Onboarding Health

Evaluates whether new nodes are building connections at an appropriate rate. Low department diversity in connections suggests siloed onboarding.

Influence & Campaigns

Uses PageRank and bridging scores to identify organic influencers who can maximize information diffusion across communities.

Suggested Connections

Recommends the top 10 pairs of people who share many mutual contacts but are not directly connected. Uses the Adamic-Adar link prediction algorithm. Cross-department pairs are flagged as high-value introductions that could improve information flow.
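The underlying computation can be reproduced with NetworkX's built-in implementation (demonstrated here on the bundled karate-club graph; the playground's G would take its place, converted to undirected if needed):

```python
import networkx as nx

G = nx.karate_club_graph()

# Adamic-Adar scores all unconnected pairs by their shared neighbors,
# weighting rare mutual contacts more heavily
scores = nx.adamic_adar_index(G)  # yields (u, v, score) for each non-edge
top10 = sorted(scores, key=lambda t: t[2], reverse=True)[:10]
for u, v, score in top10:
    print(f"{u} <-> {v}: {score:.2f}")
```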

Network Efficiency & Cliques

Global efficiency measures how well information can flow across the entire network (0–1 scale, where 1.0 means every node can reach every other in one hop). Clique analysis reveals tightly-knit subgroups where every member is connected to every other — useful for identifying cohesive teams or echo chambers.
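Both measures are one call each in NetworkX, as this sketch on the karate-club graph shows:

```python
import networkx as nx

G = nx.karate_club_graph()

# Global efficiency: mean inverse shortest-path length; 1.0 only for a
# complete graph where everyone reaches everyone in one hop
eff = nx.global_efficiency(G)
print(round(eff, 3))

# Maximal cliques: subgroups where every member is connected to every other
cliques = list(nx.find_cliques(G))
largest = max(cliques, key=len)
print(len(largest), sorted(largest))
```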

Python Playground

For analysis beyond pre-computed metrics, the AI can generate executable Python code that runs directly in your browser via Pyodide (Python compiled to WebAssembly).

How It Works

  1. Ask a question that requires custom computation (e.g., "Show me a degree distribution histogram")
  2. The AI generates a Python code block with a Run button
  3. Click Run to execute the code in-browser
  4. Results (text output and charts) appear inline in the chat

First Run

The first time you run Python code, Pyodide downloads and initializes (~20 MB, cached for subsequent runs). This takes 10–20 seconds. After that, code execution is near-instant.

Pre-Loaded Variables

Every code block has access to:

  • G — A NetworkX DiGraph/Graph of your current network
    • Node attributes: name, department, activity
    • Edge attributes: weight, count
  • df — A Pandas DataFrame of your raw dataset rows (all CSV columns)
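For example, a generated block for "Show me a degree distribution histogram" might look like this (a sketch using a random stand-in graph; in the playground the pre-loaded G is used directly):

```python
import networkx as nx
import matplotlib.pyplot as plt

# Stand-in for the pre-loaded G
G = nx.gnp_random_graph(50, 0.1, directed=True, seed=1)

degrees = [d for _, d in G.degree()]  # total in + out degree per node
plt.hist(degrees, bins=10)
plt.xlabel("Degree (in + out)")
plt.ylabel("Number of people")
plt.title("Degree distribution")
plt.show()  # the chart appears inline in the chat
```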

Available Libraries

  • networkx — 500+ graph algorithms
  • pandas — data manipulation
  • numpy — numerical computation
  • matplotlib / seaborn — data visualization

Example Prompts

  • "Show me a degree distribution histogram"
  • "Run a diffusion simulation from the most central node"
  • "Detect communities using label propagation and visualize them"
  • "Plot a heatmap of department-to-department connections"
  • "Compare clustering coefficients across departments"

Chat Panel Controls

| Button | Action |
|---|---|
| Expand (⛶) | Grow the panel to fill the main content area |
| Collapse (⧉) | Shrink the panel back to its default size |
| Clear (🗑) | Clear the chat history |
| Close (✕) | Close the chat panel |

Voice Input/Output

When enabled in Settings, you can:

  • Speak your questions using the microphone button
  • Listen to AI responses via text-to-speech
  • Select your preferred voice and speed

Ollama Setup

Ollama lets you run large language models locally. All data stays on your machine — nothing is sent to external servers.

1. Install Ollama

macOS:

brew install ollama

Or download from ollama.com/download.

Linux:

curl -fsSL https://ollama.com/install.sh | sh

Windows: Download the installer from ollama.com/download.

2. Pull a Model

ollama pull llama3.2

Other recommended models for network analysis:

| Model | Size | Notes |
|---|---|---|
| llama3.2 | 2GB | Fast, good for most queries |
| llama3.2:3b | 2GB | 3B-parameter variant |
| llama3.1:8b | 4.7GB | Stronger reasoning |
| mistral | 4.1GB | Good balance of speed and quality |
| qwen2.5:7b | 4.7GB | Strong multilingual support |

3. Start the Ollama Server

ollama serve

Ollama runs on http://localhost:11434 by default. The server must be running when using the AI Assistant.
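You can verify the server is reachable with a short standard-library snippet. /api/tags is Ollama's model-listing endpoint (the same one the troubleshooting section queries with curl); the helper name here is our own:

```python
import json
import urllib.request
import urllib.error

def ollama_models(base_url="http://localhost:11434"):
    """Return installed model names, or None if the server is unreachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=3) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except (urllib.error.URLError, OSError, ValueError):
        return None

models = ollama_models()
print("Ollama not reachable" if models is None else models)
```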

4. Enable Browser Access (CORS)

When running the ONA Dashboard from a web server (including localhost dev servers or Vercel preview), you need to allow cross-origin requests from the browser.

macOS/Linux — set the environment variable before starting Ollama:

OLLAMA_ORIGINS=* ollama serve

To make this permanent on macOS:

launchctl setenv OLLAMA_ORIGINS "*"

Then restart Ollama.

Windows — set the environment variable in System Settings > Environment Variables, or run in PowerShell:

$env:OLLAMA_ORIGINS="*"
ollama serve

Tip: If you open the dashboard directly as a file (file:///...), CORS is typically not required. CORS configuration is only needed when the dashboard is served from a web server.

5. Configure in ONA Dashboard

  1. Go to Settings > AI Assistant
  2. Under AI Brand, select Ollama (Local) from the Edge Hosted group
  3. Set the Model field (default: llama3.2) — must match a model you've pulled
  4. Adjust the Ollama URL if your server is on a different host or port (default: http://localhost:11434)
  5. The AI chat button appears in the header immediately (no API key needed)

Troubleshooting

"Failed to fetch" or network error

  • Verify Ollama is running: curl http://localhost:11434/api/tags
  • Check CORS is configured (see step 4 above)
  • Ensure the URL in settings matches your Ollama server address

"Model not found" error

  • Run ollama list to see installed models
  • Pull the model: ollama pull <model-name>
  • Ensure the model name in settings exactly matches (e.g., llama3.2 not llama-3.2)

Slow responses

  • Larger models require more RAM and a capable GPU
  • Try a smaller model like llama3.2 for faster responses
  • Close other memory-intensive applications

Requirements

  • A graph must be created (source/target columns selected) for network metrics and Python playground G variable
  • Dataset can be loaded without a graph — the AI will have access to raw data via df
  • Without any data loaded, the AI can still answer general ONA methodology questions