Now in Public Beta

Detect LLM Hallucinations

Trace every LLM call, evaluate output alignment with your system prompts, and get real-time alerts when your AI goes off-script. Simple SDK — integrate in under 5 minutes.

quickstart.py
# pip install hallutraceai
from hallutraceai import HalluTrace

ht = HalluTrace(api_key="sk_live_...")

ht.trace(
    session_id="chat-123",
    type="agent",
    input="What is Python?",
    output="Python is a programming language.",
    system_prompt="You are a helpful assistant."
)
# That's it. We handle evaluation automatically.

Features

Everything you need to trust your LLM

From trace ingestion to hallucination scoring to real-time alerts — one platform to monitor and evaluate your AI outputs.

Real-Time Tracing

Capture every LLM call — inputs, outputs, system prompts, model names. Grouped by chat session automatically.

Hallucination Detection

An LLM-as-judge evaluates whether outputs align with your system prompts, scoring from 0 (perfect) to 100 (hallucinated).

Instant Alerts

Get notified via email, SMS, or webhook when hallucination scores exceed your threshold (default: 50).
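For the webhook option, a minimal receiver could look like the sketch below. The payload shape (`score` and `session_id` fields) is an assumption for illustration only; the real schema would come from HalluTrace's webhook documentation.

```python
import json

# NOTE: this payload shape is a guess for illustration; check the real
# webhook schema before relying on any field names.
def handle_alert(raw_body: str, threshold: int = 50) -> bool:
    """Return True if the alert's hallucination score crosses the threshold."""
    payload = json.loads(raw_body)
    score = payload.get("score", 0)
    if score >= threshold:
        print(f"ALERT: session {payload.get('session_id')} scored {score}")
        return True
    return False

# Example body as it might arrive from an alert webhook:
example = json.dumps({"session_id": "chat-123", "score": 72})
handle_alert(example)  # returns True and logs the flagged session
```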

Rich Analytics

Score trends, distributions, model comparisons, session breakdowns — all with animated, interactive charts.

CSV Data Tables

No SDK? Upload CSV files with your LLM data and run hallucination checks directly from the dashboard.
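If you go the CSV route, a file whose columns mirror the SDK's trace fields is a reasonable guess at the expected shape. The exact required columns are an assumption here; confirm them on the dashboard's upload screen.

```python
import csv
import io

# Assumed columns modeled on the SDK's trace() arguments; the dashboard's
# actual upload format may differ.
rows = [
    {
        "session_id": "chat-123",
        "input": "What is Python?",
        "output": "Python is a programming language.",
        "system_prompt": "You are a helpful assistant.",
    },
]

buf = io.StringIO()
writer = csv.DictWriter(
    buf, fieldnames=["session_id", "input", "output", "system_prompt"]
)
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```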

Simple Integration

3 lines of Python. Or use our REST API. Or swap your OpenAI base URL. Works with any LLM provider.
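The REST path might look like the following request. The endpoint URL is a placeholder and the auth scheme is an assumption; the payload fields are taken from the SDK quickstart above.

```python
import json
import urllib.request

# Hypothetical endpoint; the real URL and auth scheme belong to
# HalluTrace's API docs.
API_URL = "https://api.hallutrace.example/v1/traces"

payload = {
    "session_id": "chat-123",
    "type": "agent",
    "input": "What is Python?",
    "output": "Python is a programming language.",
    "system_prompt": "You are a helpful assistant.",
}

req = urllib.request.Request(
    API_URL,
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": "Bearer sk_live_...",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would send it; omitted so the sketch stays offline.
```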

Dashboard

See hallucinations at a glance

Rich analytics, score distributions, and session breakdowns — all animated and interactive.

Project: My AI Chatbot

Last 7 days overview

Healthy · 1,247 evals

Avg Score: 12.4
Sessions: 342
Flagged: 18
Messages: 2.8K

Score Distribution (buckets: 0-10, 11-30, 31-50, 51-70, 71-100)
Session   | Messages | Avg Score
chat-a1b2 | 12       | 8
chat-c3d4 | 5        | 24
chat-e5f6 | 8        | 62
chat-g7h8 | 3        | 15
How It Works

Three steps to hallucination-free AI

01 · Integrate SDK

Install our Python or JS SDK. Add 3 lines of code. Every LLM call is now traced — inputs, outputs, system prompts, and metadata.

02 · Auto Evaluate

Our engine automatically scores each response for hallucination. An LLM-as-judge checks alignment with your system prompt and assigns a score from 0 to 100.

03 · Monitor & Alert

View scores in your dashboard with rich charts. Set thresholds. Get instant alerts via email, SMS, or webhook when things go wrong.

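The judge step can be pictured as prompt construction like the sketch below. HalluTrace's actual evaluation prompt is internal and not public; this template is purely illustrative of the LLM-as-judge pattern.

```python
# Illustrative only: HalluTrace's real judge prompt is not public.
JUDGE_TEMPLATE = """You are an evaluator. Rate how far the RESPONSE strays from
the SYSTEM PROMPT's instructions, from 0 (fully aligned) to 100 (hallucinated).

SYSTEM PROMPT: {system_prompt}
USER INPUT: {user_input}
RESPONSE: {output}

Reply with a single integer."""

def build_judge_prompt(system_prompt: str, user_input: str, output: str) -> str:
    """Fill the judge template; the result would be sent to an evaluator model."""
    return JUDGE_TEMPLATE.format(
        system_prompt=system_prompt, user_input=user_input, output=output
    )

prompt = build_judge_prompt(
    "You are a geography assistant.",
    "What is the capital of France?",
    "The capital of France is Paris.",
)
```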
Pricing

Simple, transparent pricing

Start free. Scale as you grow. Only pay for evaluations you use.

Free

$0 / forever

Perfect for trying out hallucination detection.

Start Free
  • 10,000 evals / month
  • 3 projects
  • 7-day data retention
  • Basic dashboard & charts
  • Email alerts
  • 1 team member
Most Popular

Pay as You Go

$0.001 / eval

Scale without limits. Pay only for what you use.

Get Started
  • Unlimited evals
  • Unlimited projects
  • 90-day data retention
  • Full analytics & charts
  • Email, SMS, webhook alerts
  • 5 team members
  • JSON export API
  • CSV data tables
  • Priority support

Enterprise

Custom / contact us

For teams that need full control and SLA.

Contact Sales
  • Everything in Pay as You Go
  • Unlimited retention
  • Unlimited team members
  • SSO / SAML
  • Custom eval prompts
  • Dedicated support & SLA
  • On-premise option
  • Custom integrations

That's just $1 per 1,000 evaluations. No hidden fees. No minimum commitment.

Integration

Integrate in under 5 minutes

Three lines of code. That's all it takes to start detecting hallucinations in your LLM outputs.

from hallutraceai import HalluTrace

ht = HalluTrace(api_key="sk_live_your_key")

ht.trace(
    session_id="chat-123",
    type="agent",
    input="What is the capital of France?",
    output="The capital of France is Paris.",
    system_prompt="You are a geography assistant."
)
OpenAI
Anthropic
Google Gemini
Mistral
Cohere
LangChain
LlamaIndex
Any LLM
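The "swap your OpenAI base URL" route would typically mean pointing the client at a proxy that forwards requests while recording traces. The proxy URL below is a placeholder, not HalluTrace's real endpoint; check the docs for the actual value.

```python
# Placeholder proxy endpoint; substitute the real one from HalluTrace's docs.
HALLUTRACE_PROXY = "https://proxy.hallutrace.example/v1"

def proxied_client_kwargs(api_key: str, base_url: str = HALLUTRACE_PROXY) -> dict:
    """Kwargs for openai.OpenAI(...) so calls route through the tracing proxy."""
    return {"api_key": api_key, "base_url": base_url}

# Usage (requires the openai package):
# from openai import OpenAI
# client = OpenAI(**proxied_client_kwargs("sk_live_your_key"))
# Completions then flow through the proxy and are traced without code changes.
```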

Stop hallucinations before your users notice

Join teams using HalluTrace AI to monitor, evaluate, and improve their LLM outputs. Start with 10,000 free evaluations every month.