Trace every answer back to its exact evidence in two lines of code. AI agents evaluate via MCP; humans review in the dashboard.
Add SourceMapR to your existing LangChain or LlamaIndex pipeline. Your code stays the same — we provide retrieval observability automatically.
# pip install sourcemapr llama-index
from sourcemapr import init_tracing, stop_tracing
init_tracing(endpoint="http://localhost:5000")
# Your existing LlamaIndex code — unchanged
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
documents = SimpleDirectoryReader("./papers").load_data()
index = VectorStoreIndex.from_documents(documents)
response = index.as_query_engine().query("What is attention?")
print(response)
stop_tracing()
Open http://localhost:5000 to see the full evidence lineage.
Debug retrieval with full evidence tracing — AI agents evaluate, humans review
Trace every response to the exact chunks that were retrieved. See similarity scores, rankings, and complete evidence lineage.
Click any retrieved chunk to see it highlighted in the original PDF. Optimized for PDF files — HTML and other formats are experimental.
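The same lineage is also reachable from your own code through LlamaIndex's standard response object. A minimal sketch, reusing the `response` from the quick start above; note that the `file_name` and `page_label` metadata keys are set by LlamaIndex's PDF reader and may differ for other loaders:
# Pipeline-side view of the retrieved evidence and scores.
for rank, node_with_score in enumerate(response.source_nodes, start=1):
    print(f"#{rank}  score={node_with_score.score:.3f}")
    print(f"    source: {node_with_score.node.metadata.get('file_name')}, "
          f"page {node_with_score.node.metadata.get('page_label')}")
    print(f"    text:   {node_with_score.node.get_content()[:120]!r}")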
See the exact prompt sent to the model, the response, token counts, and latency for every query.
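The dashboard records these automatically. If you want to cross-check the token counts from your own code, LlamaIndex's built-in TokenCountingHandler can run alongside the tracer; a minimal sketch, assuming a cl100k-style tokenizer (match the encoding to your actual model):
import tiktoken
from llama_index.core import Settings
from llama_index.core.callbacks import CallbackManager, TokenCountingHandler
# Count tokens locally to cross-check the numbers shown in the dashboard.
token_counter = TokenCountingHandler(
    tokenizer=tiktoken.get_encoding("cl100k_base").encode  # pick the encoding for your model
)
Settings.callback_manager = CallbackManager([token_counter])
# ... run queries as in the quick start ...
print("prompt tokens:    ", token_counter.prompt_llm_token_count)
print("completion tokens:", token_counter.completion_llm_token_count)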
Organize runs into experiments. Compare chunking strategies, retrievers, and embedding models side by side.
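SourceMapR's exact API for labeling runs isn't shown here, so the `experiment` keyword below is hypothetical, used only to illustrate grouping runs; the LlamaIndex calls are standard. A sketch comparing two chunk sizes as separate runs:
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.node_parser import SentenceSplitter
from sourcemapr import init_tracing, stop_tracing
documents = SimpleDirectoryReader("./papers").load_data()
for chunk_size in (256, 1024):
    # NOTE: the `experiment` keyword is hypothetical, shown only to illustrate run labeling.
    init_tracing(endpoint="http://localhost:5000", experiment=f"chunks-{chunk_size}")
    splitter = SentenceSplitter(chunk_size=chunk_size)
    index = VectorStoreIndex.from_documents(documents, transformations=[splitter])
    print(index.as_query_engine().query("What is attention?"))
    stop_tracing()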
Complete trace from document load → parse → chunk → embed → retrieve → answer with full metadata.
Verify grounding without guessing. See exactly what evidence was used to generate each answer.
AI agents read queries and write evaluations via the Model Context Protocol (MCP). Humans review in the dashboard.
Score relevance, faithfulness, and completeness. AI agents evaluate at scale, humans review and guide.
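A sketch of the agent side using the official `mcp` Python SDK. The `/sse` endpoint path and the `submit_evaluation` tool name and arguments are assumptions for illustration; list the server's actual tools first:
import asyncio
from mcp import ClientSession
from mcp.client.sse import sse_client
async def evaluate():
    async with sse_client("http://localhost:5000/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Discover what the server actually exposes.
            tools = await session.list_tools()
            print([t.name for t in tools.tools])
            # Hypothetical tool call: an agent scores one retrieved answer.
            await session.call_tool("submit_evaluation", {
                "query_id": "q-123",
                "relevance": 4,
                "faithfulness": 5,
                "completeness": 3,
            })
asyncio.run(evaluate())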
Drop-in instrumentation for LangChain and LlamaIndex, with full retrieval evidence tracing; a minimal LangChain sketch follows the component list below.
LlamaIndex (full pipeline instrumentation): SimpleDirectoryReader, SentenceSplitter, VectorStoreIndex, QueryEngine
LangChain (callback-based tracing): PyPDFLoader, RecursiveCharacterTextSplitter, VectorStore retrievers, LLM & ChatModel
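For LangChain, the same two-line setup from the quick start applies. The pipeline below uses standard LangChain components; the file path and model choice are placeholders:
from sourcemapr import init_tracing, stop_tracing
init_tracing(endpoint="http://localhost:5000")
# Your existing LangChain RAG pipeline -- unchanged.
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.vectorstores import FAISS
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter
docs = PyPDFLoader("./papers/attention.pdf").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)
retriever = FAISS.from_documents(chunks, OpenAIEmbeddings()).as_retriever()
question = "What is attention?"
context = "\n\n".join(doc.page_content for doc in retriever.invoke(question))
answer = ChatOpenAI(model="gpt-4o-mini").invoke(
    f"Answer using only this context:\n{context}\n\nQuestion: {question}"
)
print(answer.content)
stop_tracing()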
SourceMapR traces every step of your retrieval pipeline with complete evidence lineage
For every answer, SourceMapR shows you: which chunks were retrieved with similarity scores, where they came from in the original document (with PDF highlighting), what prompt was sent to the LLM, and how many tokens were used. Debug hallucinations and verify grounding without guessing.