Cvent Knowledge Graph
Ultra-fast, zero-cost context for every agent.
A 94-question benchmark comparing v1 (graph + LLM) against KB (context files + LLM), tested on two models: GPT-4.1 and GPT-5.
The Integration Argument
Neither system alone exceeds 57% accuracy. Unified, they project 81% coverage, a 32% improvement over the best single strategy: v1 handles topology and dependency questions, while KB handles tribal knowledge. The two systems complement each other rather than compete.
| Metric | Score | Notes |
|---|---|---|
| v1 Accuracy | 54% | Graph + LLM |
| KB Accuracy | 55% | Context + LLM |
| Unified | 75% | Best of both |
| Coverage | 81% | Questions answered at score ≥0.5 |
| Strategy | Questions answered (score ≥0.5) |
|---|---|
| v1 alone | 53/94 |
| KB alone | 63/94 |
| Unified (best of both) | 78/94 |
The remaining 17% (16 questions) require code-level understanding, which is what the v2 code graph targets.
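The "best of both" unification above can be sketched in a few lines: for each question, take the better of the two per-question scores, then count questions at or above the 0.5 threshold. The scores below are illustrative placeholders, not the actual 94-question benchmark data, and the variable names are assumptions for the sketch.

```python
# Illustrative per-question judge scores in [0, 1]; not the real benchmark data.
v1_scores = [0.9, 0.2, 0.6, 0.1, 0.8]  # graph + LLM
kb_scores = [0.3, 0.7, 0.4, 0.9, 0.8]  # context files + LLM

THRESHOLD = 0.5  # a question counts as "answered" at score >= 0.5

def coverage(scores, threshold=THRESHOLD):
    """Fraction of questions answered at or above the threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

# Unified = take whichever system scored better on each question.
unified = [max(v, k) for v, k in zip(v1_scores, kb_scores)]

print(f"v1 coverage:        {coverage(v1_scores):.0%}")   # 60%
print(f"KB coverage:        {coverage(kb_scores):.0%}")   # 60%
print(f"Unified coverage:   {coverage(unified):.0%}")     # 100%
print(f"Unified mean score: {sum(unified) / len(unified):.2f}")
```

Because the systems fail on different questions, the per-question max recovers coverage neither reaches alone; that disjoint failure pattern is what drives the 53/94 and 63/94 individual results up to 78/94 unified.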
| Metric | GPT-4.1 | GPT-5 | Verdict |
|---|---|---|---|
| Unified Score | 0.745 | 0.748 | Tied |
| v1 Latency | 3.9s | 32s | GPT-4.1 (8x faster) |
| Total Cost (94q) | $4.17 | $10.04 | GPT-4.1 (2.4x cheaper) |
| v1 Success Rate | 97% | 95% | ~Same |
| KB Success Rate | 99% | 100% | ~Same |
Conclusion: GPT-5 delivers no measurable accuracy uplift for this workload (0.748 vs 0.745). Its reasoning tokens (192-6,200 per query) are wasted on "look up X and summarize" tasks. GPT-4.1 is the production choice.
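The verdict ratios in the table follow directly from the measured numbers; a minimal sketch, using the table's values (the dict layout is an assumption of this sketch):

```python
# Values copied from the comparison table above.
gpt41 = {"latency_s": 3.9, "cost_usd": 4.17, "unified": 0.745}
gpt5  = {"latency_s": 32.0, "cost_usd": 10.04, "unified": 0.748}

speedup = gpt5["latency_s"] / gpt41["latency_s"]   # latency ratio
cost_ratio = gpt5["cost_usd"] / gpt41["cost_usd"]  # cost ratio
score_gap = gpt5["unified"] - gpt41["unified"]     # accuracy delta

print(f"GPT-4.1 is {speedup:.1f}x faster")      # ~8.2x, rounded to 8x in the table
print(f"GPT-4.1 is {cost_ratio:.1f}x cheaper")  # ~2.4x
print(f"Accuracy gap: {score_gap:+.3f}")        # +0.003, effectively a tie
```

A 0.003 score gap against an 8x latency and 2.4x cost penalty is what makes the model choice one-sided here.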