‘Fighting fire with fire’ — using LLMs to combat LLM hallucinations
Errors produced by an LLM can be detected by grouping its outputs into clusters of semantically equivalent answers: wide disagreement in meaning across samples signals a likely hallucination. Remarkably, the clustering can be performed by a second LLM, and the method’s efficacy can be evaluated by a third.
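The clustering idea can be sketched in a few lines of Python. This is a toy illustration, not the published method: the `entails` function below is a hypothetical stand-in (a word-overlap check) for the second LLM’s bidirectional-entailment judgement, and the entropy over meaning-clusters is the uncertainty signal.

```python
import math

def entails(a: str, b: str) -> bool:
    # Toy stand-in for the second LLM's entailment judgement:
    # treat two answers as equivalent if their word sets overlap heavily.
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1) > 0.9

def semantic_clusters(answers):
    # Greedy clustering: an answer joins the first cluster whose
    # representative it bidirectionally entails, else starts a new one.
    clusters = []
    for ans in answers:
        for cluster in clusters:
            rep = cluster[0]
            if entails(ans, rep) and entails(rep, ans):
                cluster.append(ans)
                break
        else:
            clusters.append([ans])
    return clusters

def semantic_entropy(answers):
    # Entropy over meaning-clusters: high entropy means the model's
    # sampled answers disagree in meaning, flagging a likely error.
    clusters = semantic_clusters(answers)
    n = len(answers)
    return -sum((len(c) / n) * math.log(len(c) / n) for c in clusters)

samples = [
    "Paris is the capital of France",
    "The capital of France is Paris",
    "Lyon is the capital of France",
]
print(semantic_entropy(samples))  # two meaning-clusters -> nonzero entropy
```

A model that answers consistently yields a single cluster and zero entropy; conflicting answers split into several clusters and push the entropy up, which is the cue to distrust the output.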