Hallucination is inevitable: An innate limitation of large language models

Note

Proves via diagonalization that no computable LLM can avoid hallucinating with respect to every computable ground-truth function; the authors conclude that external mechanisms such as symbolic reasoning are required to mitigate it.
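
The inevitability claim rests on a standard diagonal argument. A minimal sketch in my own notation (the enumeration $h_i$, inputs $s_i$, and ground truth $f$ are illustrative, not the paper's exact formalism):

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Minimal sketch of the diagonal argument; notation is mine, not the paper's.
Fix a computable enumeration $h_1, h_2, \dots$ of candidate LLMs and an
enumeration $s_1, s_2, \dots$ of input strings. Construct a ground-truth
function $f$ satisfying
\[
  f(s_i) \neq h_i(s_i) \quad \text{for all } i \in \mathbb{N},
\]
e.g.\ by setting $f(s_i)$ to $h_i(s_i)$ with one extra symbol appended; $f$
is then computable because each $h_i(s_i)$ is. Every $h_i$ thus disagrees
with $f$ on at least one input, i.e.\ hallucinates relative to this ground
truth, so no LLM in the enumeration is hallucination-free.
\end{document}
```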

Citation Key

xu2024hallucination