Do LLMs Construct World Models? A Cognitive Science Investigation
9 min read
#cognitive-science #llms #world-models #philosophy-of-mind #ai-research #gpt-4 #symbol-grounding #machine-learning
Are large language models merely stochastic parrots, or do they develop genuine internal representations of the world? This investigation examines evidence from Othello-GPT, spatial and temporal encoding in LLMs, and the symbol grounding problem to explore what cognitive science can tell us about whether these systems understand anything at all.