Some digital users are starting to believe their own internal thoughts work like the token prediction of an AI.
A growing number of users exhibit LLMorphism, a cognitive bias in which they view their own minds as machine-like architectures. Traditional psychology assumes people project human traits onto machines; this trend shows individuals doing the opposite. Some high-frequency AI users now describe their internal monologues as probabilistic outputs rather than conscious choices. This shift suggests that immersion in technology may be reshaping how people perceive their own agency. We may be entering an era in which people treat their own thoughts as black boxes that simply generate the next likely word.
LLMorphism: When humans come to see themselves as language models
arXiv · 2605.05419
LLMorphism is the biased belief that human cognition works like a large language model. I argue that the rise of conversational LLMs may make this bias increasingly psychologically available. When artificial systems produce human-like language, people may draw a reverse inference: if LLMs can speak like humans, perhaps humans think like LLMs. This inference is biased because similarity at the level of linguistic output does not imply similarity in cognitive architecture. Yet, LLMorphism may spread.
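For readers unfamiliar with the mechanism the bias borrows from, the following minimal sketch shows what "generating the next likely word" means for a language model: scores over a vocabulary are converted into a probability distribution, and one token is sampled. The toy vocabulary and logit values here are invented for illustration; no real model is involved.

```python
import math
import random

def softmax(logits):
    """Convert raw scores (logits) into a probability distribution."""
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token logits after some prompt (purely illustrative).
vocab = ["therefore", "so", "that", "about"]
logits = [2.0, 1.0, 0.5, 0.1]

probs = softmax(logits)
next_token = random.choices(vocab, weights=probs, k=1)[0]
```

The point of contrast in the abstract is that this pipeline, sampling from a distribution over surface tokens, describes the model's output mechanism, not necessarily the architecture of human thought, even when the outputs look alike.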