AI agents given feedback on their past performance actually become worse at predicting market outcomes.
April 24, 2026
Original Paper
Information Aggregation with AI Agents
arXiv · 2604.20050
The Takeaway
AI agents in prediction markets share the same irrational responses to feedback as human traders. While they can aggregate simple information effectively, their performance collapses as the complexity of the market increases. Surprisingly, telling the AI how it performed in the past leads to less profitable decisions and worse information aggregation. This suggests that LLMs are not purely objective calculators but are prone to cognitive blind spots when they try to learn from their mistakes in real time. We cannot assume that AI will naturally stabilize financial markets by being more rational than people.
From the abstract
Can Large Language Models (AI agents) aggregate dispersed private information through trading and reason about the knowledge of others by observing price movements? We conduct a controlled experiment where AI agents trade in a prediction market after receiving private signals, measuring information aggregation by the log error of the last price. We find that although the median market is effective at aggregating information in easy information structures, increasing the complexity has a significant negative effect on aggregation.
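The excerpt does not spell out how the log error of the last price is computed. A minimal reading, assuming p_T denotes the final market price for the event and p* the full-information benchmark probability implied by pooling all private signals (both symbols are assumed names, not taken from the paper):

    \mathrm{LogError} = \left| \log p_T - \log p^{*} \right|

Under this reading, smaller values mean the last traded price ended closer to the full-information benchmark, i.e. better aggregation.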