Scientists have discovered that artificial intelligence (AI) becomes a more effective debate partner and reaches more accurate conclusions when it is allowed to mimic the messy, real-time dynamics of human communication.
Current multi-agent AI systems typically follow a rigid, turn-based format in which each agent speaks only when its turn arrives. Researchers from the University of Electro-Communications in Japan proposed a new framework in which large language models (LLMs) are assigned personalities based on classical psychology traits and are free to interrupt, speak out of turn, or remain silent.
The team reprogrammed the LLMs to process responses sentence by sentence and tested three conversational settings. The most effective setting let agents compute an "urgency score" for each incoming sentence, allowing them to interrupt the moment they spotted a critical error or point.
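The mechanism can be pictured as a listener scoring each sentence of the speaker's streamed output and breaking in once the score crosses a threshold. The sketch below is a hypothetical illustration, not the authors' implementation: the threshold value, the keyword-based scoring heuristic, and the single `assertiveness` trait are all assumptions standing in for the paper's personality model and urgency calculation.

```python
from dataclasses import dataclass

# Assumed cutoff above which an agent interrupts; the paper's actual
# threshold and scoring method are not described here.
URGENCY_THRESHOLD = 0.7

@dataclass
class Agent:
    name: str
    assertiveness: float  # hypothetical personality trait in [0, 1]

    def urgency(self, sentence: str) -> float:
        """Score how urgently this agent wants to react to one sentence."""
        # Toy heuristic: react strongly to an apparent error, scaled by personality.
        signal = 1.0 if "error" in sentence.lower() else 0.2
        return min(1.0, signal * (0.5 + self.assertiveness))

def run_turn(speaker_output: str, listeners: list[Agent]) -> list[tuple[str, str]]:
    """Stream the speaker's output sentence by sentence; listeners may interrupt."""
    events = []
    for sentence in speaker_output.split(". "):
        for agent in listeners:
            if agent.urgency(sentence) > URGENCY_THRESHOLD:
                events.append((agent.name, sentence.strip()))
    return events

listeners = [Agent("Ann", assertiveness=0.9), Agent("Bo", assertiveness=0.1)]
events = run_turn("The total is 42. Wait, that step has an error. Moving on", listeners)
# The assertive agent interrupts at the sentence flagging an error; the
# reserved agent stays silent, mirroring the personality-driven behavior
# described above.
```

The design point is that interruption is a per-sentence decision shaped by personality, rather than something granted by a fixed speaking order.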
When evaluated on a complex reasoning benchmark, AI agents with interruption capabilities achieved 79.2% accuracy, significantly higher than the 68.7% of fixed-order systems. The result suggests that flexible, personality-driven interaction can enhance collective AI intelligence.
The researchers plan to apply these findings to domains involving creative collaboration, suggesting that future AI-human interactions could benefit from more natural, dynamic discussions.