Chatbots powered by artificial intelligence are invading our information, work, and educational spaces. Reports of teenagers growing addicted to their interactions with chatbots, some driven to suicide, are on the rise. A response is emerging: collective intelligence. Not as a gentle utopia, but as a concrete, proven strategy capable of counterbalancing forces that, left unchecked, risk overwhelming us.
A structural danger: the concentration of knowledge
The large language models that power modern chatbots share a troubling trait: a handful of tech companies decide what content trains these systems, what values they encode, and what truths they prioritize. The user, meanwhile, receives fluent, confident responses, without always perceiving the biases, gaps, or commercial interests that shape them.
The risks are manifold: large-scale disinformation, the erosion of critical thinking, the homogenization of opinions, and a growing dependence on algorithmic oracles that are not accountable to society.
Collective intelligence as a counterforce
This is where collective intelligence comes in: the ability of a group to think, decide, and act more effectively than the sum of its individual members. It relies on diverse perspectives, open deliberation, and cooperative structures that distribute power rather than concentrating it.
In the face of chatbots, this intelligence takes several concrete forms. Collaborative encyclopedias like Wikipedia demonstrate that a global community of contributors, subject to transparent rules and peer review, can produce a body of knowledge more reliable and nuanced than any algorithm trained in isolation. Collective wisdom here is the result of lively debate, not of statistical optimization.
Initiatives such as Hugging Face, EleutherAI, and citizen data cooperatives seek to develop AI models whose governance belongs to everyone. When users participate in defining authorized uses, correcting biases, and overseeing deployments, the chatbot ceases to be a commercial product and becomes a common good.
These cooperative approaches also impose a welcome slowdown on the dizzying pace of technological deployment. They require consensus, transparency, and the inclusion of marginalized voices. This "democratic friction" is not a weakness; it is a safeguard against irreversible decisions made in the opacity of data centers.
Educating together to think together
The collective response also relies on education. Digital media literacy programs, co-developed by teachers, parents, researchers, and students, help cultivate citizens capable of questioning a chatbot's responses, identifying its sources, and spotting its blind spots. Critical thinking is cultivated in community, through dialogue and productive disagreement, precisely what solitary interaction with a virtual assistant cannot provide.
Collective intelligence is not opposed to technology. It redirects its purpose. Where unregulated chatbots promise efficiency for a few, cooperation promises clarity for all.
In a world saturated with algorithms, our most precious resource may be the oldest: the human capacity to think together, to contradict one another, to doubt, and to find, in this fertile chaos, a truth more robust than that of any machine.