The prevalence of artificial intelligence (AI) tools that filter the information given to internet users, such as recommender systems and various personalizers, may be creating troubling long-term side effects alongside their obvious short-term conveniences. Many worry that these automated influencers can subtly and unwittingly nudge individuals toward conformity, thereby (somewhat paradoxically) restricting the choices of each agent and/or the population as a whole. In its various guises, this problem has labels such as filter bubble, echo chamber, and personalization polarization. One key danger of diversity reduction is that it plays into the hands of a cadre of self-interested online actors who can leverage conformity to more easily predict and then control users' sentiments and behaviors, often in the direction of increased conformity and even greater ease of control. This emerging positive feedback loop and the compliance that fuels it are the focal points of this article, which presents several simple, abstract, agent-based models of both peer-to-peer and AI-to-user influence. One of these AI systems functions as a collaborative filter, whereas the other represents an actor whose influential power derives directly from its ability to predict user behavior. Many versions of the model, with assorted parameter settings, display emergent polarization or universal convergence, but collaborative filtering exerts a weaker homogenizing force than expected. In addition, the combination of basic agents and a self-interested AI predictor yields an emergent positive feedback that can drive the agent population to complete conformity.
