Mechanism of Harm

Algorithmic Recommendation

The feed chooses what the user thinks about next.

Recommendation systems decide which post, video or story a user sees next, based on engagement-prediction models trained on the user's prior behaviour and the behaviour of similar users. A 2023 audit of YouTube's recommender using 100,000 sock-puppet accounts found that while these systems do not always push users toward greater ideological extremity, they reliably increase the proportion of recommendations drawn from problematic channels — Intellectual Dark Web, Alt-right, Conspiracy and QAnon — particularly for right-leaning users. The system optimises for what holds attention, not for what is true or beneficial.

When the feed chooses what comes next, the user no longer chooses what they think about. The platform's engagement objective replaces the user's information goals as the principal selector of content.
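The selection logic described above can be sketched in a few lines. This is a minimal illustration, not any platform's actual code: the class and field names (`Candidate`, `predicted_watch_seconds`, `accuracy_score`) are hypothetical. The point it makes is structural — the ranking objective is predicted engagement, and accuracy never enters the sort key.

```python
# Hypothetical sketch of engagement-optimised ranking.
# All names are illustrative; no real platform API is being modelled.
from dataclasses import dataclass


@dataclass
class Candidate:
    item_id: str
    predicted_watch_seconds: float  # output of an engagement-prediction model
    accuracy_score: float           # known to the system, but unused below


def rank_feed(candidates):
    # The ranker sorts purely on predicted engagement;
    # accuracy_score plays no role in the objective.
    return sorted(candidates,
                  key=lambda c: c.predicted_watch_seconds,
                  reverse=True)


feed = [
    Candidate("calm-explainer", 40.0, 0.9),
    Candidate("outrage-clip", 180.0, 0.2),
]
print(rank_feed(feed)[0].item_id)  # the high-engagement item surfaces first
```

Under this objective, the low-accuracy, high-engagement item is always served first, which is the substitution the paragraph describes: the platform's engagement target stands in for the user's information goals.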

Engagement-Optimised Information Diet
2024

Engagement-prediction models reliably select content that holds attention rather than content that is accurate or beneficial.

Source: Multiple peer-reviewed audits of recommendation systems 2022-2024
Recommendation Drift to Problematic Channels
2023

The algorithm produces a growing proportion of recommendations from Intellectual Dark Web, Alt-right, Conspiracy and QAnon channels, particularly for right-leaning users.

Source: 2023 YouTube algorithm audit using 100,000 sock-puppet accounts