In the wake of a disturbing incident involving sexually explicit AI-generated images of Taylor Swift that went viral on X (formerly Twitter), U.S. Sen. John Hickenlooper (D-CO) has intensified calls for more stringent oversight of social media platforms, stating: “Today’s model of self-policing for online platforms is not enough to avoid putting people’s children, teenagers, and loved ones at risk.”
In a letter sent last week to the CEOs of X and Meta, Hickenlooper argues that current content moderation practices are insufficient, particularly in their slow response to serious safety risks. The letter outlines a set of questions aimed at understanding the platforms’ procedures for addressing non-consensual explicit images and deepfakes — fake videos, images, or audio generated by artificial intelligence.
The Taylor Swift deepfakes, which circulated for roughly 17 hours before X removed them, not only elicited a direct response from Hickenlooper but also fed into a broader national examination of social media’s impact on user safety.
At Senate Judiciary Committee hearings this past week, executives from X, Meta, TikTok, and Discord were scrutinized over their platforms’ failures to protect children online. The hearings featured testimony and apologies, most notably from Meta CEO Mark Zuckerberg, who publicly apologized for the harm social media has caused children.
Hickenlooper’s letter, however, calls for concrete action beyond apologies, pressing: “Everyone should feel heard when they raise a complaint with a platform. Responses to minors’ safety must meet a much higher sense of urgency.”
As the debate intensifies in Washington, Hickenlooper’s stance reflects a growing legislative impetus to hold social media titans accountable for the digital well-being of youth.
Closer to home, the introduction of new legislation at the state Capitol, as reported here, aims to mitigate social media’s harmful effects on youth in Colorado.