On Tuesday, Senators John Hickenlooper (D-CO) and John Thune (R-SD) sent a joint letter to Director Arati Prabhakar of the Office of Science and Technology Policy (OSTP) outlining concerns around authenticating media generated by artificial intelligence (AI). The bipartisan initiative joins efforts such as Senators Richard Blumenthal (D-CT) and Josh Hawley’s (R-MO) proposal for a legal framework around AI regulation as well as the recent agreement of leading AI companies to voluntary safeguards. Members of Congress are racing to establish protocols around AI, even as the technology continues to evolve.
Thus far, few guidelines have been established for overseeing such a loosely defined technology, aside from a 48-page document published in January by the National Institute of Standards and Technology entitled “Artificial Intelligence Risk Management Framework” (AI RMF 1.0).
Designed to work in tandem with other AI risk management initiatives, the document seeks to establish a basis for understanding both AI technology and its imminent risks. One such criterion establishes that “trustworthy” AI systems are “valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed.” By design, AI RMF 1.0 outlines general principles of AI regulation that are suggestive and non-binding for any organization.
According to a press release, Hickenlooper and Thune listed three questions in the letter to Director Prabhakar, highlighting the issue of identifying AI-generated media: “1. What current or planned federal research or pilot initiatives will focus on advancing content provenance and certifying the authenticity of AI-generated works? 2. What techniques are being explored to prevent watermarks or content authenticity tools from being removed, manipulated, or counterfeited? 3. How will watermarking techniques differ for various types of AI-generated content (e.g., audio, video, text, image)?”
Hickenlooper, serving as chair of the Senate Commerce, Science, and Transportation (CST) Subcommittee on Consumer Protection, Product Safety, and Data Security, has established AI policy as one of his top objectives. Through sending letters to Acting National Cyber Director Kemba Walden, Federal Trade Commission (FTC) Chair Lina Khan, and top AI companies, Hickenlooper has publicly consulted a range of agencies and actors on how to regulate AI.
Despite widespread bipartisan interest in AI regulation, Senator Ted Cruz (R-TX) issued a letter denouncing the overregulation of AI on the same morning as Hickenlooper’s missive. Addressing FTC Chair Lina Khan, Cruz referenced a leaked FTC directive on AI bias, writing, “such regulation would represent an astonishing expansion of power over otherwise-benign products.”
Characterizing such regulation as “speech police” and “extralegal,” Cruz bemoaned the infringement of First Amendment rights.
Later on Tuesday, Hickenlooper held a subcommittee hearing along with Senators Maria Cantwell (D-WA) and Marsha Blackburn (R-TN) on “The Need for Transparency in Artificial Intelligence,” which offered statements from four experts on AI regulation: Executive Director Sam Gregory of the media human rights organization WITNESS; Carnegie Mellon Professor Dr. Ramayya Krishnan; Information Technology Industry Council Executive Vice President of Policy Rob Strayer; and The Software Alliance Chief Executive Officer Victoria Espinel. All four underscored both the incredible economic potential and widespread risks of AI. Espinel spoke to this tension during her statement, saying, “To realize [AI’s] economic benefits, consumers and businesses must trust that AI is developed and deployed responsibly.”
Hickenlooper’s letter and hearing come in the midst of what Blackburn called “AI week on the Hill,” with several other hearings scheduled over subsequent days.