Deepfake Drake, AI leaders @ the White House, + ChatGPT in your Congressman's office.
S1E11 | Policy and other updates in the US AI landscape.
Hi, Friends —
✰ Welcome to [our digital disco]! Keep scrolling for this week’s key themes in tech news and other misc. thoughts (Snacktime).
✰ You can also check out last week’s newsletter here, plus my deep-dive into the future of business + social media here.
Notable Themes
☞ Congressional offices are hopping on the AI bandwagon.
US Congressional offices acquired 40 licenses for OpenAI's ChatGPT tool and are actively exploring AI's potential use cases. The licenses will be used for content creation, summarization, and document drafting, helping congressional offices boost productivity and efficiency. The move marks an early adoption of AI in the policymaking process, even as policymakers debate how the technology should be used and regulated. (See more on US AI policy below.)
Why does it matter? The United States does not have a centralized legislative AI initiative like the EU's AI Act. Instead, the US approach to AI involves a combination of congressional legislation, White House guidance, and action by federal agencies. Congress has primarily focused on enhancing the government's use of AI rather than regulating industry use. (The key exception here is a new bill seeking to mandate the disclosure of AI-generated political ad content, in response to a phony AI-generated attack ad released by the Republican National Committee.)
Pros: This development underscores the growing recognition among governmental institutions, like Congress, of AI's transformative capabilities. By embracing tools like ChatGPT, lawmakers aim to streamline processes, communicate effectively, and engage with constituents. It may also exemplify a broader trend of governments adapting to the digital era and leveraging emerging technologies to better serve the public. If true, the latter would be a key, impactful shift for government organizations, which have long been criticized as slow-moving due to their bureaucratic structures and decision-making.
Cons: While ChatGPT use may promote improved, faster processes within government offices, Congress's focus on its own use of AI may come at the expense of regulation and oversight of the private sector. The uneven implementation of AI policies across gov't agencies, and the potential for shifting interpretations, may hinder long-term planning and stability for those using the technology. Moreover, many AI algorithms, particularly those using deep learning techniques, operate as black boxes, making it challenging to understand how they arrive at their decisions or predictions. This lack of transparency raises concerns about accountability and bias, issues that are especially acute in policymaking.
☞ The White House is thinking more deeply about AI, too.
The Biden-Harris Administration took two big steps in the AI sector on Friday. First, the White House announced actions to promote responsible US innovation in artificial intelligence, including a $140M investment in AI research, upcoming rules for gov't use of AI, and public evaluation of the language models of Google, OpenAI, and others. Later that day, VP Harris also held a meeting with CEOs from leading AI companies to discuss concerns about the risks associated with AI. The meeting focused on the transparency, safety, and security of AI systems, and on the development of appropriate protections. The CEOs committed to engaging with the government to ensure Americans can benefit from AI innovation.
Why does it matter? The White House aims to support technological breakthroughs while ensuring appropriate safeguards are in place. This expressed commitment to responsible AI is essential to i) building public trust in the tech and public sectors, and ii) incentivizing the private sector to address ethical concerns. However, in contrast to AI policies in other countries, the US approach leans toward stimulating and guiding the market before any immediate gov't intervention, a difference that may make it harder for nations to collaborate on AI safety and ethical development. Striking the right balance between encouraging innovation and implementing protections is a delicate task, as overly restrictive policies may hinder innovation and economic growth.
Pros: The government's expressed commitment to responsible AI can set a positive example for other stakeholders and promote ethical practices. Such safeguards and protections for AI are crucial in ensuring that the technology is used responsibly, as AI raises considerable risks including privacy and data security, discriminatory treatment, and the manipulation of opinions and behaviors. Moreover, the public evaluation of AI systems and the emphasis on transparency can help build public trust in — and promote robust reliability of — AI technologies by subjecting them to rigorous testing and scrutiny.
Cons: The White House announcements lacked specific details on what AI safeguards are required, or what private sector engagement with the government will involve. This gap leaves room for uncertainty about the actual impact and effectiveness of the initiatives. Moreover, despite the White House’s recent actions and its Blueprint for an AI Bill of Rights (published Oct. 2022), the US still lacks a flagship legislative AI initiative. This absence limits the scope of US AI regulation, particularly in the private sector, and leaves federal agencies to rely on existing authorities. The likely outcome? Fragmented regulation, and gaps in addressing the ethical concerns and risks associated with AI.
☞ Deepfake Drake
"Heart on My Sleeve," an AI-generated song featuring the voices of Drake and The Weeknd, accumulated millions of views across TikTok, Spotify, and Youtube in April — before being removed by Universal Music Group (UMG). Why? The song was generated by AI tools, meaning the artists weren’t involved in its creation. The artists, and UMG, were also missing out on profits driven by the song. AI-generated music challenges traditional notions of what it means to be a musician or composer, and who owns the rights to a song or artist, because emerging AI algorithms can mimic the styles and voices of established artists. The rise of AI-generated music raises concerns about intellectual property and fair compensation.
Why does it matter? The use of AI models to mimic famous voices can blur the lines between authenticity and deception, impacting the way people perceive and consume music. Moreover, the lack of clear copyright guidelines for generative AI leaves artists and creators vulnerable to unauthorized use of their work. UMG's takedown efforts could spark industry-wide initiatives, and ongoing legal disputes will likely shape the future landscape of AI-generated music.
Pros: AI-generated music has the potential to augment human creativity, expand musical horizons, and enhance the overall music ecosystem. Artists and companies (including UMG) are already leveraging the tech as a wellspring of creativity and inspiration, drawing on novel melodies and arrangements that can ignite new musical ideas. AI-generated music also opens up new collaborative possibilities, allowing artists to work alongside algorithms and create innovative hybrid musical outputs. (E.g., David Guetta’s creation of a song with AI-generated Eminem verses.)
Cons: The rise of AI-generated music presents various ethical and legal challenges for the music industry. It is often difficult to identify the original creators of these songs or the artists whose voices they imitate. This lack of transparency can lead to misrepresentation and confusion for listeners, and can harm artists’ reputations and livelihoods. Copyright law also lacks specific guidelines for generative AI, and the complexity and subjectivity of AI music may lead to inconsistent judgments and legal disputes, negatively impacting artists and AI developers alike.
Snacktime
📓 Reading: Michael Spencer’s World Economic Forum Report, Warnings from Microsoft and Hinton, Future of Jobs Report.
♬ Listening to: So Tied Up - Stint Remix.
✰ Thinking about: Introductions, interactions, and public spaces. It’s interesting how technology has allowed us to expand communities far beyond geographic borders — yet face-to-face connections remain the most valuable way to form bonds. It seems to me that the more we realize this, the more we build digital spheres not just to support digital bonds, but to make the in-person ones more likely or fruitful, too… Like the pendulum of relationship-building is swinging back to the human side of things.
Next up
✎ The future of internet politics and AI-generated ads. Oh, and whatever else happens in tech over the coming week.
✿ As always — any and all feedback is welcome! In the meantime: give someone a hug and say an ‘I love you’ this week. Make the world a little happier.