<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<?xml-stylesheet href="/styles.xsl" type="text/xsl"?>
<rss version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:podcast="https://podcastindex.org/namespace/1.0" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>AI Safety Newsletter</title>
    <language>en-gb</language>
    <copyright>© 2026 All rights reserved</copyright>
    <itunes:author>Center for AI Safety</itunes:author>
    <itunes:type>episodic</itunes:type>
    <itunes:explicit>false</itunes:explicit>
    <description>Narrations of the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. 

This podcast also contains narrations of some of our publications.

ABOUT US

The Center for AI Safety (CAIS) is a San Francisco-based research and field-building nonprofit. We believe that artificial intelligence has the potential to profoundly benefit the world, provided that we can develop and use it safely. However, in contrast to the dramatic progress in AI, many basic problems in AI safety have yet to be solved. Our mission is to reduce societal-scale risks associated with AI by conducting safety research, building the field of AI safety researchers, and advocating for safety standards.

Learn more at https://safe.ai</description>
    <image>
      <url>https://files.type3.audio/cais/newsletter--ai-safety.jpg</url>
      <title>AI Safety Newsletter</title>
      <link>https://newsletter.safe.ai/</link>
    </image>
    <item>
      <title>AISN #68: Moltbook Exposes Risky AI Behavior</title>
<description>&lt;p&gt; Plus: The Pentagon Accelerates AI and GPT-5.2 solves open mathematics problems.&lt;/p&gt; &lt;p&gt; Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.&lt;/p&gt;&lt;p&gt; In this edition, we discuss the AI agent social network Moltbook, the Pentagon's new “AI-First” strategy, and recent math breakthroughs powered by LLMs.&lt;/p&gt;&lt;p&gt; Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.&lt;/p&gt;&lt;p&gt; We’re hiring an editor! Help us surface the most compelling stories in AI safety and shape how the world understands this fast-moving field.&lt;/p&gt;&lt;p&gt; Other opportunities at CAIS include: Research Engineer, Research Scientist, Director of Development, Special Projects Associate, and Special Projects Manager. If you’re interested in working on reducing AI risk alongside a talented, mission-driven team, consider applying!&lt;/p&gt;&lt;p&gt;&lt;strong&gt; Moltbook Sparks Safety Concerns&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;Screen capture from Moltbook's home page.&lt;/p&gt;&lt;p&gt; Moltbook is a new social network for AI agents. From nearly the moment it went live, human observers have noted numerous troubling patterns in what's being posted.&lt;/p&gt;&lt;p&gt; How Moltbook works. Moltbook is a Reddit-style social network built on a framework that lets personal AI assistants run locally and accept tasks via messaging platforms. Agents check Moltbook regularly (i.e., every [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(01:04) Moltbook Sparks Safety Concerns&lt;/p&gt;&lt;p&gt;(05:10) Pentagon Mandates AI-First Strategy&lt;/p&gt;&lt;p&gt;(07:59) AI Solves Open Math Problems&lt;/p&gt;&lt;p&gt;(10:41) In Other News&lt;/p&gt;&lt;p&gt;(10:45) Government&lt;/p&gt;&lt;p&gt;(11:31) Industry&lt;/p&gt;&lt;p&gt;(13:06) Civil Society&lt;/p&gt;&lt;p&gt;(14:52) Discussion about this post&lt;/p&gt;&lt;p&gt;(14:56) Ready for more?&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          February 2nd, 2026 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-68-moltbook?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-68-moltbook&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;
       &lt;p&gt;---&lt;/p&gt;&lt;div style="max-width: 100%;"&gt;&lt;p&gt;&lt;strong&gt;Images from the article:&lt;/strong&gt;&lt;/p&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!h6E6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ed1aba3-f71d-4ad3-b3bc-083ba69cddf1_1176x652.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/$s_!h6E6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ed1aba3-f71d-4ad3-b3bc-083ba69cddf1_1176x652.png" alt="Screen capture from Moltbook’s home page." style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;hr style="margin-top: 24px; margin-bottom: 24px;" /&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!dA7j!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d55db00-2460-4e2c-bd89-ac9e81df5bc9_1600x554.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/$s_!dA7j!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d55db00-2460-4e2c-bd89-ac9e81df5bc9_1600x554.png" alt="Screen capture from the memorandum titled “Artificial Intelligence Strategy for the Department of War.”" style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;hr style="margin-top: 24px; margin-bottom: 24px;" /&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!J1Cs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F71c8b149-80e8-4a96-82a8-566b40cbe377_1408x510.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/$s_!J1Cs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F71c8b149-80e8-4a96-82a8-566b40cbe377_1408x510.png" alt="Formulation of Erdős Problem #397." style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;p&gt;&lt;em&gt;Apple Podcasts and Spotify do not show images in the episode description. Try &lt;a href="https://pocketcasts.com/" target="_blank" rel="noreferrer"&gt;Pocket Casts&lt;/a&gt;, or another podcast app.&lt;/em&gt;&lt;/p&gt;&lt;/div&gt;</description>
      <pubDate>Mon, 02 Feb 2026 15:39:55 GMT</pubDate>
      <guid isPermaLink="false">5ac7ba53-6f7f-4dde-89ff-65e6e88766ae</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/5ac7ba53-6f7f-4dde-89ff-65e6e88766ae.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Nick%2520Stockton%252C%2520Dan%2520Hendrycks%252C%2520Center%2520for%2520AI%2520Safety&amp;title=AISN%20%2368%3A%20Moltbook%20Exposes%20Risky%20AI%20Behavior&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-68-moltbook&amp;created_at=2026-02-02T15%3A39%3A47.005754%2B00%3A00&amp;duration=922" length="0" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-68-moltbook</link>
      <itunes:duration>922</itunes:duration>
    </item>
    <item>
      <title>AISN #67: Trump’s preemption order, H200s go to China, and new frontier AI from OpenAI and DeepSeek</title>
<description>&lt;p&gt; Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.&lt;/p&gt;&lt;p&gt; In this edition, we discuss President Trump's executive order targeting state AI laws, Nvidia's approval to sell high-end accelerators to China, and new frontier models from OpenAI and DeepSeek.&lt;/p&gt;&lt;p&gt; Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.&lt;/p&gt;&lt;p&gt;&lt;strong&gt; Executive Order Blocks State AI Laws&lt;/strong&gt;&lt;/p&gt;&lt;p&gt; U.S. President Donald Trump issued an executive order aimed at halting state efforts to regulate AI. The order, which differs from a version leaked last month, leverages federal funding and enforcement to evaluate, challenge, and limit state laws. It caps off a year in which several ambitious state AI proposals were either watered down or vetoed outright.&lt;/p&gt;&lt;p&gt; A push for regulatory uniformity. The order aims to reduce regulatory friction for companies by eliminating the patchwork of state-level regimes and limiting states' power to affect commerce beyond their own borders. It calls for replacing them with a single, unspecified federal framework.&lt;/p&gt;&lt;p&gt; [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:34) Executive Order Blocks State AI Laws&lt;/p&gt;&lt;p&gt;(03:42) US Permits Nvidia to Sell H200s to China&lt;/p&gt;&lt;p&gt;(06:00) ChatGPT-5.2 and DeepSeek-v3.2 Arrive&lt;/p&gt;&lt;p&gt;(08:23) In Other News&lt;/p&gt;&lt;p&gt;(08:27) Industry&lt;/p&gt;&lt;p&gt;(09:13) Civil Society&lt;/p&gt;&lt;p&gt;(09:58) Government&lt;/p&gt;&lt;p&gt;(11:07) Discussion about this post&lt;/p&gt;&lt;p&gt;(11:11) Ready for more?&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          December 17th, 2025 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-67-trumps-preemption?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-67-trumps-preemption&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;
       &lt;p&gt;---&lt;/p&gt;&lt;div style="max-width: 100%;"&gt;&lt;p&gt;&lt;strong&gt;Images from the article:&lt;/strong&gt;&lt;/p&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!3aKv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcdd982c6-a398-4819-813c-22607d008dff_1852x606.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/$s_!3aKv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcdd982c6-a398-4819-813c-22607d008dff_1852x606.png" alt="" style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;hr style="margin-top: 24px; margin-bottom: 24px;" /&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!GTIk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4977231b-377d-4ddd-a0d9-e72d22f8221d_1302x822.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/$s_!GTIk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4977231b-377d-4ddd-a0d9-e72d22f8221d_1302x822.png" alt="" style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;hr style="margin-top: 24px; margin-bottom: 24px;" /&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!QxSp!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F944a1439-ef58-449e-8bcd-a6e78b40c29d_1326x828.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/$s_!QxSp!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F944a1439-ef58-449e-8bcd-a6e78b40c29d_1326x828.png" alt="" style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;p&gt;&lt;em&gt;Apple Podcasts and Spotify do not show images in the episode description. Try &lt;a href="https://pocketcasts.com/" target="_blank" rel="noreferrer"&gt;Pocket Casts&lt;/a&gt;, or another podcast app.&lt;/em&gt;&lt;/p&gt;&lt;/div&gt;</description>
      <pubDate>Wed, 17 Dec 2025 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">1dcc17e2-5130-442a-8dd8-9251c737fced</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/1dcc17e2-5130-442a-8dd8-9251c737fced.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Nick%2520Stockton%252C%2520Dan%2520Hendrycks%252C%2520Center%2520for%2520AI%2520Safety&amp;title=AISN%20%2367%3A%20Trump%E2%80%99s%20preemption%20order%2C%20H200s%20go%20to%20China%2C%20and%20new%20frontier%20AI%20from%20OpenAI%20and%20DeepSeek&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-67-trumps-preemption&amp;created_at=2025-12-18T17%3A05%3A53.861426%2B00%3A00&amp;duration=698" length="0" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-67-trumps-preemption</link>
      <itunes:duration>698</itunes:duration>
    </item>
    <item>
      <title>AISN #66: Evaluating Frontier Models, New Gemini and Claude, Preemption is Back</title>
<description>&lt;p&gt; Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.&lt;/p&gt;&lt;p&gt; In this edition, we discuss the new AI Dashboard, recent frontier models from Google and Anthropic, and a revived push to preempt state AI regulations.&lt;/p&gt;&lt;p&gt; Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.&lt;/p&gt;&lt;p&gt;&lt;strong&gt; CAIS Releases the AI Dashboard for Frontier Performance&lt;/strong&gt;&lt;/p&gt;&lt;p&gt; CAIS launched its AI Dashboard, which evaluates frontier AI systems on capability and safety benchmarks. The dashboard also tracks the industry's overall progression toward broader milestones such as AGI, automation of remote labor, and full self-driving.&lt;/p&gt;&lt;p&gt; How the dashboard works. The AI Dashboard features three leaderboards—one for text, one for vision, and one for risks—where frontier models are ranked according to their average score across a battery of benchmarks. Because CAIS evaluates models directly across a wide range of tasks, the dashboard provides apples-to-apples comparisons of how different frontier models perform on the same set of evaluations and safety-relevant behaviors.&lt;/p&gt;&lt;p&gt; Ranking frontier models for [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:33) CAIS Releases the AI Dashboard for Frontier Performance&lt;/p&gt;&lt;p&gt;(04:05) Politicians Revive Push for Moratorium on State AI Laws&lt;/p&gt;&lt;p&gt;(06:39) Gemini 3 Pro and Claude Opus 4.5 Arrive&lt;/p&gt;&lt;p&gt;(09:17) In Other News&lt;/p&gt;&lt;p&gt;(09:20) Government&lt;/p&gt;&lt;p&gt;(10:15) Industry&lt;/p&gt;&lt;p&gt;(11:03) Civil Society&lt;/p&gt;&lt;p&gt;(12:00) Discussion about this post&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          December 2nd, 2025 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-66-aisn-66-evaluating?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-66-aisn-66-evaluating&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;
       &lt;p&gt;---&lt;/p&gt;&lt;div style="max-width: 100%;"&gt;&lt;p&gt;&lt;strong&gt;Images from the article:&lt;/strong&gt;&lt;/p&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!f-UV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F14f08358-439a-4e39-a811-5d4f78ab870b_1786x958.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/$s_!f-UV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F14f08358-439a-4e39-a811-5d4f78ab870b_1786x958.png" alt="Graph showing AI model performance over time titled &amp;quot;Average Score&amp;quot; with &amp;quot;Risk Index Lower is Better&amp;quot; subtitle." style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;hr style="margin-top: 24px; margin-bottom: 24px;" /&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!Y8M1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3db661b-4306-44ae-9a6c-3bb6a35a1929_1600x505.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/$s_!Y8M1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3db661b-4306-44ae-9a6c-3bb6a35a1929_1600x505.png" alt="Table showing AI model performance scores across reasoning, coding, and gaming benchmarks." style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;hr style="margin-top: 24px; margin-bottom: 24px;" /&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!Yav1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F087dee12-73ff-4f2d-8b0f-d5df932ccdb1_1922x870.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/$s_!Yav1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F087dee12-73ff-4f2d-8b0f-d5df932ccdb1_1922x870.png" alt="Bar chart titled &amp;quot;Risk Index&amp;quot; showing risk scores for 10 AI models." style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;p&gt;&lt;em&gt;Apple Podcasts and Spotify do not show images in the episode description. Try &lt;a href="https://pocketcasts.com/" target="_blank" rel="noreferrer"&gt;Pocket Casts&lt;/a&gt;, or another podcast app.&lt;/em&gt;&lt;/p&gt;&lt;/div&gt;</description>
      <pubDate>Tue, 02 Dec 2025 01:40:54 GMT</pubDate>
      <guid isPermaLink="false">8c0162c9-ed67-4f54-8f1c-08dfbe34cc75</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/8c0162c9-ed67-4f54-8f1c-08dfbe34cc75.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Nick%2520Stockton%252C%2520Dan%2520Hendrycks%252C%2520Center%2520for%2520AI%2520Safety&amp;title=AISN%20%2366%3A%20AISN%20%2366%3A%20Evaluating%20Frontier%20Models%2C%20New%20Gemini%20and%20Claude%2C%20Preemption%20is%20Back&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-66-aisn-66-evaluating&amp;created_at=2025-12-02T01%3A40%3A42.966651%2B00%3A00&amp;duration=747" length="0" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-66-aisn-66-evaluating</link>
      <itunes:duration>747</itunes:duration>
    </item>
    <item>
      <title>AISN #65: Measuring Automation and Superintelligence Moratorium Letter</title>
<description>&lt;p&gt; Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.&lt;/p&gt;&lt;p&gt; In this edition: A new benchmark measures AI automation; 50,000 people, including top AI scientists, sign an open letter calling for a superintelligence moratorium.&lt;/p&gt;&lt;p&gt; Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.&lt;/p&gt;&lt;p&gt;&lt;strong&gt; CAIS and Scale AI release Remote Labor Index&lt;/strong&gt;&lt;/p&gt;&lt;p&gt; The Center for AI Safety (CAIS) and Scale AI have released the Remote Labor Index (RLI), which tests whether AIs can automate a wide array of real computer work projects. RLI is intended to inform policymakers, AI researchers, and businesses about the effects of automation as AI continues to advance.&lt;/p&gt;&lt;p&gt; RLI is the first benchmark of its kind. Previous AI benchmarks measure AIs on their intelligence and their abilities on isolated and specialized tasks, such as basic web browsing or coding. While these benchmarks measure useful capabilities, they don’t measure how AIs can affect the economy. RLI is the first benchmark to collect computer-based work projects from the real economy, containing work from many different professions, such as architecture, product design, video game development, and design.&lt;/p&gt;&lt;p&gt;Examples of RLI Projects&lt;/p&gt;&lt;p&gt; Current [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:29) CAIS and Scale AI release Remote Labor Index&lt;/p&gt;&lt;p&gt;(02:04) Bipartisan Coalition for Superintelligence Moratorium&lt;/p&gt;&lt;p&gt;(04:18) In Other News&lt;/p&gt;&lt;p&gt;(05:56) Discussion about this post&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          October 29th, 2025 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-65-measuring?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-65-measuring&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;
       &lt;p&gt;---&lt;/p&gt;&lt;div style="max-width: 100%;"&gt;&lt;p&gt;&lt;strong&gt;Images from the article:&lt;/strong&gt;&lt;/p&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!JvUw!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe24bafcb-ca39-4266-a23e-40b80ed54605_4898x5109.jpeg" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/$s_!JvUw!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe24bafcb-ca39-4266-a23e-40b80ed54605_4898x5109.jpeg" alt="Examples of RLI Projects" style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;hr style="margin-top: 24px; margin-bottom: 24px;" /&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!5KNO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb18e8802-7260-41c0-913f-ee2c4c19c245_1600x945.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/$s_!5KNO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb18e8802-7260-41c0-913f-ee2c4c19c245_1600x945.png" alt="Current AI agents complete at most 2.5% of projects in RLI, but are improving steadily." style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;hr style="margin-top: 24px; margin-bottom: 24px;" /&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!AjsK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9cbeff48-e3b1-4883-9030-968235dd3ee7_846x227.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/$s_!AjsK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9cbeff48-e3b1-4883-9030-968235dd3ee7_846x227.png" alt="Survey statistics showing U.S. adults' views on AI development and regulation." style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;p&gt;&lt;em&gt;Apple Podcasts and Spotify do not show images in the episode description. Try &lt;a href="https://pocketcasts.com/" target="_blank" rel="noreferrer"&gt;Pocket Casts&lt;/a&gt;, or another podcast app.&lt;/em&gt;&lt;/p&gt;&lt;/div&gt;</description>
      <pubDate>Wed, 29 Oct 2025 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">77358fc8-1ed3-497c-9ffe-0676a87b687c</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/77358fc8-1ed3-497c-9ffe-0676a87b687c.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Center%2520for%2520AI%2520Safety%252C%2520Alice%2520Blair%252C%2520Dan%2520Hendrycks&amp;title=AISN%20%2365%3A%20Measuring%20Automation%20and%20Superintelligence%20Moratorium%20Letter&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-65-measuring&amp;created_at=2025-10-29T16%3A30%3A51.371474%2B00%3A00&amp;duration=389" length="0" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-65-measuring</link>
      <itunes:duration>389</itunes:duration>
    </item>
    <item>
      <title>AISN #64: New AGI Definition and Senate Bill Would Establish Liability for AI Harms</title>
<description>&lt;p&gt; In this edition: A new bill in the Senate would hold AI companies liable for harms their products create; China tightens its export controls on rare earth metals; a definition of AGI.&lt;/p&gt;&lt;p&gt; As a reminder, we’re hiring a writer for the newsletter.&lt;/p&gt;&lt;p&gt; Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.&lt;/p&gt;&lt;p&gt;&lt;strong&gt; Senate Bill Would Establish Liability for AI Harms&lt;/strong&gt;&lt;/p&gt;&lt;p&gt; Sens. Dick Durbin (D-IL) and Josh Hawley (R-MO) introduced the AI LEAD Act, which would establish a federal cause of action allowing people harmed by AI systems to sue AI companies.&lt;/p&gt;&lt;p&gt; Corporations are usually liable for harms their products create. When a company sells a product in the United States that harms someone, that person can generally sue that company for damages under the doctrine of product liability. Those suits force companies to internalize the harms their products create—and incentivize them to make their products safer.&lt;/p&gt;&lt;p&gt; [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:35) Senate Bill Would Establish Liability for AI Harms&lt;/p&gt;&lt;p&gt;(02:48) China Tightens Export Controls on Rare Earth Metals&lt;/p&gt;&lt;p&gt;(05:28) A Definition of AGI&lt;/p&gt;&lt;p&gt;(08:31) In Other News&lt;/p&gt;&lt;p&gt;(10:19) Discussion about this post&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          October 16th, 2025 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-63-new-agi-definition?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-63-new-agi-definition&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;
       &lt;p&gt;---&lt;/p&gt;&lt;div style="max-width: 100%;"&gt;&lt;p&gt;&lt;strong&gt;Images from the article:&lt;/strong&gt;&lt;/p&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!IY3v!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F579c41b1-9f1d-4f29-ab53-3c451e5e6e58_980x653.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/$s_!IY3v!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F579c41b1-9f1d-4f29-ab53-3c451e5e6e58_980x653.png" alt="A Chinese rare earth mine." style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;hr style="margin-top: 24px; margin-bottom: 24px;" /&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!PDPm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3d55bd85-caa6-4252-8cc7-6470a89c5f19_1600x1158.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/$s_!PDPm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3d55bd85-caa6-4252-8cc7-6470a89c5f19_1600x1158.png" alt="Spider chart comparing GPT-4 (2023) and GPT-5 (2025) capabilities across multiple dimensions.

The chart shows performance metrics in areas like Knowledge, Reading &amp; Writing, Math, Reasoning, Working Memory, Memory Storage, Memory Retrieval, Visual, Auditory, and Speed. The red line (GPT-5) generally extends further out than the blue line (GPT-4), suggesting projected improvements across most capabilities." style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;p&gt;&lt;em&gt;Apple Podcasts and Spotify do not show images in the episode description. Try &lt;a href="https://pocketcasts.com/" target="_blank" rel="noreferrer"&gt;Pocket Casts&lt;/a&gt;, or another podcast app.&lt;/em&gt;&lt;/p&gt;&lt;/div&gt;</description>
      <pubDate>Thu, 16 Oct 2025 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">266781ec-a176-4e81-a26b-e8c372c89ce7</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/266781ec-a176-4e81-a26b-e8c372c89ce7.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Center%2520for%2520AI%2520Safety%252C%2520Corin%2520Katzke%252C%2520Dan%2520Hendrycks&amp;title=AISN%20%2363%3A%20New%20AGI%20Definition%20and%20Senate%20Bill%20Would%20Establish%20Liability%20for%20AI%20Harms&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-63-new-agi-definition&amp;created_at=2025-10-16T16%3A00%3A28.616411%2B00%3A00&amp;duration=652" length="0" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-63-new-agi-definition</link>
      <itunes:duration>652</itunes:duration>
    </item>
    <item>
      <title>AISN #63: California’s SB-53 Passes the Legislature</title>
<description>&lt;p&gt; In this edition: California's legislature sent SB-53—the ‘Transparency in Frontier Artificial Intelligence Act’—to Governor Newsom's desk. If signed into law, the bill would make California the first US state to regulate catastrophic risk from AI.&lt;/p&gt;&lt;p&gt; Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.&lt;/p&gt;&lt;p&gt; A note from Corin: I’m leaving the AI Safety Newsletter soon to start law school—but if you’d like to hear more from me, I’m planning to continue writing about AI in a new personal newsletter, Conditionals. On a related note, we’re also hiring a writer for the newsletter.&lt;/p&gt;&lt;p&gt;&lt;strong&gt; California's SB-53 Passes the Legislature&lt;/strong&gt;&lt;/p&gt;&lt;p&gt; SB-53 is the Legislature's weaker sequel to SB-1047. After Governor Gavin Newsom vetoed SB-1047 last year, he convened the Joint California Policy Working Group on AI Frontier Models. The group's June report recommended transparency, incident reporting, and whistleblower protections as near-term priorities for governing AI systems. SB-53 (the [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:49) California's SB-53 Passes the Legislature&lt;/p&gt;&lt;p&gt;(06:33) In Other News&lt;/p&gt;&lt;p&gt;(08:37) Discussion about this post&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          September 24th, 2025 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-63-californias?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-63-californias&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;
       &lt;p&gt;---&lt;/p&gt;&lt;div style="max-width: 100%;"&gt;&lt;p&gt;&lt;strong&gt;Images from the article:&lt;/strong&gt;&lt;/p&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!JC0w!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F872749f2-34d8-4050-b5d2-9929a16c9a0c_1600x609.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/$s_!JC0w!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F872749f2-34d8-4050-b5d2-9929a16c9a0c_1600x609.png" alt="The introduction to SB-53’s text." style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;p&gt;&lt;em&gt;Apple Podcasts and Spotify do not show images in the episode description. Try &lt;a href="https://pocketcasts.com/" target="_blank" rel="noreferrer"&gt;Pocket Casts&lt;/a&gt;, or another podcast app.&lt;/em&gt;&lt;/p&gt;&lt;/div&gt;</description>
      <pubDate>Wed, 24 Sep 2025 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">59a98470-043c-4899-a851-b973149af670</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/59a98470-043c-4899-a851-b973149af670.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Corin%2520Katzke%252C%2520Dan%2520Hendrycks&amp;title=AISN%20%2363%3A%20California%E2%80%99s%20SB-53%20Passes%20the%20Legislature&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-63-californias&amp;created_at=2025-09-24T16%3A30%3A18.392338%2B00%3A00&amp;duration=551" length="0" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-63-californias</link>
      <itunes:duration>551</itunes:duration>
    </item>
    <item>
      <title>AISN #62: Big Tech Launches $100 Million pro-AI Super PAC</title>
<description>&lt;p&gt; Also: Meta's Chatbot Policies Prompt Backlash Amid AI Reorganization; China Reverses Course on Nvidia H20 Purchases.&lt;/p&gt; &lt;p&gt; In this edition: Big Tech launches a $100 million pro-AI super PAC; Meta's chatbot policies prompt congressional scrutiny amid the company's AI reorganization; China reverses course on buying Nvidia H20 chips after comments by Secretary of Commerce Howard Lutnick.&lt;/p&gt;&lt;p&gt; Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.&lt;/p&gt;&lt;p&gt;&lt;strong&gt; Big Tech Launches $100 Million pro-AI Super PAC&lt;/strong&gt;&lt;/p&gt;&lt;p&gt; Silicon Valley executives and investors are pouring more than $100 million into a new political network to push back against AI regulations, signaling that the industry intends to be a major player in next year's U.S. midterms.&lt;/p&gt;&lt;p&gt; The super PAC is backed by a16z and Greg Brockman and imitates the crypto-focused super PAC Fairshake. The network, called Leading the Future, aims to influence AI [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:46) Big Tech Launches $100 Million pro-AI Super PAC&lt;/p&gt;&lt;p&gt;(02:27) Meta's Chatbot Policies Prompt Backlash Amid AI Reorganization&lt;/p&gt;&lt;p&gt;(04:45) China Reverses Course on Nvidia H20 Purchases&lt;/p&gt;&lt;p&gt;(07:21) In Other News&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          August 27th, 2025 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-62-big-tech?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-62-big-tech&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;
       &lt;p&gt;---&lt;/p&gt;&lt;div style="max-width: 100%;"&gt;&lt;p&gt;&lt;strong&gt;Images from the article:&lt;/strong&gt;&lt;/p&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!NQ_Y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31a08d1d-bc5e-43d0-9664-5d3797244a26_1500x500.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/$s_!NQ_Y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31a08d1d-bc5e-43d0-9664-5d3797244a26_1500x500.png" alt="Leading The Future’s branding." style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;hr style="margin-top: 24px; margin-bottom: 24px;" /&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!gjRH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3587369f-4268-4546-b4ed-9743fccad5d8_1600x505.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/$s_!gjRH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3587369f-4268-4546-b4ed-9743fccad5d8_1600x505.png" alt="An excerpt from Meta’s policies." style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;p&gt;&lt;em&gt;Apple Podcasts and Spotify do not show images in the episode description. Try &lt;a href="https://pocketcasts.com/" target="_blank" rel="noreferrer"&gt;Pocket Casts&lt;/a&gt;, or another podcast app.&lt;/em&gt;&lt;/p&gt;&lt;/div&gt;</description>
      <pubDate>Wed, 27 Aug 2025 16:29:42 GMT</pubDate>
      <guid isPermaLink="false">01f71433-e88f-4fc1-82b3-37b94bb998fd</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/01f71433-e88f-4fc1-82b3-37b94bb998fd.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Corin%2520Katzke%252C%2520Dan%2520Hendrycks&amp;title=AISN%20%2362%3A%20Big%20Tech%20Launches%20%24100%20Million%20pro-AI%20Super%20PAC&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-62-big-tech&amp;created_at=2025-08-27T16%3A29%3A34.772288%2B00%3A00&amp;duration=616" length="0" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-62-big-tech</link>
      <itunes:duration>616</itunes:duration>
    </item>
    <item>
      <title>AISN #61: OpenAI Releases GPT-5</title>
      <description>&lt;p&gt; Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.&lt;/p&gt;&lt;p&gt; In this edition: OpenAI releases GPT-5.&lt;/p&gt;&lt;p&gt; Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.&lt;/p&gt;&lt;p&gt;&lt;strong&gt; OpenAI Releases GPT-5&lt;/strong&gt;&lt;/p&gt;&lt;p&gt; Ever since GPT-4's release in March 2023 marked a step-change improvement over GPT-3, people have used ‘GPT-5’ as a stand-in to speculate about the next generation of AI capabilities. On Thursday, OpenAI released GPT-5. While state-of-the-art in most respects, GPT-5 is not a step-change improvement over competing systems, or even recent OpenAI models—but we shouldn’t have expected it to be.&lt;/p&gt;&lt;p&gt; GPT-5 is state of the art in most respects. GPT-5 isn’t a single model like GPTs 1 through 4. It is a system of two models: a base model that answers questions quickly and is better at tasks like creative writing (an improved [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:19) OpenAI Releases GPT-5&lt;/p&gt;&lt;p&gt;(06:20) In Other News&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          August 12th, 2025 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-61-openai-releases?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-61-openai-releases&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;
       &lt;p&gt;---&lt;/p&gt;&lt;div style="max-width: 100%;"&gt;&lt;p&gt;&lt;strong&gt;Images from the article:&lt;/strong&gt;&lt;/p&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!dA-q!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b4694cd-18b8-48e2-9b33-344f9f6604cd_1600x898.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/$s_!dA-q!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b4694cd-18b8-48e2-9b33-344f9f6604cd_1600x898.png" alt="Graph titled &amp;quot;Game Progress with Clues&amp;quot; comparing performance of different AI models." style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;hr style="margin-top: 24px; margin-bottom: 24px;" /&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!ZEcb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff6db7a75-0090-42ca-8439-c67d5cde44c0_632x876.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/$s_!ZEcb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff6db7a75-0090-42ca-8439-c67d5cde44c0_632x876.png" alt="Bar graph comparing software engineering accuracy between GPT-5, OpenAI o3, and GPT-4o, showing &amp;quot;with/without thinking&amp;quot; performance." style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;hr style="margin-top: 24px; margin-bottom: 24px;" /&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!VOUF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa89976d9-abc7-44d4-9d7b-592dada46bc7_744x892.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/$s_!VOUF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa89976d9-abc7-44d4-9d7b-592dada46bc7_744x892.png" alt="Bar graph &amp;quot;HealthBench Hard Hallucinations&amp;quot; comparing AI models' hallucination rates." style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;p&gt;&lt;em&gt;Apple Podcasts and Spotify do not show images in the episode description. Try &lt;a href="https://pocketcasts.com/" target="_blank" rel="noreferrer"&gt;Pocket Casts&lt;/a&gt;, or another podcast app.&lt;/em&gt;&lt;/p&gt;&lt;/div&gt;</description>
      <pubDate>Tue, 12 Aug 2025 17:12:21 GMT</pubDate>
      <guid isPermaLink="false">92bfb126-d740-46f9-93a9-fe2f3b74e576</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/92bfb126-d740-46f9-93a9-fe2f3b74e576.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Corin%2520Katzke%252C%2520Dan%2520Hendrycks&amp;title=AISN%20%2361%3A%20OpenAI%20Releases%20GPT-5&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-61-openai-releases&amp;created_at=2025-08-12T17%3A12%3A11.051759%2B00%3A00&amp;duration=553" length="0" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-61-openai-releases</link>
      <itunes:duration>553</itunes:duration>
    </item>
    <item>
      <title>AISN #60: The AI Action Plan</title>
<description>&lt;p&gt; Also: ChatGPT Agent and IMO Gold.&lt;/p&gt; &lt;p&gt; In this edition: The Trump Administration publishes its AI Action Plan; OpenAI releases ChatGPT Agent and announces that an experimental model achieved gold medal-level performance on the 2025 International Mathematical Olympiad.&lt;/p&gt;&lt;p&gt; Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.&lt;/p&gt;&lt;p&gt;&lt;strong&gt; The AI Action Plan&lt;/strong&gt;&lt;/p&gt;&lt;p&gt; On July 23rd, the White House released its AI Action Plan. The document is the outcome of a January executive order that required the President's Science Advisor, ‘AI and Crypto Czar’, and National Security Advisor (currently Michael Kratsios, David Sacks, and Marco Rubio) to submit a plan to “sustain and enhance America's global AI dominance in order to promote human flourishing, economic competitiveness, and national security.” President Trump also delivered an hour-long speech on the plan, and signed three executive orders beginning to implement some of its policies.&lt;/p&gt;&lt;p&gt;Trump displaying an executive order at the [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:34) The AI Action Plan&lt;/p&gt;&lt;p&gt;(07:36) ChatGPT Agent and IMO Gold&lt;/p&gt;&lt;p&gt;(12:48) In Other News&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          July 31st, 2025 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-60-the-ai-action?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-60-the-ai-action&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;
       &lt;p&gt;---&lt;/p&gt;&lt;div style="max-width: 100%;"&gt;&lt;p&gt;&lt;strong&gt;Images from the article:&lt;/strong&gt;&lt;/p&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!yeVV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf95488b-7af9-4342-aec3-fddfd3b5ee7c_1400x933.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/$s_!yeVV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf95488b-7af9-4342-aec3-fddfd3b5ee7c_1400x933.png" alt="Trump displaying an executive order at the “Winning the AI Race” summit." style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;hr style="margin-top: 24px; margin-bottom: 24px;" /&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!YR3_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32c045cf-daf7-4254-8cdc-4dd861f2c397_884x802.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/$s_!YR3_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32c045cf-daf7-4254-8cdc-4dd861f2c397_884x802.png" alt="Bar graph titled &amp;quot;Humanity's Last Exam&amp;quot; showing accuracy percentages across different AI tools.

The graph compares the performance of various AI configurations, with accuracy scores ranging from 20.3% to 41.6%. The highest performing setup is ChatGPT with browser and computer terminal access, while the baseline OpenAI model without tools shows the lowest accuracy." style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;hr style="margin-top: 24px; margin-bottom: 24px;" /&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!_NBd!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39879533-bbcb-4b77-a1b9-67d248591bf5_1446x852.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/$s_!_NBd!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39879533-bbcb-4b77-a1b9-67d248591bf5_1446x852.png" alt="Bar graph titled &amp;quot;Economically important tasks&amp;quot; comparing model performance across time periods.

The graph shows win/tie rates for three different models (o4-mini, o3, and ChatGPT agent) against human performance, categorized by estimated task completion times ranging from 1-3 hours to 10+ hours." style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;p&gt;&lt;em&gt;Apple Podcasts and Spotify do not show images in the episode description. Try &lt;a href="https://pocketcasts.com/" target="_blank" rel="noreferrer"&gt;Pocket Casts&lt;/a&gt;, or another podcast app.&lt;/em&gt;&lt;/p&gt;&lt;/div&gt;</description>
      <pubDate>Thu, 31 Jul 2025 17:44:08 GMT</pubDate>
      <guid isPermaLink="false">a1f1dbd1-a6df-4160-8eab-df87cce912ec</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/a1f1dbd1-a6df-4160-8eab-df87cce912ec.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Corin%2520Katzke%252C%2520Dan%2520Hendrycks&amp;title=AISN%20%2360%3A%20The%20AI%20Action%20Plan&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-60-the-ai-action&amp;created_at=2025-07-31T17%3A43%3A56.49412%2B00%3A00&amp;duration=941" length="0" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-60-the-ai-action</link>
      <itunes:duration>941</itunes:duration>
    </item>
    <item>
      <title>AISN #59: EU Publishes General-Purpose AI Code of Practice</title>
      <description>&lt;p&gt; Plus: Meta Superintelligence Labs.&lt;/p&gt; &lt;p&gt; Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.&lt;/p&gt;&lt;p&gt; In this edition: The EU published a General-Purpose AI Code of Practice for AI providers, and Meta is spending billions revamping its superintelligence development efforts.&lt;/p&gt;&lt;p&gt; Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.&lt;/p&gt;&lt;p&gt;&lt;strong&gt; EU Publishes General-Purpose AI Code of Practice&lt;/strong&gt;&lt;/p&gt;&lt;p&gt; In June 2024, the EU adopted the AI Act, which remains the world's most significant law regulating AI systems. The Act bans some uses of AI like social scoring and predictive policing and limits other “high risk” uses such as generating credit scores or evaluating educational outcomes. It also regulates general-purpose AI (GPAI) systems, imposing transparency requirements, copyright protection policies, and safety and security standards for models that pose systemic risk (defined as those trained [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:31) EU Publishes General-Purpose AI Code of Practice&lt;/p&gt;&lt;p&gt;(04:50) Meta Superintelligence Labs&lt;/p&gt;&lt;p&gt;(06:17) In Other News&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          July 15th, 2025 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-59-eu-publishes?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-59-eu-publishes&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;
       &lt;p&gt;---&lt;/p&gt;&lt;div style="max-width: 100%";&gt;&lt;p&gt;&lt;strong&gt;Images from the article:&lt;/strong&gt;&lt;/p&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!glEy!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd30e7d8d-65ae-4c7c-aa81-f7e56c8b8c96_1360x966.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/$s_!glEy!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd30e7d8d-65ae-4c7c-aa81-f7e56c8b8c96_1360x966.png" alt="Flowchart showing systemic risk assessment and mitigation process with decision points." style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;p&gt;&lt;em&gt;Apple Podcasts and Spotify do not show images in the episode description. Try &lt;a href="https://pocketcasts.com/" target="_blank" rel="noreferrer"&gt;Pocket Casts&lt;/a&gt;, or another podcast app.&lt;/em&gt;&lt;/p&gt;&lt;/div&gt;</description>
      <pubDate>Tue, 15 Jul 2025 18:05:30 GMT</pubDate>
      <guid isPermaLink="false">33a65108-f2eb-48f0-be73-7de10ebd04f4</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/33a65108-f2eb-48f0-be73-7de10ebd04f4.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Corin%2520Katzke%252C%2520Dan%2520Hendrycks&amp;title=AISN%20%2359%3A%20EU%20Publishes%20General-Purpose%20AI%20Code%20of%20Practice&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-59-eu-publishes&amp;created_at=2025-07-15T18%3A05%3A20.826632%2B00%3A00&amp;duration=563" length="0" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-59-eu-publishes</link>
      <itunes:duration>563</itunes:duration>
    </item>
    <item>
      <title>AISN #58: Senate Removes State AI Regulation Moratorium</title>
      <description>&lt;p&gt; Plus: Judges Split on Whether Training AI on Copyrighted Material is Fair Use.&lt;/p&gt; &lt;p&gt; In this edition: The Senate removes a provision from Republicans' “Big Beautiful Bill” aimed at restricting states from regulating AI; two federal judges split on whether training AI on copyrighted books is fair use.&lt;/p&gt;&lt;p&gt; Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.&lt;/p&gt;&lt;p&gt;&lt;strong&gt; Senate Removes State AI Regulation Moratorium&lt;/strong&gt;&lt;/p&gt;&lt;p&gt; The Senate removed a provision from Republicans' “Big Beautiful Bill” aimed at restricting states from regulating AI. The moratorium would have prohibited states from receiving federal broadband expansion funds if they regulated AI—however, it faced procedural and political challenges in the Senate, and was ultimately removed in a vote of 99-1. Here's what happened.&lt;/p&gt;&lt;p&gt; A watered-down moratorium cleared the Byrd Rule. In an attempt to bypass the Byrd Rule, which prohibits policy provisions in budget bills, the Senate Commerce Committee revised the [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:35) Senate Removes State AI Regulation Moratorium&lt;/p&gt;&lt;p&gt;(03:04) Judges Split on Whether Training AI on Copyrighted Material is Fair Use&lt;/p&gt;&lt;p&gt;(07:19) In Other News&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          July 3rd, 2025 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-58-senate-removes?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-58-senate-removes&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;
       &lt;p&gt;---&lt;/p&gt;&lt;div style="max-width: 100%";&gt;&lt;p&gt;&lt;strong&gt;Images from the article:&lt;/strong&gt;&lt;/p&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!3W7Q!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0121db23-e6ab-48b8-9f8e-50a6e3705f24_1600x1067.jpeg" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/$s_!3W7Q!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0121db23-e6ab-48b8-9f8e-50a6e3705f24_1600x1067.jpeg" alt="Sen. Blackburn cosponsored the Kids Online Safety Act last year. (Source.)" style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;p&gt;&lt;em&gt;Apple Podcasts and Spotify do not show images in the episode description. Try &lt;a href="https://pocketcasts.com/" target="_blank" rel="noreferrer"&gt;Pocket Casts&lt;/a&gt;, or another podcast app.&lt;/em&gt;&lt;/p&gt;&lt;/div&gt;</description>
      <pubDate>Thu, 03 Jul 2025 16:25:08 GMT</pubDate>
      <guid isPermaLink="false">9d5c6242-1925-41f1-86c7-c16a2a5e9449</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/9d5c6242-1925-41f1-86c7-c16a2a5e9449.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Corin%2520Katzke%252C%2520Dan%2520Hendrycks&amp;title=AISN%20%2358%3A%20Senate%20Removes%20State%20AI%20Regulation%20Moratorium&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-58-senate-removes&amp;created_at=2025-07-03T16%3A25%3A01.16102%2B00%3A00&amp;duration=544" length="0" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-58-senate-removes</link>
      <itunes:duration>544</itunes:duration>
    </item>
    <item>
      <title>AISN #57: The RAISE Act</title>
      <description>&lt;p&gt; In this edition: The New York Legislature passes an act regulating frontier AI—but it may not be signed into law for some time.&lt;/p&gt;&lt;p&gt; Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.&lt;/p&gt;&lt;p&gt;&lt;strong&gt; The RAISE Act&lt;/strong&gt;&lt;/p&gt;&lt;p&gt; New York may soon become the first state to regulate frontier AI systems. On June 12, the state's legislature passed the Responsible AI Safety and Education (RAISE) Act. If New York Governor Kathy Hochul signs it into law, the RAISE Act will be the most significant state AI legislation in the U.S.&lt;/p&gt;&lt;p&gt; New York's RAISE Act imposes four guardrails on frontier labs: developers must publish a safety plan, hold back unreasonably risky models, disclose major incidents, and face penalties for non-compliance.&lt;/p&gt;&lt;ol&gt; &lt;li&gt; &lt;p&gt; Publish and maintain a safety plan. Before deployment, developers must post a redacted “safety and security protocol,” transmit the plan to both the attorney general and the [...]&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:21) The RAISE Act&lt;/p&gt;&lt;p&gt;(04:43) In Other News&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          June 17th, 2025 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-57-the-raise?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-57-the-raise&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;
       &lt;p&gt;---&lt;/p&gt;&lt;div style="max-width: 100%";&gt;&lt;p&gt;&lt;strong&gt;Images from the article:&lt;/strong&gt;&lt;/p&gt;&lt;a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faaa39fa0-a05c-4785-9130-ab331a0e0e34_1600x427.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faaa39fa0-a05c-4785-9130-ab331a0e0e34_1600x427.png" alt="A diagram depicting the bill’s current status. Source." style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;p&gt;&lt;em&gt;Apple Podcasts and Spotify do not show images in the episode description. Try &lt;a href="https://pocketcasts.com/" target="_blank" rel="noreferrer"&gt;Pocket Casts&lt;/a&gt;, or another podcast app.&lt;/em&gt;&lt;/p&gt;&lt;/div&gt;</description>
      <pubDate>Tue, 17 Jun 2025 16:32:00 GMT</pubDate>
      <guid isPermaLink="false">1f595b85-fb7c-4db1-abb9-bce78578e49a</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/1f595b85-fb7c-4db1-abb9-bce78578e49a.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Corin%2520Katzke%252C%2520Dan%2520Hendrycks&amp;title=AISN%20%2357%3A%20The%20RAISE%20Act&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-57-the-raise&amp;created_at=2025-06-17T16%3A31%3A55.460108%2B00%3A00&amp;duration=432" length="0" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-57-the-raise</link>
      <itunes:duration>432</itunes:duration>
    </item>
    <item>
      <title>AISN #56: Google Releases Veo 3</title>
      <description>&lt;p&gt; Plus, Opus 4 Demonstrates the Fragility of Voluntary Governance.&lt;/p&gt; &lt;p&gt; In this edition: Google released a frontier video generation model at its annual developer conference; Anthropic's Claude Opus 4 demonstrates the danger of relying on voluntary governance.&lt;/p&gt;&lt;p&gt; Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.&lt;/p&gt;&lt;p&gt;&lt;strong&gt; Google Releases Veo 3&lt;/strong&gt;&lt;/p&gt;&lt;p&gt; Last week, Google made several AI announcements at I/O 2025, its annual developer conference. An announcement of particular note is Veo 3, Google's newest video generation model.&lt;/p&gt;&lt;p&gt; Frontier video and audio generation. Veo 3 outperforms other models on human preference benchmarks, and generates both audio and video.&lt;/p&gt;&lt;picture&gt;&lt;/picture&gt;Google showcasing a video generated with Veo 3. (Source)&lt;p&gt; If you just look at benchmarks, Veo 3 is a substantial improvement over other systems. But relative benchmark improvement only tells part of the story—the absolute capabilities of systems ultimately determine their usefulness. Veo 3 looks like a marked qualitative [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:33) Google Releases Veo 3&lt;/p&gt;&lt;p&gt;(03:25) Opus 4 Demonstrates the Fragility of Voluntary Governance&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          May 28th, 2025 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-56-google-releases?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-56-google-releases&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;
       &lt;p&gt;---&lt;/p&gt;&lt;div style="max-width: 100%";&gt;&lt;p&gt;&lt;strong&gt;Images from the article:&lt;/strong&gt;&lt;/p&gt;&lt;a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fda24a5e2-92d6-490e-b74f-88fa68203799_1600x900.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fda24a5e2-92d6-490e-b74f-88fa68203799_1600x900.png" alt="Google showcasing a video generated with Veo 3. (Source)" style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;hr style="margin-top: 24px; margin-bottom: 24px;" /&gt;&lt;a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad471014-fe58-4180-a67a-9b48862263b9_1600x602.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad471014-fe58-4180-a67a-9b48862263b9_1600x602.png" alt="Two box plots showing 'Uplift Trial' results for bioweapons acquisition across different groups." style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;p&gt;&lt;em&gt;Apple Podcasts and Spotify do not show images in the episode description. Try &lt;a href="https://pocketcasts.com/" target="_blank" rel="noreferrer"&gt;Pocket Casts&lt;/a&gt;, or another podcast app.&lt;/em&gt;&lt;/p&gt;&lt;/div&gt;</description>
      <pubDate>Wed, 28 May 2025 15:04:07 GMT</pubDate>
      <guid isPermaLink="false">85a8e0ff-3b28-4c26-9a79-950ce0d7dc4a</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/85a8e0ff-3b28-4c26-9a79-950ce0d7dc4a.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Corin%2520Katzke%252C%2520Dan%2520Hendrycks&amp;title=AISN%20%2356%3A%20Google%20Releases%20Veo%203&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-56-google-releases&amp;created_at=2025-05-28T15%3A04%3A00.657815%2B00%3A00&amp;duration=517" length="0" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-56-google-releases</link>
      <itunes:duration>517</itunes:duration>
    </item>
    <item>
      <title>AISN #55: Trump Administration Rescinds AI Diffusion Rule, Allows Chip Sales to Gulf States</title>
      <description>&lt;p&gt; Plus, Bills on Whistleblower Protections, Chip Location Verification, and State Preemption.&lt;/p&gt; &lt;p&gt; In this edition: The Trump Administration rescinds the Biden-era AI diffusion rule and sells AI chips to the UAE and Saudi Arabia; Federal lawmakers propose legislation on AI whistleblowers, location verification for AI chips, and prohibiting states from regulating AI.&lt;/p&gt;&lt;p&gt; Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.&lt;/p&gt;&lt;p&gt; The Center for AI Safety is also excited to announce the Summer session of our AI Safety, Ethics, and Society course, running from June 23 to September 14. The course, based on our recently published textbook, is open to participants from all disciplines and countries, and is designed to accommodate full-time work or study.&lt;/p&gt;&lt;p&gt; Applications for the Summer 2025 course are now open. The final application deadline is May 30th. Visit the course website to learn more and apply.&lt;/p&gt;&lt;p&gt;&lt;strong&gt; Trump Administration Rescinds AI Diffusion [...]&lt;/strong&gt;&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(01:12) Trump Administration Rescinds AI Diffusion Rule, Allows Chip Sales to Gulf States&lt;/p&gt;&lt;p&gt;(04:14) Bills on Whistleblower Protections, Chip Location Verification, and State Preemption&lt;/p&gt;&lt;p&gt;(06:56) In Other News&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          May 20th, 2025 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-55-trump-administration?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-55-trump-administration&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;
       &lt;p&gt;---&lt;/p&gt;&lt;div style="max-width: 100%";&gt;&lt;p&gt;&lt;strong&gt;Images from the article:&lt;/strong&gt;&lt;/p&gt;&lt;a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F45cc31a2-d027-43bd-9f4f-2b26b23e051b_1600x1066.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F45cc31a2-d027-43bd-9f4f-2b26b23e051b_1600x1066.png" alt="President Trump with the Emirati president, Sheikh Mohammed bin Zayed, at the AI campus’ unveiling. (Source.)" style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;p&gt;&lt;em&gt;Apple Podcasts and Spotify do not show images in the episode description. Try &lt;a href="https://pocketcasts.com/" target="_blank" rel="noreferrer"&gt;Pocket Casts&lt;/a&gt;, or another podcast app.&lt;/em&gt;&lt;/p&gt;&lt;/div&gt;</description>
      <pubDate>Tue, 20 May 2025 14:56:05 GMT</pubDate>
      <guid isPermaLink="false">8575f184-7000-4641-9a47-bd8698f4b9eb</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/8575f184-7000-4641-9a47-bd8698f4b9eb.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Corin%2520Katzke%252C%2520Dan%2520Hendrycks&amp;title=AISN%20%2355%3A%20Trump%20Administration%20Rescinds%20AI%20Diffusion%20Rule%2C%20Allows%20Chip%20Sales%20to%20Gulf%20States&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-55-trump-administration&amp;created_at=2025-05-20T14%3A56%3A00.355025%2B00%3A00&amp;duration=558" length="0" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-55-trump-administration</link>
      <itunes:duration>558</itunes:duration>
    </item>
    <item>
      <title>AISN #54: OpenAI Updates Restructure Plan</title>
      <description>&lt;p&gt; Plus, AI Safety Collaboration in Singapore.&lt;/p&gt; &lt;p&gt; In this edition: OpenAI claims an updated restructure plan would preserve nonprofit control; A global coalition meets in Singapore to propose a research agenda for AI safety.&lt;/p&gt;&lt;p&gt; Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.&lt;/p&gt;&lt;p&gt;&lt;strong&gt; OpenAI Updates Restructure Plan&lt;/strong&gt;&lt;/p&gt;&lt;p&gt; On May 5th, OpenAI announced a new restructure plan. The announcement walks back a December 2024 proposal that would have had OpenAI's nonprofit—which oversees the company's for-profit operations—sell its controlling shares to the for-profit side of the company. That plan drew sharp criticism from former employees and civil‑society groups and prompted a lawsuit from co‑founder Elon Musk, who argued OpenAI was abandoning its charitable mission.&lt;/p&gt;&lt;p&gt; OpenAI claims the new plan preserves nonprofit control, but is light on specifics. Like the original plan, OpenAI's new plan would have OpenAI Global LLC become a public‑benefit corporation (PBC). However, instead of the nonprofit selling its [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:31) OpenAI Updates Restructure Plan&lt;/p&gt;&lt;p&gt;(03:19) AI Safety Collaboration in Singapore&lt;/p&gt;&lt;p&gt;(05:42) In Other News&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          May 13th, 2025 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-54-openai-updates?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-54-openai-updates&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;
       &lt;p&gt;---&lt;/p&gt;&lt;div style="max-width: 100%";&gt;&lt;p&gt;&lt;strong&gt;Images from the article:&lt;/strong&gt;&lt;/p&gt;&lt;a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F41e07002-c5fd-4c60-a259-24780e32f211_1600x1064.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F41e07002-c5fd-4c60-a259-24780e32f211_1600x1064.png" alt="Singapore’s Minister for Digital Development and Information speaks at the conference. Source." style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;p&gt;&lt;em&gt;Apple Podcasts and Spotify do not show images in the episode description. Try &lt;a href="https://pocketcasts.com/" target="_blank" rel="noreferrer"&gt;Pocket Casts&lt;/a&gt;, or another podcast app.&lt;/em&gt;&lt;/p&gt;&lt;/div&gt;</description>
      <pubDate>Tue, 13 May 2025 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">6da2e53c-8912-4de0-ba8b-1f2ee8e81a28</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/6da2e53c-8912-4de0-ba8b-1f2ee8e81a28.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Corin%2520Katzke%252C%2520Dan%2520Hendrycks&amp;title=AISN%20%2354%3A%20OpenAI%20Updates%20Restructure%20Plan&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-54-openai-updates&amp;created_at=2025-05-17T15%3A30%3A37.607108%2B00%3A00&amp;duration=520" length="0" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-54-openai-updates</link>
      <itunes:duration>520</itunes:duration>
    </item>
    <item>
      <title>AISN #53: An Open Letter Attempts to Block OpenAI Restructuring</title>
      <description>&lt;p&gt; Plus, SafeBench Winners.&lt;/p&gt; &lt;p&gt; Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.&lt;/p&gt;&lt;p&gt; In this edition: Experts and ex-employees urge the Attorneys General of California and Delaware to block OpenAI's for-profit restructure; CAIS announces the winners of its safety benchmarking competition.&lt;/p&gt;&lt;p&gt; Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.&lt;/p&gt;&lt;p&gt;&lt;strong&gt; An Open Letter Attempts to Block OpenAI Restructuring&lt;/strong&gt;&lt;/p&gt;&lt;p&gt; A group of former OpenAI employees and independent experts published an open letter urging the Attorneys General (AGs) of California (where OpenAI operates) and Delaware (where OpenAI is incorporated) to block OpenAI's planned restructuring into a for-profit entity. The letter argues the move would fundamentally undermine the organization's charitable mission by jeopardizing the governance safeguards designed to protect control over AGI from profit motives.&lt;/p&gt;&lt;p&gt; OpenAI was founded with the charitable purpose to ensure that artificial general intelligence benefits all of humanity. OpenAI's original nonprofit structure, and later its capped-profit model, were designed to control profit motives in the development of AGI, which OpenAI defines as "highly autonomous systems that outperform humans at most economically valuable work." The structure was designed to prevent [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:32) An Open Letter Attempts to Block OpenAI Restructuring&lt;/p&gt;&lt;p&gt;(04:23) SafeBench Winners&lt;/p&gt;&lt;p&gt;(08:58) Other News&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          April 29th, 2025 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/an-open-letter-attempts-to-block?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/an-open-letter-attempts-to-block&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;
       &lt;p&gt;---&lt;/p&gt;&lt;div style="max-width: 100%";&gt;&lt;p&gt;&lt;strong&gt;Images from the article:&lt;/strong&gt;&lt;/p&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!-8ts!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9c22c79-f9b2-4fb5-af77-5626e122434f_1600x1394.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/$s_!-8ts!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9c22c79-f9b2-4fb5-af77-5626e122434f_1600x1394.png" alt="Table comparing governance safeguards between today and proposed restructuring across six categories." style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;p&gt;&lt;em&gt;Apple Podcasts and Spotify do not show images in the episode description. Try &lt;a href="https://pocketcasts.com/" target="_blank" rel="noreferrer"&gt;Pocket Casts&lt;/a&gt;, or another podcast app.&lt;/em&gt;&lt;/p&gt;&lt;/div&gt;</description>
      <pubDate>Tue, 29 Apr 2025 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">0e333985-76e5-4254-a085-855f897504ee</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/0e333985-76e5-4254-a085-855f897504ee.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Corin%2520Katzke%252C%2520Dan%2520Hendrycks&amp;title=AISN%20%2353%3A%20An%20Open%20Letter%20Attempts%20to%20Block%20OpenAI%20Restructuring&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fan-open-letter-attempts-to-block&amp;created_at=2025-11-28T03%3A00%3A09.031579%2B00%3A00&amp;duration=639" length="0" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/an-open-letter-attempts-to-block</link>
      <itunes:duration>639</itunes:duration>
    </item>
    <item>
      <title>AISN #52: An Expert Virology Benchmark</title>
      <description>&lt;p&gt; Plus, AI-Enabled Coups.&lt;/p&gt; &lt;p&gt; In this edition: AI now outperforms human experts in specialized virology knowledge in a new benchmark; A new report explores the risk of AI-enabled coups.&lt;/p&gt;&lt;p&gt; Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.&lt;/p&gt;&lt;p&gt;&lt;strong&gt; An Expert Virology Benchmark&lt;/strong&gt;&lt;/p&gt;&lt;p&gt; A team of researchers (primarily from SecureBio and CAIS) has developed the Virology Capabilities Test (VCT), a benchmark that measures an AI system's ability to troubleshoot complex virology laboratory protocols. Results on this benchmark suggest that AI has surpassed human experts in practical virology knowledge.&lt;/p&gt;&lt;p&gt; VCT measures practical virology knowledge, which has high dual-use potential. While AI virologists could accelerate beneficial research in virology and infectious disease prevention, bad actors could misuse the same capabilities to develop dangerous pathogens. Like the WMDP benchmark, the VCT is designed to evaluate practical dual-use scientific knowledge—in this case, virology.&lt;/p&gt;&lt;picture&gt;&lt;/picture&gt;&lt;p&gt; The benchmark consists of 322 multimodal questions [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:29) An Expert Virology Benchmark&lt;/p&gt;&lt;p&gt;(04:04) AI-Enabled Coups&lt;/p&gt;&lt;p&gt;(07:58) Other news&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          April 22nd, 2025 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-52-an-expert?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-52-an-expert&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;
       &lt;p&gt;---&lt;/p&gt;&lt;div style="max-width: 100%";&gt;&lt;p&gt;&lt;strong&gt;Images from the article:&lt;/strong&gt;&lt;/p&gt;&lt;a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65c50d63-f694-4e50-b713-d11384af9822_1482x704.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65c50d63-f694-4e50-b713-d11384af9822_1482x704.png" alt="Flowchart showing risk factors and mitigations for AI-enabled coups." style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;hr style="margin-top: 24px; margin-bottom: 24px;" /&gt;&lt;a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8a6551e-901f-4a72-9af1-2db6e168ce3b_1508x1156.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8a6551e-901f-4a72-9af1-2db6e168ce3b_1508x1156.png" alt="Flow diagram showing three key risk factors for AI-enabled coups." style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;hr style="margin-top: 24px; margin-bottom: 24px;" /&gt;&lt;a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F70e50853-3b57-4275-92b3-08c437938175_1600x1223.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F70e50853-3b57-4275-92b3-08c437938175_1600x1223.png" alt="Flow chart showing scenarios for 'Example scenarios for AI-enabled military coups,' with pre-coup and seizing power stages." style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;hr style="margin-top: 24px; margin-bottom: 24px;" /&gt;&lt;a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ed14d08-eef1-46ce-9e37-f697fdf5932e_1600x963.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ed14d08-eef1-46ce-9e37-f697fdf5932e_1600x963.png" alt="Graph showing 'AI Progress on VCT' comparing language models' performance over time.

The graph plots various AI models' capabilities against a median expert virologist benchmark (shown at 50th percentile), with dates from July 2023 to April 2025 on the x-axis. Models like GPT-4, Sonnet series, and Gemini show increasing performance scores." style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;hr style="margin-top: 24px; margin-bottom: 24px;" /&gt;&lt;a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F570a4466-7195-40b5-bdae-0bf3853676fc_1600x1441.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F570a4466-7195-40b5-bdae-0bf3853676fc_1600x1441.png" alt="'The VCT benchmark' diagram showing virology knowledge mapped by practicality and misuse potential.

The graph plots virology topics on two axes: vertical (conceptual to practical) and horizontal (low to high misuse potential), with a blue-outlined target area indicating benchmark focus." style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;p&gt;&lt;em&gt;Apple Podcasts and Spotify do not show images in the episode description. Try &lt;a href="https://pocketcasts.com/" target="_blank" rel="noreferrer"&gt;Pocket Casts&lt;/a&gt;, or another podcast app.&lt;/em&gt;&lt;/p&gt;&lt;/div&gt;</description>
      <pubDate>Tue, 22 Apr 2025 16:08:58 GMT</pubDate>
      <guid isPermaLink="false">9ba84c1d-e768-4192-a88b-f48f40dc46bc</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/9ba84c1d-e768-4192-a88b-f48f40dc46bc.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Corin%2520Katzke%252C%2520Dan%2520Hendrycks&amp;title=AISN%20%2352%3A%20An%20Expert%20Virology%20Benchmark&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-52-an-expert&amp;created_at=2025-04-22T16%3A08%3A51.457929%2B00%3A00&amp;duration=610" length="0" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-52-an-expert</link>
      <itunes:duration>610</itunes:duration>
    </item>
    <item>
      <title>AISN #51: AI Frontiers</title>
      <description>&lt;p&gt; Plus, AI 2027.&lt;/p&gt; &lt;p&gt; In this newsletter, we cover the launch of AI Frontiers, a new forum for expert commentary on the future of AI. We also discuss AI 2027, a detailed scenario describing how artificial superintelligence might emerge in just a few years.&lt;/p&gt;&lt;p&gt; Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.&lt;/p&gt;&lt;p&gt;&lt;strong&gt; AI Frontiers&lt;/strong&gt;&lt;/p&gt;&lt;p&gt; Last week, CAIS introduced AI Frontiers, a new publication dedicated to gathering expert views on AI's most pressing questions. AI's impacts are wide-ranging, affecting jobs, health, national security, and beyond. Navigating these challenges requires a forum for varied viewpoints and expertise.&lt;/p&gt;&lt;p&gt; In this story, we’d like to highlight the publication's initial articles to give you a taste of the kind of coverage you can expect from AI Frontiers.&lt;/p&gt;&lt;picture&gt;&lt;/picture&gt;&lt;p&gt; Why Racing to Artificial Superintelligence Would Undermine America's National Security. Researchers Corin Katzke (also an author of this newsletter) and Gideon Futerman [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:33) AI Frontiers&lt;/p&gt;&lt;p&gt;(05:01) AI 2027&lt;/p&gt;&lt;p&gt;(10:02) Other News&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          April 15th, 2025 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-51-ai-frontiers?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-51-ai-frontiers&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;
       &lt;p&gt;---&lt;/p&gt;&lt;div style="max-width: 100%";&gt;&lt;p&gt;&lt;strong&gt;Images from the article:&lt;/strong&gt;&lt;/p&gt;&lt;a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa392466a-8605-45ba-bbbb-5c2629ab4bbc_1600x953.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa392466a-8605-45ba-bbbb-5c2629ab4bbc_1600x953.png" alt="Graph showing potential AI development trajectories from 2026 to 2028." style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;hr style="margin-top: 24px; margin-bottom: 24px;" /&gt;&lt;a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4c822d5a-51ef-4ee9-9d10-39a611095132_1558x548.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4c822d5a-51ef-4ee9-9d10-39a611095132_1558x548.png" alt="AI Frontiers logo on deep red background." style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;p&gt;&lt;em&gt;Apple Podcasts and Spotify do not show images in the episode description. Try &lt;a href="https://pocketcasts.com/" target="_blank" rel="noreferrer"&gt;Pocket Casts&lt;/a&gt;, or another podcast app.&lt;/em&gt;&lt;/p&gt;&lt;/div&gt;</description>
      <pubDate>Tue, 15 Apr 2025 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">796ffeac-3a83-4309-ab94-3ceccf2485ee</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/796ffeac-3a83-4309-ab94-3ceccf2485ee.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Corin%2520Katzke%252C%2520Dan%2520Hendrycks&amp;title=AISN%20%2351%3A%20AI%20Frontiers&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-51-ai-frontiers&amp;created_at=2025-04-15T15%3A00%3A21.094964%2B00%3A00&amp;duration=729" length="0" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-51-ai-frontiers</link>
      <itunes:duration>729</itunes:duration>
    </item>
    <item>
      <title>AISN #50: AI Action Plan Responses</title>
      <description>&lt;p&gt; Plus, Detecting Misbehavior in Reasoning Models.&lt;/p&gt; &lt;p&gt; In this newsletter, we cover AI companies’ responses to the federal government's request for information on the development of an AI Action Plan. We also discuss an OpenAI paper on detecting misbehavior in reasoning models by monitoring their chains of thought.&lt;/p&gt;&lt;p&gt; Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.&lt;/p&gt;&lt;p&gt; On January 23, President Trump signed an executive order giving his administration 180 days to develop an “AI Action Plan” to “enhance America's global AI dominance in order to promote human flourishing, economic competitiveness, and national security.”&lt;/p&gt;&lt;p&gt; Despite the rhetoric of the order, the Trump administration has yet to articulate many policy positions with respect to AI development and safety. In a recent interview, Ben Buchanan—Biden's AI advisor—interpreted the executive order as giving the administration time to develop its AI policies. The AI Action Plan will therefore likely [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          March 31st, 2025 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-50-ai-action?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-50-ai-action&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;
       &lt;p&gt;---&lt;/p&gt;&lt;div style="max-width: 100%";&gt;&lt;p&gt;&lt;strong&gt;Images from the article:&lt;/strong&gt;&lt;/p&gt;&lt;a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2fcfe4f8-9b5c-4ce4-9611-683a441c230b_1600x956.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2fcfe4f8-9b5c-4ce4-9611-683a441c230b_1600x956.png" alt="Chat conversation showing discussion about software testing and code implementation strategies.

The green and red messages discuss analyzing polynomial functions and verifying test results." style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;hr style="margin-top: 24px; margin-bottom: 24px;" /&gt;&lt;a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6544cd82-9ba4-472a-8183-d108be2c86ac_1537x675.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6544cd82-9ba4-472a-8183-d108be2c86ac_1537x675.png" alt="Three graphs comparing baseline and CoT pressure training data over training epochs, showing different cheating scenarios.

The left graph shows no cheating, the middle shows detected cheating, and the right shows undetected cheating attempts during agent training." style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;p&gt;&lt;em&gt;Apple Podcasts and Spotify do not show images in the episode description. Try &lt;a href="https://pocketcasts.com/" target="_blank" rel="noreferrer"&gt;Pocket Casts&lt;/a&gt;, or another podcast app.&lt;/em&gt;&lt;/p&gt;&lt;/div&gt;</description>
      <pubDate>Mon, 31 Mar 2025 14:54:30 GMT</pubDate>
      <guid isPermaLink="false">ae809b7d-25e4-442f-82a8-75a513ffa397</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/ae809b7d-25e4-442f-82a8-75a513ffa397.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Corin%2520Katzke%252C%2520Dan%2520Hendrycks&amp;title=AISN%20%2350%3A%20AI%20Action%20Plan%20Responses&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-50-ai-action&amp;created_at=2025-03-31T14%3A54%3A22.375021%2B00%3A00&amp;duration=745" length="0" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-50-ai-action</link>
      <itunes:duration>745</itunes:duration>
    </item>
    <item>
      <title>AISN #49: Superintelligence Strategy</title>
      <description>&lt;p&gt; Plus, Measuring AI Honesty.&lt;/p&gt; &lt;p&gt; Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. In this newsletter, we discuss two recent papers: a policy paper on national security strategy, and a technical paper on measuring honesty in AI systems.&lt;/p&gt;&lt;p&gt; Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.&lt;/p&gt;&lt;p&gt;&lt;strong&gt; Superintelligence Strategy&lt;/strong&gt;&lt;/p&gt;&lt;p&gt; CAIS director Dan Hendrycks, former Google CEO Eric Schmidt, and Scale AI CEO Alexandr Wang have authored a new paper, Superintelligence Strategy. The paper (and an in-depth expert version) argues that the development of superintelligence—AI systems that surpass humans in nearly every domain—is inescapably a matter of national security.&lt;/p&gt;&lt;p&gt; In this story, we introduce the paper's three-pronged strategy for national security in the age of advanced AI: deterrence, nonproliferation, and competitiveness.&lt;/p&gt;&lt;picture&gt;&lt;/picture&gt;&lt;p&gt;&lt;strong&gt; Deterrence&lt;/strong&gt;&lt;/p&gt;&lt;p&gt; The simultaneous power and danger of superintelligence presents [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:20) Superintelligence Strategy&lt;/p&gt;&lt;p&gt;(01:09) Deterrence&lt;/p&gt;&lt;p&gt;(02:41) Nonproliferation&lt;/p&gt;&lt;p&gt;(04:04) Competitiveness&lt;/p&gt;&lt;p&gt;(05:33) Measuring AI Honesty&lt;/p&gt;&lt;p&gt;(09:24) Links&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          March 6th, 2025 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-49-superintelligence?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-49-superintelligence&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;
       &lt;p&gt;---&lt;/p&gt;&lt;div style="max-width: 100%";&gt;&lt;p&gt;&lt;strong&gt;Images from the article:&lt;/strong&gt;&lt;/p&gt;&lt;a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd4a71e77-48c9-49a6-a757-8cdbc28d19e8_1600x720.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd4a71e77-48c9-49a6-a757-8cdbc28d19e8_1600x720.png" alt="undefined" style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;hr style="margin-top: 24px; margin-bottom: 24px;" /&gt;&lt;a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b74ae32-76b8-430f-92c9-2cf86e1ba710_1600x900.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b74ae32-76b8-430f-92c9-2cf86e1ba710_1600x900.png" alt="undefined" style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;hr style="margin-top: 24px; margin-bottom: 24px;" /&gt;&lt;a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ac21dab-6473-4436-880b-da868c9e5d9b_1600x738.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ac21dab-6473-4436-880b-da868c9e5d9b_1600x738.png" alt="undefined" style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;hr style="margin-top: 24px; margin-bottom: 24px;" /&gt;&lt;a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37b8b6f7-3ac8-41e2-a5b4-3cc7ed902c3e_1600x725.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37b8b6f7-3ac8-41e2-a5b4-3cc7ed902c3e_1600x725.png" alt="undefined" style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;hr style="margin-top: 24px; margin-bottom: 24px;" /&gt;&lt;a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4455070d-25de-4786-8540-3b221b8976dd_1600x876.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4455070d-25de-4786-8540-3b221b8976dd_1600x876.png" alt="undefined" style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;hr style="margin-top: 24px; margin-bottom: 24px;" /&gt;&lt;a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb9ac746a-e95a-47f6-9d7a-2bb63ddcf744_1600x768.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb9ac746a-e95a-47f6-9d7a-2bb63ddcf744_1600x768.png" alt="undefined" 
style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;p&gt;&lt;em&gt;Apple Podcasts and Spotify do not show images in the episode description. Try &lt;a href="https://pocketcasts.com/" target="_blank" rel="noreferrer"&gt;Pocket Casts&lt;/a&gt;, or another podcast app.&lt;/em&gt;&lt;/p&gt;&lt;/div&gt;</description>
      <pubDate>Thu, 06 Mar 2025 16:06:00 GMT</pubDate>
      <guid isPermaLink="false">31d33949-2c49-48aa-85a3-10129c655569</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/31d33949-2c49-48aa-85a3-10129c655569.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Corin%2520Katzke%252C%2520Dan%2520Hendrycks&amp;title=AISN&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-49-superintelligence&amp;created_at=2025-03-06T16%3A05%3A52.564283%2B00%3A00&amp;duration=691" length="0" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-49-superintelligence</link>
      <itunes:duration>691</itunes:duration>
    </item>
    <item>
      <title>Superintelligence Strategy: Expert Version</title>
      <description>&lt;p&gt;Superintelligence is destabilizing since it threatens other states’ survival—it could be weaponized, or states may lose control of it. Attempts to build superintelligence may face threats from rival states—creating a deterrence regime called Mutual Assured AI Malfunction (MAIM). In this paper, Dan Hendrycks, Eric Schmidt, and Alexandr Wang detail a strategy—focused on deterrence, nonproliferation, and competitiveness—for nations to navigate the risks of superintelligence.&lt;/p&gt;</description>
      <pubDate>Wed, 05 Mar 2025 22:02:37 GMT</pubDate>
      <guid isPermaLink="false">605697d3-0384-4a19-9abb-583f80c36044</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episodes/uploaded-audio/35864a5b-8fc1-40f8-a77c-01fbac2b6fb3.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=upload&amp;author=null&amp;title=Superintelligence%20Strategy%3A%20Expert%20Version&amp;source_url=https%3A%2F%2Fwww.nationalsecurity.ai%2F%3Fexpert&amp;created_at=2025-03-05T22%3A02%3A37.476451%2B00%3A00&amp;duration=null" length="0" type="audio/mpeg"/>
      <link>https://www.nationalsecurity.ai/?expert</link>
    </item>
    <item>
      <title>Superintelligence Strategy: Standard Version</title>
      <description>&lt;p&gt;Superintelligence is destabilizing since it threatens other states’ survival—it could be weaponized, or states may lose control of it. Attempts to build superintelligence may face threats from rival states—creating a deterrence regime called Mutual Assured AI Malfunction (MAIM). In this paper, Dan Hendrycks, Eric Schmidt, and Alexandr Wang detail a strategy—focused on deterrence, nonproliferation, and competitiveness—for nations to navigate the risks of superintelligence.&lt;/p&gt;</description>
      <pubDate>Wed, 05 Mar 2025 22:02:30 GMT</pubDate>
      <guid isPermaLink="false">1ec975b5-c22a-4208-be40-6d9cacbb14f6</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episodes/uploaded-audio/e25a03d1-ef9d-4531-b44f-113da216d180.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=upload&amp;author=null&amp;title=Superintelligence%20Strategy%3A%20Standard%20Version&amp;source_url=https%3A%2F%2Fwww.nationalsecurity.ai%2F%3Fstandard&amp;created_at=2025-03-05T22%3A02%3A31.225257%2B00%3A00&amp;duration=null" length="0" type="audio/mpeg"/>
      <link>https://www.nationalsecurity.ai/?standard</link>
    </item>
    <item>
      <title>AISN #48: Utility Engineering and EnigmaEval</title>
      <description>&lt;p&gt; Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.&lt;/p&gt;&lt;p&gt; In this newsletter, we explore two recent papers from CAIS. We’d also like to highlight that CAIS is hiring for editorial and writing roles, including for a new online platform for journalism and analysis regarding AI's impacts on national security, politics, and economics.&lt;/p&gt;&lt;p&gt;&lt;strong&gt; Utility Engineering&lt;/strong&gt;&lt;/p&gt;&lt;p&gt; A common view is that large language models (LLMs) are highly capable but fundamentally passive tools, shaping their responses based on training data without intrinsic goals or values. However, a new paper from the Center for AI Safety challenges this assumption, showing that LLMs exhibit coherent and structured value systems.&lt;/p&gt;&lt;p&gt; Structured preferences emerge with scale. The paper introduces Utility Engineering, a framework for analyzing and controlling AI [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:26) Utility Engineering&lt;/p&gt;&lt;p&gt;(04:48) EnigmaEval&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          February 18th, 2025 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-48-utility-engineering?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-48-utility-engineering&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;
       &lt;p&gt;---&lt;/p&gt;&lt;div style="max-width: 100%";&gt;&lt;p&gt;&lt;strong&gt;Images from the article:&lt;/strong&gt;&lt;/p&gt;&lt;a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7f8e4ae6-7a37-4377-9f3d-a41efb1cbd7b_1072x782.jpeg" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7f8e4ae6-7a37-4377-9f3d-a41efb1cbd7b_1072x782.jpeg" alt="undefined" style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;hr style="margin-top: 24px; margin-bottom: 24px;" /&gt;&lt;a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe67fb642-4cce-463b-aed5-26777d393977_1600x588.jpeg" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe67fb642-4cce-463b-aed5-26777d393977_1600x588.jpeg" alt="undefined" style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;hr style="margin-top: 24px; margin-bottom: 24px;" /&gt;&lt;a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ea44b62-5e2b-43de-b9de-02ee70db25ef_1600x576.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7ea44b62-5e2b-43de-b9de-02ee70db25ef_1600x576.png" alt="undefined" style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;hr style="margin-top: 24px; margin-bottom: 24px;" /&gt;&lt;a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8fbfb7e4-413d-4552-ad61-2dd0ccd7d309_1600x1223.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8fbfb7e4-413d-4552-ad61-2dd0ccd7d309_1600x1223.png" alt="undefined" style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;p&gt;&lt;em&gt;Apple Podcasts and Spotify do not show images in the episode description. Try &lt;a href="https://pocketcasts.com/" target="_blank" rel="noreferrer"&gt;Pocket Casts&lt;/a&gt;, or another podcast app.&lt;/em&gt;&lt;/p&gt;&lt;/div&gt;</description>
      <pubDate>Tue, 18 Feb 2025 17:31:09 GMT</pubDate>
      <guid isPermaLink="false">78c05760-2641-4c1a-b0ae-32c470c98eed</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/78c05760-2641-4c1a-b0ae-32c470c98eed.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Corin%2520Katzke%252C%2520Dan%2520Hendrycks&amp;title=AISN%20%2348%3A%20Utility%20Engineering%20and%20EnigmaEval&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-48-utility-engineering&amp;created_at=2025-02-18T17%3A31%3A01.979348%2B00%3A00&amp;duration=536" length="0" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-48-utility-engineering</link>
      <itunes:duration>536</itunes:duration>
    </item>
    <item>
      <title>AISN #47: Reasoning Models</title>
      <description>&lt;p&gt; Plus, State-Sponsored AI Cyberattacks.&lt;/p&gt; &lt;p&gt; Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.&lt;/p&gt;&lt;p&gt;&lt;strong&gt; Reasoning Models&lt;/strong&gt;&lt;/p&gt;&lt;p&gt; DeepSeek-R1 has been one of the most significant model releases since ChatGPT. After its release, DeepSeek's app quickly rose to the top of Apple's most downloaded chart, and NVIDIA saw a 17% stock decline. In this story, we cover DeepSeek-R1, OpenAI's o3-mini and Deep Research, and the policy implications of reasoning models.&lt;/p&gt;&lt;p&gt; DeepSeek-R1 is a frontier reasoning model. DeepSeek-R1 builds on the company's previous model, DeepSeek-V3, by adding reasoning capabilities through reinforcement learning training. R1 exhibits frontier-level capabilities in mathematics, coding, and scientific reasoning—comparable to OpenAI's o1. DeepSeek-R1 also scored 9.4% on Humanity's Last Exam—at the time of its release, the highest of any publicly available system.&lt;/p&gt;&lt;p&gt; DeepSeek reports spending only about $6 million on the computing power needed to train V3—however, that number doesn’t include the full [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:13) Reasoning Models&lt;/p&gt;&lt;p&gt;(04:58) State-Sponsored AI Cyberattacks&lt;/p&gt;&lt;p&gt;(06:51) Links&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          February 6th, 2025 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-47-reasoning?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-47-reasoning&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;
       &lt;p&gt;---&lt;/p&gt;&lt;div style="max-width: 100%";&gt;&lt;p&gt;&lt;strong&gt;Images from the article:&lt;/strong&gt;&lt;/p&gt;&lt;a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F872ba487-5b6a-484d-a542-4173781925fd_1600x1170.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F872ba487-5b6a-484d-a542-4173781925fd_1600x1170.png" alt="undefined" style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;p&gt;&lt;em&gt;Apple Podcasts and Spotify do not show images in the episode description. Try &lt;a href="https://pocketcasts.com/" target="_blank" rel="noreferrer"&gt;Pocket Casts&lt;/a&gt;, or another podcast app.&lt;/em&gt;&lt;/p&gt;&lt;/div&gt;</description>
      <pubDate>Thu, 06 Feb 2025 17:38:10 GMT</pubDate>
      <guid isPermaLink="false">8262d58c-b892-4cf4-9808-73c761fe5252</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/8262d58c-b892-4cf4-9808-73c761fe5252.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Corin%2520Katzke%252C%2520Dan%2520Hendrycks&amp;title=AISN%20%2347%3A%20Reasoning%20Models&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-47-reasoning&amp;created_at=2025-02-06T17%3A38%3A02.82962%2B00%3A00&amp;duration=540" length="0" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-47-reasoning</link>
      <itunes:duration>540</itunes:duration>
    </item>
    <item>
      <title>AISN #46: The Transition</title>
      <description>&lt;p&gt; Plus, Humanity's Last Exam, and the AI Safety, Ethics, and Society Course.&lt;/p&gt; &lt;p&gt; Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.&lt;/p&gt;&lt;p&gt;&lt;strong&gt; The Transition&lt;/strong&gt;&lt;/p&gt;&lt;p&gt; The transition from the Biden administration to the Trump administration saw a flurry of executive activity on AI policy, with Biden signing several last-minute executive orders and Trump revoking Biden's 2023 executive order on AI risk. In this story, we review the state of play.&lt;/p&gt;&lt;p&gt; Trump signing first-day executive orders. Source.&lt;/p&gt;&lt;p&gt; The AI Diffusion Framework. The final weeks of the Biden Administration saw three major actions related to AI policy. First, the Bureau of Industry and Security released its Framework for Artificial Intelligence Diffusion, which updates the US’ AI-related export controls. The rule establishes three tiers of countries: 1) US allies, 2) most other countries, and 3) arms-embargoed countries.&lt;/p&gt;&lt;ol&gt; &lt;li&gt; &lt;p&gt; Companies headquartered in tier-1 countries can freely deploy AI chips in other [...]&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:16) The Transition&lt;/p&gt;&lt;p&gt;(04:38) CAIS and Scale AI Introduce Humanity's Last Exam&lt;/p&gt;&lt;p&gt;(08:03) AI Safety, Ethics, and Society Course&lt;/p&gt;&lt;p&gt;(09:21) Links&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          January 23rd, 2025 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-46-the-transition?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-46-the-transition&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;
       &lt;p&gt;---&lt;/p&gt;&lt;div style="max-width: 100%";&gt;&lt;p&gt;&lt;strong&gt;Images from the article:&lt;/strong&gt;&lt;/p&gt;&lt;a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3fc9f4a-a082-4c93-b867-14cd09b3e4a2_1600x900.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3fc9f4a-a082-4c93-b867-14cd09b3e4a2_1600x900.png" alt="undefined" style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;hr style="margin-top: 24px; margin-bottom: 24px;" /&gt;&lt;a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbbdc95e7-ef06-4c09-9d98-efc76100d9dc_1374x796.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbbdc95e7-ef06-4c09-9d98-efc76100d9dc_1374x796.png" alt="undefined" style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;hr style="margin-top: 24px; margin-bottom: 24px;" /&gt;&lt;a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6add9d20-4b98-42be-82b8-c0140557e590_1055x1600.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6add9d20-4b98-42be-82b8-c0140557e590_1055x1600.png" alt="undefined" style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;hr style="margin-top: 24px; margin-bottom: 24px;" /&gt;&lt;a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8badbf9d-83a5-42df-aa19-c71cb7fb0594_1600x470.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8badbf9d-83a5-42df-aa19-c71cb7fb0594_1600x470.png" alt="undefined" style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;hr style="margin-top: 24px; margin-bottom: 24px;" /&gt;&lt;a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb5a0d1e-563e-4ccb-91f4-5d3c92ff6cae_1122x318.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb5a0d1e-563e-4ccb-91f4-5d3c92ff6cae_1122x318.png" alt="undefined" style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;p&gt;&lt;em&gt;Apple Podcasts and Spotify do not show images in the episode description. Try &lt;a href="https://pocketcasts.com/" target="_blank" rel="noreferrer"&gt;Pocket Casts&lt;/a&gt;, or another podcast app.&lt;/em&gt;&lt;/p&gt;&lt;/div&gt;</description>
      <pubDate>Thu, 23 Jan 2025 17:09:14 GMT</pubDate>
      <guid isPermaLink="false">c101b23d-bfd4-4eeb-af0e-cb17c6455fb3</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/c101b23d-bfd4-4eeb-af0e-cb17c6455fb3.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Corin%2520Katzke%252C%2520Dan%2520Hendrycks&amp;title=AISN%20%2346%3A%20The%20Transition&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-46-the-transition&amp;created_at=2025-01-23T17%3A09%3A06.970477%2B00%3A00&amp;duration=680" length="0" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-46-the-transition</link>
      <itunes:duration>680</itunes:duration>
    </item>
    <item>
      <title>AISN #45: Center for AI Safety 2024 Year in Review</title>
      <description>&lt;p&gt; As 2024 draws to a close, we want to thank you for your continued support for AI safety and review what we’ve been able to accomplish. In this special-edition newsletter, we highlight some of our most important projects from the year.&lt;/p&gt;&lt;p&gt; The mission of the Center for AI Safety is to reduce societal-scale risks from AI. We focus on three pillars of work: research, field-building, and advocacy.&lt;/p&gt;&lt;p&gt;&lt;strong&gt; Research&lt;/strong&gt;&lt;/p&gt;&lt;p&gt; CAIS conducts both technical and conceptual research on AI safety. Here are some highlights from our research in 2024:&lt;/p&gt;&lt;p&gt; Circuit Breakers. We published breakthrough research showing how circuit breakers can prevent AI models from behaving dangerously by interrupting crime-enabling outputs. In a jailbreaking competition with a prize pool of tens of thousands of dollars, it took twenty thousand attempts to jailbreak a model trained with circuit breakers. The paper was accepted to NeurIPS 2024.&lt;/p&gt;&lt;p&gt; The WMDP Benchmark. We developed the Weapons [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:34) Research&lt;/p&gt;&lt;p&gt;(04:25) Advocacy&lt;/p&gt;&lt;p&gt;(06:44) Field-Building&lt;/p&gt;&lt;p&gt;(10:38) Looking Ahead&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          December 19th, 2024 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/aisn-45-center-for-ai-safety-2024?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/aisn-45-center-for-ai-safety-2024&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;
       &lt;p&gt;---&lt;/p&gt;&lt;div style="max-width: 100%";&gt;&lt;p&gt;&lt;strong&gt;Images from the article:&lt;/strong&gt;&lt;/p&gt;&lt;a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2925c7c6-ee18-4ab9-8405-fca897d63024_1546x1048.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2925c7c6-ee18-4ab9-8405-fca897d63024_1546x1048.png" alt="undefined" style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;hr style="margin-top: 24px; margin-bottom: 24px;" /&gt;&lt;a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2461925-c1d1-49ae-bc5f-f5e8740a8079_1192x422.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2461925-c1d1-49ae-bc5f-f5e8740a8079_1192x422.png" alt="undefined" style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;hr style="margin-top: 24px; margin-bottom: 24px;" /&gt;&lt;a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faca531b4-89a6-4cb3-b01f-4a90529147f1_1600x728.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faca531b4-89a6-4cb3-b01f-4a90529147f1_1600x728.png" alt="undefined" style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;hr style="margin-top: 24px; margin-bottom: 24px;" /&gt;&lt;a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39af8cc5-f5b2-499d-9339-2cec4dba653b_1600x964.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39af8cc5-f5b2-499d-9339-2cec4dba653b_1600x964.png" alt="undefined" style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;p&gt;&lt;em&gt;Apple Podcasts and Spotify do not show images in the episode description. Try &lt;a href="https://pocketcasts.com/" target="_blank" rel="noreferrer"&gt;Pocket Casts&lt;/a&gt;, or another podcast app.&lt;/em&gt;&lt;/p&gt;&lt;/div&gt;</description>
      <pubDate>Thu, 19 Dec 2024 17:12:39 GMT</pubDate>
      <guid isPermaLink="false">1c54b2bf-b3cd-446c-a0ca-a6a9e8596413</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/1c54b2bf-b3cd-446c-a0ca-a6a9e8596413.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Center%2520for%2520AI%2520Safety&amp;title=AISN%20%2345%3A%20Center%20for%20AI%20Safety%202024%20Year%20in%20Review&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Faisn-45-center-for-ai-safety-2024&amp;created_at=2024-12-19T17%3A12%3A32.222367%2B00%3A00&amp;duration=691" length="0" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/aisn-45-center-for-ai-safety-2024</link>
      <itunes:duration>691</itunes:duration>
    </item>
    <item>
      <title>AISN #44: The Trump Circle on AI Safety</title>
      <description>&lt;p&gt; Plus, Chinese researchers used Llama to create a military tool for the PLA, a Google AI system discovered a zero-day cybersecurity vulnerability, and Complex Systems.&lt;/p&gt; &lt;p&gt; Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.&lt;/p&gt;&lt;p&gt;&lt;strong&gt; The Trump Circle on AI Safety&lt;/strong&gt;&lt;/p&gt;&lt;p&gt; The incoming Trump administration is likely to significantly alter the US government's approach to AI safety. For example, Trump is likely to immediately repeal Biden's Executive Order on AI.&lt;/p&gt;&lt;p&gt; However, some in Trump's circle appear to take AI safety seriously. The most prominent AI safety advocate close to Trump is Elon Musk, who earlier this year supported SB 1047. But he is not alone. Below, we’ve gathered some promising perspectives from other members of Trump's circle and incoming administration.&lt;/p&gt;&lt;p&gt; Trump and Musk at UFC 309. Photo Source.&lt;/p&gt;&lt;ol&gt; &lt;li&gt; &lt;p&gt; Robert F. Kennedy Jr., Trump's pick for Secretary of Health and Human Services, said in [...]&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:24) The Trump Circle on AI Safety&lt;/p&gt;&lt;p&gt;(02:41) Chinese Researchers Used Llama to Create a Military Tool for the PLA&lt;/p&gt;&lt;p&gt;(04:14) A Google AI System Discovered a Zero-Day Cybersecurity Vulnerability&lt;/p&gt;&lt;p&gt;(05:27) Complex Systems&lt;/p&gt;&lt;p&gt;(08:54) Links&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          November 19th, 2024 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-44-the-trump?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-44-the-trump&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;
       &lt;p&gt;---&lt;/p&gt;&lt;div style="max-width: 100%";&gt;&lt;p&gt;&lt;strong&gt;Images from the article:&lt;/strong&gt;&lt;/p&gt;&lt;a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbf2434c-97c4-457f-b0e4-2db36b107fc2_959x639.jpeg" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbf2434c-97c4-457f-b0e4-2db36b107fc2_959x639.jpeg" alt="undefined" style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;hr style="margin-top: 24px; margin-bottom: 24px;" /&gt;&lt;a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb705c69-2697-4ca1-a554-ae7b75402a1d_1339x693.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb705c69-2697-4ca1-a554-ae7b75402a1d_1339x693.png" alt="undefined" style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;p&gt;&lt;em&gt;Apple Podcasts and Spotify do not show images in the episode description. Try &lt;a href="https://pocketcasts.com/" target="_blank" rel="noreferrer"&gt;Pocket Casts&lt;/a&gt;, or another podcast app.&lt;/em&gt;&lt;/p&gt;&lt;/div&gt;</description>
      <pubDate>Tue, 19 Nov 2024 16:08:17 GMT</pubDate>
      <guid isPermaLink="false">ca204a3b-8d26-4f81-acb6-35fd4294080c</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/ca204a3b-8d26-4f81-acb6-35fd4294080c.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Corin%2520Katzke%252C%2520Julius%2520Simonelli%252C%2520Andrew%2520Zeng%252C%2520Dan%2520Hendrycks&amp;title=AISN%20%2344%3A%20The%20Trump%20Circle%20on%20AI%20Safety&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-44-the-trump&amp;created_at=2024-11-19T16%3A08%3A11.347167%2B00%3A00&amp;duration=682" length="0" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-44-the-trump</link>
      <itunes:duration>682</itunes:duration>
    </item>
    <item>
      <title>AISN #43: White House Issues First National Security Memo on AI</title>
      <description>&lt;p&gt; Plus, AI and Job Displacement, and AI Takes Over the Nobels.&lt;/p&gt; &lt;p&gt; Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.&lt;/p&gt;&lt;p&gt;&lt;strong&gt; White House Issues First National Security Memo on AI&lt;/strong&gt;&lt;/p&gt;&lt;p&gt; On October 24, 2024, the White House issued the first National Security Memorandum (NSM) on Artificial Intelligence, accompanied by a Framework to Advance AI Governance and Risk Management in National Security.&lt;/p&gt;&lt;p&gt; The NSM identifies AI leadership as a national security priority. The memorandum states that competitors have employed economic and technological espionage to steal U.S. AI technology. To maintain a U.S. advantage in AI, the memorandum directs the National Economic Council to assess the U.S.'s competitive position in:&lt;/p&gt;&lt;ol&gt; &lt;li&gt; &lt;p&gt; Semiconductor design and manufacturing&lt;/p&gt;&lt;/li&gt;&lt;li&gt; &lt;p&gt; Availability of computational resources&lt;/p&gt;&lt;/li&gt;&lt;li&gt; &lt;p&gt; Access to workers highly skilled in AI&lt;/p&gt;&lt;/li&gt;&lt;li&gt; &lt;p&gt; Capital availability for AI development&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;p&gt; The Intelligence Community must make gathering intelligence on competitors' operations against the [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:18) White House Issues First National Security Memo on AI&lt;/p&gt;&lt;p&gt;(03:22) AI and Job Displacement&lt;/p&gt;&lt;p&gt;(09:13) AI Takes Over the Nobels&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          October 28th, 2024 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-43-white-house?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-43-white-house&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;
       &lt;p&gt;---&lt;/p&gt;&lt;div style="max-width: 100%";&gt;&lt;p&gt;&lt;strong&gt;Images from the article:&lt;/strong&gt;&lt;/p&gt;&lt;a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fde24ea1d-e137-4392-98b3-9fa80ce102a7_1220x1166.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fde24ea1d-e137-4392-98b3-9fa80ce102a7_1220x1166.png" alt="undefined" style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;hr style="margin-top: 24px; margin-bottom: 24px;" /&gt;&lt;a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa80a09bc-e4c3-4d08-96af-ff156cb5131e_2670x1982.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa80a09bc-e4c3-4d08-96af-ff156cb5131e_2670x1982.png" alt="undefined" style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;p&gt;&lt;em&gt;Apple Podcasts and Spotify do not show images in the episode description. Try &lt;a href="https://pocketcasts.com/" target="_blank" rel="noreferrer"&gt;Pocket Casts&lt;/a&gt;, or another podcast app.&lt;/em&gt;&lt;/p&gt;&lt;/div&gt;</description>
      <pubDate>Mon, 28 Oct 2024 15:22:21 GMT</pubDate>
      <guid isPermaLink="false">e137f80a-4524-4a59-b1ec-dfa341eaf411</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/e137f80a-4524-4a59-b1ec-dfa341eaf411.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Corin%2520Katzke%252C%2520Alexa%2520Pan%252C%2520Dan%2520Hendrycks&amp;title=AISN%20%2343%3A%20White%20House%20Issues%20First%20National%20Security%20Memo%20on%20AI&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-43-white-house&amp;created_at=2024-10-28T15%3A22%3A16.187088%2B00%3A00&amp;duration=895" length="0" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-43-white-house</link>
      <itunes:duration>895</itunes:duration>
    </item>
    <item>
      <title>AISN #42: Newsom Vetoes SB 1047</title>
      <description>&lt;p&gt; Plus, OpenAI's o1, and AI Governance Summary.&lt;/p&gt; &lt;p&gt; Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.&lt;/p&gt;&lt;p&gt; Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.&lt;/p&gt;&lt;p&gt;&lt;strong&gt; Newsom Vetoes SB 1047&lt;/strong&gt;&lt;/p&gt;&lt;p&gt; On Sunday, Governor Newsom vetoed California's Senate Bill 1047 (SB 1047), the most ambitious legislation to date aimed at regulating frontier AI models. The bill, introduced by Senator Scott Wiener and covered in a previous newsletter, would have required AI developers to test frontier models for hazardous capabilities and take steps to mitigate catastrophic risks. (CAIS Action Fund was a co-sponsor of SB 1047.)&lt;/p&gt;&lt;p&gt; Newsom states that SB 1047 is not comprehensive enough. In his letter to the California Senate, the governor argued that “SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:18) Newsom Vetoes SB 1047&lt;/p&gt;&lt;p&gt;(01:55) OpenAI's o1&lt;/p&gt;&lt;p&gt;(06:44) AI Governance&lt;/p&gt;&lt;p&gt;(10:32) Links&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          October 1st, 2024 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-42-newsom-vetoes?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-42-newsom-vetoes&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;
       &lt;p&gt;---&lt;/p&gt;&lt;div style="max-width: 100%";&gt;&lt;p&gt;&lt;strong&gt;Images from the article:&lt;/strong&gt;&lt;/p&gt;&lt;a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ab7ef13-032e-49f4-abaf-878cbd92c902_1600x622.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ab7ef13-032e-49f4-abaf-878cbd92c902_1600x622.png" alt="undefined" style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;hr style="margin-top: 24px; margin-bottom: 24px;" /&gt;&lt;a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3e9a48f-d81c-4ee1-80ce-5be272592b71_1600x1100.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3e9a48f-d81c-4ee1-80ce-5be272592b71_1600x1100.png" alt="undefined" style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;hr style="margin-top: 24px; margin-bottom: 24px;" /&gt;&lt;a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F597d95c5-d56f-498c-9861-1c8bcd9cb9e6_855x520.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F597d95c5-d56f-498c-9861-1c8bcd9cb9e6_855x520.png" alt="undefined" style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;p&gt;&lt;em&gt;Apple Podcasts and Spotify do not show images in the episode description. Try &lt;a href="https://pocketcasts.com/" target="_blank" rel="noreferrer"&gt;Pocket Casts&lt;/a&gt;, or another podcast app.&lt;/em&gt;&lt;/p&gt;&lt;/div&gt;</description>
      <pubDate>Tue, 01 Oct 2024 16:17:13 GMT</pubDate>
      <guid isPermaLink="false">e2eec9cc-79a2-419a-879f-6d1fe68bd0e2</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/e2eec9cc-79a2-419a-879f-6d1fe68bd0e2.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Corin%2520Katzke%252C%2520Julius%2520Simonelli%252C%2520Alexa%2520Pan%252C%2520Andrew%2520Zeng%252C%2520Dan%2520Hendrycks&amp;title=AISN%20%2342%3A%20Newsom%20Vetoes%20SB%201047&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-42-newsom-vetoes&amp;created_at=2024-10-01T16%3A17%3A06.531061%2B00%3A00&amp;duration=791" length="0" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-42-newsom-vetoes</link>
      <itunes:duration>791</itunes:duration>
    </item>
    <item>
      <title>AISN #41: The Next Generation of Compute Scale</title>
      <description>&lt;p&gt; Plus, Ranking Models by Susceptibility to Jailbreaking, and Machine Ethics.&lt;/p&gt; &lt;p&gt; Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.&lt;/p&gt;&lt;p&gt;&lt;strong&gt; The Next Generation of Compute Scale&lt;/strong&gt;&lt;/p&gt;&lt;p&gt; AI development is on the cusp of a dramatic expansion in compute scale. Recent developments across multiple fronts—from chip manufacturing to power infrastructure—point to a future where AI models may dwarf today's largest systems. In this story, we examine key developments and their implications for the future of AI compute.&lt;/p&gt;&lt;p&gt; xAI and Tesla are building massive AI clusters. Elon Musk's xAI has brought its Memphis supercluster—“Colossus”—online. According to Musk, the cluster has 100k Nvidia H100s, making it the largest supercomputer in the world. Moreover, xAI plans to add 50k H200s in the next few months. For comparison, Meta's Llama 3 was trained on 16k H100s.&lt;/p&gt;&lt;p&gt; Meanwhile, Tesla's “Gigafactory Texas” is expanding to house an AI supercluster. Tesla's Gigafactory supercomputer [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:18) The Next Generation of Compute Scale&lt;/p&gt;&lt;p&gt;(04:36) Ranking Models by Susceptibility to Jailbreaking&lt;/p&gt;&lt;p&gt;(06:07) Machine Ethics&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          September 11th, 2024 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-41-the-next?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-41-the-next&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;
       &lt;p&gt;---&lt;/p&gt;&lt;div style="max-width: 100%";&gt;&lt;p&gt;&lt;strong&gt;Images from the article:&lt;/strong&gt;&lt;/p&gt;&lt;a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d3f91f9-4ca4-4349-9968-700b7d2839af_1280x684.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d3f91f9-4ca4-4349-9968-700b7d2839af_1280x684.png" alt="undefined" style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;p&gt;&lt;em&gt;Apple Podcasts and Spotify do not show images in the episode description. Try &lt;a href="https://pocketcasts.com/" target="_blank" rel="noreferrer"&gt;Pocket Casts&lt;/a&gt;, or another podcast app.&lt;/em&gt;&lt;/p&gt;&lt;/div&gt;</description>
      <pubDate>Wed, 11 Sep 2024 15:31:40 GMT</pubDate>
      <guid isPermaLink="false">0130d0da-b714-410d-97d7-32611a225504</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/0130d0da-b714-410d-97d7-32611a225504.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Corin%2520Katzke%252C%2520Julius%2520Simonelli%252C%2520Andrew%2520Zeng%252C%2520Dan%2520Hendrycks&amp;title=AISN%20%2341%3A%20The%20Next%20Generation%20of%20Compute%20Scale&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-41-the-next&amp;created_at=2024-09-11T15%3A31%3A32.872657%2B00%3A00&amp;duration=719" length="0" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-41-the-next</link>
      <itunes:duration>719</itunes:duration>
    </item>
    <item>
      <title>AISN #40: California AI Legislation</title>
      <description>&lt;p&gt; Plus, NVIDIA Delays Chip Production, and Do AI Safety Benchmarks Actually Measure Safety?&lt;/p&gt; &lt;p&gt; Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.&lt;/p&gt;&lt;p&gt;&lt;strong&gt; SB 1047, the Most-Discussed California AI Legislation&lt;/strong&gt;&lt;/p&gt;&lt;p&gt; California's Senate Bill 1047 has sparked discussion over AI regulation. While state bills often fly under the radar, SB 1047 has garnered attention due to California's unique position in the tech landscape. If passed, SB 1047 would apply to all companies doing business in the state, potentially setting a precedent for AI governance more broadly.&lt;/p&gt;&lt;p&gt; This newsletter examines the current state of the bill, which has been amended several times in response to stakeholder feedback. We'll cover recent debates surrounding the bill, support from AI experts, opposition from the tech industry, and public opinion based on polling.&lt;/p&gt;&lt;p&gt; The bill mandates safety protocols, testing procedures, and reporting requirements for covered AI models. The bill was [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:18) SB 1047, the Most-Discussed California AI Legislation&lt;/p&gt;&lt;p&gt;(04:38) NVIDIA Delays Chip Production&lt;/p&gt;&lt;p&gt;(06:49) Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress?&lt;/p&gt;&lt;p&gt;(10:22) Links&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          August 21st, 2024 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/aisn-40-california-ai-legislation?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/aisn-40-california-ai-legislation&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;
       &lt;p&gt;---&lt;/p&gt;&lt;div style="max-width: 100%";&gt;&lt;p&gt;&lt;strong&gt;Images from the article:&lt;/strong&gt;&lt;/p&gt;&lt;a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5d2f21c7-15ee-456c-8ee1-4b6741519f9f_1600x842.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5d2f21c7-15ee-456c-8ee1-4b6741519f9f_1600x842.png" alt="undefined" style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;p&gt;&lt;em&gt;Apple Podcasts and Spotify do not show images in the episode description. Try &lt;a href="https://pocketcasts.com/" target="_blank" rel="noreferrer"&gt;Pocket Casts&lt;/a&gt;, or another podcast app.&lt;/em&gt;&lt;/p&gt;&lt;/div&gt;</description>
      <pubDate>Wed, 21 Aug 2024 16:08:50 GMT</pubDate>
      <guid isPermaLink="false">e8f6d6e8-1ecb-4fe8-af52-132b4c68f3fd</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/e8f6d6e8-1ecb-4fe8-af52-132b4c68f3fd.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Corin%2520Katzke%252C%2520Julius%2520Simonelli%252C%2520Alexa%2520Pan%252C%2520Dan%2520Hendrycks&amp;title=AISN%20%2340%3A%20California%20AI%20Legislation&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Faisn-40-california-ai-legislation&amp;created_at=2024-08-21T16%3A08%3A43.006581%2B00%3A00&amp;duration=840" length="0" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/aisn-40-california-ai-legislation</link>
      <itunes:duration>840</itunes:duration>
    </item>
    <item>
      <title>AISN #39: Implications of a Trump Administration for AI Policy</title>
      <description>&lt;p&gt; Plus, Safety Engineering Overview.&lt;/p&gt; &lt;p&gt; Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.&lt;/p&gt;&lt;p&gt;&lt;strong&gt; Implications of a Trump administration for AI policy&lt;/strong&gt;&lt;/p&gt;&lt;p&gt; Trump named Ohio Senator J.D. Vance—an AI regulation skeptic—as his pick for vice president. This choice sheds light on the AI policy landscape under a future Trump administration. In this story, we cover: (1) Vance's views on AI policy, (2) views of key players in the administration, such as Trump's party, donors, and allies, and (3) why AI safety should remain bipartisan.&lt;/p&gt;&lt;p&gt; Vance has pushed for reducing AI regulations and making AI weights open. At a recent Senate hearing, Vance accused Big Tech companies of overstating risks from AI in order to justify regulations to stifle competition. This led tech policy experts to expect that Vance would favor looser AI regulations.&lt;/p&gt;&lt;p&gt; However, Vance has also praised Lina Khan, Chair of the Federal Trade [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:18) Implications of a Trump administration for AI policy&lt;/p&gt;&lt;p&gt;(04:57) Safety Engineering&lt;/p&gt;&lt;p&gt;(08:49) Links&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          July 29th, 2024 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-39-implications?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-39-implications&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;
       &lt;p&gt;---&lt;/p&gt;&lt;div style="max-width: 100%";&gt;&lt;p&gt;&lt;strong&gt;Images from the article:&lt;/strong&gt;&lt;/p&gt;&lt;a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F902ac97b-07bb-4232-adb5-d17778692649_1600x1067.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F902ac97b-07bb-4232-adb5-d17778692649_1600x1067.png" alt="undefined" style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;hr style="margin-top: 24px; margin-bottom: 24px;" /&gt;&lt;a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6041e326-c428-4327-bfd0-32066833d9ec_1600x584.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6041e326-c428-4327-bfd0-32066833d9ec_1600x584.png" alt="undefined" style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;p&gt;&lt;em&gt;Apple Podcasts and Spotify do not show images in the episode description. Try &lt;a href="https://pocketcasts.com/" target="_blank" rel="noreferrer"&gt;Pocket Casts&lt;/a&gt;, or another podcast app.&lt;/em&gt;&lt;/p&gt;&lt;/div&gt;</description>
      <pubDate>Mon, 29 Jul 2024 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">6e32bfa4-3bcb-47ab-9273-7045c3fc828e</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/6e32bfa4-3bcb-47ab-9273-7045c3fc828e.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Corin%2520Katzke%252C%2520Alexa%2520Pan%252C%2520Andrew%2520Zeng%252C%2520Julius%2520Simonelli%252C%2520Dan%2520Hendrycks&amp;title=AISN%20%2339%3A%20Implications%20of%20a%20Trump%20Administration%20for%20AI%20Policy&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-39-implications&amp;created_at=2024-08-03T02%3A00%3A13.486151%2B00%3A00&amp;duration=720" length="0" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-39-implications</link>
      <itunes:duration>720</itunes:duration>
    </item>
    <item>
      <title>AISN #38: Supreme Court Decision Could Limit Federal Ability to Regulate AI</title>
      <description>&lt;p&gt; Plus, “Circuit Breakers” for AI systems, and updates on China's AI industry.&lt;/p&gt; &lt;p&gt; Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.&lt;/p&gt;&lt;p&gt;&lt;strong&gt; Supreme Court Decision Could Limit Federal Ability to Regulate AI&lt;/strong&gt;&lt;/p&gt;&lt;p&gt; In a recent decision, the Supreme Court overruled the 1984 precedent Chevron v. Natural Resources Defense Council. In this story, we discuss the decision's implications for regulating AI.&lt;/p&gt;&lt;p&gt; Chevron allowed agencies to flexibly apply expertise when regulating. The “Chevron doctrine” had required courts to defer to a federal agency's interpretation of a statute when the statute was ambiguous and the agency's interpretation was reasonable. Its elimination curtails federal agencies’ ability to regulate—including, as this article from LawAI explains, their ability to regulate AI. &lt;/p&gt;&lt;p&gt; The Chevron doctrine expanded federal agencies’ ability to regulate in at least two ways. First, agencies could draw on their technical expertise to interpret ambiguous statutes [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          July 9th, 2024 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-38-supreme-court?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-38-supreme-court&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;
       &lt;p&gt;---&lt;/p&gt;&lt;div style="max-width: 100%";&gt;&lt;p&gt;&lt;strong&gt;Images from the article:&lt;/strong&gt;&lt;/p&gt;&lt;a href="https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0247aba-ec24-4f37-93c5-730a6419aebb_1340x918.png" target="_blank"&gt;&lt;img src="https://substackcdn.com/image/fetch/w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0247aba-ec24-4f37-93c5-730a6419aebb_1340x918.png" alt="undefined" style="max-width: 100%;" /&gt;&lt;/a&gt;&lt;p&gt;&lt;em&gt;Apple Podcasts and Spotify do not show images in the episode description. Try &lt;a href="https://pocketcasts.com/" target="_blank" rel="noreferrer"&gt;Pocket Casts&lt;/a&gt;, or another podcast app.&lt;/em&gt;&lt;/p&gt;&lt;/div&gt;</description>
      <pubDate>Tue, 09 Jul 2024 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">b2dfe538-8744-4e9f-8d5f-1b82d2c516bd</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/b2dfe538-8744-4e9f-8d5f-1b82d2c516bd.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Corin%2520Katzke%252C%2520Alexa%2520Pan%252C%2520Julius%2520Simonelli%252C%2520Dan%2520Hendrycks&amp;title=AISN%20%2338%3A%20Supreme%20Court%20Decision%20Could%20Limit%20Federal%20Ability%20to%20Regulate%20AI&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-38-supreme-court&amp;created_at=2024-07-31T10%3A30%3A30.919859%2B00%3A00&amp;duration=631" length="0" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-38-supreme-court</link>
      <itunes:duration>631</itunes:duration>
    </item>
    <item>
      <title>AISN #37: US Launches Antitrust Investigations</title>
      <description>&lt;p&gt;&lt;strong&gt; US Launches Antitrust Investigations&lt;/strong&gt;&lt;/p&gt;&lt;p&gt; The U.S. Government has launched antitrust investigations into Nvidia, OpenAI, and Microsoft. The U.S. Department of Justice (DOJ) and Federal Trade Commission (FTC) have agreed to investigate potential antitrust violations by the three companies, the New York Times reported. The DOJ will lead the investigation into Nvidia while the FTC will focus on OpenAI and Microsoft.&lt;/p&gt;&lt;p&gt; Antitrust investigations are conducted by government agencies to determine whether companies are engaging in anticompetitive practices that may harm consumers and stifle competition. &lt;/p&gt;&lt;p&gt; Nvidia investigated for GPU dominance. The New York Times reports that concerns have been raised about Nvidia's dominance in the GPU market, “including how the company's software locks [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:10) US Launches Antitrust Investigations&lt;/p&gt;&lt;p&gt;(02:58) Recent Criticisms of OpenAI and Anthropic&lt;/p&gt;&lt;p&gt;(05:40) Situational Awareness&lt;/p&gt;&lt;p&gt;(09:14) Links&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          June 18th, 2024 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-37-us-launches?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-37-us-launches&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;</description>
      <pubDate>Tue, 18 Jun 2024 15:00:35 GMT</pubDate>
      <guid isPermaLink="false">4ec1d31d-a5ce-46ba-8ec3-05f2e424b0c0</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/a66e08ee-594b-49e0-b318-3620d493dad4.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Corin%2520Katzke&amp;title=AISN%20%2337%3A%20US%20Launches%20Antitrust%20Investigations&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-37-us-launches&amp;created_at=2024-06-18T15%3A00%3A27.022484%2B00%3A00&amp;duration=662" length="7936128" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-37-us-launches</link>
      <itunes:duration>662</itunes:duration>
    </item>
    <item>
      <title>AISN #36: Voluntary Commitments are Insufficient</title>
      <description>&lt;p&gt;&lt;strong&gt; Voluntary Commitments are Insufficient&lt;/strong&gt;&lt;/p&gt;&lt;p&gt; AI companies agree to responsible scaling policies (RSPs) in Seoul. Following the second global AI summit, held in Seoul, the UK and Republic of Korea governments announced that 16 major technology organizations, including Amazon, Google, Meta, Microsoft, OpenAI, and xAI, have agreed to a new set of Frontier AI Safety Commitments. &lt;/p&gt;&lt;p&gt; Some commitments from the agreement include:&lt;/p&gt;&lt;ol&gt; &lt;li&gt; &lt;p&gt; Assessing risks posed by AI models and systems throughout the AI lifecycle.&lt;/p&gt;&lt;/li&gt;&lt;li&gt; &lt;p&gt; Setting thresholds for severe risks, defining when a model or system would pose intolerable risk if not adequately mitigated.&lt;/p&gt;&lt;/li&gt;&lt;li&gt; &lt;p&gt; Keeping risks within defined thresholds, such as by modifying system behaviors and implementing robust security controls.&lt;/p&gt;&lt;/li&gt;&lt;li&gt; &lt;p&gt; Potentially halting development or deployment if risks cannot be sufficiently mitigated. &lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;p&gt; These commitments [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:03) Voluntary Commitments are Insufficient&lt;/p&gt;&lt;p&gt;(02:45) Senate AI Policy Roadmap&lt;/p&gt;&lt;p&gt;(05:18) Chapter 1: Overview of Catastrophic Risks&lt;/p&gt;&lt;p&gt;(07:56) Links&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          May 30th, 2024 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-35-voluntary?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-35-voluntary&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;</description>
      <pubDate>Thu, 30 May 2024 14:30:17 GMT</pubDate>
      <guid isPermaLink="false">78473505-af06-4815-ba04-f33f56c1f42a</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/205eb323-f496-4582-afcc-4f14875235fe.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Corin%2520Katzke&amp;title=AISN%20%2336%3A%20Voluntary%20Commitments%20are%20Insufficient&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-35-voluntary&amp;created_at=2024-05-30T14%3A30%3A09.505796%2B00%3A00&amp;duration=609" length="7303392" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-35-voluntary</link>
      <itunes:duration>609</itunes:duration>
    </item>
    <item>
      <title>AISN #35: Lobbying on AI Regulation</title>
      <description>&lt;p&gt;&lt;strong&gt; OpenAI and Google Announce New Multimodal Models&lt;/strong&gt;&lt;/p&gt;&lt;p&gt; In the current paradigm of AI development, there are long delays between the release of successive models. Progress is largely driven by increases in computing power, and training models with more computing power requires building large new data centers. &lt;/p&gt;&lt;p&gt; More than a year after the release of GPT-4, OpenAI has yet to release GPT-4.5 or GPT-5, which would presumably be trained on 10x or 100x more compute than GPT-4, respectively. These models might be released over the next year or two, and could represent large spikes in AI capabilities.&lt;/p&gt;&lt;p&gt; But OpenAI did announce a new model last week, called GPT-4o. The “o” stands for “omni,” referring to the fact that the model can use text, images, videos [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:03) OpenAI and Google Announce New Multimodal Models&lt;/p&gt;&lt;p&gt;(02:36) The Surge in AI Lobbying&lt;/p&gt;&lt;p&gt;(05:29) How Should Copyright Law Apply to AI Training Data?&lt;/p&gt;&lt;p&gt;(10:10) Links&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          May 16th, 2024 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-35-lobbying?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-35-lobbying&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;</description>
      <pubDate>Thu, 16 May 2024 14:30:52 GMT</pubDate>
      <guid isPermaLink="false">b7cf8dcf-06c1-4ac6-9336-90e4a4dc8d9b</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/cf2cffc4-4c51-4f6d-a9b0-277241751f72.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Aidan%2520O'Gara&amp;title=AISN%20%2335%3A%20Lobbying%20on%20AI%20Regulation&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-35-lobbying&amp;created_at=2024-05-16T14%3A30%3A44.270719%2B00%3A00&amp;duration=729" length="8742528" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-35-lobbying</link>
      <itunes:duration>729</itunes:duration>
    </item>
    <item>
      <title>AISN #34: New Military AI Systems</title>
      <description>&lt;p&gt; Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.&lt;/p&gt;&lt;p&gt;&lt;strong&gt; AI Labs Fail to Uphold Safety Commitments to UK AI Safety Institute&lt;/strong&gt;&lt;/p&gt;&lt;p&gt; In November, leading AI labs committed to sharing their models before deployment to be tested by the UK AI Safety Institute. But reporting from Politico shows that these commitments have fallen through. &lt;/p&gt;&lt;p&gt; OpenAI, Anthropic, and Meta have all failed to share their models with the UK AISI before deployment. Only Google DeepMind, headquartered in London, has given pre-deployment access to UK AISI. &lt;/p&gt;&lt;p&gt; Anthropic released the most powerful publicly available language model, Claude 3, without any window for pre-release testing by the UK AISI. When asked for comment, Anthropic co-founder Jack Clark said, “Pre-deployment testing is a nice idea but very difficult to implement.”&lt;/p&gt;&lt;p&gt; When asked about their concerns with pre-deployment testing [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:03) AI Labs Fail to Uphold Safety Commitments to UK AI Safety Institute&lt;/p&gt;&lt;p&gt;(02:17) New Bipartisan AI Policy Proposals in the US Senate&lt;/p&gt;&lt;p&gt;(06:35) Military AI in Israel and the US&lt;/p&gt;&lt;p&gt;(11:44) New Online Course on AI Safety from CAIS&lt;/p&gt;&lt;p&gt;(12:38) Links&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          May 1st, 2024 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-34-new-military?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-34-new-military&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;</description>
      <pubDate>Wed, 01 May 2024 17:01:01 GMT</pubDate>
      <guid isPermaLink="false">b46a86c7-f571-410e-9858-f14489b91b12</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/39e8cb15-22a6-415b-8ef7-c42684d3b12b.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Aidan%2520O'Gara&amp;title=AISN%20%2334%3A%20New%20Military%20AI%20Systems&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-34-new-military&amp;created_at=2024-05-01T17%3A00%3A52.52736%2B00%3A00&amp;duration=1022" length="12259008" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-34-new-military</link>
      <itunes:duration>1022</itunes:duration>
    </item>
    <item>
      <title>AISN #33: Reassessing AI and Biorisk</title>
      <description>&lt;p&gt; Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.&lt;/p&gt;&lt;p&gt; This week, we cover:&lt;/p&gt;&lt;ol&gt; &lt;li&gt; &lt;p&gt; Consolidation in the corporate AI landscape, as smaller startups join forces with larger funders. &lt;/p&gt;&lt;/li&gt;&lt;li&gt; &lt;p&gt; Several countries have announced new investments in AI, including Singapore, Canada, and Saudi Arabia. &lt;/p&gt;&lt;/li&gt;&lt;li&gt; &lt;p&gt; Congress's budget for 2024 provides some but not all of the requested funding for AI policy. The White House's 2025 proposal makes more ambitious requests for AI funding.&lt;/p&gt;&lt;/li&gt;&lt;li&gt; &lt;p&gt; How will AI affect biological weapons risk? We reexamine this question in light of new experiments from RAND, OpenAI, and others. &lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;&lt;strong&gt; AI Startups Seek Support From Large Financial Backers&lt;/strong&gt;&lt;/p&gt;&lt;p&gt; As AI development demands ever-increasing compute resources, only well-resourced developers can compete at the frontier. In practice, this means that AI startups must either partner with the world's [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:45) AI Startups Seek Support From Large Financial Backers&lt;/p&gt;&lt;p&gt;(03:47) National AI Investments&lt;/p&gt;&lt;p&gt;(05:16) Federal Spending on AI&lt;/p&gt;&lt;p&gt;(08:35) An Updated Assessment of AI and Biorisk&lt;/p&gt;&lt;p&gt;(15:35) $250K in Prizes: SafeBench Competition Announcement&lt;/p&gt;&lt;p&gt;(16:08) Links&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          April 11th, 2024 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-33-reassessing?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-33-reassessing&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;</description>
      <pubDate>Thu, 11 Apr 2024 18:01:03 GMT</pubDate>
      <guid isPermaLink="false">50843f2a-ad63-4bd1-90db-77e5ef2c926a</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/9803be65-fa68-464c-b279-722c0aad7a7b.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Center%2520for%2520AI%2520Safety&amp;title=AISN%20%2333%3A%20Reassessing%20AI%20and%20Biorisk&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-33-reassessing&amp;created_at=2024-04-11T18%3A00%3A54.33226%2B00%3A00&amp;duration=1227" length="14717376" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-33-reassessing</link>
      <itunes:duration>1227</itunes:duration>
    </item>
    <item>
      <title>AISN #32: Measuring and Reducing Hazardous Knowledge in LLMs</title>
      <description>&lt;p&gt; Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.&lt;/p&gt;&lt;p&gt;&lt;strong&gt; Measuring and Reducing Hazardous Knowledge&lt;/strong&gt;&lt;/p&gt;&lt;p&gt; The recent White House Executive Order on Artificial Intelligence highlights risks of LLMs in facilitating the development of bioweapons, chemical weapons, and cyberweapons.&lt;/p&gt;&lt;p&gt; To help measure these dangerous capabilities, CAIS has partnered with Scale AI to create WMDP: the Weapons of Mass Destruction Proxy, an open source benchmark with more than 4,000 multiple choice questions that serve as proxies for hazardous knowledge across biology, chemistry, and cyber. &lt;/p&gt;&lt;p&gt; This benchmark not only helps the world understand the relative dual-use capabilities of different LLMs, but it also creates a path forward for model builders to remove harmful information from their models through machine unlearning techniques. &lt;/p&gt;&lt;p&gt; Measuring hazardous knowledge in bio, chem, and cyber. Current evaluations of dangerous AI capabilities have [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:03) Measuring and Reducing Hazardous Knowledge&lt;/p&gt;&lt;p&gt;(04:35) Language models are getting better at forecasting&lt;/p&gt;&lt;p&gt;(07:51) Proposals for Private Regulatory Markets&lt;/p&gt;&lt;p&gt;(14:25) Links&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          March 7th, 2024 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-32-measuring?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-32-measuring&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;</description>
      <pubDate>Thu, 07 Mar 2024 16:00:40 GMT</pubDate>
      <guid isPermaLink="false">3774c692-940f-438a-b22c-51d9be42f936</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/20686c6c-5eb6-4f62-b307-915f52a158c2.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Center%2520for%2520AI%2520Safety&amp;title=AISN%20%2332%3A%20Measuring%20and%20Reducing%20Hazardous%20Knowledge%20in%20LLMs&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-32-measuring&amp;created_at=2024-03-07T16%3A00%3A31.370973%2B00%3A00&amp;duration=1076" length="12906432" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-32-measuring</link>
      <itunes:duration>1076</itunes:duration>
    </item>
    <item>
      <title>AISN #31: A New AI Policy Bill in California</title>
      <description>&lt;p&gt; Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.&lt;/p&gt;&lt;p&gt; This week, we’ll discuss: &lt;/p&gt;&lt;ol&gt; &lt;li&gt; &lt;p&gt; A new proposed AI bill in California which requires frontier AI developers to adopt safety and security protocols, and clarifies that developers bear legal liability if their AI systems cause unreasonable risks or critical harms to public safety. &lt;/p&gt;&lt;/li&gt;&lt;li&gt; &lt;p&gt; Precedents for AI governance from healthcare and biosecurity. &lt;/p&gt;&lt;/li&gt;&lt;li&gt; &lt;p&gt; The EU AI Act and job opportunities at their enforcement agency, the AI Office. &lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;&lt;strong&gt; A New Bill on AI Policy in California&lt;/strong&gt;&lt;/p&gt;&lt;p&gt; Several leading AI companies have public plans for how they’ll invest in safety and security as they develop more dangerous AI systems. A new bill in California's state legislature would codify this practice as a legal requirement, and clarify the legal liability faced by developers [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:33) A New Bill on AI Policy in California&lt;/p&gt;&lt;p&gt;(04:38) Precedents for AI Policy: Healthcare and Biosecurity&lt;/p&gt;&lt;p&gt;(07:56) Enforcing the EU AI Act&lt;/p&gt;&lt;p&gt;(08:55) Links&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          February 21st, 2024 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/aisn-31-a-new-ai-policy-bill-in-california?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/aisn-31-a-new-ai-policy-bill-in-california&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;</description>
      <pubDate>Wed, 21 Feb 2024 19:00:24 GMT</pubDate>
      <guid isPermaLink="false">f64dbaf7-551d-493b-9c27-15356f0be3e4</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/f64dbaf7-551d-493b-9c27-15356f0be3e4.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Center%2520for%2520AI%2520Safety&amp;title=AISN%20%2331%3A%20A%20New%20AI%20Policy%20Bill%20in%20California&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Faisn-31-a-new-ai-policy-bill-in-california&amp;created_at=2024-02-21T19%3A00%3A24.467826%2B00%3A00&amp;duration=804" length="9638784" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/aisn-31-a-new-ai-policy-bill-in-california</link>
      <itunes:duration>804</itunes:duration>
    </item>
    <item>
      <title>AISN #30: Investments in Compute and Military AI</title>
      <description>&lt;p&gt; Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.&lt;/p&gt;&lt;p&gt;&lt;strong&gt; Compute Investments Continue To Grow&lt;/strong&gt;&lt;/p&gt;&lt;p&gt; Pausing AI development has been proposed as a policy for ensuring safety. For example, an open letter last year from the Future of Life Institute called for a six-month pause on training AI systems more powerful than GPT-4. &lt;/p&gt;&lt;p&gt; But one interesting fact about frontier AI development is that it comes with natural pauses that can last many months or years. After releasing a frontier model, it takes time for AI developers to construct new compute clusters with larger numbers of more advanced computer chips. The supply of compute is currently unable to keep up with demand, meaning some AI developers cannot buy enough chips for their needs. &lt;/p&gt;&lt;p&gt; This explains why OpenAI was reportedly limited by GPUs last year. [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:06) Compute Investments Continue To Grow&lt;/p&gt;&lt;p&gt;(03:48) Developments in Military AI&lt;/p&gt;&lt;p&gt;(07:19) Japan and Singapore Support AI Safety&lt;/p&gt;&lt;p&gt;(08:57) Links&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          January 24th, 2024 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/aisn-30-investments-in-compute-and?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/aisn-30-investments-in-compute-and&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;</description>
      <pubDate>Wed, 24 Jan 2024 17:00:18 GMT</pubDate>
      <guid isPermaLink="false">55de06d0-3bfe-439c-aec6-9f28cef524df</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/55de06d0-3bfe-439c-aec6-9f28cef524df.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Center%2520for%2520AI%2520Safety&amp;title=AISN%20%2330%3A%20Investments%20in%20Compute%20and%20Military%20AI&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Faisn-30-investments-in-compute-and&amp;created_at=2024-01-24T17%3A00%3A19.156657%2B00%3A00&amp;duration=685" length="8213472" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/aisn-30-investments-in-compute-and</link>
      <itunes:duration>685</itunes:duration>
    </item>
    <item>
      <title>AISN #29: Progress on the EU AI Act</title>
      <description>&lt;p&gt; Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.&lt;/p&gt;&lt;p&gt;&lt;strong&gt; A Provisional Agreement on the EU AI Act&lt;/strong&gt;&lt;/p&gt;&lt;p&gt; On December 8th, the EU Parliament, Council, and Commission reached a provisional agreement on the EU AI Act. The agreement regulates the deployment of AI in high-risk applications such as hiring and credit pricing, and it bans private companies from building and deploying AI for unacceptable applications such as social credit scoring and individualized predictive policing. &lt;/p&gt;&lt;p&gt; Despite lobbying by some AI startups against regulation of foundation models, the agreement contains risk assessment and mitigation requirements for all general purpose AI systems. Specific requirements apply to AI systems trained with &amp;gt;10^25 FLOP, such as Google's Gemini and OpenAI's GPT-4. &lt;/p&gt;&lt;p&gt; Minimum basic transparency requirements for all GPAI. The provisional agreement regulates foundation models — using the [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:06) A Provisional Agreement on the EU AI Act&lt;/p&gt;&lt;p&gt;(04:55) Questions about Research Standards in AI Safety&lt;/p&gt;&lt;p&gt;(06:48) The New York Times sues OpenAI and Microsoft for Copyright Infringement&lt;/p&gt;&lt;p&gt;(10:34) Links&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          January 4th, 2024 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/aisn-29-progress-on-the-eu-ai-act?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/aisn-29-progress-on-the-eu-ai-act&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;</description>
      <pubDate>Thu, 04 Jan 2024 16:00:37 GMT</pubDate>
      <guid isPermaLink="false">0946f66a-a27d-4a18-b0ee-56f8749703cc</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/0946f66a-a27d-4a18-b0ee-56f8749703cc.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Center%2520for%2520AI%2520Safety&amp;title=AISN%20%2329%3A%20Progress%20on%20the%20EU%20AI%20Act&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Faisn-29-progress-on-the-eu-ai-act&amp;created_at=2024-01-04T16%3A00%3A37.714837%2B00%3A00&amp;duration=734" length="8802432" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/aisn-29-progress-on-the-eu-ai-act</link>
      <itunes:duration>734</itunes:duration>
    </item>
    <item>
      <title>The Landscape of US AI Legislation</title>
      <description>&lt;p&gt; Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.&lt;/p&gt;&lt;p&gt; This week we’re looking closely at AI legislative efforts in the United States, including:&lt;/p&gt;&lt;ol&gt; &lt;li&gt; &lt;p&gt; Senator Schumer's AI Insight Forum&lt;/p&gt;&lt;/li&gt;&lt;li&gt; &lt;p&gt; The Blumenthal-Hawley framework for AI governance&lt;/p&gt;&lt;/li&gt;&lt;li&gt; &lt;p&gt; Agencies proposed to govern digital platforms&lt;/p&gt;&lt;/li&gt;&lt;li&gt; &lt;p&gt; State and local laws against AI surveillance&lt;/p&gt;&lt;/li&gt;&lt;li&gt; &lt;p&gt; The National AI Research Resource (NAIRR)&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;&lt;strong&gt; Senator Schumer's AI Insight Forum&lt;/strong&gt;&lt;/p&gt;&lt;p&gt; The CEOs of more than a dozen major AI companies gathered in Washington on Wednesday for a hearing with the Senate. Organized by Democratic Majority Leader Chuck Schumer and a bipartisan group of Senators, this was the first of many hearings in their AI Insight Forum. &lt;/p&gt;&lt;p&gt; After the hearing, Senator Schumer said, “I asked everyone in the room, ‘Is government needed to play a role in regulating AI?’ and [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:30) Senator Schumer's AI Insight Forum&lt;/p&gt;&lt;p&gt;(01:20) The Blumenthal-Hawley Framework&lt;/p&gt;&lt;p&gt;(03:09) Agencies Proposed to Govern Digital Platforms&lt;/p&gt;&lt;p&gt;(04:46) Deepfakes and Watermarking Legislation&lt;/p&gt;&lt;p&gt;(06:12) State and Local Laws Against AI Surveillance&lt;/p&gt;&lt;p&gt;(06:52) National AI Research Resource (NAIRR)&lt;/p&gt;&lt;p&gt;(08:18) Links&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          September 19th, 2023 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/the-landscape-of-us-ai-legislation?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/the-landscape-of-us-ai-legislation&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;</description>
      <pubDate>Fri, 29 Dec 2023 03:00:49 GMT</pubDate>
      <guid isPermaLink="false">d6d7b275-3f7a-4a8a-acf9-d308d0bc72dc</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/d6d7b275-3f7a-4a8a-acf9-d308d0bc72dc.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Center%2520for%2520AI%2520Safety&amp;title=The%20Landscape%20of%20US%20AI%20Legislation&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fthe-landscape-of-us-ai-legislation&amp;created_at=2023-12-29T03%3A00%3A49.414974%2B00%3A00&amp;duration=597" length="7154208" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/the-landscape-of-us-ai-legislation</link>
      <itunes:duration>597</itunes:duration>
    </item>
    <item>
      <title>AISN #28: Center for AI Safety 2023 Year in Review</title>
      <description>&lt;p&gt; As 2023 comes to a close, we want to thank you for your continued support for AI safety. This has been a big year for AI and for the Center for AI Safety. In this special-edition newsletter, we highlight some of our most important projects from the year. Thank you for being part of our community and our work.&lt;/p&gt;&lt;p&gt;&lt;strong&gt; Center for AI Safety's 2023 Year in Review &lt;/strong&gt;&lt;/p&gt;&lt;p&gt; The Center for AI Safety (CAIS) is on a mission to reduce societal-scale risks from AI. We believe this requires research and regulation. These both need to happen quickly (due to unknown timelines on AI progress) and in tandem (because either one is insufficient on its own). To achieve this, we pursue three pillars of work: research, field-building, and advocacy.&lt;/p&gt;&lt;p&gt;&lt;strong&gt; Research&lt;/strong&gt;&lt;/p&gt;&lt;p&gt; CAIS conducts both technical and conceptual research on AI safety. We pursue multiple overlapping strategies which can be layered together [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:27) Center for AI Safety's 2023 Year in Review&lt;/p&gt;&lt;p&gt;(00:56) Research&lt;/p&gt;&lt;p&gt;(03:37) Field-Building&lt;/p&gt;&lt;p&gt;(07:35) Advocacy&lt;/p&gt;&lt;p&gt;(10:04) Looking Ahead&lt;/p&gt;&lt;p&gt;(10:23) Support Our Work&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          December 21st, 2023 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/aisn-28-center-for-ai-safety-2023?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/aisn-28-center-for-ai-safety-2023&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;</description>
      <pubDate>Thu, 21 Dec 2023 19:30:59 GMT</pubDate>
      <guid isPermaLink="false">7c875e24-5893-4971-bf05-75dced016414</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/7c875e24-5893-4971-bf05-75dced016414.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Center%2520for%2520AI%2520Safety&amp;title=AISN%20%2328%3A%20Center%20for%20AI%20Safety%202023%20Year%20in%20Review&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Faisn-28-center-for-ai-safety-2023&amp;created_at=2023-12-21T19%3A31%3A00.055453%2B00%3A00&amp;duration=668" length="8011008" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/aisn-28-center-for-ai-safety-2023</link>
      <itunes:duration>668</itunes:duration>
    </item>
    <item>
      <title>AISN #27: Defensive Accelerationism</title>
      <description>&lt;p&gt; Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.&lt;/p&gt;&lt;p&gt;&lt;strong&gt; Defensive Accelerationism&lt;/strong&gt;&lt;/p&gt;&lt;p&gt; Vitalik Buterin, the creator of Ethereum, recently wrote an essay on the risks and opportunities of AI and other technologies. He responds to Marc Andreessen's manifesto on techno-optimism and the growth of the effective accelerationism (e/acc) movement, and offers a more nuanced perspective. &lt;/p&gt;&lt;p&gt; Technology is often great for humanity, the essay argues, but AI could be an exception to that rule. Rather than giving governments control of AI so they can protect us, Buterin argues that we should build defensive technologies that provide security against catastrophic risks in a decentralized society. Cybersecurity, biosecurity, resilient physical infrastructure, and a robust information ecosystem are some of the technologies Buterin believes we should build to protect ourselves from AI risks. &lt;/p&gt;&lt;p&gt; Technology has risks, but [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:06) Defensive Accelerationism&lt;/p&gt;&lt;p&gt;(03:55) Retrospective on the OpenAI Board Saga&lt;/p&gt;&lt;p&gt;(07:58) Klobuchar and Thune's “light-touch” Senate bill&lt;/p&gt;&lt;p&gt;(10:23) Links&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          December 7th, 2023 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/aisn-27-defensive-accelerationism?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/aisn-27-defensive-accelerationism&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;</description>
      <pubDate>Thu, 07 Dec 2023 16:00:24 GMT</pubDate>
      <guid isPermaLink="false">e8965cdb-fee1-4126-a0ef-38bee00a4b5f</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/e8965cdb-fee1-4126-a0ef-38bee00a4b5f.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Center%2520for%2520AI%2520Safety&amp;title=AISN%20%2327%3A%20Defensive%20Accelerationism&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Faisn-27-defensive-accelerationism&amp;created_at=2023-12-07T16%3A00%3A24.802819%2B00%3A00&amp;duration=730" length="8751456" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/aisn-27-defensive-accelerationism</link>
      <itunes:duration>730</itunes:duration>
    </item>
    <item>
      <title>AISN #26: National Institutions for AI Safety</title>
      <description>&lt;p&gt;Also, Results From the UK Summit, and New Releases From OpenAI and xAI.&lt;/p&gt; &lt;p&gt;Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.&lt;/p&gt;&lt;p&gt;This week's key stories include: &lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;The UK, US, and Singapore have announced national AI safety institutions. &lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;The UK AI Safety Summit concluded with a consensus statement, the creation of an expert panel to study AI risks, and a commitment to meet again in six months. &lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;xAI, OpenAI, and a new Chinese startup released new models this week. &lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;&lt;strong&gt;UK, US, and Singapore Establish National AI Safety Institutions&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;Before regulating a new technology, governments often need time to gather information and consider their policy options. But during that time, the technology may diffuse through society, making it more difficult for governments to intervene. This process, termed the Collingridge Dilemma, is a fundamental challenge in technology policy.&lt;/p&gt;&lt;p&gt;But recently [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:36) UK, US, and Singapore Establish National AI Safety Institutions&lt;/p&gt;&lt;p&gt;(03:53) UK Summit Ends with Consensus Statement and Future Commitments&lt;/p&gt;&lt;p&gt;(05:39) New Models From xAI, OpenAI, and a New Chinese Startup&lt;/p&gt;&lt;p&gt;(09:28) Links&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          November 15th, 2023 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/national-institutions-for-ai-safety?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/national-institutions-for-ai-safety&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;</description>
      <pubDate>Wed, 15 Nov 2023 16:00:20 GMT</pubDate>
      <guid isPermaLink="false">17c7260b-1da9-4afa-a675-e01bcf07d518</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/17c7260b-1da9-4afa-a675-e01bcf07d518.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Aidan%2520O'Gara&amp;title=AISN%20%2326%3A%20National%20Institutions%20for%20AI%20Safety&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fnational-institutions-for-ai-safety&amp;created_at=2023-11-15T16%3A00%3A20.483419%2B00%3A00&amp;duration=749" length="8988192" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/national-institutions-for-ai-safety</link>
      <itunes:duration>749</itunes:duration>
    </item>
    <item>
      <title>AISN #25: White House Executive Order on AI, UK AI Safety Summit, and Progress on Voluntary Evaluations of AI Risks.</title>
      <description>&lt;p&gt;Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;White House Executive Order on AI&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;While Congress has not voted on significant AI legislation this year, the White House has left its mark on AI policy. In June, it secured voluntary commitments on safety from leading AI companies. Now, the White House has released a new executive order on AI. It addresses a wide range of issues, and specifically targets catastrophic AI risks such as cyberattacks and biological weapons. &lt;/p&gt;&lt;p&gt;Companies must disclose large training runs. Under the executive order, companies that intend to train “dual-use foundation models” using significantly more computing power than GPT-4 must take several precautions. First, they must notify the White House before training begins. Then [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:13) White House Executive Order on AI&lt;/p&gt;&lt;p&gt;(03:56) Kicking Off The UK AI Safety Summit&lt;/p&gt;&lt;p&gt;(06:18) Progress on Voluntary Evaluations of AI Risks&lt;/p&gt;&lt;p&gt;(08:52) Links&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          October 31st, 2023 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-25?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-25&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;</description>
      <pubDate>Tue, 31 Oct 2023 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">77abf730-1071-430f-b126-682b105771dc</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/77abf730-1071-430f-b126-682b105771dc.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Center%2520for%2520AI%2520Safety&amp;title=AISN%20%2325%3A%20White%20House%20Executive%20Order%20on%20AI%2C%20UK%20AI%20Safety%20Summit%2C%20and%20Progress%20on%20Voluntary%20Evaluations%20of%20AI%20Risks.&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-25&amp;created_at=2023-10-31T19%3A30%3A12.77143%2B00%3A00&amp;duration=697" length="8360064" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-25</link>
      <itunes:duration>697</itunes:duration>
    </item>
    <item>
      <title>AISN #24: Kissinger Urges US-China Cooperation on AI, China’s New AI Law, US Export Controls, International Institutions, and Open Source AI.</title>
      <description>&lt;p&gt;Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;China's New AI Law, US Export Controls, and Calls for Bilateral Cooperation&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;China details how AI providers can fulfill their legal obligations. The Chinese government has passed several laws on AI. They’ve regulated recommendation algorithms and taken steps to mitigate the risk of deepfakes. Most recently, they issued a new law governing generative AI. It's less stringent than an earlier draft version, but it remains more comprehensive than any AI regulation passed in the US, UK, or European Union. &lt;/p&gt;&lt;p&gt;The law creates legal obligations for AI providers to respect intellectual property rights, avoid discrimination, and uphold socialist values. But as with many AI policy proposals, these are [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:15) China's New AI Law, US Export Controls, and Calls for Bilateral Cooperation&lt;/p&gt;&lt;p&gt;(04:58) Proposed International Institutions for AI&lt;/p&gt;&lt;p&gt;(08:15) Open Source AI: Risks and Opportunities&lt;/p&gt;&lt;p&gt;(11:25) Links&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          October 18th, 2023 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-24?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-24&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;</description>
      <pubDate>Wed, 18 Oct 2023 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">5f31bd93-9e57-4931-acce-417361a8d95b</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/5f31bd93-9e57-4931-acce-417361a8d95b.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Center%2520for%2520AI%2520Safety&amp;title=AISN%20%2324%3A%20Kissinger%20Urges%20US-China%20Cooperation%20on%20AI%2C%20China's%20New%20AI%20Law%2C%20US%20Export%20Controls%2C%20International%20Institutions%2C%20and%20Open%20Source%20AI.&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-24&amp;created_at=2023-10-18T17%3A01%3A00.809167%2B00%3A00&amp;duration=780" length="9355968" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-24</link>
      <itunes:duration>780</itunes:duration>
    </item>
    <item>
      <title>AISN #23: New OpenAI Models, News from Anthropic, and Representation Engineering.</title>
      <description>&lt;p&gt;Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;OpenAI releases GPT-4 with Vision and DALL·E-3, announces Red Teaming Network&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;GPT-4 with vision and voice. When GPT-4 was initially announced in March, OpenAI demonstrated its ability to process and discuss images such as diagrams or photographs. This feature has now been integrated into GPT-4V. Users can now input images in addition to text, and the model will respond to both. Users can also speak to GPT-4V, and the model will respond verbally.&lt;/p&gt;&lt;p&gt;GPT-4V may be more vulnerable to misuse via jailbreaks and adversarial attacks. Previous research has shown that multimodal models, which can process multiple forms of input such as both text and images, are more vulnerable to adversarial attacks than text-only models. GPT-4V's System Card includes some experiments [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:11) OpenAI releases GPT-4 with Vision and DALL·E-3, announces Red Teaming Network&lt;/p&gt;&lt;p&gt;(02:39) Writers Guild of America Receives Protections Against AI Automation&lt;/p&gt;&lt;p&gt;(03:42) Anthropic receives $1.25B investment from Amazon, and announces several new policies&lt;/p&gt;&lt;p&gt;(06:21) Representation Engineering: A Top-Down Approach to AI Transparency&lt;/p&gt;&lt;p&gt;(07:57) Links&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          October 4th, 2023 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-23?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-23&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;</description>
      <pubDate>Wed, 04 Oct 2023 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">f8ff36db-0d19-496a-8a63-021820f2e834</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/a9e24acf-e392-4b89-a7ca-ebe7008ab1d4.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Center%2520for%2520AI%2520Safety&amp;title=AISN%20%2323%3A%20New%20OpenAI%20Models%2C%20News%20from%20Anthropic%2C%20and%20Representation%20Engineering.&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-23&amp;created_at=2023-10-04T17%3A00%3A47.646836%2B00%3A00&amp;duration=575" length="6895872" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-23</link>
      <itunes:duration>575</itunes:duration>
    </item>
    <item>
      <title>AISN #21: Google DeepMind’s GPT-4 Competitor, Military Investments in Autonomous Drones, The UK AI Safety Summit, and Case Studies in AI Policy.</title>
      <description>&lt;p&gt;Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Google DeepMind’s GPT-4 Competitor&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;Computational power is a key driver of AI progress, and a new report suggests that Google’s upcoming GPT-4 competitor will be trained on unprecedented amounts of compute. &lt;/p&gt;&lt;p&gt;The model, currently named Gemini, may be trained by the end of this year with 5x more computational power than GPT-4. By the end of next year, the report projects that Google will have the ability to train a model with 20x more compute than GPT-4. &lt;/p&gt;&lt;p&gt;For reference, the compute difference between GPT-3 and GPT-4 was 100x. If these projections are accurate, Google’s new models could represent a meaningful jump over current AI capabilities. &lt;/p&gt;&lt;p&gt;Google’s position [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:14) Google DeepMind’s GPT-4 Competitor&lt;/p&gt;&lt;p&gt;(02:41) US Military Invests in Thousands of Autonomous Drones&lt;/p&gt;&lt;p&gt;(04:37) United Kingdom Prepares for Global AI Safety Summit&lt;/p&gt;&lt;p&gt;(06:15) Case Studies in AI Policy&lt;/p&gt;&lt;p&gt;(08:55) Links&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          September 5th, 2023 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-21?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-21&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;</description>
      <pubDate>Tue, 05 Sep 2023 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">6556e4b6-91dc-45bc-b45a-a7d7acd46237</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/6556e4b6-91dc-45bc-b45a-a7d7acd46237.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Center%2520for%2520AI%2520Safety&amp;title=AISN%20%2321%3A%20Google%20DeepMind%E2%80%99s%20GPT-4%20Competitor%2C%20Military%20Investments%20in%20Autonomous%20Drones%2C%20The%20UK%20AI%20Safety%20Summit%2C%20and%20Case%20Studies%20in%20AI%20Policy.&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-21&amp;created_at=2023-09-05T15%3A00%3A34.840986%2B00%3A00&amp;duration=592" length="3551040" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-21</link>
      <itunes:duration>592</itunes:duration>
    </item>
    <item>
      <title>AISN #20: LLM Proliferation, AI Deception, and Continuing Drivers of AI Capabilities.</title>
      <description>&lt;p&gt;&lt;strong&gt;AI Deception: Examples, Risks, Solutions&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;AI deception is the topic of a new paper from researchers at, and affiliated with, the Center for AI Safety. It surveys empirical examples of AI deception, then explores societal risks and potential solutions.&lt;/p&gt;&lt;p&gt;The paper defines deception as “the systematic production of false beliefs in others as a means to accomplish some outcome other than the truth.” Importantly, this definition doesn't necessarily imply that AIs have beliefs or intentions. Instead, it focuses on patterns of behavior that regularly cause false beliefs and would be considered deceptive if exhibited by humans.&lt;/p&gt;&lt;p&gt;Deception by Meta’s CICERO AI. Meta developed the AI system CICERO to play Diplomacy, a game where players build and betray alliances in [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:11) AI Deception: Examples, Risks, Solutions&lt;/p&gt;&lt;p&gt;(04:35) Proliferation of Large Language Models&lt;/p&gt;&lt;p&gt;(09:25) Continuing Drivers of AI Capabilities&lt;/p&gt;&lt;p&gt;(14:30) Links&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          August 29th, 2023 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-20?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-20&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;</description>
      <pubDate>Tue, 29 Aug 2023 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">b7c373f5-6e1d-4363-ae21-1f9222176184</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/b7c373f5-6e1d-4363-ae21-1f9222176184.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Center%2520for%2520AI%2520Safety&amp;title=AISN%20%2320%3A%20LLM%20Proliferation%2C%20AI%20Deception%2C%20and%20Continuing%20Drivers%20of%20AI%20Capabilities.&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-20&amp;created_at=2023-08-29T15%3A00%3A21.259309%2B00%3A00&amp;duration=937" length="5618016" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-20</link>
      <itunes:duration>937</itunes:duration>
    </item>
    <item>
      <title>[Paper] “An Overview of Catastrophic AI Risks” by Dan Hendrycks, Mantas Mazeika and Thomas Woodside</title>
      <description>&lt;p class="c2"&gt;Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          June 21st, 2023 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://arxiv.org/abs/2306.12001?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://arxiv.org/abs/2306.12001&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;</description>
      <pubDate>Mon, 21 Aug 2023 09:00:01 GMT</pubDate>
      <guid isPermaLink="false">029550d0-aaa8-4c34-8dd3-c25635ddaff4</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episodes/uploaded-audio/126df7f5-36f0-4bed-86ad-c6e84f379465.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Dan%2520Hendrycks%252C%2520Mantas%2520Mazeika%2520and%2520Thomas%2520Woodside&amp;title=%5BPaper%5D%20%22An%20Overview%20of%20Catastrophic%20AI%20Risks%22%20by%20Dan%20Hendrycks%2C%20Mantas%20Mazeika%20and%20Thomas%20Woodside&amp;source_url=https%3A%2F%2Farxiv.org%2Fabs%2F2306.12001&amp;created_at=2023-08-21T09%3A10%3A51.567179%2B00%3A00&amp;duration=11009" length="66053952" type="audio/mpeg"/>
      <link>https://arxiv.org/abs/2306.12001</link>
      <itunes:duration>11009</itunes:duration>
    </item>
    <item>
      <title>[Paper] “X-Risk Analysis for AI Research” by Dan Hendrycks and Mantas Mazeika</title>
      <description>&lt;p class="c23"&gt;Artificial intelligence (AI) has the potential to greatly improve society, but as with any powerful technology, it comes with heightened risks and responsibilities. Current AI research lacks a systematic discussion of how to manage long-tail risks from AI systems, including speculative long-term risks. Keeping in mind the potential benefits of AI, there is some concern that building ever more intelligent and powerful AI systems could eventually result in systems that are more powerful than us; some say this is like playing with fire and speculate that this could create existential risks (x-risks). To add precision and ground these discussions, we provide a guide for how to analyze AI x-risk, which consists of three parts: First, we review how systems can be made safer today, drawing on time-tested concepts from hazard analysis and systems safety that have been designed to steer large processes in safer directions. Next, we discuss strategies [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          October 22nd, 2022 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://arxiv.org/abs/2206.05862?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://arxiv.org/abs/2206.05862&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;</description>
      <pubDate>Mon, 21 Aug 2023 09:00:00 GMT</pubDate>
      <guid isPermaLink="false">a5f4fec7-6b51-48ae-9597-41ba069caf76</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/a5f4fec7-6b51-48ae-9597-41ba069caf76.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Dan%2520Hendrycks%2520and%2520Mantas%2520Mazeika&amp;title=%5BPaper%5D%20%22X-Risk%20Analysis%20for%20AI%20Research%22%20by%20Dan%20Hendrycks%20and%20Mantas%20Mazeika&amp;source_url=https%3A%2F%2Farxiv.org%2Fabs%2F2206.05862&amp;created_at=2023-08-21T07%3A48%3A53.568591%2B00%3A00&amp;duration=2384" length="14301792" type="audio/mpeg"/>
      <link>https://arxiv.org/abs/2206.05862</link>
      <itunes:duration>2384</itunes:duration>
    </item>
    <item>
      <title>[Paper] “Unsolved Problems in ML Safety” by Dan Hendrycks, Nicholas Carlini, John Schulman and Jacob Steinhardt</title>
      <description>&lt;p class="c71 c80"&gt;Machine learning (ML) systems are rapidly increasing in size, are acquiring new capabilities, and are increasingly deployed in high-stakes settings. As with other powerful technologies, safety for ML should be a leading research priority. In response to emerging safety challenges in ML, such as those introduced by recent large-scale models, we provide a new roadmap for ML Safety and refine the technical problems that the field needs to address. We present four problems ready for research, namely withstanding hazards (“Robustness”), identifying hazards (“Monitoring”), steering ML systems (“Alignment”), and reducing deployment hazards (“Systemic Safety”). Throughout, we clarify each problem’s motivation and provide concrete research directions.&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          June 16th, 2022 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://arxiv.org/abs/2109.13916?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://arxiv.org/abs/2109.13916&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;</description>
      <pubDate>Mon, 21 Aug 2023 09:00:00 GMT</pubDate>
      <guid isPermaLink="false">2509437c-a010-4e3d-b8dd-54b2e0530a0c</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/2509437c-a010-4e3d-b8dd-54b2e0530a0c.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Dan%2520Hendrycks%252C%2520Nicholas%2520Carlini%252C%2520John%2520Schulman%2520and%2520Jacob%2520Steinhardt&amp;title=%5BPaper%5D%20%22Unsolved%20Problems%20in%20ML%20Safety%22%20by%20Dan%20Hendrycks%2C%20Nicholas%20Carlini%2C%20John%20Schulman%20and%20Jacob%20Steinhardt&amp;source_url=https%3A%2F%2Farxiv.org%2Fabs%2F2109.13916&amp;created_at=2023-08-21T07%3A25%3A25.485817%2B00%3A00&amp;duration=3194" length="19160928" type="audio/mpeg"/>
      <link>https://arxiv.org/abs/2109.13916</link>
      <itunes:duration>3194</itunes:duration>
    </item>
    <item>
      <title>AISN #19: US-China Competition on AI Chips, Measuring Language Agent Developments, Economic Analysis of Language Model Propaganda, and White House AI Cyber Challenge.</title>
      <description>&lt;p&gt;&lt;strong&gt;US-China Competition on AI Chips&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;Modern AI systems are trained on advanced computer chips which are designed and fabricated by only a handful of companies in the world. The US and China have been competing for access to these chips for years. Last October, the Biden administration partnered with international allies to severely limit China’s access to leading AI chips.&lt;/p&gt;&lt;p&gt;Recently, there have been several interesting developments on AI chips. China has made several efforts to preserve its chip access, including smuggling, buying chips that are just under the legal limit of performance, and investing in its domestic chip industry. Meanwhile, the United States has struggled [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:15) US-China Competition on AI Chips&lt;/p&gt;&lt;p&gt;(04:09) Measuring Language Agent Developments&lt;/p&gt;&lt;p&gt;(06:07) An Economic Analysis of Language Model Propaganda&lt;/p&gt;&lt;p&gt;(08:11) White House Competition Applying AI to Cybersecurity&lt;/p&gt;&lt;p&gt;(09:40) Links&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          August 15th, 2023 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-19?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-19&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;</description>
      <pubDate>Tue, 15 Aug 2023 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">a820be37-71e9-4c94-835f-c5079ddd4997</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/a820be37-71e9-4c94-835f-c5079ddd4997.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Center%2520for%2520AI%2520Safety&amp;title=AISN%20%2319%3A%20US-China%20Competition%20on%20AI%20Chips%2C%20Measuring%20Language%20Agent%20Developments%2C%20Economic%20Analysis%20of%20Language%20Model%20Propaganda%2C%20and%20White%20House%20AI%20Cyber%20Challenge.&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-19&amp;created_at=2023-08-16T11%3A20%3A43.832498%2B00%3A00&amp;duration=651" length="3901248" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-19</link>
      <itunes:duration>651</itunes:duration>
    </item>
    <item>
      <title>AISN #18: Challenges of Reinforcement Learning from Human Feedback, Microsoft’s Security Breach, and Conceptual Research on AI Safety.</title>
      <description>&lt;p&gt;&lt;strong&gt;Challenges of Reinforcement Learning from Human Feedback&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;If you’ve used ChatGPT, you might’ve noticed the “thumbs up” and “thumbs down” buttons next to each of its answers. Pressing these buttons provides data that OpenAI uses to improve their models through a technique called reinforcement learning from human feedback (RLHF).&lt;/p&gt;&lt;p&gt;RLHF is popular for teaching models about human preferences, but it faces fundamental limitations. Different people have different preferences, but instead of modeling the diversity of human values, RLHF trains models to earn the approval of whoever happens to give feedback. Furthermore, as AI systems become more capable, they can learn to deceive human evaluators into giving undue approval.&lt;/p&gt;&lt;p&gt;Here we discuss a new [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:13) Challenges of Reinforcement Learning from Human Feedback&lt;/p&gt;&lt;p&gt;(05:26) Microsoft’s Security Breach&lt;/p&gt;&lt;p&gt;(06:59) Conceptual Research on AI Safety&lt;/p&gt;&lt;p&gt;(09:25) Links&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          August 8th, 2023 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-18?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-18&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;</description>
      <pubDate>Tue, 08 Aug 2023 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">d928b8f2-efb1-4076-a0fa-1cfe2656ec40</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/d928b8f2-efb1-4076-a0fa-1cfe2656ec40.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Center%2520for%2520AI%2520Safety&amp;title=AISN%20%2318%3A%20Challenges%20of%20Reinforcement%20Learning%20from%20Human%20Feedback%2C%20Microsoft%E2%80%99s%20Security%20Breach%2C%20and%20Conceptual%20Research%20on%20AI%20Safety.&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-18&amp;created_at=2023-08-16T11%3A20%3A57.583577%2B00%3A00&amp;duration=662" length="3970368" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-18</link>
      <itunes:duration>662</itunes:duration>
    </item>
    <item>
      <title>AISN #17: Automatically Circumventing LLM Guardrails, the Frontier Model Forum, and Senate Hearing on AI Oversight.</title>
      <description>&lt;p&gt;&lt;strong&gt;Automatically Circumventing LLM Guardrails&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;Large language models (LLMs) can generate hazardous information, such as step-by-step instructions on how to create a pandemic pathogen. To combat the risk of malicious use, companies typically build safety guardrails intended to prevent LLMs from misbehaving. &lt;/p&gt;&lt;p&gt;But these safety controls are almost useless against a new attack developed by researchers at Carnegie Mellon University and the Center for AI Safety. By studying the vulnerabilities in open source models such as Meta’s LLaMA 2, the researchers can automatically generate a nearly unlimited supply of “adversarial suffixes,” which are words and characters that cause any model’s safety controls to fail. &lt;/p&gt;&lt;p&gt;This discovery calls into question the fundamental limits of safety [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:12) Automatically Circumventing LLM Guardrails&lt;/p&gt;&lt;p&gt;(05:40) AI Labs Announce the Frontier Model Forum&lt;/p&gt;&lt;p&gt;(07:54) Senate Hearing on AI Oversight&lt;/p&gt;&lt;p&gt;(14:42) Links&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          August 1st, 2023 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-17?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-17&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;</description>
      <pubDate>Tue, 01 Aug 2023 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">b84ca1a9-98cb-4628-9845-9c2057236c3c</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/b84ca1a9-98cb-4628-9845-9c2057236c3c.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Center%2520for%2520AI%2520Safety&amp;title=AISN%20%2317%3A%20Automatically%20Circumventing%20LLM%20Guardrails%2C%20the%20Frontier%20Model%20Forum%2C%20and%20Senate%20Hearing%20on%20AI%20Oversight.&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-17&amp;created_at=2023-08-16T11%3A21%3A16.865387%2B00%3A00&amp;duration=944" length="5662080" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-17</link>
      <itunes:duration>944</itunes:duration>
    </item>
    <item>
      <title>AISN #16: White House Secures Voluntary Commitments from Leading AI Labs, and Lessons from Oppenheimer.</title>
      <description>&lt;p&gt;&lt;strong&gt;White House Unveils Voluntary Commitments to AI Safety from Leading AI Labs&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;Last Friday, the White House announced a series of voluntary commitments from seven of the world's premier AI labs. Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI pledged to uphold these commitments, which are non-binding and pertain only to forthcoming "frontier models" superior to currently available AI systems. The White House also notes that the Biden-Harris Administration is developing an executive order alongside these voluntary commitments.&lt;/p&gt;&lt;p&gt;The commitments are timely and technically well-informed, demonstrating the ability of federal policymakers to respond capably and quickly to AI risks. The Center for AI Safety supports these commitments as a precedent for cooperation on AI [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:11) White House Unveils Voluntary Commitments to AI Safety from Leading AI Labs&lt;/p&gt;&lt;p&gt;(05:05) Lessons from Oppenheimer&lt;/p&gt;&lt;p&gt;(10:38) Links&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          July 25th, 2023 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-16?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-16&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;</description>
      <pubDate>Tue, 25 Jul 2023 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">99bd9ed4-c683-4f5f-b894-8bbcf03dd333</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/99bd9ed4-c683-4f5f-b894-8bbcf03dd333.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Center%2520for%2520AI%2520Safety&amp;title=AISN%20%2316%3A%20White%20House%20Secures%20Voluntary%20Commitments%20from%20Leading%20AI%20Labs%2C%20and%20Lessons%20from%20Oppenheimer%20.&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-16&amp;created_at=2023-08-16T11%3A21%3A30.381843%2B00%3A00&amp;duration=722" length="4330080" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-16</link>
      <itunes:duration>722</itunes:duration>
    </item>
    <item>
      <title>AISN #15: China and the US take action to regulate AI, results from a tournament forecasting AI risk, updates on xAI’s plan, and Meta releases its open-source and commercially available Llama 2.</title>
      <description>&lt;p&gt;&lt;strong&gt;Both China and the US take action to regulate AI&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;Last week, regulators in both China and the US took aim at generative AI services. These actions show that China and the US are both concerned with AI safety. Hopefully, this is a sign they can eventually coordinate.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;China’s new generative AI rules&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;On Thursday, China’s government released new rules governing generative AI. The rules, which are set to take effect on August 15th, regulate publicly available generative AI services. The providers of such services will be criminally liable for the content their services generate. &lt;/p&gt;&lt;p&gt;The rules specify illegal [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:17) Both China and the US take action to regulate AI&lt;/p&gt;&lt;p&gt;(00:36) China’s new generative AI rules&lt;/p&gt;&lt;p&gt;(03:15) The FTC investigates OpenAI&lt;/p&gt;&lt;p&gt;(05:01) Results from a tournament forecasting AI risk&lt;/p&gt;&lt;p&gt;(08:18) Updates on xAI’s plan&lt;/p&gt;&lt;p&gt;(09:05) Meta releases Llama 2, open-source and commercially available&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          July 19th, 2023 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-15?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-15&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;</description>
      <pubDate>Wed, 19 Jul 2023 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">679d426f-6877-4be5-b7ab-1c7bd8ab87d1</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/679d426f-6877-4be5-b7ab-1c7bd8ab87d1.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Center%2520for%2520AI%2520Safety&amp;title=AISN%20%2315%3A%20China%20and%20the%20US%20take%20action%20to%20regulate%20AI%2C%20results%20from%20a%20tournament%20forecasting%20AI%20risk%2C%20updates%20on%20xAI%E2%80%99s%20plan%2C%20and%20Meta%20releases%20its%20open-source%20and%20commercially%20available%20Llama%202.&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-15&amp;created_at=2023-08-16T11%3A21%3A43.850094%2B00%3A00&amp;duration=729" length="4374432" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-15</link>
      <itunes:duration>729</itunes:duration>
    </item>
    <item>
      <title>AISN #14: OpenAI’s ‘Superalignment’ team, Musk’s xAI launches, and developments in military AI use.</title>
      <description>&lt;p&gt;&lt;strong&gt;OpenAI announces a ‘superalignment’ team&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;On July 5th, OpenAI announced the ‘Superalignment’ team: a new research team given the goal of aligning superintelligence, and armed with 20% of OpenAI’s compute. In this story, we’ll explain and discuss the team’s strategy.&lt;/p&gt;&lt;p&gt;What is superintelligence? In their announcement, OpenAI distinguishes between ‘artificial general intelligence’ and ‘superintelligence.’ Briefly, ‘artificial general intelligence’ (AGI) is about breadth of performance. Generally intelligent systems perform well on a wide range of cognitive tasks. For example, humans are in many senses generally intelligent: we can learn how to drive a car, take a derivative, or play piano, even though evolution didn’t train us for those tasks. A superintelligent system would not only be [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:11) OpenAI announces a ‘superalignment’ team&lt;/p&gt;&lt;p&gt;(03:50) Musk launches xAI&lt;/p&gt;&lt;p&gt;(05:12) Developments in Military AI Use&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          July 12th, 2023 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-14?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-14&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;</description>
      <pubDate>Wed, 12 Jul 2023 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">2d73737d-9ab2-41c2-946a-85e5c7b9393d</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/2d73737d-9ab2-41c2-946a-85e5c7b9393d.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Center%2520for%2520AI%2520Safety&amp;title=AISN%20%2314%3A%20OpenAI%E2%80%99s%20%E2%80%98Superalignment%E2%80%99%20team%2C%20Musk%E2%80%99s%20xAI%20launches%2C%20and%20developments%20in%20military%20AI%20use%20.&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-14&amp;created_at=2023-08-16T11%3A21%3A56.69268%2B00%3A00&amp;duration=547" length="3279888" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-14</link>
      <itunes:duration>547</itunes:duration>
    </item>
    <item>
      <title>AISN #13: An interdisciplinary perspective on AI proxy failures, new competitors to ChatGPT, and prompting language models to misbehave.</title>
      <description>&lt;p&gt;&lt;strong&gt;Interdisciplinary Perspective on AI Proxy Failures&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;In this story, we discuss a recent paper on why proxy goals fail. First, we introduce proxy gaming, and then summarize the paper’s findings. &lt;/p&gt;&lt;p&gt;Proxy gaming is a well-documented failure mode in AI safety. For example, social media platforms use AI systems to recommend content to users. These systems are sometimes built to maximize the amount of time a user spends on the platform. The idea is that the time the user spends on the platform approximates the quality of the content being recommended. However, a user might spend even more time on a platform because they’re responding to an enraging post or interacting [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:13) Interdisciplinary Perspective on AI Proxy Failures&lt;/p&gt;&lt;p&gt;(06:06) A Flurry of AI Fundraising and Model Releases&lt;/p&gt;&lt;p&gt;(12:53) Adversarial Inputs Make Chatbots Misbehave&lt;/p&gt;&lt;p&gt;(15:52) Links&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          July 5th, 2023 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-13?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-13&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;</description>
      <pubDate>Wed, 05 Jul 2023 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">1488faad-ec43-4332-a06a-5dcab2e6d68b</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/1488faad-ec43-4332-a06a-5dcab2e6d68b.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Dan%2520Hendrycks&amp;title=AISN%20%2313%3A%20An%20interdisciplinary%20perspective%20on%20AI%20proxy%20failures%2C%20new%20competitors%20to%20ChatGPT%2C%20and%20prompting%20language%20models%20to%20misbehave.&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-13&amp;created_at=2023-08-16T11%3A22%3A10.865829%2B00%3A00&amp;duration=1054" length="6321312" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-13</link>
      <itunes:duration>1054</itunes:duration>
    </item>
    <item>
      <title>AISN #12: Policy Proposals from NTIA’s Request for Comment, and Reconsidering Instrumental Convergence.</title>
      <description>&lt;p&gt;&lt;strong&gt;Policy Proposals from NTIA’s Request for Comment&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;The National Telecommunications and Information Administration (NTIA) publicly requested comments on AI accountability from academics, think tanks, industry leaders, and concerned citizens. They asked 34 questions and received more than 1,400 responses on how to govern AI for the public benefit. This week, we cover some of the most promising proposals found in the NTIA submissions. &lt;/p&gt;&lt;p&gt;&lt;strong&gt;Technical Proposals for Evaluating AI Safety&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;Several NTIA submissions focused on the technical question of how to evaluate the safety of an AI system. We review two areas of active research: red teaming and transparency. &lt;/p&gt;&lt;p&gt;&lt;strong&gt;Red Teaming: Acting like an Adversary&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;Several submissions proposed government support for evaluating AIs via red teaming. In this evaluation method, a [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:11) Policy Proposals from NTIA’s Request for Comment&lt;/p&gt;&lt;p&gt;(00:48) Technical Proposals for Evaluating AI Safety&lt;/p&gt;&lt;p&gt;(01:04) Red Teaming: Acting like an Adversary&lt;/p&gt;&lt;p&gt;(02:24) Transparency: Understanding AIs From the Inside&lt;/p&gt;&lt;p&gt;(03:51) Governance Proposals for Improving Safety Processes&lt;/p&gt;&lt;p&gt;(04:25) Requiring a License for Frontier AI Systems&lt;/p&gt;&lt;p&gt;(06:29) Unifying Sector-Specific Expertise and General AI Oversight&lt;/p&gt;&lt;p&gt;(07:51) Does Antitrust Prevent Cooperation Between AI Labs?&lt;/p&gt;&lt;p&gt;(08:40) Reconsidering Instrumental Convergence&lt;/p&gt;&lt;p&gt;(10:39) Links&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          June 27th, 2023 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-12?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-12&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;</description>
      <pubDate>Tue, 27 Jun 2023 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">5df59ab5-a5fa-4834-9c02-f0ca338bedd0</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/5df59ab5-a5fa-4834-9c02-f0ca338bedd0.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Dan%2520Hendrycks&amp;title=AISN%20%2312%3A%20Policy%20Proposals%20from%20NTIA%E2%80%99s%20Request%20for%20Comment%2C%20and%20Reconsidering%20Instrumental%20Convergence.&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-12&amp;created_at=2023-08-16T11%3A22%3A26.179433%2B00%3A00&amp;duration=831" length="4983840" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-12</link>
      <itunes:duration>831</itunes:duration>
    </item>
    <item>
      <title>AISN #11: An Overview of Catastrophic AI Risks.</title>
      <description>&lt;p&gt;&lt;strong&gt;An Overview of Catastrophic AI Risks&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;Global leaders are concerned that artificial intelligence could pose catastrophic risks. 42% of CEOs polled at the Yale CEO Summit agree that AI could destroy humanity in five to ten years. The Secretary General of the United Nations said we “must take these warnings seriously.” Amid all these frightening polls and public statements, there’s a simple question that’s worth asking: why exactly is AI such a risk?&lt;/p&gt;&lt;p&gt;The Center for AI Safety has released a new paper to provide a clear and comprehensive answer to this question. We detail the precise risks posed by AI, the structural dynamics making these problems so difficult to solve, and the technical, social, and political responses required to overcome this [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:08) An Overview of Catastrophic AI Risks&lt;/p&gt;&lt;p&gt;(00:56) Malicious actors can use AIs to cause harm.&lt;/p&gt;&lt;p&gt;(02:18) Racing towards an AI disaster.&lt;/p&gt;&lt;p&gt;(04:05) Safety should be a goal, not a constraint.&lt;/p&gt;&lt;p&gt;(05:46) The challenge of AI control.&lt;/p&gt;&lt;p&gt;(07:53) Positive visions for the future of AI.&lt;/p&gt;&lt;p&gt;(09:02) Links&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          June 22nd, 2023 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-11?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-11&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;</description>
      <pubDate>Thu, 22 Jun 2023 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">563b0518-1897-4e8f-9517-3f0654a68864</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/563b0518-1897-4e8f-9517-3f0654a68864.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Dan%2520Hendrycks&amp;title=AISN%20%2311%3A%20An%20Overview%20of%20Catastrophic%20AI%20Risks.&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-11&amp;created_at=2023-08-16T11%3A22%3A40.348192%2B00%3A00&amp;duration=682" length="4091616" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-11</link>
      <itunes:duration>682</itunes:duration>
    </item>
    <item>
      <title>AISN #10: How AI could enable bioterrorism, and policymakers continue to focus on AI.</title>
      <description>&lt;p&gt;&lt;strong&gt;How AI could enable bioterrorism&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;Only a hundred years ago, no person could have single-handedly destroyed humanity. Nuclear weapons changed this situation, giving the power of global annihilation to a small handful of nations with powerful militaries. Now, thanks to advances in biotechnology and AI, a much larger group of people could have the power to create a global catastrophe. &lt;/p&gt;&lt;p&gt;This is the upshot of a new paper from MIT titled “Can large language models democratize access to dual-use biotechnology?” The authors demonstrate that today’s language models are capable of providing detailed instructions for non-expert users about how to create pathogens that could cause a global pandemic.&lt;/p&gt;&lt;p&gt;Language models can help users build dangerous [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:10) How AI could enable bioterrorism&lt;/p&gt;&lt;p&gt;(03:48) Policymakers continue to focus on AI&lt;/p&gt;&lt;p&gt;(05:27) Links&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          June 13th, 2023 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-10?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-10&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;</description>
      <pubDate>Tue, 13 Jun 2023 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">841e3ca8-7c35-41fd-b428-0f3cb7f840c9</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/841e3ca8-7c35-41fd-b428-0f3cb7f840c9.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Dan%2520Hendrycks&amp;title=AISN%20%2310%3A%20How%20AI%20could%20enable%20bioterrorism%2C%20and%20policymakers%20continue%20to%20focus%20on%20AI%20.&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-10&amp;created_at=2023-08-16T11%3A22%3A53.406168%2B00%3A00&amp;duration=407" length="2441664" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-10</link>
      <itunes:duration>407</itunes:duration>
    </item>
    <item>
      <title>AISN #9: Statement on Extinction Risks, Competitive Pressures, and When Will AI Reach Human-Level?</title>
      <description>&lt;p&gt;&lt;strong&gt;Top Scientists Warn of Extinction Risks from AI&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;Last week, hundreds of AI scientists and notable public figures signed a public statement on AI risks written by the Center for AI Safety. The statement reads:&lt;/p&gt;&lt;p&gt;“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”&lt;/p&gt;&lt;p&gt;The statement was signed by a broad, diverse, and historic coalition of AI experts — along with philosophers, ethicists, legal scholars, economists, physicists, political scientists, pandemic scientists, nuclear scientists, and climate scientists — establishing the risk of extinction from advanced, future AI systems as one of the world’s most important problems. &lt;/p&gt;&lt;p&gt;The international community is [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:10) Top Scientists Warn of Extinction Risks from AI&lt;/p&gt;&lt;p&gt;(03:35) Competitive Pressures in AI Development&lt;/p&gt;&lt;p&gt;(07:22) When Will AI Reach Human Level?&lt;/p&gt;&lt;p&gt;(12:47) Links&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          June 6th, 2023 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-9?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-9&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;</description>
      <pubDate>Tue, 06 Jun 2023 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">c3852b45-31b0-41bc-81b9-348e414c0b29</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/c3852b45-31b0-41bc-81b9-348e414c0b29.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Dan%2520Hendrycks&amp;title=AISN%20%239%3A%20Statement%20on%20Extinction%20Risks%2C%20Competitive%20Pressures%2C%20and%20When%20Will%20AI%20Reach%20Human-Level%3F%20.&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-9&amp;created_at=2023-08-16T11%3A23%3A07.697053%2B00%3A00&amp;duration=873" length="5235408" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-9</link>
      <itunes:duration>873</itunes:duration>
    </item>
    <item>
      <title>AISN #8: Why AI could go rogue, how to screen for AI risks, and grants for research on democratic governance of AI.</title>
      <description>&lt;p&gt;&lt;strong&gt;Yoshua Bengio makes the case for rogue AI&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;AI systems pose a variety of risks. Renowned AI scientist Yoshua Bengio recently argued for one particularly concerning possibility: that advanced AI agents could pursue goals in conflict with human values. &lt;/p&gt;&lt;p&gt;Human intelligence has accomplished impressive feats, from flying to the moon to building nuclear weapons. But Bengio argues that across a range of important intellectual, economic, and social activities, human intelligence could be matched and even surpassed by AI. &lt;/p&gt;&lt;p&gt;How would advanced AIs change our world? Many technologies are tools, such as toasters and calculators, which humans use to accomplish their goals. AIs are different, Bengio says. [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:11) Yoshua Bengio makes the case for rogue AI&lt;/p&gt;&lt;p&gt;(05:11) How to screen AIs for extreme risks&lt;/p&gt;&lt;p&gt;(09:12) Funding for Work on Democratic Inputs to AI&lt;/p&gt;&lt;p&gt;(10:43) Links&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          May 30th, 2023 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-8?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-8&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;</description>
      <pubDate>Tue, 30 May 2023 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">c1be7a1c-afed-41ed-a767-c0d41ad6b82e</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/c1be7a1c-afed-41ed-a767-c0d41ad6b82e.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Dan%2520Hendrycks&amp;title=AISN%20%238%3A%20Why%20AI%20could%20go%20rogue%2C%20how%20to%20screen%20for%20AI%20risks%2C%20and%20grants%20for%20research%20on%20democratic%20governance%20of%20AI.&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-8&amp;created_at=2023-08-16T11%3A23%3A21.644335%2B00%3A00&amp;duration=732" length="4392720" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-8</link>
      <itunes:duration>732</itunes:duration>
    </item>
    <item>
      <title>AISN #7: Disinformation, recommendations for AI labs, and Senate hearings on AI.</title>
      <description>&lt;p&gt;&lt;strong&gt;How AI enables disinformation&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;Yesterday, a fake photo generated by an AI tool showed an explosion at the Pentagon. The photo was falsely attributed to Bloomberg News and circulated quickly online. Within minutes, the stock market declined sharply, only to recover after it was discovered that the picture was a hoax. &lt;/p&gt;&lt;p&gt;This story is part of a broader trend. AIs can now generate text, audio, and images that are unnervingly similar to their naturally occurring counterparts. How will this affect our world, and what kinds of solutions are available?&lt;/p&gt;&lt;p&gt;AIs can generate personalized scams. When John Podesta was the chair of Hillary Clinton’s 2016 presidential campaign [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:10) How AI enables disinformation&lt;/p&gt;&lt;p&gt;(05:38) Governance recommendations on AI safety&lt;/p&gt;&lt;p&gt;(08:21) Senate hearings on AI regulation&lt;/p&gt;&lt;p&gt;(11:10) Links&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          May 23rd, 2023 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-7?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-7&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;</description>
      <pubDate>Tue, 23 May 2023 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">828d6fe2-508b-4f7c-989c-56b6f5e69954</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/828d6fe2-508b-4f7c-989c-56b6f5e69954.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Center%2520for%2520AI%2520Safety&amp;title=AISN%20%237%3A%20Disinformation%2C%20recommendations%20for%20AI%20labs%2C%20and%20Senate%20hearings%20on%20AI.&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-7&amp;created_at=2023-08-16T11%3A23%3A35.182584%2B00%3A00&amp;duration=763" length="4573152" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-7</link>
      <itunes:duration>763</itunes:duration>
    </item>
    <item>
      <title>AISN #6: Examples of AI safety progress, Yoshua Bengio proposes a ban on AI agents, and lessons from nuclear arms control.</title>
      <description>&lt;p&gt;&lt;strong&gt;Examples of AI safety progress&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;Training AIs to behave safely and beneficially is difficult. They might learn to game their reward function, deceive human oversight, or seek power. Some argue that researchers have not made much progress in addressing these problems, but here we offer a few examples of progress on AI safety. &lt;/p&gt;&lt;p&gt;Detecting lies in AI outputs. Language models often output false text, but a recent paper suggests they understand the truth in ways not reflected in their output. By analyzing a model’s internals, we can calculate the likelihood that a model believes a statement is true. The finding has been replicated in models that answer [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:13) Examples of AI safety progress&lt;/p&gt;&lt;p&gt;(03:56) Yoshua Bengio proposes a ban on AI agents&lt;/p&gt;&lt;p&gt;(07:19) Lessons from Nuclear Arms Control for Verifying AI Treaties&lt;/p&gt;&lt;p&gt;(10:02) Links&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          May 16th, 2023 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-6?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-6&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;</description>
      <pubDate>Tue, 16 May 2023 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">96916e27-71d9-47f4-9c55-a456112ccbd7</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/96916e27-71d9-47f4-9c55-a456112ccbd7.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Center%2520for%2520AI%2520Safety&amp;title=AISN%20%236%3A%20Examples%20of%20AI%20safety%20progress%2C%20Yoshua%20Bengio%20proposes%20a%20ban%20on%20AI%20agents%2C%20and%20lessons%20from%20nuclear%20arms%20control%20.&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-6&amp;created_at=2023-08-16T11%3A23%3A50.007643%2B00%3A00&amp;duration=690" length="4136832" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-6</link>
      <itunes:duration>690</itunes:duration>
    </item>
    <item>
      <title>AISN #5: Geoffrey Hinton speaks out on AI risk, the White House meets with AI labs, and Trojan attacks on language models.</title>
      <description>&lt;p&gt;&lt;strong&gt;Geoffrey Hinton is concerned about existential risks from AI&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;Geoffrey Hinton won the Turing Award for his work on AI. Now he says that part of him regrets his life’s work, as he believes that AI poses an existential threat to humanity. As Hinton puts it, “it’s quite conceivable that humanity is just a passing phase in the evolution of intelligence.”&lt;/p&gt;&lt;p&gt;AI is developing more rapidly than Hinton expected. In 2015, Andrew Ng argued that worrying about AI risk is like worrying about overpopulation on Mars. Geoffrey Hinton also used to believe that advanced AI was decades away, but recent progress has changed his views. Now he says [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:12) Geoffrey Hinton is concerned about existential risks from AI&lt;/p&gt;&lt;p&gt;(02:32) White House meets with AI labs&lt;/p&gt;&lt;p&gt;(04:22) Trojan Attacks on Language Models&lt;/p&gt;&lt;p&gt;(06:51) Assorted Links&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          May 9th, 2023 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-5?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-5&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;</description>
      <pubDate>Tue, 09 May 2023 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">58050a51-aed6-4398-8d82-21d3c3a1fba8</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/58050a51-aed6-4398-8d82-21d3c3a1fba8.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Center%2520for%2520AI%2520Safety&amp;title=AISN%20%235%3A%20Geoffrey%20Hinton%20speaks%20out%20on%20AI%20risk%2C%20the%20White%20House%20meets%20with%20AI%20labs%2C%20and%20Trojan%20attacks%20on%20language%20models.&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-5&amp;created_at=2023-08-16T11%3A24%3A03.572872%2B00%3A00&amp;duration=480" length="2877984" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-5</link>
      <itunes:duration>480</itunes:duration>
    </item>
    <item>
      <title>AISN #4: AI and cybersecurity, persuasive AIs, weaponization, and Hinton talks AI risks.</title>
      <description>&lt;p&gt;&lt;strong&gt;Cybersecurity Challenges in AI Safety&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;Meta accidentally leaks a language model to the public. Meta’s newest language model, LLaMa, was publicly leaked online against the intentions of its developers. Gradual rollout is a popular strategy for releasing new AI models, opening access to academic researchers and government officials before sharing models with anonymous internet users. Meta intended to use this strategy, but within a week of sharing the model with an approved list of researchers, an unknown person who had been given access to the model publicly posted it online. &lt;/p&gt;&lt;p&gt;How can AI developers selectively share their models? One inspiration could be the film industry, which places watermarks and tracking technology on “screener” copies of movies sent [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:11) Cybersecurity Challenges in AI Safety&lt;/p&gt;&lt;p&gt;(02:48) Artificial Influence: An Analysis Of AI-Driven Persuasion&lt;/p&gt;&lt;p&gt;(05:37) Building Weapons with AI&lt;/p&gt;&lt;p&gt;(07:47) Assorted Links&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          May 2nd, 2023 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-4?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-4&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;</description>
      <pubDate>Tue, 02 May 2023 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">694b063e-da57-4054-95d4-460f0c693387</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/694b063e-da57-4054-95d4-460f0c693387.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Center%2520for%2520AI%2520Safety&amp;title=AISN%20%234%3A%20AI%20and%20cybersecurity%2C%20persuasive%20AIs%2C%20weaponization%2C%20and%20Hinton%20talks%20AI%20risks.&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-4&amp;created_at=2023-08-16T11%3A24%3A17.612041%2B00%3A00&amp;duration=570" length="3420720" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-4</link>
      <itunes:duration>570</itunes:duration>
    </item>
    <item>
      <title>AISN #3: AI policy proposals and a new challenger approaches.</title>
      <description>&lt;p&gt;&lt;strong&gt;Policy Proposals for AI Safety&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;Critical industries rely on the government to protect consumer safety. The FAA approves new airplane designs, the FDA tests new drugs, and the SEC and CFPB regulate risky financial instruments. Currently, there is no analogous set of regulations for AI safety. &lt;/p&gt;&lt;p&gt;This could soon change. President Biden and members of Congress have recently been vocal about the risks of artificial intelligence and the need for policy solutions.&lt;/p&gt;&lt;p&gt;From guiding principles to enforceable laws. Previous work on AI policy such as the White House Blueprint for an AI Bill of Rights and the NIST AI Risk Management Framework has articulated guiding principles like interpretability, robustness, and privacy. But these recommendations are not enforceable – AI [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:09) Policy Proposals for AI Safety&lt;/p&gt;&lt;p&gt;(04:19) Competitive Pressures in AI Development&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          April 25th, 2023 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-3?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-3&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;</description>
      <pubDate>Tue, 25 Apr 2023 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">80fc810a-0519-4331-a6df-2881322f233c</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/80fc810a-0519-4331-a6df-2881322f233c.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Center%2520for%2520AI%2520Safety&amp;title=AISN%20%233%3A%20AI%20policy%20proposals%20and%20a%20new%20challenger%20approaches.&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-3&amp;created_at=2023-08-16T11%3A24%3A31.060904%2B00%3A00&amp;duration=470" length="2818944" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-3</link>
      <itunes:duration>470</itunes:duration>
    </item>
    <item>
      <title>AISN #2: ChaosGPT and the rise of language model agents, evolutionary pressures and AI, AI safety in the media.</title>
      <description>&lt;p&gt;&lt;strong&gt;ChaosGPT and the Rise of Language Agents&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;Chatbots like ChatGPT usually only respond to one prompt at a time, and a human user must provide a new prompt to get a new response. But an extremely popular new framework called AutoGPT automates that process. With AutoGPT, the user provides only a high-level goal, and the language model will create and execute a step-by-step plan to accomplish the goal.&lt;/p&gt;&lt;p&gt;AutoGPT and other language agents are still in their infancy. They struggle with long-term planning and repeat their own mistakes. Yet because they limit human oversight of AI actions, these agents are a step towards dangerous deployment of autonomous AI. &lt;/p&gt;&lt;p&gt;Individual bad actors [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:12) ChaosGPT and the Rise of Language Agents&lt;/p&gt;&lt;p&gt;(02:49) Natural Selection Favors AIs over Humans&lt;/p&gt;&lt;p&gt;(05:17) AI Safety in the Media&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          April 18th, 2023 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-2?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-2&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;</description>
      <pubDate>Tue, 18 Apr 2023 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">be229ea6-a537-43a1-80d5-fbaa9bbe2613</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/be229ea6-a537-43a1-80d5-fbaa9bbe2613.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Center%2520for%2520AI%2520Safety&amp;title=AISN%20%232%3A%20ChaosGPT%20and%20the%20rise%20of%20language%20model%20agents%2C%20evolutionary%20pressures%20and%20AI%2C%20AI%20safety%20in%20the%20media.&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-2&amp;created_at=2023-08-16T11%3A24%3A44.019671%2B00%3A00&amp;duration=448" length="2686176" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-2</link>
      <itunes:duration>448</itunes:duration>
    </item>
    <item>
      <title>AISN #1: Public opinion on AI, plugging ChatGPT into the internet, and the economic impacts of language models.</title>
      <description>&lt;p&gt;&lt;strong&gt;Growing concerns about rapid AI progress&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;Recent advancements in AI have thrust it into the center of attention. What do people think about the risks of AI?&lt;/p&gt;&lt;p&gt;The American public is worried. 46% of Americans are concerned that AI will cause “the end of the human race on Earth,” according to a recent poll by YouGov. Young people are more likely to express such concerns, while there are no significant differences in responses between people of different genders or political parties. Another poll by Monmouth University found broad support for AI regulation, with 55% supporting the creation of a federal agency that governs AI similar to how the FDA approves drugs and [...]&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Outline:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;(00:12) Growing concerns about rapid AI progress&lt;/p&gt;&lt;p&gt;(02:53) Plugging ChatGPT into email, spreadsheets, the internet, and more&lt;/p&gt;&lt;p&gt;(05:35) Which jobs could be affected by language models?&lt;/p&gt; &lt;p&gt;---&lt;/p&gt;
          &lt;p&gt;&lt;b&gt;First published:&lt;/b&gt;&lt;br/&gt;
          April 10th, 2023 &lt;/p&gt;
        
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://newsletter.safe.ai/p/ai-safety-newsletter-1?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Source+URL+in+episode+description&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;https://newsletter.safe.ai/p/ai-safety-newsletter-1&lt;/a&gt; &lt;/p&gt;
        &lt;p&gt;---&lt;/p&gt;
      &lt;p&gt;Want more? Check out our &lt;a href="https://newsletter.mlsafety.org/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Episode+description+footer" target="_blank" rel="noreferrer"&gt;ML Safety Newsletter&lt;/a&gt; for technical safety research.&lt;/p&gt;
      
        &lt;p&gt;Narrated by &lt;a href="https://type3.audio/?utm_source=TYPE_III_AUDIO&amp;utm_medium=Podcast&amp;utm_content=Narrated+by+TYPE+III+AUDIO&amp;utm_term=center_for_ai_safety&amp;utm_campaign=ai_narration" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;</description>
      <pubDate>Mon, 10 Apr 2023 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">fc14c63a-a55a-4829-b1e6-c4bf541ce2e7</guid>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/fc14c63a-a55a-4829-b1e6-c4bf541ce2e7.mp3?request_source=rss&amp;client_id=center_for_ai_safety&amp;feed_id=newsletter__safe_ai&amp;type=ai_narration&amp;author=Center%2520for%2520AI%2520Safety&amp;title=AISN%20%231%3A%20Public%20opinion%20on%20AI%2C%20plugging%20ChatGPT%20into%20the%20internet%2C%20and%20the%20economic%20impacts%20of%20language%20models..&amp;source_url=https%3A%2F%2Fnewsletter.safe.ai%2Fp%2Fai-safety-newsletter-1&amp;created_at=2023-08-16T11%3A24%3A58.530497%2B00%3A00&amp;duration=509" length="3052656" type="audio/mpeg"/>
      <link>https://newsletter.safe.ai/p/ai-safety-newsletter-1</link>
      <itunes:duration>509</itunes:duration>
    </item>
    <itunes:category text="Technology"/>
    <itunes:category text="Society &amp; Culture">
      <itunes:category text="Philosophy"/>
    </itunes:category>
    <link>https://newsletter.safe.ai/</link>
    <itunes:image href="https://files.type3.audio/cais/newsletter--ai-safety.jpg"/>
    <itunes:owner>
      <itunes:email>podcasts@type3.audio</itunes:email>
      <itunes:name>Center for AI Safety</itunes:name>
    </itunes:owner>
    <atom:link href="https://feeds.type3.audio/cais--newsletter-ai-safety.rss" rel="self" type="application/rss+xml"/>
  </channel>
</rss>