<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<?xml-stylesheet href="/styles.xsl" type="text/xsl"?>
<rss version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:podcast="https://podcastindex.org/namespace/1.0" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <atom:link href="https://forum-podcasts.effectivealtruism.org/ea-forum--all-audio.rss" rel="self" type="application/rss+xml"/>
    <atom:link href="https://pubsubhubbub.appspot.com/" rel="hub"/>
    <title>EA Forum Podcast (Curated &amp; popular)</title>
    <lastBuildDate>Fri, 16 Jun 2023 00:44:35 -0400</lastBuildDate>
    <link>https://type3.audio</link>
    <language>en-gb</language>
    <copyright>© 2023 All rights reserved</copyright>
    <podcast:locked>yes</podcast:locked>
    <podcast:guid>dd5672c6-f571-5c23-b1d3-f0344a28749d</podcast:guid>
    <itunes:author>EA Forum Team</itunes:author>
    <itunes:type>episodic</itunes:type>
    <itunes:explicit>false</itunes:explicit>
    <description>Audio narrations from the Effective Altruism Forum, including curated posts and posts with 125+ karma.</description>
    <itunes:keywords>non-profit, philosophy, futurism, technology</itunes:keywords>
    <itunes:owner>
      <itunes:name>EA Forum Team</itunes:name>
      <itunes:email>podcasts@type3.audio</itunes:email>
    </itunes:owner>
    <image>
      <url>https://forum-podcasts.effectivealtruism.org/images/ea-forum/ea-forum--curated-popular.jpg</url>
      <title>EA Forum Podcast (Curated &amp; popular)</title>
      <link>https://type3.audio</link>
    </image>
    <itunes:image href="https://forum-podcasts.effectivealtruism.org/images/ea-forum/ea-forum--curated-popular.jpg"/>
    <itunes:category text="Society &amp; Culture">
      <itunes:category text="Philosophy"/>
    </itunes:category>
    <itunes:category text="Business">
      <itunes:category text="Non-Profit"/>
    </itunes:category>
    <itunes:category text="Technology"/>
    <item>
      <title>“EA organizations should have a transparent scope” by Joey</title>
      <description>Executive summary: One of the biggest challenges of being in a community that really cares about counterfactuals is knowing where the most important gaps are and which areas are already effectively covered. This can be even more complex with meta organizations and funders, which often have broad scopes that change over time. However, I think it is really important for every meta organization to clearly establish what it covers and thus where the gaps are; there is a substantial negative flowthrough effect when a community thinks an area is covered when it is not. Why this matters: The topic of having a transparent scope recently came up at a conference as one of the top concerns with many EA meta orgs. Some negative effects that have been felt by the community are in large part due to unclear scopes, including: Organizations leaving a space thinking it's covered when it's not. Funders reducing funding in an area on the assumption that someone else is covering it when there are still major gaps. Two organizations working on the same thing without knowledge of each other, due to both having a broad mandate, but simultaneously putting resources into an overlapping subcomponent of this mandate. Talent being turned off or feeling misled by EA when they think an org misrepresents itself. Talent ‘dropping out of the funnel’ when they go to what they believe is the primary organization covering an area and find that what they care about isn’t covered, due to the organization claiming too broad a mandate. There can also be a significant amount of general frustration when people think an organization will cover, or is covering, an area and the organization then fails to deliver (often on something it did not even plan on doing). What do I mean when I say that organizations should have a transparent scope? Broadly, I mean organizations being publicly clear and specific about what they are planning to cover, both in terms of action and cause area. 
In a relevant timeframe: I think this is most important in the short term (e.g., there is a ton of value in an organization saying what it is going to cover over the next 12 months, and what it has covered over the preceding months). For the most important questions: This clarity needs to be present both in priorities (e.g., cause prioritization) and in planned actions (e.g., working with student chapters). This can include things the organization might like to do, or think would be impactful, but is not doing due to capacity constraints or its current strategic direction. For the areas people are most likely to confuse: It is particularly important to provide clarity about things that people think one might be doing (for example, Charity Entrepreneurship probably doesn’t need to clarify that it doesn’t sell flowers, but should really be transparent about whether it plans to incubate projects in a certain cause area or not). How to do this: When I have talked to organizations about this, I sometimes think that the “perfect” becomes the enemy of the good and they do not [...]
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://forum.effectivealtruism.org/posts/mzzPMrBjGpra2JSDw/ea-organizations-should-have-a-transparent-scope" rel="noopener noreferrer" target="_blank"&gt;https://forum.effectivealtruism.org/posts/mzzPMrBjGpra2JSDw/ea-organizations-should-have-a-transparent-scope&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;&lt;a href="https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;entry.584848066=https://forum.effectivealtruism.org/posts/mzzPMrBjGpra2JSDw/ea-organizations-should-have-a-transparent-scope" rel="noopener noreferrer" target="_blank"&gt;Share feedback on this narration&lt;/a&gt;.&lt;/p&gt;
      &lt;p&gt;Narrated by &lt;a href="https://type3.audio/" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;
    </description>
      <pubDate>Wed, 14 Jun 2023 11:30:45 +0000</pubDate>
      <guid>eabe5ff4-75a1-410a-b886-e20af24a17d9</guid>
      <itunes:duration>322</itunes:duration>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/eabe5ff4-75a1-410a-b886-e20af24a17d9.mp3?request_source=rss" type="audio/mpeg"/>
    </item>
    <item>
      <title>“Effective altruism organizations should avoid using ‘polarizing techniques’” by Joey</title>
      <description>TL;DR: The EA movement should not use techniques that alienate people from the EA community as a whole when they do not align with a particular subgroup within the community. These approaches not only have an immediate negative impact on the EA community, but also have long-term repercussions for the sub-community using them. Right now, the EA movement uses these sorts of tactics too often. People connect with the EA movement through many different channels, and often encounter sub-communities before they have a full understanding of the movement and the wide variety of opinions and viewpoints within it. These sub-communities can sometimes make the mistake of using "polarizing techniques". By this, I mean strategies that alienate people or burn bridges with the broader community, whether by pushing a sub-perspective too hard or by being aggressively dismissive of other views. An example of this might be if I met a talented person at a party who said they wanted to change to an impactful career, but had never heard of EA. If I then proceeded to aggressively push founding a charity through Charity Entrepreneurship (the organization) as a career path, to the point where they got turned off EA altogether because they didn’t come on board with my claims, I would consider that a polarizing approach: either they choose charity entrepreneurship as a path, or they don’t engage with effective altruism at all. Note that in the short term, all Charity Entrepreneurship really measures impact-wise is how many great charities get started, so a good person going into policy because I connected them to Probably Good contributes nothing to our measured organizational impact. Taken to an extreme, it might seem worth pushing quite hard if I think that founding nonprofits is many times more important as a career path than policy. However, I think this style hurts both the community and Charity Entrepreneurship long-term. 
This phenomenon occurs across a diverse range of situations, in both funding and career transitions. Most often, it revolves around cause prioritization. It can be disappointing when someone does not share your enthusiasm for your preferred causes, but there is still a lot of value in directing them to the most impactful path they would in fact consider pursuing. The clearest way this technique is damaging is that turning someone off one part of the community often demotivates them from engaging positively in other parts of it. It makes them more likely to become an active critic, rather than a neutral party or a contributing member of a different sub-community or of the philosophy of effective altruism as a whole. Different sub-communities look for different types of people and resources. It’s difficult for one person to have a bird’s-eye view of all sub-communities within EA, and it’s easy to overvalue your own sub-community’s particular needs or strengths. On numerous occasions, I have witnessed one sub-community dismiss individuals possessing skills that would be immensely valuable in another segment of the community. It seems [...]
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://forum.effectivealtruism.org/posts/viXCv8thAAd68Qnfs/effective-altruism-organizations-should-avoid-using" rel="noopener noreferrer" target="_blank"&gt;https://forum.effectivealtruism.org/posts/viXCv8thAAd68Qnfs/effective-altruism-organizations-should-avoid-using&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;&lt;a href="https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;entry.584848066=https://forum.effectivealtruism.org/posts/viXCv8thAAd68Qnfs/effective-altruism-organizations-should-avoid-using" rel="noopener noreferrer" target="_blank"&gt;Share feedback on this narration&lt;/a&gt;.&lt;/p&gt;
      &lt;p&gt;Narrated by &lt;a href="https://type3.audio/" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;
    </description>
      <pubDate>Mon, 12 Jun 2023 13:41:31 +0000</pubDate>
      <guid>2a94a7ef-e9ee-4375-811a-22f99c03e05d</guid>
      <itunes:duration>382</itunes:duration>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/2a94a7ef-e9ee-4375-811a-22f99c03e05d.mp3?request_source=rss" type="audio/mpeg"/>
    </item>
    <item>
      <title>“Critiques of prominent AI safety labs: Conjecture” by Omega</title>
      <description>In this series, we consider AI safety organizations that have received more than $10 million per year in funding. There have already been several conversations and critiques around MIRI (1) and OpenAI (1,2,3), so we will not be covering them. The authors include one technical AI safety researcher (&gt;4 years of experience) and one non-technical community member with experience in the EA community. We would like to make our critiques non-anonymously, but believe this would not be a wise move professionally. We believe our criticisms stand on their own without appeal to our positions. Readers should not assume that we are completely unbiased or don’t have anything to personally or professionally gain from publishing these critiques. We’ve tried to weigh the benefits and drawbacks of the anonymous nature of our post seriously and carefully, and are open to feedback on anything we might have done better. This is the second post in the series, and it covers Conjecture. Conjecture is a for-profit alignment startup founded in late 2021 by Connor Leahy, Sid Black and Gabriel Alfour, which aims to scale applied alignment research. Based in London, Conjecture has received $10 million in funding from venture capitalists (VCs), and recruits heavily from the EA movement. We shared a draft of this document with Conjecture for feedback prior to publication, and include their response below. We also requested feedback on a draft from a small group of experienced alignment researchers from various organizations, and have invited them to share their views in the comments of this post. We invite others to share their thoughts openly in the comments if they feel comfortable, or to contribute anonymously via this form. 
We will add inputs from there to the comments section of this post, but will likely not update the main body of the post as a result (unless comments catch errors in our writing). Key Takeaways: For those with limited knowledge and context on Conjecture, we recommend first reading or skimming the About Conjecture section. Time to read the core sections (Criticisms &amp; Suggestions and Our views on Conjecture) is 22 minutes. Criticisms and Suggestions: We think Conjecture’s research is low quality (read more). Their posts don’t always make their assumptions clear, don’t make explicit what evidence base they have for a given hypothesis, and frequently cherry-pick evidence. We also think their bar for publishing is too low, which lowers the signal-to-noise ratio. Conjecture has acknowledged some of these criticisms, but not all (read more). We make specific critiques of examples of their research from their initial research agenda (read more). There is limited information available on their new research direction (cognitive emulation), but from the publicly available information it appears extremely challenging, so we are skeptical of its tractability (read more). We have some concerns with the CEO’s character and trustworthiness because, in order of importance (read more): The CEO and Conjecture have misrepresented themselves to external parties multiple times (read more); The CEO’s involvement in EleutherAI and Stability AI has contributed to race dynamics (read more); The CEO [...]
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://forum.effectivealtruism.org/posts/gkfMLX4NWZdmpikto/critiques-of-prominent-ai-safety-labs-conjecture" rel="noopener noreferrer" target="_blank"&gt;https://forum.effectivealtruism.org/posts/gkfMLX4NWZdmpikto/critiques-of-prominent-ai-safety-labs-conjecture&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;&lt;a href="https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;entry.584848066=https://forum.effectivealtruism.org/posts/gkfMLX4NWZdmpikto/critiques-of-prominent-ai-safety-labs-conjecture" rel="noopener noreferrer" target="_blank"&gt;Share feedback on this narration&lt;/a&gt;.&lt;/p&gt;
      &lt;p&gt;Narrated by &lt;a href="https://type3.audio/" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;
    </description>
      <pubDate>Mon, 12 Jun 2023 01:32:04 +0000</pubDate>
      <guid>03e4c435-4178-4ec6-a024-920f73fe92ad</guid>
      <itunes:duration>3742</itunes:duration>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/03e4c435-4178-4ec6-a024-920f73fe92ad.mp3?request_source=rss" type="audio/mpeg"/>
    </item>
    <item>
      <title>“Why I spoke to TIME magazine, and My Experience as a Female AI Researcher in Silicon Valley [SA Sequence Intro, Advice, and AMA]” by Lucretia</title>
      <description>Crossposted on Medium here. Twitter: @lucreti_a. From Lore Olympus. Thank you to the supportive EA members who encouraged me to publicly share this difficult experience, to my friends and research collaborators for your kindness, and to the courageous women who helped me in writing this post, who I hope can someday speak publicly. To those who know me, please call me Lucretia. This is a megapost. Each section has a distinct purpose and may evolve into its own standalone post. For the full picture, I recommend reading to the end. My cross-posted version on Medium is broken into sections for easier reading. 0. Overview: Introduction. I was one of the women who spoke to TIME magazine about sexual harassment and abuse in EA. Here is my story, without media distortions. Advice for Female Founders and AI Researchers in the Valley. Silicon Valley can be a brutal place for women. This is what I wish I knew five years ago. My Case Study. I am an AI researcher. I believe my AI alignment research career was needlessly encumbered by: my experience with the sexually abusive red pill and pickup artist sphere, which entwined with a branch of AI safety in Cambridge, MA and Silicon Valley (I describe the unethical core of red pill ideology, including the running of “rape scripts”); and the recent retaliation by a Silicon Valley AI community to my report of harm (this community’s aggressive reaction showed many gender biases latent in AI culture). Systemic Sexual Violence in Silicon Valley. I believe the male-dominated environment, nepotistic connections to investor money, extreme power disparities between wealthy AI researchers and aspiring young women in the AI and startup sectors, hacker house party culture, psychedelics misused as date rape drugs, cults of personality, a substantial population of low-empathy, risk-seeking, and/or narcissistic men, and a lack of functional policing mechanisms make sexual violence a systemic problem in a critical X-risk industry. Why I Spoke to TIME. 
I address some misconceptions about the original TIME article on sexual harassment, and explain why I spoke to TIME in the first place. Helpful Books and Movies. I share what I have learned about sexual harassment and abuse after ~15 months of focusing on the problem, including my favorite books and movies about sexual harassment/abuse, to flesh out more of the conceptual space. For all the seriousness of this post, these books and movies are entertaining, gorgeous, and healing! Future Sequences? Depending on the reactions to this post, I would love to write a Sequence on sexual harassment and abuse from first principles. Call to Action: Recovery and Litigation Funds. AGI should neither be built nor aligned in environments of deceit. We propose a call to action for a Recovery Fund and a Sociological AI Alignment Fund / Litigation Fund to counteract the sexual predation Moloch in Silicon Valley, which is a sociological AI safety problem. Appendix: Excerpts from red pill literature; Notes on Rape vs Consent Culture. 1. Introduction: Some recent posts on the EA forum have thoughtfully and earnestly addressed sexual harassment and abuse. Thank you to the EA community for your insightful posts and comments, and for genuinely trying to address the problem, which made [...]
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://forum.effectivealtruism.org/posts/LqjG4bAxHfmHC5iut/why-i-spoke-to-time-magazine-and-my-experience-as-a-female" rel="noopener noreferrer" target="_blank"&gt;https://forum.effectivealtruism.org/posts/LqjG4bAxHfmHC5iut/why-i-spoke-to-time-magazine-and-my-experience-as-a-female&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;&lt;a href="https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;entry.584848066=https://forum.effectivealtruism.org/posts/LqjG4bAxHfmHC5iut/why-i-spoke-to-time-magazine-and-my-experience-as-a-female" rel="noopener noreferrer" target="_blank"&gt;Share feedback on this narration&lt;/a&gt;.&lt;/p&gt;
      &lt;p&gt;Narrated by &lt;a href="https://type3.audio/" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;
    </description>
      <pubDate>Sun, 11 Jun 2023 03:41:39 +0000</pubDate>
      <guid>b4043e22-360b-4258-a48a-938e7aa06ae3</guid>
      <itunes:duration>4404</itunes:duration>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/b4043e22-360b-4258-a48a-938e7aa06ae3.mp3?request_source=rss" type="audio/mpeg"/>
    </item>
    <item>
      <title>“How economists got Africa’s AIDS epidemic wrong” by Justin Sandefur</title>
      <description>I'm reposting this from the CGDev site, as I thought it might be interesting to EA folks (thanks to Ryan Briggs for the suggestion). For the short version, here's a Twitter thread. In the 2000s, cost-effectiveness analysis said it was a bad use of money to send antiretroviral drugs to low-income countries—drugs that ended up saving millions of lives. Twenty years ago, in the same State of the Union speech in which he made the case for invading Iraq, George W. Bush asked Congress for $15 billion over five years for an ambitious new plan to pay for antiretroviral drugs for two million AIDS patients in Africa and the Caribbean. The President’s Emergency Plan for AIDS Relief, or PEPFAR, went on to become probably the most celebrated American foreign aid program since the Marshall Plan. An evaluation by the National Academy of Sciences estimates PEPFAR has saved millions of lives (PEPFAR itself claims 25 million). Impacts on total mortality rates across fourteen African countries were visible within just the first few years of the program (see figure 1). Separate research suggests the rollout of antiretrovirals, of which PEPFAR was a major component, explained about a third of Africa's economic growth resurgence in the 2000s. Figure 1. Adult mortality in PEPFAR focus and non-focus countries (from Bendavid et al 2012, JAMA). But at the time, some economists balked. The conventional wisdom within health economics was that sending AIDS drugs to Africa was a waste of money. 
The dominant conceptual apparatus economists use to evaluate social policies—comparative cost-effectiveness analysis, which focuses on a specific goal like saving lives and ranks policies by lives saved per dollar—suggested America’s foreign aid budget could’ve been better spent on condoms and awareness campaigns, or even on malaria and diarrheal diseases. “Treating HIV doesn’t pay”: In a now-infamous op-ed published in Forbes in 2005, before PEPFAR’s impacts were well documented, Brown University economist Emily Oster declared that “treating HIV doesn’t pay.” “It is humane to pay for AIDS drugs in Africa,” she wrote, “but it isn’t economical. The same dollars spent on prevention would save more lives.” In fairness to Oster and others, the phrasing here is important. Her argument was not that African HIV patients’ lives weren’t worth the cost—that antiretroviral drug prices exceeded the “value of a statistical life”, as economists might phrase it—but rather that if we take the budget as fixed, and the prices as fixed, the money could do more good if spent on other health programs. Oster wasn’t alone. While her delivery was perhaps deliberately provocative, her basic reasoning reflected a broad professional consensus, which viewed antiretrovirals through the lens of comparative cost-effectiveness analysis and deemed them middling-to-poor value. A systematic review published in the Lancet in 2002, just as the Bush administration was privately plotting the PEPFAR announcement, found that in terms of saving “disability-adjusted life years”, or DALYs, “a case of HIV/AIDS can be prevented for $11, and a DALY gained for $1” by improving the safety of blood transfusions and distributing condoms, whereas “antiretroviral therapy for [...]
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://forum.effectivealtruism.org/posts/qyhDz9djZAmxZ6Qzx/how-economists-got-africa-s-aids-epidemic-wrong" rel="noopener noreferrer" target="_blank"&gt;https://forum.effectivealtruism.org/posts/qyhDz9djZAmxZ6Qzx/how-economists-got-africa-s-aids-epidemic-wrong&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;&lt;a href="https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;entry.584848066=https://forum.effectivealtruism.org/posts/qyhDz9djZAmxZ6Qzx/how-economists-got-africa-s-aids-epidemic-wrong" rel="noopener noreferrer" target="_blank"&gt;Share feedback on this narration&lt;/a&gt;.&lt;/p&gt;
      &lt;p&gt;Narrated by &lt;a href="https://type3.audio/" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;
    </description>
      <pubDate>Sat, 10 Jun 2023 05:26:19 +0000</pubDate>
      <guid>48911d92-0035-4ad6-9235-631d0d448c1e</guid>
      <itunes:duration>924</itunes:duration>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/48911d92-0035-4ad6-9235-631d0d448c1e.mp3?request_source=rss" type="audio/mpeg"/>
    </item>
    <item>
      <itunes:title>"Cause area report: Antimicrobial Resistance" by Akhil</itunes:title>
      <title>"Cause area report: Antimicrobial Resistance" by Akhil</title>
      <description>&lt;p&gt;This post is a summary of some of my work as a field strategy consultant at Schmidt Futures&amp;apos; Act 2 program, where I spoke with over a hundred experts and did a deep dive into antimicrobial resistance to find impactful investment opportunities within the cause area. The full report can be accessed &lt;a href='https://drive.google.com/file/d/1wiY4w0QADOZzc8-ac9hbXr9_xcDlTi8k/view?usp=share_link' rel='noopener noreferrer' target='_blank'&gt;here&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;Antimicrobials, the medicines we use to fight infections, have played a foundational role in improving the length and quality of human life since penicillin and other antimicrobials were first developed in the early and mid 20th century.&lt;/p&gt;&lt;p&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;Antimicrobial resistance, or AMR, occurs when bacteria, viruses, fungi, and parasites evolve resistance to antimicrobials. As a result, antimicrobial medicine such as antibiotics and antifungals become ineffective and unable to fight infections in the body.&lt;/p&gt;&lt;p&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;AMR is responsible for millions of deaths each year, more than HIV or malaria (ARC 2022). 
The AMR Visualisation Tool, produced by Oxford University and IHME, visualises IHME data which finds that 1.27 million deaths per year are attributable to bacterial resistance and 4.95 million deaths per year are associated with bacterial resistance, as shown below.&lt;/p&gt;&lt;p&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Source:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;&lt;a href='https://forum.effectivealtruism.org/posts/W93Pt7xch7eyrkZ7f/cause-area-report-antimicrobial-resistance' rel='noopener noreferrer' target='_blank'&gt;https://forum.effectivealtruism.org/posts/W93Pt7xch7eyrkZ7f/cause-area-report-antimicrobial-resistance&lt;/a&gt;&lt;/p&gt;&lt;p&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/' rel='noopener noreferrer' target='_blank'&gt;&lt;u&gt;Effective Altruism Forum&lt;/u&gt;&lt;/a&gt; by &lt;a href='https://type3.audio/' rel='noopener noreferrer' target='_blank'&gt;&lt;u&gt;TYPE III AUDIO&lt;/u&gt;&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/W93Pt7xch7eyrkZ7f/cause-area-report-antimicrobial-resistance'&gt;Share feedback on this narration&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;This post is a summary of some of my work as a field strategy consultant at Schmidt Futures&amp;apos; Act 2 program, where I spoke with over a hundred experts and did a deep dive into antimicrobial resistance to find impactful investment opportunities within the cause area. The full report can be accessed &lt;a href='https://drive.google.com/file/d/1wiY4w0QADOZzc8-ac9hbXr9_xcDlTi8k/view?usp=share_link' rel='noopener noreferrer' target='_blank'&gt;here&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;Antimicrobials, the medicines we use to fight infections, have played a foundational role in improving the length and quality of human life since penicillin and other antimicrobials were first developed in the early and mid 20th century.&lt;/p&gt;&lt;p&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;Antimicrobial resistance, or AMR, occurs when bacteria, viruses, fungi, and parasites evolve resistance to antimicrobials. As a result, antimicrobial medicine such as antibiotics and antifungals become ineffective and unable to fight infections in the body.&lt;/p&gt;&lt;p&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;AMR is responsible for millions of deaths each year, more than HIV or malaria (ARC 2022). 
The AMR Visualisation Tool, produced by Oxford University and IHME, visualises IHME data which finds that 1.27 million deaths per year are attributable to bacterial resistance and 4.95 million deaths per year are associated with bacterial resistance, as shown below.&lt;/p&gt;&lt;p&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Source:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;&lt;a href='https://forum.effectivealtruism.org/posts/W93Pt7xch7eyrkZ7f/cause-area-report-antimicrobial-resistance' rel='noopener noreferrer' target='_blank'&gt;https://forum.effectivealtruism.org/posts/W93Pt7xch7eyrkZ7f/cause-area-report-antimicrobial-resistance&lt;/a&gt;&lt;/p&gt;&lt;p&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/' rel='noopener noreferrer' target='_blank'&gt;&lt;u&gt;Effective Altruism Forum&lt;/u&gt;&lt;/a&gt; by &lt;a href='https://type3.audio/' rel='noopener noreferrer' target='_blank'&gt;&lt;u&gt;TYPE III AUDIO&lt;/u&gt;&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/W93Pt7xch7eyrkZ7f/cause-area-report-antimicrobial-resistance'&gt;Share feedback on this narration&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author/>
      <enclosure url="https://www.buzzsprout.com/2062493/13000677-cause-area-report-antimicrobial-resistance-by-akhil.mp3" length="5783942" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-13000677</guid>
      <pubDate>Thu, 08 Jun 2023 04:00:00 +0100</pubDate>
      <itunes:duration>722</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <title>“EA Strategy Fortnight (June 12-24)” by Ben_West</title>
      <description>Tl;dr: I’m kicking off a push for public discussions about EA strategy that will be happening June 12-24. You’ll see new posts under this tag, and you can find details below about the people who’ve committed to participating. Motivation and what this is(n’t): I feel (and, from conversations in person and from seeing discussions on the Forum, think that I am not alone in feeling) like there’s been a dearth of public discussion about EA strategy recently, particularly from people in leadership positions at EA organizations. To help address this, I’m setting up an “EA strategy fortnight” — two weeks where we’ll put in extra energy to make those discussions happen. A set of folks have already volunteered to post thoughts about major strategic EA questions, like how centralized EA should be, or current priorities for GH&amp;W EA. This event and these posts are generally intended to start discussion rather than give the final word on any given subject. I expect that people participating in this event will often disagree with each other, and participation shouldn’t imply an endorsement of anything or anyone in particular. I see this mostly as an experiment in whether having a simple “event” can cause people to publish more. Please don't interpret any of these posts as an official consensus statement. Some people have already agreed to participate: I reached out to people through a combination of a) thinking of people who had shared private strategy documents with me that still had not been published, b) contacting leaders of EA organizations, and c) soliciting suggestions from others. About half of the people I contacted agreed to participate. I think you should view this as a convenience sample, heavily skewed towards people who find writing Forum posts to be low-cost. Also note that I contacted some of these people specifically because I disagree with them; no endorsement of these ideas is implied. 
People who’ve already agreed to post stuff during this fortnight [in random order]:

Habryka - How EAs and Rationalists turn crazy
MaxDalton - In Praise of Praise
MichaelA - Interim updates on the RP AI Governance &amp; Strategy team
William_MacAskill - Decision-making in EA
Michelle_Hutchinson - TBD
Ardenlk - Reallocating resources from EA per se to specific fields
Ozzie Gooen - Centralize Organizations, Decentralize Power
Julia_Wise - EA reform project updates
Shakeel Hashim - EA Communications Updates
Jakub Stencel - EA’s success no one cares about
lincolnq - Why Altruists Can't Have Nice Things
Ben_West and 2ndRichter - FTX’s impacts on EA brand and engagement with CEA projects
jeffsebo and Sofia_Fogel - EA and the nature and value of digital minds
Anonymous – Diseconomies of scale in community building
Luke Freeman - TBD
kuhanj - TBD
Joey - The community wide advantages of having a transparent scope
JamesSnowden - Current priorities for Open Philanthropy's Effective Altruism, Global Health and Wellbeing program
Nicole_Ross - Crisis bootcamp: lessons learned and implications for EA
Rob Gledhill - AIS vs EA groups for city and national groups
Vaidehi Agarwalla - TBD
Renan Araujo - Thoughts about AI safety field-building in LMICs

If you would like to participate

If you are able to pre-commit to writing a [...]
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://forum.effectivealtruism.org/posts/ct3zLpD5FMwBwYCZ7/ea-strategy-fortnight-june-12-24" rel="noopener noreferrer" target="_blank"&gt;https://forum.effectivealtruism.org/posts/ct3zLpD5FMwBwYCZ7/ea-strategy-fortnight-june-12-24&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;&lt;a href="https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;entry.584848066=https://forum.effectivealtruism.org/posts/ct3zLpD5FMwBwYCZ7/ea-strategy-fortnight-june-12-24" rel="noopener noreferrer" target="_blank"&gt;Share feedback on this narration&lt;/a&gt;.&lt;/p&gt;
      &lt;p&gt;Narrated by &lt;a href="https://type3.audio/" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;
    </description>
      <pubDate>Wed, 07 Jun 2023 23:07:54 +0000</pubDate>
      <guid isPermaLink="false">cf28a794-e7a0-463c-8873-ee184e7d968f</guid>
      <itunes:duration>300</itunes:duration>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/cf28a794-e7a0-463c-8873-ee184e7d968f.mp3?request_source=rss" type="audio/mpeg"/>
    </item>
    <item>
      <title>“I made a news site based on prediction markets” by vandemonian</title>
      <description>Introduction

“News through prediction markets”

The Base Rate Times is a nascent news site that incorporates prediction markets prominently into its coverage.

Please see current iteration: www.baseratetimes.com
Twitter: www.twitter.com/base_rate_times

What problem does it solve?

Forecasts are underutilized by the media

Prediction markets are more accurate than pundits, yet the media has made limited use of their forecasts. This is a big problem: one of the most rigorous information sources is being omitted from public discourse!

The Base Rate Times creates prediction markets content, substituting for inferior news sources. This improves the epistemics of its audience.

Forecasts are dispersed, generally inconvenient to consume

Prediction markets are dispersed among many different platforms, fragmenting the information forecasters provide. For example, different platforms ask similar questions in different ways. Furthermore, platforms’ UX is oriented towards forecasters, not information consumers. Overall, trying to use prediction markets as a ‘news replacement’ is cumbersome.

There is value in aggregating and curating forecasts from various platforms. We need engaging ways of sharing prediction markets’ insights. The Base Rate Times aims to make prediction markets easily digestible to the general public.

How does it work?

News media (emotive narrative) vs Base Rate Times (actionable odds)

For example, this is a real headline from a reputable newspaper: “Taiwan braces for China's fury over Pelosi visit”. Emotive and incendiary, it does not help you form an accurate model of the situation.

By contrast, The Base Rate Times: “China-Taiwan conflict risk 14%, up 2x from 7% after Pelosi visit”. That's an actionable insight. It can inform your decision on whether to stay in Taiwan or to flee, for example.

News aggregation, summarizing prediction markets

Naturally, the probabilities in the example above come from prediction markets.
The Base Rate Times presents what prediction markets are telling us about news in an engaging way. Stories that shift market odds are highlighted. And if a seemingly important story doesn’t shift market odds, that also tells you something.

On The Base Rate Times, right now you can see the latest odds on:

Putin staying in power
Russian territorial gains in Ukraine
Escalation risk of NATO involvement
and more...

By glancing at a few charts, you can form a more accurate model (in less time) of Russia-Ukraine than reading countless narrative-based news stories.

Inspiration

A key inspiration was Scott Alexander’s Prediction Market FAQ:

I recently had to read many articles on Elon Musk’s takeover of Twitter, which all repeated that “rumors said” Twitter was about to go down because of his mass firing. Meanwhile, there were several prediction markets on whether this would happen, and they were all around 40%. If some journalist had thought to check the prediction markets and cite them in their article, they could have not only provided more value (a clear percent chance instead of just “there are some rumors saying this”), but also been right when everyone else was wrong.

Also Scott’s 'Mantic Monday' posts and Zvi’s blog.

This simple chart by @ClayGraubard was another inspiration. I wanted something like this, but for all major news stories. I couldn't find it, so I'm making it myself. (Clay is making geopolitics videos and podcasts now, check it out.)

Goals

Like 538, but for prediction markets

The Base Rate Times is a bet that forecasts [...]
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://forum.effectivealtruism.org/posts/hChXEPPkDpiufCE4E/i-made-a-news-site-based-on-prediction-markets" rel="noopener noreferrer" target="_blank"&gt;https://forum.effectivealtruism.org/posts/hChXEPPkDpiufCE4E/i-made-a-news-site-based-on-prediction-markets&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;&lt;a href="https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;entry.584848066=https://forum.effectivealtruism.org/posts/hChXEPPkDpiufCE4E/i-made-a-news-site-based-on-prediction-markets" rel="noopener noreferrer" target="_blank"&gt;Share feedback on this narration&lt;/a&gt;.&lt;/p&gt;
      &lt;p&gt;Narrated by &lt;a href="https://type3.audio/" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;
    </description>
      <pubDate>Wed, 07 Jun 2023 17:29:10 +0000</pubDate>
      <guid isPermaLink="false">653452e7-233b-4540-bb13-32a623b4663c</guid>
      <itunes:duration>513</itunes:duration>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/653452e7-233b-4540-bb13-32a623b4663c.mp3?request_source=rss" type="audio/mpeg"/>
    </item>
    <item>
      <title>“A note of caution about recent AI risk coverage” by Sean_o_h</title>
      <description>Epistemic status: some thoughts I wanted to get out quickly

A lot of fantastic work has been done by people in the AI existential risk research community and related communities over the last several months in raising awareness about risks from advanced AI. However, I have some cause for unease that I’d like to share.

These efforts may have been too successful too soon. Or, more specifically, this level of outreach success this far ahead of the development of AI capable of posing existential risk may have fallout. We should consider steps to mitigate this.

(1) Timelines

I know that there are well-informed people in the AI and existential risk communities who believe AI capable of posing existential risk may be developed within 10 years. I certainly can’t rule this out, and even a small chance of this is worth working to prevent or mitigate to the extent possible, given the possible consequences. My own timelines are longer, although my intuitions don’t have a rigorous model underpinning them (my intuitions line up similarly to the 15-40 year timelines mentioned in this recent blog post by Matthew Barnett from Epoch).

Right now the nature of media communications means that the message is coming across with a lot of urgency. From speaking to lay colleagues, impressions often seem to be of short timelines (and some folks, e.g. Geoff Hinton, have explicitly said 5-20 years, sometimes with uncertainty caveats and sometimes without).

It may be that those with short (&lt;10 years) timelines are right. And even if they’re not, and we’ve got decades before this technology poses an existential threat, many of the attendant challenges – alignment, governance, distribution of benefits – will need that additional time to be addressed. And I think it’s entirely plausible that the current level of buy-in will be needed in order to initiate the steps needed to avoid the worst outcomes, e.g. recruiting expertise and resources to alignment, development and commitment to robust regulation, even coming to agreements not to pursue certain technological developments beyond a certain point. However, if short timelines do not transpire, I believe there’s a need to consider a scenario I think is reasonably likely.

(2) Crying wolf

I propose that it is most likely we are in a world where timelines are &gt;10 years, perhaps &gt;20 or 30 years. Right now this issue has a lot of the most prominent AI scientists and CEOs signed up, and political leaders worldwide committing to examining the issue seriously (examples from last week). What happens then in the &gt;10-year-timeline world? The extinction-level outcomes that the public is hearing, and that these experts are raising and policymakers making costly reputational investments in, don’t transpire. What does happen is all the benefits of near-term AI that have been talked about, plus all the near-term harms that are being predominantly raised by the AI ethics/FAccT communities. Perhaps these harms include somewhat more extreme versions than what is currently talked about, but nowhere near catastrophic. Suddenly the year is 2028, and that whole 2023 furore is starting to look [...]
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://forum.effectivealtruism.org/posts/weJZjku3HiNgQC4ER/a-note-of-caution-about-recent-ai-risk-coverage" rel="noopener noreferrer" target="_blank"&gt;https://forum.effectivealtruism.org/posts/weJZjku3HiNgQC4ER/a-note-of-caution-about-recent-ai-risk-coverage&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;&lt;a href="https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;entry.584848066=https://forum.effectivealtruism.org/posts/weJZjku3HiNgQC4ER/a-note-of-caution-about-recent-ai-risk-coverage" rel="noopener noreferrer" target="_blank"&gt;Share feedback on this narration&lt;/a&gt;.&lt;/p&gt;
      &lt;p&gt;Narrated by &lt;a href="https://type3.audio/" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;
    </description>
      <pubDate>Wed, 07 Jun 2023 17:05:14 +0000</pubDate>
      <guid isPermaLink="false">19ac7510-083e-4c3b-9811-4831acbf05b2</guid>
      <itunes:duration>460</itunes:duration>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/19ac7510-083e-4c3b-9811-4831acbf05b2.mp3?request_source=rss" type="audio/mpeg"/>
    </item>
    <item>
      <title>“EA and Longtermism: not a crux for saving the world” by ClaireZabel</title>
      <description>This is partly based on my experiences working as a Program Officer leading Open Phil’s Longtermist EA Community Growth team, but it’s a hypothesis I have about how some longtermists could have more of an impact by their lights, not an official Open Phil position.

Context: I originally wrote this in July 2022 as a memo for folks attending a retreat I was going to. I find that I refer to it pretty frequently and it seems relevant to ongoing discussions about how much meta effort done by EAs should focus on engaging more EAs vs. other non-EA people. I am publishing it with light-ish editing, and some parts are outdated, though for the most part I hold most of the conclusions more strongly than I did when I originally wrote it.

Tl;dr: I think that recruiting and talent pipeline work done by EAs who currently prioritize x-risk reduction (“we” or “us” in this post, though I know it won’t apply to all readers) should put more emphasis on ideas related to existential risk, the advent of transformative technology, and the ‘most important century’ hypothesis, and less emphasis on effective altruism and longtermism, in the course of their outreach.

A lot of EAs who prioritize existential risk reduction are making increasingly awkward and convoluted rhetorical maneuvers to use “EAs” or “longtermists” as the main label for people we see as aligned with our goals and priorities. I suspect this is suboptimal and, in the long term, infeasible. In particular, I’m concerned that this is a reason we’re failing to attract and effectively welcome some people who could add a lot of value. The strongest counterargument I can think of right now is that I know of relatively few people doing full-time work on existential risk reduction in AI and biosecurity who have been drawn in by just the “existential risk reduction” frame [this seemed more true in 2022 than 2023].
This is in the vein of Neel Nanda’s “Simplify EA Pitches to ‘Holy Shit, X-Risk’” and Scott Alexander’s “Long-termism vs. Existential Risk”, but I want to focus more on the hope of attracting people to do priority work even if their motivations are neither longtermist nor neartermist EA, but instead mostly driven by reasons unrelated to EA.

EA and longtermism: not a crux for doing the most important work

Right now, my priority in my professional life is helping humanity navigate the imminent creation of potentially transformative technologies, to try to make the future better for sentient beings than it would otherwise be. I think that’s likely the most important thing anyone can do these days. And I don’t think EA or longtermism is a crux for this prioritization anymore.

A lot of us (EAs who currently prioritize x-risk reduction) were “EA-first” — we came to these goals first via broader EA principles and traits, like caring deeply about others; liking rigorous research, scope sensitivity, and expected value-based reasoning; and wanting to meet others with similar traits. Next, we were exposed to a cluster of philosophical and empirical arguments about [...]
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://forum.effectivealtruism.org/posts/cP7gkDFxgJqHDGdfJ/ea-and-longtermism-not-a-crux-for-saving-the-world" rel="noopener noreferrer" target="_blank"&gt;https://forum.effectivealtruism.org/posts/cP7gkDFxgJqHDGdfJ/ea-and-longtermism-not-a-crux-for-saving-the-world&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;&lt;a href="https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;entry.584848066=https://forum.effectivealtruism.org/posts/cP7gkDFxgJqHDGdfJ/ea-and-longtermism-not-a-crux-for-saving-the-world" rel="noopener noreferrer" target="_blank"&gt;Share feedback on this narration&lt;/a&gt;.&lt;/p&gt;
      &lt;p&gt;Narrated by &lt;a href="https://type3.audio/" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;
    </description>
      <pubDate>Sun, 04 Jun 2023 13:48:41 +0000</pubDate>
      <guid isPermaLink="false">bba66f80-f42c-4e47-bf9f-60fea28d8a7d</guid>
      <itunes:duration>1121</itunes:duration>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/bba66f80-f42c-4e47-bf9f-60fea28d8a7d.mp3?request_source=rss" type="audio/mpeg"/>
    </item>
    <item>
      <title>“Things I Learned by Spending Five Thousand Hours In Non-EA Charities” by jenn</title>
      <description>From late 2020 to last month, I worked at grassroots-level non-profits in operational roles. Over that time, I’ve seen surprisingly effective deployments of strategies that were counter-intuitive to my EA and rationalist sensibilities.

I spent 6 months being the on-shift operations manager at one of the five largest food banks in Toronto (~50 staff/volunteers), and 2 years doing logistics work at Samaritans (fake name), a long-lived charity that was so multi-armed that it was basically operating as a supplementary social services department for the city it was in (~200 staff and 200 volunteers). Both of these non-profits were well-run, though both dealt with the traditional non-profit double whammy of being underfunded and understaffed.

Neither place was super open to many EA concepts (explicit cost-benefit analyses, the ITN framework, geographic impartiality, the general sense that talent was the constraining factor instead of money, etc). Samaritans in particular is a spectacular non-profit, despite(?) having basically anti-EA philosophies, such as:

Being very localist; Samaritans was established to help residents of the city it was founded in, and is now very specialized in doing that.
Adherence to faith; the philosophy of The Catholic Worker Movement continues to inform the operating choices of Samaritans to this day.
A big streak of techno-pessimism; technology is first and foremost seen as a source of exploitation and alienation, and adopted only with great reluctance when necessary.
Not treating money as fungible. The majority of funding came from grants or donations tied to specific projects or outcomes. (This is a system that the vast majority of nonprofits operate in.)

Once early on, I gently pushed them towards applying to some EA grants for some of their more EA-aligned work, and they were immediately turned off by the general vibes of EA upon visiting some of its websites.
I think the term “borg-like” was used.

In this post, I’ll be largely focusing on Samaritans, as I’ve worked there longer and in a more central role, and it’s also a more interesting case study due to its stronger anti-EA sentiment.

Things I Learned

Long Term Reputation is Priceless
Non-Profits Shouldn’t Be Islands
Slack is Incredibly Powerful
Hospitality is Pretty Important

For each learning, I have a section with sketches for EA integration – I hesitate to call them anything as strong as recommendations, because the point is to give more concrete examples of what it could look like integrated into an EA framework, rather than saying that it’s the correct way forward.

1. Long Term Reputation is Priceless

Institutional trust unlocks a stupid amount of value, and you can’t buy it with money. Lots of resources (amenity rentals; the mayor’s endorsement; business services; pro-bono and monetary donations) are priced/offered based on tail risk. If you can establish that you’re not a risk by having a longstanding, unblemished reputation, costs go way down for you, and opportunities way up. This is the world that Samaritans now operate in.

Samaritans had a much better, easier time at city hall compared to newer organizations, because of a decades-long productive relationship in which we were really helpful with issues surrounding unemployment and homelessness. Permits get [...]
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://forum.effectivealtruism.org/posts/5oTr4ExwpvhjrSgFi/things-i-learned-by-spending-five-thousand-hours-in-non-ea" rel="noopener noreferrer" target="_blank"&gt;https://forum.effectivealtruism.org/posts/5oTr4ExwpvhjrSgFi/things-i-learned-by-spending-five-thousand-hours-in-non-ea&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;&lt;a href="https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;entry.584848066=https://forum.effectivealtruism.org/posts/5oTr4ExwpvhjrSgFi/things-i-learned-by-spending-five-thousand-hours-in-non-ea" rel="noopener noreferrer" target="_blank"&gt;Share feedback on this narration&lt;/a&gt;.&lt;/p&gt;
      &lt;p&gt;Narrated by &lt;a href="https://type3.audio/" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;
    </description>
      <pubDate>Fri, 02 Jun 2023 18:06:24 +0000</pubDate>
      <guid isPermaLink="false">ce0f3ba2-eff9-42ab-b49d-1e44d205c705</guid>
      <itunes:duration>876</itunes:duration>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/ce0f3ba2-eff9-42ab-b49d-1e44d205c705.mp3?request_source=rss" type="audio/mpeg"/>
    </item>
    <item>
      <title>“Obstacles to the Implementation of Indoor Air Quality Improvements” by JesseSmith</title>
      <description>1. Tl;dr

Many reports indicate that indoor air quality (IAQ) interventions are likely to be effective at reducing respiratory disease transmission. However, to date there’s been very little focus on the workforce that will implement these interventions. I suggest that the US Heating, Ventilation and Air Conditioning (HVAC) and building maintenance workforces have already posed a significant obstacle to these interventions, and that broad uptake of IAQ measures will be significantly hindered by them in the future. The impact will vary in predictable ways depending on the nature of the intervention and its implementation. We should favor simple techniques with improved oversight and outsource or crosscheck technically complex work to people outside of the current HVAC workforce. We should also make IAQ conditions and devices as transparent as possible to both experts and building occupants.

To skip my bio and the technical horrors section, proceed to the recommendations in section 4.

2. Who am I? Why do I Think This? How Certain am I?

I began working in construction in 1991. I did a formal carpentry apprenticeship in Victoria BC in the mid-90s and moved to the US in ‘99. Around 2008 I started taking greater interest in HVAC because - despite paying top dollar to local subcontractors - our projects had persistent HVAC problems. Despite protestations that they were following exemplary practices, our projects were plagued with high humidity, loud noise, frequent mechanical failure, and room-to-room temperature differences. This led me to first learn all aspects of system design and controls, and culminated in full system installations. Along the way I obtained a NJ Master HVAC license, performed the thermal work of ~2k light-duty energy retrofits, obtained multiple certifications in HVAC and low-energy design, and became a regional expert in building diagnostics.
Since 2010 I’ve worked as a contractor or consultant to roughly a dozen major HVAC contractors and hundreds of homeowners.

I’m reasonably certain that the baseline competence of the HVAC workforce is insufficient to broadly and reliably deploy IAQ interventions, and that this is a serious obstacle. My comments are specific to the US. I’ve discussed these problems extensively with friends and acquaintances working at a national level and in other parts of the US, and believe them to be common to most of the country. The problems are specific to the light commercial and residential workforce, but not to domains that are closely monitored by mechanical engineering teams (e.g. hospitals). Based on some limited experience, I suspect these problems are also common to Canada, but I’m less certain about their severity.

3. Technical Horrors: Why is This so Difficult?

Within HVAC, many important jobs are currently either not performed or delegated to people who are largely incapable of performing them. Many people convincingly lie about their capacity to perform a job they’re incapable of, report having done things they haven’t, or even make statements at odds with physics. Examples include:

Accurate heat load/loss calculations: These are used to size heating and cooling systems, and in most areas are code mandated for both new and replacement systems. Competent sizing (Manual J [...]
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://forum.effectivealtruism.org/posts/q7dJz9ZaZGTSZL8Jk/obstacles-to-the-implementation-of-indoor-air-quality" rel="noopener noreferrer" target="_blank"&gt;https://forum.effectivealtruism.org/posts/q7dJz9ZaZGTSZL8Jk/obstacles-to-the-implementation-of-indoor-air-quality&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;&lt;a href="https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;entry.584848066=https://forum.effectivealtruism.org/posts/q7dJz9ZaZGTSZL8Jk/obstacles-to-the-implementation-of-indoor-air-quality" rel="noopener noreferrer" target="_blank"&gt;Share feedback on this narration&lt;/a&gt;.&lt;/p&gt;
      &lt;p&gt;Narrated by &lt;a href="https://type3.audio/" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;
    </description>
      <pubDate>Thu, 01 Jun 2023 12:47:29 +0000</pubDate>
      <guid isPermaLink="false">f308872f-2cc1-49a5-8030-8ea5a5e2730a</guid>
      <itunes:duration>846</itunes:duration>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/f308872f-2cc1-49a5-8030-8ea5a5e2730a.mp3?request_source=rss" type="audio/mpeg"/>
    </item>
    <item>
      <title>“Announcing Apollo Research” by mariushobbhahn</title>
      <description>TL;DR

We are a new AI evals research organization called Apollo Research, based in London. We think that strategic AI deception – where a model outwardly seems aligned but is in fact misaligned – is a crucial step in many major catastrophic AI risk scenarios, and that detecting deception in real-world models is the most important and tractable step to addressing this problem.

Our agenda is split into interpretability and behavioral evals:

On the interpretability side, we are currently working on two main research bets toward characterizing neural network cognition. We are also interested in benchmarking interpretability, e.g. testing whether given interpretability tools can meet specific requirements or solve specific challenges.
On the behavioral evals side, we are conceptually breaking down ‘deception’ into measurable components in order to build a detailed evaluation suite using prompt- and finetuning-based tests.

As an evals research org, we intend to use our research insights and tools directly on frontier models by serving as an external auditor of AGI labs, thus reducing the chance that deceptively misaligned AIs are developed and deployed. We also intend to engage with AI governance efforts, e.g. by working with policymakers and providing technical expertise to aid the drafting of auditing regulations.

We have starter funding but estimate a $1.4M funding gap in our first year. We estimate that the maximum amount we could effectively use is $4-6M in addition to current funding levels (reach out if you are interested in donating). We are currently fiscally sponsored by Rethink Priorities. Our starting team consists of 8 researchers and engineers with strong backgrounds in technical alignment research. We are interested in collaborating with both technical and governance researchers. Feel free to reach out at info@apolloresearch.ai.

We intend to hire once our funding gap is closed.
If you’d like to stay informed about opportunities, you can fill out our expression of interest form.

Research Agenda

We believe that AI deception – where a model outwardly seems aligned but is in fact misaligned and conceals this fact from human oversight – is a crucial component of many catastrophic risk scenarios from AI (see here for more). We also think that detecting/measuring deception is causally upstream of many potential solutions. For example, having good detection tools enables higher-quality and safer feedback loops for empirical alignment approaches, enables us to point to concrete failure modes for lawmakers and the wider public, and provides evidence to AGI labs on whether the models they are developing or deploying are deceptively misaligned.

Ultimately, we aim to develop a holistic and far-ranging suite of deception evals that includes behavioral tests, fine-tuning, and interpretability-based approaches. Unfortunately, we think that interpretability is not yet at the stage where it can be used effectively on state-of-the-art models. Therefore, we have split the agenda into an interpretability research arm and a behavioral evals arm. We aim to eventually combine interpretability and behavioral evals into a comprehensive model evaluation suite.

On the interpretability side, we are currently working on a new unsupervised approach and continuing work on an existing approach to attack the problem of superposition. Early experiments have shown promising results, but it [...]
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://forum.effectivealtruism.org/posts/ysC6crBKhDBGZfob3/announcing-apollo-research" rel="noopener noreferrer" target="_blank"&gt;https://forum.effectivealtruism.org/posts/ysC6crBKhDBGZfob3/announcing-apollo-research&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;&lt;a href="https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;entry.584848066=https://forum.effectivealtruism.org/posts/ysC6crBKhDBGZfob3/announcing-apollo-research" rel="noopener noreferrer" target="_blank"&gt;Share feedback on this narration&lt;/a&gt;.&lt;/p&gt;
      &lt;p&gt;Narrated by &lt;a href="https://type3.audio/" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;
    </description>
      <pubDate>Wed, 31 May 2023 18:53:20 +0000</pubDate>
      <guid isPermaLink="false">11b36c21-f592-41e5-912c-2aba996d7d2e</guid>
      <itunes:duration>974</itunes:duration>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/11b36c21-f592-41e5-912c-2aba996d7d2e.mp3?request_source=rss" type="audio/mpeg"/>
    </item>
    <item>
      <title>“Statement on AI Extinction - Signed by AGI Labs, Top Academics, and Many Other Notable Figures” by Center for AI Safety</title>
      <description>Today the Center for AI Safety released the AI Extinction Statement: a one-sentence statement jointly signed by a historic coalition of AI experts, professors, and tech leaders.

Geoffrey Hinton and Yoshua Bengio have signed, as have the CEOs of the major AGI labs – Sam Altman, Demis Hassabis, and Dario Amodei – as well as executives from Microsoft and Google (but notably not Meta).

The statement reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

We hope this statement will bring AI x-risk further into the Overton window and open up discussion around AI’s most severe risks. Given the growing number of experts and public figures who take risks from advanced AI seriously, we hope to improve epistemics by encouraging discussion and focusing public and international attention toward this issue.
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://forum.effectivealtruism.org/posts/Yk4D4DZpx6eriMDyY/statement-on-ai-extinction-signed-by-agi-labs-top-academics" rel="noopener noreferrer" target="_blank"&gt;https://forum.effectivealtruism.org/posts/Yk4D4DZpx6eriMDyY/statement-on-ai-extinction-signed-by-agi-labs-top-academics&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;&lt;a href="https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;entry.584848066=https://forum.effectivealtruism.org/posts/Yk4D4DZpx6eriMDyY/statement-on-ai-extinction-signed-by-agi-labs-top-academics" rel="noopener noreferrer" target="_blank"&gt;Share feedback on this narration&lt;/a&gt;.&lt;/p&gt;
      &lt;p&gt;Narrated by &lt;a href="https://type3.audio/" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;
    </description>
      <pubDate>Tue, 30 May 2023 15:05:08 +0000</pubDate>
      <guid>e6aa1687-2f6f-4303-80de-0164d6c141b3</guid>
      <itunes:duration>81</itunes:duration>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/e6aa1687-2f6f-4303-80de-0164d6c141b3.mp3?request_source=rss" type="audio/mpeg"/>
    </item>
    <item>
      <title>“KFC Supplier Sued for Cruelty” by alene</title>
      <description>Dear EA Forum readers,&lt;br&gt;
The EA charity, Legal Impact for Chickens (LIC), just filed our second lawsuit!
&lt;br&gt;&lt;br&gt;
As many of you know, LIC is a litigation nonprofit dedicated to making factory-farm cruelty a liability. We focus on chickens because of the huge numbers in which they suffer and the extreme severity of that suffering.
&lt;br&gt;&lt;br&gt;
Today, we sued one of the country’s largest poultry producers and a KFC supplier, Case Farms, for animal cruelty. The complaint comes on the heels of a 2021 undercover investigation by Animal Outlook, revealing abuse at a Morganton, N.C., Case Farms hatchery that processes more than 200,000 chicks daily. Our lawsuit attacks the notion that Big Ag is above the law. We are suing under North Carolina's 19A statute, which lets private parties enjoin animal cruelty.
&lt;br&gt;&lt;br&gt;
Case Farms was documented knowingly operating faulty equipment, including a machine piston that repeatedly smashes chicks to death and a dangerous metal conveyor belt that traps and kills young birds. Case Farms was also documented crushing chicks’ necks between heavy plastic trays. Case Farms supplies its chicken to KFC, Taco Bell, and Boar’s Head, among other customers.
&lt;br&gt;&lt;br&gt;
Thank you so much to all the EA Forum readers who helped make this happen, by donating to, and volunteering for, Legal Impact for Chickens!
&lt;br&gt;&lt;br&gt;
Sincerely,
Alene
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://forum.effectivealtruism.org/posts/8Z2uFCkrg2dCnadA4/kfc-supplier-sued-for-cruelty" rel="noopener noreferrer" target="_blank"&gt;https://forum.effectivealtruism.org/posts/8Z2uFCkrg2dCnadA4/kfc-supplier-sued-for-cruelty&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;&lt;a href="https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;entry.584848066=https://forum.effectivealtruism.org/posts/8Z2uFCkrg2dCnadA4/kfc-supplier-sued-for-cruelty" rel="noopener noreferrer" target="_blank"&gt;Share feedback on this narration&lt;/a&gt;.&lt;/p&gt;
      &lt;p&gt;Narrated by &lt;a href="https://type3.audio/" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;
    </description>
      <pubDate>Mon, 29 May 2023 01:02:10 +0000</pubDate>
      <guid>60b3c245-e037-4fcb-9bb8-93d7a603684e</guid>
      <itunes:duration>107</itunes:duration>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/60b3c245-e037-4fcb-9bb8-93d7a603684e.mp3?request_source=rss" type="audio/mpeg"/>
    </item>
    <item>
      <itunes:title>"Tips for people considering starting new incubators" by Joey</itunes:title>
      <title>"Tips for people considering starting new incubators" by Joey</title>
      <description>&lt;p&gt;Charity Entrepreneurship is frequently contacted by individuals and donors who like our model. Several have expressed interest in seeing the model expanded, or in seeing what a twist on the model would look like (e.g., a different cause area or region). Although we are excited about maximizing CE’s impact, we are less convinced by the idea of growing the effective charity pool via franchising or other independent nonprofit incubators. This is because new incubators often do not address the actual bottlenecks faced by the nonprofit landscape, as we see them. There are lots of factors that prevent great new charities from being launched, and from eventually having a large impact. We have scaled CE to about 10 charities a year, and from our perspective, these are the three major bottlenecks to growing the new charity ecosystem further: mid-stage funding, founders, and multiplying effects.&lt;/p&gt;&lt;p&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Source:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;&lt;a href='https://forum.effectivealtruism.org/posts/ckokr9uhr2Cu3h5En/tips-for-people-considering-starting-new-incubators' rel='noopener noreferrer' target='_blank'&gt;https://forum.effectivealtruism.org/posts/ckokr9uhr2Cu3h5En/tips-for-people-considering-starting-new-incubators&lt;/a&gt;&lt;/p&gt;&lt;p&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/' rel='noopener noreferrer' target='_blank'&gt;&lt;u&gt;Effective Altruism Forum&lt;/u&gt;&lt;/a&gt; by &lt;a href='https://type3.audio/' rel='noopener noreferrer' target='_blank'&gt;&lt;u&gt;TYPE III AUDIO&lt;/u&gt;&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/ckokr9uhr2Cu3h5En/tips-for-people-considering-starting-new-incubators' rel='noopener noreferrer' target='_blank'&gt;Share feedback on this narration&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;Charity Entrepreneurship is frequently contacted by individuals and donors who like our model. Several have expressed interest in seeing the model expanded, or in seeing what a twist on the model would look like (e.g., a different cause area or region). Although we are excited about maximizing CE’s impact, we are less convinced by the idea of growing the effective charity pool via franchising or other independent nonprofit incubators. This is because new incubators often do not address the actual bottlenecks faced by the nonprofit landscape, as we see them. There are lots of factors that prevent great new charities from being launched, and from eventually having a large impact. We have scaled CE to about 10 charities a year, and from our perspective, these are the three major bottlenecks to growing the new charity ecosystem further: mid-stage funding, founders, and multiplying effects.&lt;/p&gt;&lt;p&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Source:&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;&lt;a href='https://forum.effectivealtruism.org/posts/ckokr9uhr2Cu3h5En/tips-for-people-considering-starting-new-incubators' rel='noopener noreferrer' target='_blank'&gt;https://forum.effectivealtruism.org/posts/ckokr9uhr2Cu3h5En/tips-for-people-considering-starting-new-incubators&lt;/a&gt;&lt;/p&gt;&lt;p&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/' rel='noopener noreferrer' target='_blank'&gt;&lt;u&gt;Effective Altruism Forum&lt;/u&gt;&lt;/a&gt; by &lt;a href='https://type3.audio/' rel='noopener noreferrer' target='_blank'&gt;&lt;u&gt;TYPE III AUDIO&lt;/u&gt;&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/ckokr9uhr2Cu3h5En/tips-for-people-considering-starting-new-incubators' rel='noopener noreferrer' target='_blank'&gt;Share feedback on this narration&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author/>
      <enclosure url="https://www.buzzsprout.com/2062493/12924217-tips-for-people-considering-starting-new-incubators-by-joey.mp3" length="10825242" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-12924217</guid>
      <pubDate>Fri, 26 May 2023 07:00:00 +0100</pubDate>
      <itunes:duration>901</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <title>“EAG talks are underrated IMO” by Chi</title>
      <description>Underrated is relative.[1] My position is something like "most people should consider going to &gt;1 EAG talk" and not "most people should spend most of their EAG in talks." This probably most applies to people who are kind of like me. (Been involved for a while, already have a strong network, don't need to do 1-1s for their job.)
&lt;br&gt;&lt;br&gt;
There's a meme that 1-1s are clearly the most valuable part of EAG(x) and that you should not really go to talks. (See e.g. this, this, and this; they don't say exactly this, but I think they push in the direction of the meme.)
&lt;br&gt;&lt;br&gt;
I think EAG talks can be really interesting and are underrated. It's true that most of them are recorded and you could watch them later, but I'm guessing most people don't actually do that.[2] It also takes a while for them to be uploaded. I still think 1-1s are pretty great, especially if you're new and don't know many people yet (or otherwise mostly want to increase the number of people you know), or if you have a very specific thing you're trying to get out of EAG and talking to lots of people seems to be the right way to achieve it.
&lt;br&gt;&lt;br&gt;
I'm mostly writing this post because I think the meme is really strong in some parts of the EA community. I can imagine that some people in the EA community would feel bad for attending talks because it doesn't feel "optimal."[2] If you feel like you need permission, I want to give you permission to go to talks without feeling bad. 
&lt;br&gt;&lt;br&gt;
Another motivation is that I recently attended my first set of EAG talks in years (I was doing lots of 1-1s for my job before) and was really surprised by how great they were. (That said, it was a bit hit or miss.) I previously accidentally assumed that talks and other prepared sessions would give me ~nothing.
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo" rel="noopener noreferrer" target="_blank"&gt;https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;&lt;a href="https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo" rel="noopener noreferrer" target="_blank"&gt;Share feedback on this narration&lt;/a&gt;.&lt;/p&gt;
      &lt;p&gt;Narrated by &lt;a href="https://type3.audio/" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;
    </description>
      <pubDate>Thu, 25 May 2023 07:32:27 +0000</pubDate>
      <guid>dc4c45eb-6715-46b6-965b-3943f3730336</guid>
      <itunes:duration>209</itunes:duration>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/dc4c45eb-6715-46b6-965b-3943f3730336.mp3?request_source=rss" type="audio/mpeg"/>
    </item>
    <item>
      <title>“If you find EA conferences emotionally difficult, you’re not alone” by Amber Dawn</title>
      <description>I went to EAG London this weekend. I had some interesting chats, wrote some cryptic squiggles in my notebook (“Clockify” “the Easterlin paradox”, “functionalist eudaimonic theories”), and gave and received some hopefully-useful advice. Overall, the conference was fun and worthwhile for me. But at times, I also found the conference emotionally difficult.
&lt;br&gt;&lt;br&gt;
I think this is pretty common. After last year’s EAG, Alastair Fraser-Urquhart wrote about how he burnt out at the conference and had to miss a retreat starting the next day. The post was popular, and many said they’d had similar experiences.
&lt;br&gt;&lt;br&gt;
The standard euphemism for this facet of EA conferences is ‘intense’ or ‘tiring’, but I suspect these adjectives are often a more socially-acceptable way of saying ‘I feel low/anxious/exhausted and want to curl up in a foetal position in a darkened room’.
&lt;br&gt;&lt;br&gt;
I want to write this post to: (1) balance out the ‘woo EAG lfg!’ hype, and help people who found it a bad or ambivalent experience to feel less alone; (2) dig into why EAGs can be difficult, which might help attendees have better experiences themselves, and also create an environment where others are more likely to have good experiences; and (3) help people who mostly enjoy EAGs understand what their more neurotic or introverted friends are going through.
        &lt;p&gt;&lt;b&gt;Source:&lt;/b&gt;&lt;br/&gt;
        &lt;a href="https://forum.effectivealtruism.org/posts/mHk9h3RxvuGmTThaS/if-you-find-ea-conferences-emotionally-difficult-you-re-not" rel="noopener noreferrer" target="_blank"&gt;https://forum.effectivealtruism.org/posts/mHk9h3RxvuGmTThaS/if-you-find-ea-conferences-emotionally-difficult-you-re-not&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;&lt;a href="https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;entry.584848066=https://forum.effectivealtruism.org/posts/mHk9h3RxvuGmTThaS/if-you-find-ea-conferences-emotionally-difficult-you-re-not" rel="noopener noreferrer" target="_blank"&gt;Share feedback on this narration&lt;/a&gt;.&lt;/p&gt;
      &lt;p&gt;Narrated by &lt;a href="https://type3.audio/" rel="noopener noreferrer" target="_blank"&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;
    </description>
      <pubDate>Thu, 25 May 2023 04:45:22 +0000</pubDate>
      <guid>b1bf291d-c0da-4fd2-a3eb-9806c7ce5832</guid>
      <itunes:duration>418</itunes:duration>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
      <enclosure url="https://dl.type3.audio/episode/b1bf291d-c0da-4fd2-a3eb-9806c7ce5832.mp3?request_source=rss" type="audio/mpeg"/>
    </item>
    <item>
      <itunes:title>EA Forum Weekly Summaries – Episode 25 (May 1-7, 2023)</itunes:title>
      <title>EA Forum Weekly Summaries – Episode 25 (May 1-7, 2023)</title>
      <description>&lt;p&gt;We&amp;apos;ve just passed the half-year mark for this project! If you&amp;apos;re reading this, &lt;b&gt;please consider taking &lt;/b&gt;&lt;a href='https://forms.gle/F1URnwsfKqPoTDJt7'&gt;&lt;b&gt;this 5 minute survey&lt;/b&gt;&lt;/a&gt; — all questions optional. If you listen to the podcast version, we have a separate survey for that &lt;a href='https://forms.gle/NBhA3tNGZLW7c7RJA'&gt;here&lt;/a&gt;. Thanks to everyone who has responded to this already!&lt;/p&gt;&lt;p&gt;&lt;b&gt;Original text:&lt;br/&gt;&lt;/b&gt;&lt;a href='https://forum.effectivealtruism.org/posts/9QcmyGAjERHRFfrr7/summaries-of-top-forum-posts-1st-to-7th-may-2023'&gt;https://forum.effectivealtruism.org/posts/9QcmyGAjERHRFfrr7/summaries-of-top-forum-posts-1st-to-7th-may-2023&lt;/a&gt;&lt;/p&gt;&lt;p&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;/p&gt;&lt;p&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;).&lt;/p&gt;&lt;p&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/9QcmyGAjERHRFfrr7/summaries-of-top-forum-posts-1st-to-7th-may-2023'&gt;Share feedback on this narration&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;We&amp;apos;ve just passed the half-year mark for this project! If you&amp;apos;re reading this, &lt;b&gt;please consider taking &lt;/b&gt;&lt;a href='https://forms.gle/F1URnwsfKqPoTDJt7'&gt;&lt;b&gt;this 5 minute survey&lt;/b&gt;&lt;/a&gt; — all questions optional. If you listen to the podcast version, we have a separate survey for that &lt;a href='https://forms.gle/NBhA3tNGZLW7c7RJA'&gt;here&lt;/a&gt;. Thanks to everyone who has responded to this already!&lt;/p&gt;&lt;p&gt;&lt;b&gt;Original text:&lt;br/&gt;&lt;/b&gt;&lt;a href='https://forum.effectivealtruism.org/posts/9QcmyGAjERHRFfrr7/summaries-of-top-forum-posts-1st-to-7th-may-2023'&gt;https://forum.effectivealtruism.org/posts/9QcmyGAjERHRFfrr7/summaries-of-top-forum-posts-1st-to-7th-may-2023&lt;/a&gt;&lt;/p&gt;&lt;p&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;/p&gt;&lt;p&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;).&lt;/p&gt;&lt;p&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/9QcmyGAjERHRFfrr7/summaries-of-top-forum-posts-1st-to-7th-may-2023'&gt;Share feedback on this narration&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author/>
      <enclosure url="https://www.buzzsprout.com/2062493/12818986-ea-forum-weekly-summaries-episode-25-may-1-7-2023.mp3" length="20795588" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-12818986</guid>
      <pubDate>Wed, 10 May 2023 02:00:00 +0100</pubDate>
      <itunes:duration>1732</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>"Predictable updating about AI risk" by Joe Carlsmith</itunes:title>
      <title>"Predictable updating about AI risk" by Joe Carlsmith</title>
      <description>&lt;p&gt;How worried about AI risk will we feel in the future, when we can see advanced machine intelligence up close? We should worry accordingly now.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://joecarlsmith.com/2023/05/08/predictable-updating-about-ai-risk'&gt;https://joecarlsmith.com/2023/05/08/predictable-updating-about-ai-risk&lt;/a&gt;&lt;/p&gt;&lt;p&gt;Narrated by &lt;a href='https://joecarlsmith.com/'&gt;Joe Carlsmith&lt;/a&gt; and included on the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;Share feedback on this narration&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;How worried about AI risk will we feel in the future, when we can see advanced machine intelligence up close? We should worry accordingly now.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://joecarlsmith.com/2023/05/08/predictable-updating-about-ai-risk'&gt;https://joecarlsmith.com/2023/05/08/predictable-updating-about-ai-risk&lt;/a&gt;&lt;/p&gt;&lt;p&gt;Narrated by &lt;a href='https://joecarlsmith.com/'&gt;Joe Carlsmith&lt;/a&gt; and included on the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;Share feedback on this narration&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author/>
      <enclosure url="https://www.buzzsprout.com/2062493/12814678-predictable-updating-about-ai-risk-by-joe-carlsmith.mp3" length="45534891" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-12814678</guid>
      <pubDate>Tue, 09 May 2023 15:00:00 +0100</pubDate>
      <itunes:duration>3794</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>EA Forum Weekly Summaries – Episode 24 (April 24-30, 2023)</itunes:title>
      <title>EA Forum Weekly Summaries – Episode 24 (April 24-30, 2023)</title>
      <description>&lt;p&gt;We&amp;apos;ve just passed the half-year mark for this project! If you&amp;apos;re reading this, &lt;b&gt;please consider taking &lt;/b&gt;&lt;a href='https://forms.gle/F1URnwsfKqPoTDJt7'&gt;&lt;b&gt;this 5 minute survey&lt;/b&gt;&lt;/a&gt; — all questions optional. If you listen to the podcast version, we have a separate survey for that &lt;a href='https://forms.gle/NBhA3tNGZLW7c7RJA'&gt;here&lt;/a&gt;. Thanks to everyone who has responded to this already!&lt;/p&gt;&lt;p&gt;&lt;b&gt;Original text:&lt;br/&gt;&lt;/b&gt;&lt;a href='https://forum.effectivealtruism.org/posts/wzn7hEj3BSz7us7ge/summaries-of-top-forum-posts-24th-30th-april-2023'&gt;https://forum.effectivealtruism.org/posts/wzn7hEj3BSz7us7ge/summaries-of-top-forum-posts-24th-30th-april-2023&lt;/a&gt;&lt;/p&gt;&lt;p&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;/p&gt;&lt;p&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;).&lt;/p&gt;&lt;p&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/wzn7hEj3BSz7us7ge/summaries-of-top-forum-posts-24th-30th-april-2023'&gt;Share feedback on this narration&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;We&amp;apos;ve just passed the half-year mark for this project! If you&amp;apos;re reading this, &lt;b&gt;please consider taking &lt;/b&gt;&lt;a href='https://forms.gle/F1URnwsfKqPoTDJt7'&gt;&lt;b&gt;this 5 minute survey&lt;/b&gt;&lt;/a&gt; — all questions optional. If you listen to the podcast version, we have a separate survey for that &lt;a href='https://forms.gle/NBhA3tNGZLW7c7RJA'&gt;here&lt;/a&gt;. Thanks to everyone who has responded to this already!&lt;/p&gt;&lt;p&gt;&lt;b&gt;Original text:&lt;br/&gt;&lt;/b&gt;&lt;a href='https://forum.effectivealtruism.org/posts/wzn7hEj3BSz7us7ge/summaries-of-top-forum-posts-24th-30th-april-2023'&gt;https://forum.effectivealtruism.org/posts/wzn7hEj3BSz7us7ge/summaries-of-top-forum-posts-24th-30th-april-2023&lt;/a&gt;&lt;/p&gt;&lt;p&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;/p&gt;&lt;p&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;).&lt;/p&gt;&lt;p&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/wzn7hEj3BSz7us7ge/summaries-of-top-forum-posts-24th-30th-april-2023'&gt;Share feedback on this narration&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author/>
      <enclosure url="https://www.buzzsprout.com/2062493/12809949-ea-forum-weekly-summaries-episode-24-april-24-30-2023.mp3" length="15962838" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-12809949</guid>
      <pubDate>Mon, 08 May 2023 22:00:00 +0100</pubDate>
      <itunes:duration>1329</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>"AGI safety career advice" by Richard Ngo</itunes:title>
      <title>"AGI safety career advice" by Richard Ngo</title>
      <description>&lt;p&gt;People often ask me for career advice related to AGI safety. This post summarizes the advice I most commonly give. I’ve split it into three sections: general mindset, alignment research, and governance work. For each of the latter two, I start with high-level advice aimed primarily at students and those early in their careers, then dig into more details of the field. See also this post I wrote two years ago, containing a bunch of fairly general career advice.&lt;/p&gt;&lt;p&gt;&lt;b&gt;General mindset&lt;/b&gt;&lt;/p&gt;&lt;p&gt;In order to have a big impact on the world you need to find a big lever. This document assumes that you think, as I do, that AGI safety is the biggest such lever. There are many ways to pull on that lever, though—from research and engineering to operations and field-building to politics and communications. I encourage you to choose between these based primarily on your personal fit—a combination of what you&amp;apos;re really good at and what you really enjoy. In my opinion the difference between being a great versus a mediocre fit swamps other differences in the impactfulness of most pairs of AGI-safety-related jobs.&lt;/p&gt;&lt;p&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;/p&gt;&lt;p&gt;&lt;a href='https://forum.effectivealtruism.org/posts/xg7gxsYaMa6F3uH8h/agi-safety-career-advice'&gt;https://forum.effectivealtruism.org/posts/xg7gxsYaMa6F3uH8h/agi-safety-career-advice&lt;/a&gt;&lt;/p&gt;&lt;p&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/xg7gxsYaMa6F3uH8h/agi-safety-career-advice'&gt;Share feedback on this narration&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;People often ask me for career advice related to AGI safety. This post summarizes the advice I most commonly give. I’ve split it into three sections: general mindset, alignment research, and governance work. For each of the latter two, I start with high-level advice aimed primarily at students and those early in their careers, then dig into more details of the field. See also this post I wrote two years ago, containing a bunch of fairly general career advice.&lt;/p&gt;&lt;p&gt;&lt;b&gt;General mindset&lt;/b&gt;&lt;/p&gt;&lt;p&gt;In order to have a big impact on the world you need to find a big lever. This document assumes that you think, as I do, that AGI safety is the biggest such lever. There are many ways to pull on that lever, though—from research and engineering to operations and field-building to politics and communications. I encourage you to choose between these based primarily on your personal fit—a combination of what you&amp;apos;re really good at and what you really enjoy. In my opinion the difference between being a great versus a mediocre fit swamps other differences in the impactfulness of most pairs of AGI-safety-related jobs.&lt;/p&gt;&lt;p&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;/p&gt;&lt;p&gt;&lt;a href='https://forum.effectivealtruism.org/posts/xg7gxsYaMa6F3uH8h/agi-safety-career-advice'&gt;https://forum.effectivealtruism.org/posts/xg7gxsYaMa6F3uH8h/agi-safety-career-advice&lt;/a&gt;&lt;/p&gt;&lt;p&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/xg7gxsYaMa6F3uH8h/agi-safety-career-advice'&gt;Share feedback on this narration&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author/>
      <enclosure url="https://www.buzzsprout.com/2062493/12782245-agi-safety-career-advice-by-richard-ngo.mp3" length="16406163" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-12782245</guid>
      <pubDate>Thu, 04 May 2023 09:00:00 +0100</pubDate>
      <itunes:duration>1366</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>EA Forum Weekly Summaries – Episode 23 (April 17-23, 2023)</itunes:title>
      <title>EA Forum Weekly Summaries – Episode 23 (April 17-23, 2023)</title>
      <description>&lt;p&gt;&lt;b&gt;Original article:&lt;br/&gt;&lt;/b&gt;&lt;a href='https://forum.effectivealtruism.org/posts/m2Y6HheC2Q2GLQ3oS/summaries-of-top-forum-posts-17th-23rd-april-2023'&gt;https://forum.effectivealtruism.org/posts/m2Y6HheC2Q2GLQ3oS/summaries-of-top-forum-posts-17th-23rd-april-2023&lt;/a&gt;&lt;br/&gt;This podcast has just passed the 6-month mark! &lt;b&gt;Please &lt;/b&gt;&lt;a href='https://forum.effectivealtruism.org/posts/m2Y6HheC2Q2GLQ3oS/summaries-of-top-forum-posts-17th-23rd-april-2023'&gt;&lt;b&gt;give us your feedback and suggestions&lt;/b&gt;&lt;/a&gt; so we can continue to improve — the survey should take no more than 10 minutes, and we really appreciate your input!&lt;/p&gt;&lt;p&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;/p&gt;&lt;p&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;).&lt;/p&gt;&lt;p&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/m2Y6HheC2Q2GLQ3oS/summaries-of-top-forum-posts-17th-23rd-april-2023'&gt;Share feedback on this narration&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Original article:&lt;br/&gt;&lt;/b&gt;&lt;a href='https://forum.effectivealtruism.org/posts/m2Y6HheC2Q2GLQ3oS/summaries-of-top-forum-posts-17th-23rd-april-2023'&gt;https://forum.effectivealtruism.org/posts/m2Y6HheC2Q2GLQ3oS/summaries-of-top-forum-posts-17th-23rd-april-2023&lt;/a&gt;&lt;br/&gt;This podcast has just passed the 6-month mark! &lt;b&gt;Please &lt;/b&gt;&lt;a href='https://forms.gle/vt8hrt4PAT5jyp427'&gt;&lt;b&gt;give us your feedback and suggestions&lt;/b&gt;&lt;/a&gt; so we can continue to improve — the survey should take no more than 10 minutes, and we really appreciate your input!&lt;/p&gt;&lt;p&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;/p&gt;&lt;p&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;).&lt;/p&gt;&lt;p&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author/>
      <enclosure url="https://www.buzzsprout.com/2062493/12768801-ea-forum-weekly-summaries-episode-23-april-17-23-2023.mp3" length="10612543" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-12768801</guid>
      <pubDate>Tue, 02 May 2023 16:00:00 +0100</pubDate>
      <itunes:duration>883</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>"First clean water, now clean air" by Fin Moorhouse</itunes:title>
      <title>"First clean water, now clean air" by Fin Moorhouse</title>
      <description>&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/document/d/e/2PACX-1vQmAmdB2BmzVJ42zhFwpnPz0Cl1GFTSDMi4Qx_RvVNCMfFZ_wvqby4wpIRdB0KK0XiiXSsCMYbKkROP/pub#h.gdlsqjaiveng'&gt;The excellent report from Rethink Priorities&lt;/a&gt; was my main source for this. Many of the substantial points I make are taken from it, though errors are my own. It’s worth reading! The authors are Gavriel Kleinwaks, Alastair Fraser-Urquhart, Jam Kraprayoon, and Josh Morrison.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Clean water&lt;/b&gt;&lt;/p&gt;&lt;p&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;In the mid-19th century, London had a sewage problem. It relied on a patchwork of a few hundred sewers, of brick and wood, and hundreds of thousands of cesspits. The Thames — Londoners’ main source of drinking water — was near-opaque with waste. Here is Michael Faraday in an 1855 letter to The Times:&lt;/p&gt;&lt;p&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;&amp;quot;Near the bridges the feculence rolled up in clouds so dense that they were visible at the surface even in water of this kind […] The smell was very bad, and common to the whole of the water. It was the same as that which now comes up from the gully holes in the streets. The whole river was for the time a real sewer […] If we neglect this subject, we cannot expect to do so with impunity; nor ought we to be surprised if, ere many years are over, a hot season give us sad proof of the folly of our carelessness.&amp;quot;&lt;/p&gt;&lt;p&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;/p&gt;&lt;p&gt;&lt;a href='https://forum.effectivealtruism.org/posts/WLok4YuJ4kfFpDRTi/first-clean-water-now-clean-air'&gt;https://forum.effectivealtruism.org/posts/WLok4YuJ4kfFpDRTi/first-clean-water-now-clean-air&lt;/a&gt;&lt;/p&gt;&lt;p&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/document/d/e/2PACX-1vQmAmdB2BmzVJ42zhFwpnPz0Cl1GFTSDMi4Qx_RvVNCMfFZ_wvqby4wpIRdB0KK0XiiXSsCMYbKkROP/pub#h.gdlsqjaiveng'&gt;The excellent report from Rethink Priorities&lt;/a&gt; was my main source for this. Many of the substantial points I make are taken from it, though errors are my own. It’s worth reading! The authors are Gavriel Kleinwaks, Alastair Fraser-Urquhart, Jam Kraprayoon, and Josh Morrison.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Clean water&lt;/b&gt;&lt;/p&gt;&lt;p&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;In the mid-19th century, London had a sewage problem. It relied on a patchwork of a few hundred sewers, of brick and wood, and hundreds of thousands of cesspits. The Thames — Londoners’ main source of drinking water — was near-opaque with waste. Here is Michael Faraday in an 1855 letter to The Times:&lt;/p&gt;&lt;p&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;&amp;quot;Near the bridges the feculence rolled up in clouds so dense that they were visible at the surface even in water of this kind […] The smell was very bad, and common to the whole of the water. It was the same as that which now comes up from the gully holes in the streets. The whole river was for the time a real sewer […] If we neglect this subject, we cannot expect to do so with impunity; nor ought we to be surprised if, ere many years are over, a hot season give us sad proof of the folly of our carelessness.&amp;quot;&lt;/p&gt;&lt;p&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;/p&gt;&lt;p&gt;&lt;a href='https://forum.effectivealtruism.org/posts/WLok4YuJ4kfFpDRTi/first-clean-water-now-clean-air'&gt;https://forum.effectivealtruism.org/posts/WLok4YuJ4kfFpDRTi/first-clean-water-now-clean-air&lt;/a&gt;&lt;/p&gt;&lt;p&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author/>
      <enclosure url="https://www.buzzsprout.com/2062493/12765102-first-clean-water-now-clean-air-by-fin-moorhouse.mp3" length="24077411" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-12765102</guid>
      <pubDate>Tue, 02 May 2023 04:00:00 +0100</pubDate>
      <itunes:duration>2005</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>"New 80,000 Hours Podcast on high-impact climate philanthropy" by Johannes Ackva</itunes:title>
      <title>"New 80,000 Hours Podcast on high-impact climate philanthropy" by Johannes Ackva</title>
      <description>&lt;p&gt;&lt;/p&gt;&lt;p&gt;This is a linkpost for a &lt;a href='https://80000hours.org/podcast/episodes/johannes-ackva-unfashionable-climate-interventions/'&gt;new 80,000 Hours episode&lt;/a&gt; focused on how to engage in climate from an effective altruist perspective.&lt;/p&gt;&lt;ul&gt;&lt;li&gt;The podcast lives &lt;a href='https://t.co/KsyBDEV6sl'&gt;here&lt;/a&gt;, including a selection of highlights as well as a full transcript and lots of additional links. Thanks to &lt;a href='https://forum.effectivealtruism.org/posts/BhzqvnaZiqfsssquM/new-80-000-hours-feature-listen-to-audio-versions-of-our'&gt;80,000 Hours’ new feature&lt;/a&gt;, rolled out on April 1st, you can even listen to it!&lt;/li&gt;&lt;li&gt;My &lt;a href='https://twitter.com/J_Ackva/status/1643260677107118081'&gt;Twitter thread&lt;/a&gt; is here.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Rob and I have a pretty wide-ranging conversation; here are the things we cover which I find most interesting for different audiences:&lt;/p&gt;&lt;p&gt;&lt;b&gt;Original article:&lt;br/&gt;&lt;/b&gt;&lt;a href='https://forum.effectivealtruism.org/posts/A3ZLLanDZZt9sgGQ9/new-80-000-hours-podcast-on-high-impact-climate-philanthropy'&gt;https://forum.effectivealtruism.org/posts/A3ZLLanDZZt9sgGQ9/new-80-000-hours-podcast-on-high-impact-climate-philanthropy&lt;/a&gt;&lt;/p&gt;&lt;p&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;&lt;/p&gt;&lt;p&gt;This is a linkpost for a &lt;a href='https://80000hours.org/podcast/episodes/johannes-ackva-unfashionable-climate-interventions/'&gt;new 80,000 Hours episode&lt;/a&gt; focused on how to engage in climate from an effective altruist perspective.&lt;/p&gt;&lt;ul&gt;&lt;li&gt;The podcast lives &lt;a href='https://t.co/KsyBDEV6sl'&gt;here&lt;/a&gt;, including a selection of highlights as well as a full transcript and lots of additional links. Thanks to &lt;a href='https://forum.effectivealtruism.org/posts/BhzqvnaZiqfsssquM/new-80-000-hours-feature-listen-to-audio-versions-of-our'&gt;80,000 Hours’ new feature&lt;/a&gt;, rolled out on April 1st, you can even listen to it!&lt;/li&gt;&lt;li&gt;My &lt;a href='https://twitter.com/J_Ackva/status/1643260677107118081'&gt;Twitter thread&lt;/a&gt; is here.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Rob and I have a pretty wide-ranging conversation; here are the things we cover which I find most interesting for different audiences:&lt;/p&gt;&lt;p&gt;&lt;b&gt;Original article:&lt;br/&gt;&lt;/b&gt;&lt;a href='https://forum.effectivealtruism.org/posts/A3ZLLanDZZt9sgGQ9/new-80-000-hours-podcast-on-high-impact-climate-philanthropy'&gt;https://forum.effectivealtruism.org/posts/A3ZLLanDZZt9sgGQ9/new-80-000-hours-podcast-on-high-impact-climate-philanthropy&lt;/a&gt;&lt;/p&gt;&lt;p&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author/>
      <enclosure url="https://www.buzzsprout.com/2062493/12733406-new-80-000-hours-podcast-on-high-impact-climate-philanthropy-by-johannes-ackva.mp3" length="2475862" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-12733406</guid>
      <pubDate>Thu, 27 Apr 2023 06:00:00 +0100</pubDate>
      <itunes:duration>205</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>EA Forum Weekly Summaries – Episode 22 (Mar. 27 - Apr. 16, 2023)</itunes:title>
      <title>EA Forum Weekly Summaries – Episode 22 (Mar. 27 - Apr. 16, 2023)</title>
      <description>&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Original article:&lt;br/&gt;&lt;/b&gt;&lt;a href='https://forum.effectivealtruism.org/posts/o3Gaoizs2So6SpgLH/summaries-of-top-forum-posts-27th-march-to-16th-april'&gt;https://forum.effectivealtruism.org/posts/o3Gaoizs2So6SpgLH/summaries-of-top-forum-posts-27th-march-to-16th-april&lt;/a&gt;&lt;/p&gt;&lt;p&gt;This podcast has just passed the 6-month mark! &lt;b&gt;Please &lt;/b&gt;&lt;a href='https://forms.gle/vt8hrt4PAT5jyp427'&gt;&lt;b&gt;give us your feedback and suggestions&lt;/b&gt;&lt;/a&gt; so we can continue to improve — the survey should take no more than 10 minutes, and we really appreciate your input!&lt;/p&gt;&lt;p&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;/p&gt;&lt;p&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;).&lt;/p&gt;&lt;p&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Original article:&lt;br/&gt;&lt;/b&gt;&lt;a href='https://forum.effectivealtruism.org/posts/o3Gaoizs2So6SpgLH/summaries-of-top-forum-posts-27th-march-to-16th-april'&gt;https://forum.effectivealtruism.org/posts/o3Gaoizs2So6SpgLH/summaries-of-top-forum-posts-27th-march-to-16th-april&lt;/a&gt;&lt;/p&gt;&lt;p&gt;This podcast has just passed the 6-month mark! &lt;b&gt;Please &lt;/b&gt;&lt;a href='https://forms.gle/vt8hrt4PAT5jyp427'&gt;&lt;b&gt;give us your feedback and suggestions&lt;/b&gt;&lt;/a&gt; so we can continue to improve — the survey should take no more than 10 minutes, and we really appreciate your input!&lt;/p&gt;&lt;p&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;/p&gt;&lt;p&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;).&lt;/p&gt;&lt;p&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author/>
      <enclosure url="https://www.buzzsprout.com/2062493/12702817-ea-forum-weekly-summaries-episode-22-mar-27-apr-16-2023.mp3" length="15274471" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-12702817</guid>
      <pubDate>Sat, 22 Apr 2023 21:00:00 +0100</pubDate>
      <itunes:duration>1272</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>"A freshman year during the AI midgame: my approach to the next year" by Buck</itunes:title>
      <title>"A freshman year during the AI midgame: my approach to the next year" by Buck</title>
      <description>&lt;p&gt;&lt;/p&gt;&lt;p&gt;I recently spent some time reflecting on my career and my life, for a few reasons:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;It was my 29th birthday, an occasion which felt like a particularly natural time to think through what I wanted to accomplish over the course of the next year 🙂.&lt;/li&gt;&lt;li&gt;It seems like AI progress is heating up.&lt;/li&gt;&lt;li&gt;It felt like a good time to reflect on how Redwood has been going, because we’ve been having conversations with funders about getting more funding.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;I wanted to have a better answer to these questions:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;What’s the default trajectory that I should plan for my career to follow? And what does this imply for what I should be doing right now?&lt;/li&gt;&lt;li&gt;How much urgency should I feel in my life?&lt;/li&gt;&lt;li&gt;How hard should I work?&lt;/li&gt;&lt;li&gt;How much should I be trying to do the most valuable-seeming thing, vs engaging in more playful exploration and learning?&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;b&gt;Original article:&lt;br/&gt;&lt;/b&gt;&lt;a href='https://forum.effectivealtruism.org/posts/2DzLY6YP2z5zRDAGA/a-freshman-year-during-the-ai-midgame-my-approach-to-the'&gt;https://forum.effectivealtruism.org/posts/2DzLY6YP2z5zRDAGA/a-freshman-year-during-the-ai-midgame-my-approach-to-the&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;&lt;/p&gt;&lt;p&gt;I recently spent some time reflecting on my career and my life, for a few reasons:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;It was my 29th birthday, an occasion which felt like a particularly natural time to think through what I wanted to accomplish over the course of the next year 🙂.&lt;/li&gt;&lt;li&gt;It seems like AI progress is heating up.&lt;/li&gt;&lt;li&gt;It felt like a good time to reflect on how Redwood has been going, because we’ve been having conversations with funders about getting more funding.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;I wanted to have a better answer to these questions:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;What’s the default trajectory that I should plan for my career to follow? And what does this imply for what I should be doing right now?&lt;/li&gt;&lt;li&gt;How much urgency should I feel in my life?&lt;/li&gt;&lt;li&gt;How hard should I work?&lt;/li&gt;&lt;li&gt;How much should I be trying to do the most valuable-seeming thing, vs engaging in more playful exploration and learning?&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;b&gt;Original article:&lt;br/&gt;&lt;/b&gt;&lt;a href='https://forum.effectivealtruism.org/posts/2DzLY6YP2z5zRDAGA/a-freshman-year-during-the-ai-midgame-my-approach-to-the'&gt;https://forum.effectivealtruism.org/posts/2DzLY6YP2z5zRDAGA/a-freshman-year-during-the-ai-midgame-my-approach-to-the&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author/>
      <enclosure url="https://www.buzzsprout.com/2062493/12680418-a-freshman-year-during-the-ai-midgame-my-approach-to-the-next-year-by-buck.mp3" length="7779664" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-12680418</guid>
      <pubDate>Wed, 19 Apr 2023 06:00:00 +0100</pubDate>
      <itunes:duration>647</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>"Want to win the AGI race? Solve alignment." by Leopold Aschenbrenner</itunes:title>
      <title>"Want to win the AGI race? Solve alignment." by Leopold Aschenbrenner</title>
      <description>&lt;p&gt;This is a linkpost for &lt;a href='https://forum.effectivealtruism.org/out?url=https%3A%2F%2Fwww.forourposterity.com%2Fwant-to-win-the-agi-race-solve-alignment%2F'&gt;https://www.forourposterity.com/want-to-win-the-agi-race-solve-alignment/&lt;/a&gt;&lt;/p&gt;&lt;p&gt;&lt;em&gt;Society really cares about safety. Practically speaking, the binding constraint on deploying your AGI could well be your ability to align your AGI. Solving (scalable) alignment might be worth lots of $$$ and key to beating China.&lt;/em&gt;&lt;/p&gt;&lt;p&gt;Look, I really don&amp;apos;t want Xi Jinping Thought to rule the world. If China gets AGI first, the ensuing rapid AI-powered scientific and technological progress could well give it a decisive advantage (cf. the &lt;a href='https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/'&gt;potential for &amp;gt;30%/year economic growth with AGI&lt;/a&gt;). I think there&amp;apos;s a very real specter of global authoritarianism here.&lt;/p&gt;&lt;p&gt;Or hey, maybe you just think AGI is cool. You want to go build amazing products and enable breakthrough science and solve the world’s problems.&lt;/p&gt;&lt;p&gt;So, race to AGI with reckless abandon then? At this point, people get into agonizing discussions about safety tradeoffs. And many people just mood affiliate their way to an answer: &amp;quot;accelerate, progress go brrrr,&amp;quot; or &amp;quot;AI scary, slow it down.&amp;quot;&lt;/p&gt;&lt;p&gt;I see this much more practically. And, practically, society cares about safety, a lot. Do you actually think that you’ll be able to and allowed to deploy an AI system that has, say, a &lt;a href='https://www.cold-takes.com/how-we-could-stumble-into-ai-catastrophe/'&gt;10% chance of&lt;/a&gt; &lt;a href='https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/'&gt;destroying all of humanity&lt;/a&gt;?&lt;br/&gt;&lt;br/&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/Ackzs8Wbk7isDzs2n/want-to-win-the-agi-race-solve-alignment'&gt;https://forum.effectivealtruism.org/posts/Ackzs8Wbk7isDzs2n/want-to-win-the-agi-race-solve-alignment&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;This is a linkpost for &lt;a href='https://forum.effectivealtruism.org/out?url=https%3A%2F%2Fwww.forourposterity.com%2Fwant-to-win-the-agi-race-solve-alignment%2F'&gt;https://www.forourposterity.com/want-to-win-the-agi-race-solve-alignment/&lt;/a&gt;&lt;/p&gt;&lt;p&gt;&lt;em&gt;Society really cares about safety. Practically speaking, the binding constraint on deploying your AGI could well be your ability to align your AGI. Solving (scalable) alignment might be worth lots of $$$ and key to beating China.&lt;/em&gt;&lt;/p&gt;&lt;p&gt;Look, I really don&amp;apos;t want Xi Jinping Thought to rule the world. If China gets AGI first, the ensuing rapid AI-powered scientific and technological progress could well give it a decisive advantage (cf. the &lt;a href='https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/'&gt;potential for &amp;gt;30%/year economic growth with AGI&lt;/a&gt;). I think there&amp;apos;s a very real specter of global authoritarianism here.&lt;/p&gt;&lt;p&gt;Or hey, maybe you just think AGI is cool. You want to go build amazing products and enable breakthrough science and solve the world’s problems.&lt;/p&gt;&lt;p&gt;So, race to AGI with reckless abandon then? At this point, people get into agonizing discussions about safety tradeoffs. And many people just mood affiliate their way to an answer: &amp;quot;accelerate, progress go brrrr,&amp;quot; or &amp;quot;AI scary, slow it down.&amp;quot;&lt;/p&gt;&lt;p&gt;I see this much more practically. And, practically, society cares about safety, a lot. Do you actually think that you’ll be able to and allowed to deploy an AI system that has, say, a &lt;a href='https://www.cold-takes.com/how-we-could-stumble-into-ai-catastrophe/'&gt;10% chance of&lt;/a&gt; &lt;a href='https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/'&gt;destroying all of humanity&lt;/a&gt;?&lt;br/&gt;&lt;br/&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/Ackzs8Wbk7isDzs2n/want-to-win-the-agi-race-solve-alignment'&gt;https://forum.effectivealtruism.org/posts/Ackzs8Wbk7isDzs2n/want-to-win-the-agi-race-solve-alignment&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author>TYPE III AUDIO</itunes:author>
      <enclosure url="https://www.buzzsprout.com/2062493/12648435-want-to-win-the-agi-race-solve-alignment-by-leopold-aschenbrenner.mp3" length="6748230" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-12648435</guid>
      <pubDate>Fri, 14 Apr 2023 06:00:00 +0100</pubDate>
      <itunes:duration>561</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>"Nobody’s on the ball on AGI alignment" by Leopold Aschenbrenner</itunes:title>
      <title>"Nobody’s on the ball on AGI alignment" by Leopold Aschenbrenner</title>
      <description>&lt;p&gt;&lt;em&gt;Far fewer people are working on it than you might think, and even the alignment research that is happening is very much not on track. (But it’s a solvable problem, if we get our act together.)&lt;br/&gt;&lt;br/&gt;&lt;/em&gt;&lt;b&gt;Original article:&lt;br/&gt;&lt;/b&gt;This is a linkpost for &lt;a href='https://forum.effectivealtruism.org/out?url=https%3A%2F%2Fwww.forourposterity.com%2Fnobodys-on-the-ball-on-agi-alignment%2F'&gt;https://www.forourposterity.com/nobodys-on-the-ball-on-agi-alignment/&lt;/a&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/5LNxeWFdoynvgZeik/nobody-s-on-the-ball-on-agi-alignment'&gt;https://forum.effectivealtruism.org/posts/5LNxeWFdoynvgZeik/nobody-s-on-the-ball-on-agi-alignment&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;&lt;em&gt;Far fewer people are working on it than you might think, and even the alignment research that is happening is very much not on track. (But it’s a solvable problem, if we get our act together.)&lt;br/&gt;&lt;br/&gt;&lt;/em&gt;&lt;b&gt;Original article:&lt;br/&gt;&lt;/b&gt;This is a linkpost for &lt;a href='https://forum.effectivealtruism.org/out?url=https%3A%2F%2Fwww.forourposterity.com%2Fnobodys-on-the-ball-on-agi-alignment%2F'&gt;https://www.forourposterity.com/nobodys-on-the-ball-on-agi-alignment/&lt;/a&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/5LNxeWFdoynvgZeik/nobody-s-on-the-ball-on-agi-alignment'&gt;https://forum.effectivealtruism.org/posts/5LNxeWFdoynvgZeik/nobody-s-on-the-ball-on-agi-alignment&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author/>
      <enclosure url="https://www.buzzsprout.com/2062493/12591818-nobody-s-on-the-ball-on-agi-alignment-by-leopold-aschenbrenner.mp3" length="12396098" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-12591818</guid>
      <pubDate>Wed, 05 Apr 2023 06:00:00 +0100</pubDate>
      <podcast:chapters url="https://feeds.buzzsprout.com/2062493/12591818/chapters.json" type="application/json"/>
      <itunes:duration>1032</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>"Seeing more whole" by Joe Carlsmith</itunes:title>
      <title>"Seeing more whole" by Joe Carlsmith</title>
      <description>&lt;p&gt;&lt;/p&gt;&lt;p&gt;In my &lt;a href='https://joecarlsmith.com/2023/02/16/why-should-ethical-anti-realists-do-ethics'&gt;last essay&lt;/a&gt;, I looked at two stories (brute preference for systematic-ness, and money-pumps) about why ethical anti-realists should still be interested in ethics – two stories about why the “philosophy game” is worth playing, even if there are no objective normative truths, and you’re free to do whatever you want. I think some versions of these stories might well have a role to play; but I find that on their own, they don’t fully capture what feels alive to me about ethics. Here I try to say something that gets closer.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;/p&gt;&lt;p&gt;&lt;a href='https://joecarlsmith.com/2023/02/17/seeing-more-whole'&gt;https://joecarlsmith.com/2023/02/17/seeing-more-whole&lt;/a&gt;&lt;/p&gt;&lt;p&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;Narrated by &lt;a href='https://joecarlsmith.com/'&gt;Joe Carlsmith&lt;/a&gt; and included on the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;&lt;/p&gt;&lt;p&gt;In my &lt;a href='https://joecarlsmith.com/2023/02/16/why-should-ethical-anti-realists-do-ethics'&gt;last essay&lt;/a&gt;, I looked at two stories (brute preference for systematic-ness, and money-pumps) about why ethical anti-realists should still be interested in ethics – two stories about why the “philosophy game” is worth playing, even if there are no objective normative truths, and you’re free to do whatever you want. I think some versions of these stories might well have a role to play; but I find that on their own, they don’t fully capture what feels alive to me about ethics. Here I try to say something that gets closer.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;/p&gt;&lt;p&gt;&lt;a href='https://joecarlsmith.com/2023/02/17/seeing-more-whole'&gt;https://joecarlsmith.com/2023/02/17/seeing-more-whole&lt;/a&gt;&lt;/p&gt;&lt;p&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;Narrated by &lt;a href='https://joecarlsmith.com/'&gt;Joe Carlsmith&lt;/a&gt; and included on the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author/>
      <enclosure url="https://www.buzzsprout.com/2062493/12550248-seeing-more-whole-by-joe-carlsmith.mp3" length="37767713" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-12550248</guid>
      <pubDate>Thu, 30 Mar 2023 18:00:00 +0100</pubDate>
      <itunes:duration>3146</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>EA Forum Weekly Summaries – Episode 21 (March 13-19, 2023)</itunes:title>
      <title>EA Forum Weekly Summaries – Episode 21 (March 13-19, 2023)</title>
      <description>&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Original article:&lt;br/&gt;&lt;/b&gt;&lt;a href='https://forum.effectivealtruism.org/posts/idpbfmPjHFCvzj46L/ea-and-lw-forum-weekly-summary-13th-19th-march-2023'&gt;https://forum.effectivealtruism.org/posts/idpbfmPjHFCvzj46L/ea-and-lw-forum-weekly-summary-13th-19th-march-2023&lt;/a&gt;&lt;/p&gt;&lt;p&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;/p&gt;&lt;p&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;).&lt;/p&gt;&lt;p&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Original article:&lt;br/&gt;&lt;/b&gt;&lt;a href='https://forum.effectivealtruism.org/posts/idpbfmPjHFCvzj46L/ea-and-lw-forum-weekly-summary-13th-19th-march-2023'&gt;https://forum.effectivealtruism.org/posts/idpbfmPjHFCvzj46L/ea-and-lw-forum-weekly-summary-13th-19th-march-2023&lt;/a&gt;&lt;/p&gt;&lt;p&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;/p&gt;&lt;p&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;).&lt;/p&gt;&lt;p&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author/>
      <enclosure url="https://www.buzzsprout.com/2062493/12545757-ea-forum-weekly-summaries-episode-21-march-13-19-2023.mp3" length="11769872" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-12545757</guid>
      <pubDate>Thu, 30 Mar 2023 02:00:00 +0100</pubDate>
      <itunes:duration>980</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>"How much should governments pay to prevent catastrophes? Longtermism’s limited role" by Elliott Thornley and Carl Shulman</itunes:title>
      <title>"How much should governments pay to prevent catastrophes? Longtermism’s limited role" by Elliott Thornley and Carl Shulman</title>
      <description>&lt;p&gt;Longtermists have argued that humanity should significantly increase its efforts to prevent catastrophes like nuclear wars, pandemics, and AI disasters. But one prominent longtermist argument overshoots this conclusion: the argument also implies that humanity should reduce the risk of existential catastrophe even at extreme cost to the present generation. This overshoot means that democratic governments cannot use the longtermist argument to guide their catastrophe policy. In this paper, we show that the case for preventing catastrophe does not depend on longtermism. Standard cost-benefit analysis implies that governments should spend much more on reducing catastrophic risk. We argue that a government catastrophe policy guided by cost-benefit analysis should be the goal of longtermists in the political sphere. This policy would be democratically acceptable, and it would reduce existential risk by almost as much as a strong longtermist policy.&lt;br/&gt;&lt;br/&gt;&lt;strong&gt;Original article:&lt;/strong&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/DiGL5FuLgWActPBsf/how-much-should-governments-pay-to-prevent-catastrophes'&gt;https://forum.effectivealtruism.org/posts/DiGL5FuLgWActPBsf/how-much-should-governments-pay-to-prevent-catastrophes&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;br/&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;Longtermists have argued that humanity should significantly increase its efforts to prevent catastrophes like nuclear wars, pandemics, and AI disasters. But one prominent longtermist argument overshoots this conclusion: the argument also implies that humanity should reduce the risk of existential catastrophe even at extreme cost to the present generation. This overshoot means that democratic governments cannot use the longtermist argument to guide their catastrophe policy. In this paper, we show that the case for preventing catastrophe does not depend on longtermism. Standard cost-benefit analysis implies that governments should spend much more on reducing catastrophic risk. We argue that a government catastrophe policy guided by cost-benefit analysis should be the goal of longtermists in the political sphere. This policy would be democratically acceptable, and it would reduce existential risk by almost as much as a strong longtermist policy.&lt;br/&gt;&lt;br/&gt;&lt;strong&gt;Original article:&lt;/strong&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/DiGL5FuLgWActPBsf/how-much-should-governments-pay-to-prevent-catastrophes'&gt;https://forum.effectivealtruism.org/posts/DiGL5FuLgWActPBsf/how-much-should-governments-pay-to-prevent-catastrophes&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;br/&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author>TYPE III AUDIO</itunes:author>
      <enclosure url="https://www.buzzsprout.com/2062493/12495649-how-much-should-governments-pay-to-prevent-catastrophes-longtermism-s-limited-role-by-elliott-thornley-and-carl-shulman.mp3" length="55228993" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-12495649</guid>
      <pubDate>Wed, 22 Mar 2023 19:00:00 +0000</pubDate>
      <podcast:chapters url="https://feeds.buzzsprout.com/2062493/12495649/chapters.json" type="application/json"/>
      <itunes:duration>4601</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>"Salt, Sugar, Water, Zinc: How Scientists Learned to Treat the 20th Century’s Biggest Killer of Children" by Matt Reynolds</itunes:title>
      <title>"Salt, Sugar, Water, Zinc: How Scientists Learned to Treat the 20th Century’s Biggest Killer of Children" by Matt Reynolds</title>
      <description>&lt;p&gt;&lt;/p&gt;&lt;p&gt;Oral rehydration therapy is now the standard treatment for dehydration. It’s saved millions of lives, and can be prepared at home in minutes. So why did it take so long to discover?&lt;br/&gt;&lt;br/&gt;Written by Matt Reynolds for Asterisk Magazine.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Original article:&lt;br/&gt;&lt;/b&gt;&lt;a href='https://asteriskmag.com/issues/2/salt-sugar-water-zinc-how-scientists-learned-to-treat-the-20th-century-s-biggest-killer-of-children'&gt;https://asteriskmag.com/issues/2/salt-sugar-water-zinc-how-scientists-learned-to-treat-the-20th-century-s-biggest-killer-of-children&lt;/a&gt;&lt;/p&gt;&lt;p&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;&lt;/p&gt;&lt;p&gt;Oral rehydration therapy is now the standard treatment for dehydration. It’s saved millions of lives, and can be prepared at home in minutes. So why did it take so long to discover?&lt;br/&gt;&lt;br/&gt;Written by Matt Reynolds for Asterisk Magazine.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Original article:&lt;br/&gt;&lt;/b&gt;&lt;a href='https://asteriskmag.com/issues/2/salt-sugar-water-zinc-how-scientists-learned-to-treat-the-20th-century-s-biggest-killer-of-children'&gt;https://asteriskmag.com/issues/2/salt-sugar-water-zinc-how-scientists-learned-to-treat-the-20th-century-s-biggest-killer-of-children&lt;/a&gt;&lt;/p&gt;&lt;p&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author/>
      <enclosure url="https://www.buzzsprout.com/2062493/12489855-salt-sugar-water-zinc-how-scientists-learned-to-treat-the-20th-century-s-biggest-killer-of-children-by-matt-reynolds.mp3" length="24831351" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-12489855</guid>
      <pubDate>Tue, 21 Mar 2023 23:00:00 +0000</pubDate>
      <podcast:chapters url="https://feeds.buzzsprout.com/2062493/12489855/chapters.json" type="application/json"/>
      <itunes:duration>2068</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>EA Forum Weekly Summaries – Episode 20 (March 6-12, 2023)</itunes:title>
      <title>EA Forum Weekly Summaries – Episode 20 (March 6-12, 2023)</title>
      <description>&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Original article:&lt;br/&gt;&lt;/b&gt;&lt;a href='https://forum.effectivealtruism.org/posts/fWGdsWbS6vtC9E7ii/ea-and-lw-forum-weekly-summary-6th-12th-march-2023'&gt;https://forum.effectivealtruism.org/posts/fWGdsWbS6vtC9E7ii/ea-and-lw-forum-weekly-summary-6th-12th-march-2023&lt;/a&gt;&lt;/p&gt;&lt;p&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;/p&gt;&lt;p&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;).&lt;/p&gt;&lt;p&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Original article:&lt;br/&gt;&lt;/b&gt;&lt;a href='https://forum.effectivealtruism.org/posts/fWGdsWbS6vtC9E7ii/ea-and-lw-forum-weekly-summary-6th-12th-march-2023'&gt;https://forum.effectivealtruism.org/posts/fWGdsWbS6vtC9E7ii/ea-and-lw-forum-weekly-summary-6th-12th-march-2023&lt;/a&gt;&lt;/p&gt;&lt;p&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;/p&gt;&lt;p&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;).&lt;/p&gt;&lt;p&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author/>
      <enclosure url="https://www.buzzsprout.com/2062493/12441673-ea-forum-weekly-summaries-episode-20-march-6-12-2023.mp3" length="16077253" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-12441673</guid>
      <pubDate>Tue, 14 Mar 2023 21:00:00 +0000</pubDate>
      <itunes:duration>1339</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>EA Forum Weekly Summaries – Episode 19 (Feb. 27 - Mar. 5, 2023)</itunes:title>
      <title>EA Forum Weekly Summaries – Episode 19 (Feb. 27 - Mar. 5, 2023)</title>
      <description>&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Original article:&lt;br/&gt;&lt;/b&gt;&lt;a href='https://forum.effectivealtruism.org/posts/yCxsz9jk5iau2uvYH/ea-and-lw-forum-weekly-summary-27th-feb-5th-mar-2023'&gt;https://forum.effectivealtruism.org/posts/yCxsz9jk5iau2uvYH/ea-and-lw-forum-weekly-summary-27th-feb-5th-mar-2023&lt;/a&gt;&lt;/p&gt;&lt;p&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;/p&gt;&lt;p&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;).&lt;/p&gt;&lt;p&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Original article:&lt;br/&gt;&lt;/b&gt;&lt;a href='https://forum.effectivealtruism.org/posts/yCxsz9jk5iau2uvYH/ea-and-lw-forum-weekly-summary-27th-feb-5th-mar-2023'&gt;https://forum.effectivealtruism.org/posts/yCxsz9jk5iau2uvYH/ea-and-lw-forum-weekly-summary-27th-feb-5th-mar-2023&lt;/a&gt;&lt;/p&gt;&lt;p&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;/p&gt;&lt;p&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;).&lt;/p&gt;&lt;p&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author/>
      <enclosure url="https://www.buzzsprout.com/2062493/12404030-ea-forum-weekly-summaries-episode-19-feb-27-mar-5-2023.mp3" length="14146920" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-12404030</guid>
      <pubDate>Thu, 09 Mar 2023 00:00:00 +0000</pubDate>
      <itunes:duration>1178</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>"Scoring forecasts from the 2016 “Expert Survey on Progress in AI”" by PatrickL</itunes:title>
      <title>"Scoring forecasts from the 2016 “Expert Survey on Progress in AI”" by PatrickL</title>
      <description>&lt;p&gt;&lt;/p&gt;&lt;div&gt;This document looks at the predictions made by AI experts in The 2016 Expert Survey on Progress in AI, analyses the predictions on ‘Narrow tasks’, and gives a Brier score to the median of the experts’ predictions. &lt;br/&gt;&lt;br/&gt;&lt;/div&gt;&lt;div&gt;&lt;br/&gt;&lt;/div&gt;&lt;div&gt;My analysis suggests that the experts did a fairly good job of forecasting (Brier score = 0.21), and would have been less accurate if they had predicted each development in AI to generally come, by a factor of 1.5, later (Brier score = 0.26) or sooner (Brier score = 0.29) than they actually predicted.&lt;br/&gt;&lt;br/&gt;&lt;/div&gt;&lt;div&gt;&lt;br/&gt;&lt;/div&gt;&lt;div&gt;&lt;strong&gt;Original article:&lt;br/&gt;&lt;/strong&gt;&lt;br/&gt;&lt;/div&gt;&lt;div&gt;&lt;a href='https://forum.effectivealtruism.org/posts/tCkBsT6cAw6LEKAbm/scoring-forecasts-from-the-2016-expert-survey-on-progress-in'&gt;https://forum.effectivealtruism.org/posts/tCkBsT6cAw6LEKAbm/scoring-forecasts-from-the-2016-expert-survey-on-progress-in&lt;br/&gt;&lt;/a&gt;&lt;br/&gt;&lt;/div&gt;&lt;div&gt;&lt;br/&gt;&lt;/div&gt;&lt;div&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;br/&gt;&lt;br/&gt;&lt;/div&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;&lt;/p&gt;&lt;div&gt;This document looks at the predictions made by AI experts in The 2016 Expert Survey on Progress in AI, analyses the predictions on ‘Narrow tasks’, and gives a Brier score to the median of the experts’ predictions. &lt;br/&gt;&lt;br/&gt;&lt;/div&gt;&lt;div&gt;&lt;br/&gt;&lt;/div&gt;&lt;div&gt;My analysis suggests that the experts did a fairly good job of forecasting (Brier score = 0.21), and would have been less accurate if they had predicted each development in AI to generally come, by a factor of 1.5, later (Brier score = 0.26) or sooner (Brier score = 0.29) than they actually predicted.&lt;br/&gt;&lt;br/&gt;&lt;/div&gt;&lt;div&gt;&lt;br/&gt;&lt;/div&gt;&lt;div&gt;&lt;strong&gt;Original article:&lt;br/&gt;&lt;/strong&gt;&lt;br/&gt;&lt;/div&gt;&lt;div&gt;&lt;a href='https://forum.effectivealtruism.org/posts/tCkBsT6cAw6LEKAbm/scoring-forecasts-from-the-2016-expert-survey-on-progress-in'&gt;https://forum.effectivealtruism.org/posts/tCkBsT6cAw6LEKAbm/scoring-forecasts-from-the-2016-expert-survey-on-progress-in&lt;br/&gt;&lt;/a&gt;&lt;br/&gt;&lt;/div&gt;&lt;div&gt;&lt;br/&gt;&lt;/div&gt;&lt;div&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;br/&gt;&lt;br/&gt;&lt;/div&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author/>
      <enclosure url="https://www.buzzsprout.com/2062493/12382234-scoring-forecasts-from-the-2016-expert-survey-on-progress-in-ai-by-patrickl.mp3" length="13883346" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-12382234</guid>
      <pubDate>Mon, 06 Mar 2023 08:00:00 +0000</pubDate>
      <podcast:chapters url="https://feeds.buzzsprout.com/2062493/12382234/chapters.json" type="application/json"/>
      <itunes:duration>1156</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>"Why I don’t agree with HLI’s estimate of household spillovers from therapy" by James Snowden</itunes:title>
      <title>"Why I don’t agree with HLI’s estimate of household spillovers from therapy" by James Snowden</title>
      <description>&lt;p&gt;&lt;/p&gt;&lt;p&gt;I don’t think the existing evidence justifies HLI&amp;apos;s estimate of 50% household spillovers. &lt;/p&gt;&lt;p&gt;My main disagreements are:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Two of the three RCTs HLI relies on to estimate spillovers are on interventions specifically intended to benefit household members (unlike StrongMinds’ program, which targets women and adolescents living with depression). &lt;/li&gt;&lt;li&gt;Those RCTs only measure the wellbeing of a subset of household members most likely to benefit from the intervention.&lt;/li&gt;&lt;li&gt;The results of the third RCT are inconsistent with HLI’s estimate.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;/p&gt;&lt;p&gt;&lt;a href='https://forum.effectivealtruism.org/posts/gr4epkwe5WoYJXF32/why-i-don-t-agree-with-hli-s-estimate-of-household'&gt;https://forum.effectivealtruism.org/posts/gr4epkwe5WoYJXF32/why-i-don-t-agree-with-hli-s-estimate-of-household&lt;/a&gt;&lt;/p&gt;&lt;p&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;&lt;/p&gt;&lt;p&gt;I don’t think the existing evidence justifies HLI&amp;apos;s estimate of 50% household spillovers. &lt;/p&gt;&lt;p&gt;My main disagreements are:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Two of the three RCTs HLI relies on to estimate spillovers are on interventions specifically intended to benefit household members (unlike StrongMinds’ program, which targets women and adolescents living with depression). &lt;/li&gt;&lt;li&gt;Those RCTs only measure the wellbeing of a subset of household members most likely to benefit from the intervention.&lt;/li&gt;&lt;li&gt;The results of the third RCT are inconsistent with HLI’s estimate.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;/p&gt;&lt;p&gt;&lt;a href='https://forum.effectivealtruism.org/posts/gr4epkwe5WoYJXF32/why-i-don-t-agree-with-hli-s-estimate-of-household'&gt;https://forum.effectivealtruism.org/posts/gr4epkwe5WoYJXF32/why-i-don-t-agree-with-hli-s-estimate-of-household&lt;/a&gt;&lt;/p&gt;&lt;p&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author/>
      <enclosure url="https://www.buzzsprout.com/2062493/12369716-why-i-don-t-agree-with-hli-s-estimate-of-household-spillovers-from-therapy-by-james-snowden.mp3" length="20444916" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-12369716</guid>
      <pubDate>Fri, 03 Mar 2023 20:00:00 +0000</pubDate>
      <podcast:chapters url="https://feeds.buzzsprout.com/2062493/12369716/chapters.json" type="application/json"/>
      <itunes:duration>1703</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>EA Forum Weekly Summaries – Episode 18 (Feb. 20-26, 2023)</itunes:title>
      <title>EA Forum Weekly Summaries – Episode 18 (Feb. 20-26, 2023)</title>
      <description>&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Original article:&lt;br/&gt;&lt;/b&gt;&lt;a href='https://forum.effectivealtruism.org/posts/bEJ6SyrkSF45B2LWZ/ea-and-lw-forum-weekly-summary-20th-26th-feb-2023'&gt;https://forum.effectivealtruism.org/posts/bEJ6SyrkSF45B2LWZ/ea-and-lw-forum-weekly-summary-20th-26th-feb-2023&lt;/a&gt;&lt;/p&gt;&lt;p&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;/p&gt;&lt;p&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;).&lt;/p&gt;&lt;p&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Original article:&lt;br/&gt;&lt;/b&gt;&lt;a href='https://forum.effectivealtruism.org/posts/bEJ6SyrkSF45B2LWZ/ea-and-lw-forum-weekly-summary-20th-26th-feb-2023'&gt;https://forum.effectivealtruism.org/posts/bEJ6SyrkSF45B2LWZ/ea-and-lw-forum-weekly-summary-20th-26th-feb-2023&lt;/a&gt;&lt;/p&gt;&lt;p&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;/p&gt;&lt;p&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;).&lt;/p&gt;&lt;p&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author/>
      <enclosure url="https://www.buzzsprout.com/2062493/12368784-ea-forum-weekly-summaries-episode-18-feb-20-26-2023.mp3" length="15597958" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-12368784</guid>
      <pubDate>Fri, 03 Mar 2023 17:00:00 +0000</pubDate>
      <itunes:duration>1299</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>"Why should ethical anti-realists do ethics?" by Joe Carlsmith</itunes:title>
      <title>"Why should ethical anti-realists do ethics?" by Joe Carlsmith</title>
      <description>&lt;p&gt;Ethical philosophy often tries to systematize. That is, it seeks general principles that will explain, unify, and revise our more particular intuitions. And sometimes, this can lead to strange and uncomfortable places.&lt;br/&gt;&lt;br/&gt;&lt;/p&gt;&lt;div&gt;So why do it? If you believe in an objective ethical truth, you might talk about getting closer to that truth. But suppose that you don’t. Suppose you think that you’re “free to do whatever you want.” In that case, if “systematizing” starts getting tough and uncomfortable, why not just … stop? After all, you can always just do whatever’s most intuitive or common-sensical in a given case – and often, this is the choice the “ethics game” was trying so hard to validate, anyway. So why play?&lt;br/&gt;&lt;br/&gt;&lt;/div&gt;&lt;div&gt;I think it’s a reasonable question. And I’ve found it showing up in my life in various ways. So I wrote a set of two essays explaining part of my current take. This is the first essay. Here I describe the question in more detail, give some examples of where it shows up, and describe my dissatisfaction with two places anti-realists often look for answers. 
&lt;br/&gt;&lt;br/&gt;&lt;strong&gt;Original article:&lt;br/&gt;&lt;/strong&gt;&lt;a href='https://joecarlsmith.com/2023/02/16/why-should-ethical-anti-realists-do-ethics'&gt;https://joecarlsmith.com/2023/02/16/why-should-ethical-anti-realists-do-ethics&lt;br/&gt;&lt;/a&gt;&lt;br/&gt;&lt;/div&gt;&lt;div&gt;Narrated by &lt;a href='https://joecarlsmith.com/'&gt;Joe Carlsmith&lt;/a&gt; and included on the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;br/&gt;&lt;br/&gt;&lt;/div&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;Ethical philosophy often tries to systematize. That is, it seeks general principles that will explain, unify, and revise our more particular intuitions. And sometimes, this can lead to strange and uncomfortable places.&lt;br/&gt;&lt;br/&gt;&lt;/p&gt;&lt;div&gt;So why do it? If you believe in an objective ethical truth, you might talk about getting closer to that truth. But suppose that you don’t. Suppose you think that you’re “free to do whatever you want.” In that case, if “systematizing” starts getting tough and uncomfortable, why not just … stop? After all, you can always just do whatever’s most intuitive or common-sensical in a given case – and often, this is the choice the “ethics game” was trying so hard to validate, anyway. So why play?&lt;br/&gt;&lt;br/&gt;&lt;/div&gt;&lt;div&gt;I think it’s a reasonable question. And I’ve found it showing up in my life in various ways. So I wrote a set of two essays explaining part of my current take. This is the first essay. Here I describe the question in more detail, give some examples of where it shows up, and describe my dissatisfaction with two places anti-realists often look for answers. 
&lt;br/&gt;&lt;br/&gt;&lt;strong&gt;Original article:&lt;br/&gt;&lt;/strong&gt;&lt;a href='https://joecarlsmith.com/2023/02/16/why-should-ethical-anti-realists-do-ethics'&gt;https://joecarlsmith.com/2023/02/16/why-should-ethical-anti-realists-do-ethics&lt;br/&gt;&lt;/a&gt;&lt;br/&gt;&lt;/div&gt;&lt;div&gt;Narrated by &lt;a href='https://joecarlsmith.com/'&gt;Joe Carlsmith&lt;/a&gt; and included on the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;br/&gt;&lt;br/&gt;&lt;/div&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author/>
      <enclosure url="https://www.buzzsprout.com/2062493/12331463-why-should-ethical-anti-realists-do-ethics-by-joe-carlsmith.mp3" length="38526047" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-12331463</guid>
      <pubDate>Sun, 26 Feb 2023 20:00:00 +0000</pubDate>
      <itunes:duration>3209</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>EA Forum Weekly Summaries – Episode 17 (Feb. 6-19, 2023)</itunes:title>
      <title>EA Forum Weekly Summaries – Episode 17 (Feb. 6-19, 2023)</title>
      <description>&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Original article:&lt;br/&gt;&lt;/b&gt;&lt;a href='https://forum.effectivealtruism.org/posts/fAWotZTEnyycJnuxz/ea-and-lw-forum-weekly-summary-6th-19th-feb-2023'&gt;https://forum.effectivealtruism.org/posts/fAWotZTEnyycJnuxz/ea-and-lw-forum-weekly-summary-6th-19th-feb-2023&lt;/a&gt;&lt;/p&gt;&lt;p&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;/p&gt;&lt;p&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;).&lt;/p&gt;&lt;p&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Original article:&lt;br/&gt;&lt;/b&gt;&lt;a href='https://forum.effectivealtruism.org/posts/fAWotZTEnyycJnuxz/ea-and-lw-forum-weekly-summary-6th-19th-feb-2023'&gt;https://forum.effectivealtruism.org/posts/fAWotZTEnyycJnuxz/ea-and-lw-forum-weekly-summary-6th-19th-feb-2023&lt;/a&gt;&lt;/p&gt;&lt;p&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;/p&gt;&lt;p&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;).&lt;/p&gt;&lt;p&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author/>
      <enclosure url="https://www.buzzsprout.com/2062493/12309113-ea-forum-weekly-summaries-episode-17-feb-6-19-2023.mp3" length="11936007" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-12309113</guid>
      <pubDate>Thu, 23 Feb 2023 00:00:00 +0000</pubDate>
      <itunes:duration>994</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>"Why I No Longer Prioritize Wild Animal Welfare (edited)" by saulius</itunes:title>
      <title>"Why I No Longer Prioritize Wild Animal Welfare (edited)" by saulius</title>
      <description>&lt;p&gt;&lt;/p&gt;&lt;p&gt;This is the story of how I came to see Wild Animal Welfare (WAW) as a less promising cause than I did initially. I summarise three articles I wrote on WAW: ‘Why it’s difficult to find cost-effective WAW interventions we could do now’, ‘Lobbying governments to improve WAW’, and ‘WAW in the far future’. I then draw some more general conclusions. The articles assume some familiarity with WAW ideas. See here or here for an intro to WAW ideas.&lt;/p&gt;&lt;p&gt;&lt;br/&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;/p&gt;&lt;p&gt;&lt;a href='https://forum.effectivealtruism.org/posts/saEQXBgzmDbob9GdH/why-i-no-longer-prioritize-wild-animal-welfare'&gt;https://forum.effectivealtruism.org/posts/saEQXBgzmDbob9GdH/why-i-no-longer-prioritize-wild-animal-welfare&lt;/a&gt;&lt;/p&gt;&lt;p&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;&lt;/p&gt;&lt;p&gt;This is the story of how I came to see Wild Animal Welfare (WAW) as a less promising cause than I did initially. I summarise three articles I wrote on WAW: ‘Why it’s difficult to find cost-effective WAW interventions we could do now’, ‘Lobbying governments to improve WAW’, and ‘WAW in the far future’. I then draw some more general conclusions. The articles assume some familiarity with WAW ideas. See here or here for an intro to WAW ideas.&lt;/p&gt;&lt;p&gt;&lt;br/&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;/p&gt;&lt;p&gt;&lt;a href='https://forum.effectivealtruism.org/posts/saEQXBgzmDbob9GdH/why-i-no-longer-prioritize-wild-animal-welfare'&gt;https://forum.effectivealtruism.org/posts/saEQXBgzmDbob9GdH/why-i-no-longer-prioritize-wild-animal-welfare&lt;/a&gt;&lt;/p&gt;&lt;p&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author/>
      <enclosure url="https://www.buzzsprout.com/2062493/12276866-why-i-no-longer-prioritize-wild-animal-welfare-edited-by-saulius.mp3" length="7088569" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-12276866</guid>
      <pubDate>Sat, 18 Feb 2023 00:00:00 +0000</pubDate>
      <podcast:chapters url="https://feeds.buzzsprout.com/2062493/12276866/chapters.json" type="application/json"/>
      <itunes:duration>590</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>"CE: Announcing our 2023 Charity Ideas. Apply now!" by Steve Thompson &amp; Charity Entrepreneurship</itunes:title>
      <title>"CE: Announcing our 2023 Charity Ideas. Apply now!" by Steve Thompson &amp; Charity Entrepreneurship</title>
      <description>&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Apply now to start a nonprofit in Biosecurity or Large-Scale Global Health&lt;br/&gt;&lt;/b&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;In this post we introduce our top five charity ideas for launch in 2023, in the areas of Biosecurity and Large-Scale Global Health. These are the result of five months’ work from our research team, and a six-stage iterative process that includes collaboration with partners and ideas from within and outside of the EA community. &lt;/p&gt;&lt;p&gt;&lt;b&gt;We’re looking for people to launch these ideas &lt;/b&gt;through our July–August 2023 &lt;a href='https://bit.ly/CE2023IP'&gt;&lt;b&gt;Incubation Program&lt;/b&gt;&lt;/a&gt;. The deadline for applications is March 12, 2023. &lt;/p&gt;&lt;p&gt;[&lt;a href='https://bit.ly/IP2023Apply'&gt;APPLY NOW&lt;/a&gt;]&lt;/p&gt;&lt;p&gt;We provide cost-covered two-month training, stipends, ongoing mentorship, and grants up to $200,000 per project. You can learn more on &lt;a href='https://bit.ly/CE2023IP'&gt;our website&lt;/a&gt;. We also invite you to join our event on February 20, 6PM UK Time. Sam Hilton, our Director of Research, will introduce the ideas and answer your questions. &lt;a href='https://bit.ly/TIevent2023'&gt;Sign up here&lt;/a&gt;. 
&lt;/p&gt;&lt;p&gt;&lt;br/&gt;&lt;b&gt;Original article:&lt;br/&gt;&lt;/b&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;&lt;a href='https://forum.effectivealtruism.org/posts/xWRweQmmEKoLFwGyu/ce-announcing-our-2023-charity-ideas-apply-now-2'&gt;https://forum.effectivealtruism.org/posts/xWRweQmmEKoLFwGyu/ce-announcing-our-2023-charity-ideas-apply-now-2&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Apply now to start a nonprofit in Biosecurity or Large-Scale Global Health&lt;br/&gt;&lt;/b&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;In this post we introduce our top five charity ideas for launch in 2023, in the areas of Biosecurity and Large-Scale Global Health. These are the result of five months’ work from our research team, and a six-stage iterative process that includes collaboration with partners and ideas from within and outside of the EA community. &lt;/p&gt;&lt;p&gt;&lt;b&gt;We’re looking for people to launch these ideas &lt;/b&gt;through our July–August 2023 &lt;a href='https://bit.ly/CE2023IP'&gt;&lt;b&gt;Incubation Program&lt;/b&gt;&lt;/a&gt;. The deadline for applications is March 12, 2023. &lt;/p&gt;&lt;p&gt;[&lt;a href='https://bit.ly/IP2023Apply'&gt;APPLY NOW&lt;/a&gt;]&lt;/p&gt;&lt;p&gt;We provide cost-covered two-month training, stipends, ongoing mentorship, and grants up to $200,000 per project. You can learn more on &lt;a href='https://bit.ly/CE2023IP'&gt;our website&lt;/a&gt;. We also invite you to join our event on February 20, 6PM UK Time. Sam Hilton, our Director of Research, will introduce the ideas and answer your questions. &lt;a href='https://bit.ly/TIevent2023'&gt;Sign up here&lt;/a&gt;. 
&lt;/p&gt;&lt;p&gt;&lt;br/&gt;&lt;b&gt;Original article:&lt;br/&gt;&lt;/b&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;&lt;a href='https://forum.effectivealtruism.org/posts/xWRweQmmEKoLFwGyu/ce-announcing-our-2023-charity-ideas-apply-now-2'&gt;https://forum.effectivealtruism.org/posts/xWRweQmmEKoLFwGyu/ce-announcing-our-2023-charity-ideas-apply-now-2&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author/>
      <enclosure url="https://www.buzzsprout.com/2062493/12269623-ce-announcing-our-2023-charity-ideas-apply-now-by-steve-thompson-charity-entrepreneurship.mp3" length="19528322" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-12269623</guid>
      <pubDate>Thu, 16 Feb 2023 22:00:00 +0000</pubDate>
      <podcast:chapters url="https://feeds.buzzsprout.com/2062493/12269623/chapters.json" type="application/json"/>
      <itunes:duration>1626</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>"H5N1 - thread for information sharing, planning, and action" by MathiasKB</itunes:title>
      <title>"H5N1 - thread for information sharing, planning, and action" by MathiasKB</title>
      <description>&lt;p&gt;&lt;/p&gt;&lt;p&gt;Hi everyone,&lt;/p&gt;&lt;p&gt;I&amp;apos;ve been reading up on H5N1 this weekend, and I&amp;apos;m pretty concerned. Right now my hunch is that there is a non-zero chance that it will cost more than 10,000 people their lives.&lt;/p&gt;&lt;p&gt;To be clear, I think it is unlikely that H5N1 will become a pandemic anywhere close to the size of covid.&lt;/p&gt;&lt;p&gt;Nevertheless, I think our community should be actively following the news and start thinking about ways to be helpful if the probability increases. I am creating this thread as a place where people can discuss and share information about H5N1. We have a lot of pandemic experts in this community, do chime in!&lt;/p&gt;&lt;p&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;/p&gt;&lt;p&gt;&lt;a href='https://forum.effectivealtruism.org/posts/QMMFyAX3ajf9vF5sb/h5n1-thread-for-information-sharing-planning-and-action'&gt;https://forum.effectivealtruism.org/posts/QMMFyAX3ajf9vF5sb/h5n1-thread-for-information-sharing-planning-and-action&lt;/a&gt;&lt;/p&gt;&lt;p&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;&lt;/p&gt;&lt;p&gt;Hi everyone,&lt;/p&gt;&lt;p&gt;I&amp;apos;ve been reading up on H5N1 this weekend, and I&amp;apos;m pretty concerned. Right now my hunch is that there is a non-zero chance that it will cost more than 10,000 people their lives.&lt;/p&gt;&lt;p&gt;To be clear, I think it is unlikely that H5N1 will become a pandemic anywhere close to the size of covid.&lt;/p&gt;&lt;p&gt;Nevertheless, I think our community should be actively following the news and start thinking about ways to be helpful if the probability increases. I am creating this thread as a place where people can discuss and share information about H5N1. We have a lot of pandemic experts in this community, do chime in!&lt;/p&gt;&lt;p&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;/p&gt;&lt;p&gt;&lt;a href='https://forum.effectivealtruism.org/posts/QMMFyAX3ajf9vF5sb/h5n1-thread-for-information-sharing-planning-and-action'&gt;https://forum.effectivealtruism.org/posts/QMMFyAX3ajf9vF5sb/h5n1-thread-for-information-sharing-planning-and-action&lt;/a&gt;&lt;/p&gt;&lt;p&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author/>
      <enclosure url="https://www.buzzsprout.com/2062493/12246179-h5n1-thread-for-information-sharing-planning-and-action-by-mathiaskb.mp3" length="2958726" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-12246179</guid>
      <pubDate>Mon, 13 Feb 2023 23:00:00 +0000</pubDate>
      <itunes:duration>246</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>"Literature review of Transformative Artificial Intelligence timelines" by Jaime Sevilla</itunes:title>
      <title>"Literature review of Transformative Artificial Intelligence timelines" by Jaime Sevilla</title>
      <description>&lt;p&gt;&lt;/p&gt;&lt;p&gt;This is a linkpost for &lt;a href='https://forum.effectivealtruism.org/out?url=https%3A%2F%2Fepochai.org%2Fblog%2Fliterature-review-of-transformative-artificial-intelligence-timelines&amp;amp;foreignId=4eAnBaLxvnkydiavw'&gt;https://epochai.org/blog/literature-review-of-transformative-artificial-intelligence-timelines&lt;/a&gt;&lt;/p&gt;&lt;p&gt;We summarize and compare several models and forecasts predicting when transformative AI will be developed.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Highlights&lt;/b&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;The review includes quantitative models, including both outside and inside view, and judgment-based forecasts by (teams of) experts.&lt;/li&gt;&lt;li&gt;While we do not necessarily endorse their conclusions, the inside-view model the Epoch team found most compelling was Ajeya Cotra’s &lt;a href='https://epochai.org/blog/grokking-bioanchors'&gt;&lt;em&gt;“Forecasting TAI with biological anchors”&lt;/em&gt;&lt;/a&gt;, the best-rated outside-view model was Tom Davidson’s &lt;a href='https://epochai.org/blog/grokking-semi-informative-priors'&gt;&lt;em&gt;“Semi-informative priors over AI timelines”&lt;/em&gt;&lt;/a&gt;, and the best-rated judgment-based forecast was &lt;a href='https://forum.effectivealtruism.org/posts/ByBBqwRXWqX5m9erL/update-to-samotsvety-agi-timelines'&gt;Samotsvety’s AGI Timelines Forecast&lt;/a&gt;.&lt;/li&gt;&lt;li&gt;The inside-view models we reviewed predicted shorter timelines (e.g. bioanchors has a median of 2052) while the outside-view models predicted longer timelines (e.g. semi-informative priors has a median over 2100). The judgment-based forecasts are skewed towards agreement with the inside-view models, and are often more aggressive (e.g. 
Samotsvety assigned a median of 2043).&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;/p&gt;&lt;p&gt;&lt;a href='https://forum.effectivealtruism.org/posts/4Ckc2zNrAKQwnAyA2/literature-review-of-transformative-artificial-intelligence'&gt;https://forum.effectivealtruism.org/posts/4Ckc2zNrAKQwnAyA2/literature-review-of-transformative-artificial-intelligence&lt;/a&gt;&lt;/p&gt;&lt;p&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;&lt;/p&gt;&lt;p&gt;This is a linkpost for &lt;a href='https://forum.effectivealtruism.org/out?url=https%3A%2F%2Fepochai.org%2Fblog%2Fliterature-review-of-transformative-artificial-intelligence-timelines&amp;amp;foreignId=4eAnBaLxvnkydiavw'&gt;https://epochai.org/blog/literature-review-of-transformative-artificial-intelligence-timelines&lt;/a&gt;&lt;/p&gt;&lt;p&gt;We summarize and compare several models and forecasts predicting when transformative AI will be developed.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Highlights&lt;/b&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;The review includes quantitative models, including both outside and inside view, and judgment-based forecasts by (teams of) experts.&lt;/li&gt;&lt;li&gt;While we do not necessarily endorse their conclusions, the inside-view model the Epoch team found most compelling was Ajeya Cotra’s &lt;a href='https://epochai.org/blog/grokking-bioanchors'&gt;&lt;em&gt;“Forecasting TAI with biological anchors”&lt;/em&gt;&lt;/a&gt;, the best-rated outside-view model was Tom Davidson’s &lt;a href='https://epochai.org/blog/grokking-semi-informative-priors'&gt;&lt;em&gt;“Semi-informative priors over AI timelines”&lt;/em&gt;&lt;/a&gt;, and the best-rated judgment-based forecast was &lt;a href='https://forum.effectivealtruism.org/posts/ByBBqwRXWqX5m9erL/update-to-samotsvety-agi-timelines'&gt;Samotsvety’s AGI Timelines Forecast&lt;/a&gt;.&lt;/li&gt;&lt;li&gt;The inside-view models we reviewed predicted shorter timelines (e.g. bioanchors has a median of 2052) while the outside-view models predicted longer timelines (e.g. semi-informative priors has a median over 2100). The judgment-based forecasts are skewed towards agreement with the inside-view models, and are often more aggressive (e.g. 
Samotsvety assigned a median of 2043).&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;/p&gt;&lt;p&gt;&lt;a href='https://forum.effectivealtruism.org/posts/4Ckc2zNrAKQwnAyA2/literature-review-of-transformative-artificial-intelligence'&gt;https://forum.effectivealtruism.org/posts/4Ckc2zNrAKQwnAyA2/literature-review-of-transformative-artificial-intelligence&lt;/a&gt;&lt;/p&gt;&lt;p&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author/>
      <enclosure url="https://www.buzzsprout.com/2062493/12224009-literature-review-of-transformative-artificial-intelligence-timelines-by-jaime-sevilla.mp3" length="7236351" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-12224009</guid>
      <pubDate>Fri, 10 Feb 2023 06:00:00 +0000</pubDate>
      <podcast:chapters url="https://feeds.buzzsprout.com/2062493/12224009/chapters.json" type="application/json"/>
      <itunes:duration>602</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>EA Forum Weekly Summaries – Episode 16 (Jan. 30 - Feb. 5, 2023)</itunes:title>
      <title>EA Forum Weekly Summaries – Episode 16 (Jan. 30 - Feb. 5, 2023)</title>
      <description>&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Original article:&lt;br/&gt;&lt;/b&gt;&lt;a href='https://forum.effectivealtruism.org/posts/Qzfew7EBPgdCzsxED/ea-and-lw-forum-weekly-summary-30th-jan-5th-feb-2023'&gt;https://forum.effectivealtruism.org/posts/Qzfew7EBPgdCzsxED/ea-and-lw-forum-weekly-summary-30th-jan-5th-feb-2023&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;br/&gt;&lt;br/&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;).&lt;br/&gt;&lt;br/&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Original article:&lt;br/&gt;&lt;/b&gt;&lt;a href='https://forum.effectivealtruism.org/posts/Qzfew7EBPgdCzsxED/ea-and-lw-forum-weekly-summary-30th-jan-5th-feb-2023'&gt;https://forum.effectivealtruism.org/posts/Qzfew7EBPgdCzsxED/ea-and-lw-forum-weekly-summary-30th-jan-5th-feb-2023&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;br/&gt;&lt;br/&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;).&lt;br/&gt;&lt;br/&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author/>
      <enclosure url="https://www.buzzsprout.com/2062493/12217585-ea-forum-weekly-summaries-episode-16-jan-30-feb-5-2023.mp3" length="15658156" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-12217585</guid>
      <pubDate>Thu, 09 Feb 2023 12:00:00 +0000</pubDate>
      <itunes:duration>1304</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>"The Capability Approach to Human Welfare" by Ryan C Briggs</itunes:title>
      <title>"The Capability Approach to Human Welfare" by Ryan C Briggs</title>
      <description>&lt;p&gt;This post outlines the capability approach to thinking about human welfare. I think that this approach, while very popular in international development, is neglected in EA. While the capability approach has problems, I think that it provides a better approach to thinking about improving human welfare than approaches based on measuring happiness or subjective wellbeing (SWB) or approaches based on preference satisfaction. Finally, even if you disagree that the capability approach is best, I think this post will be useful to you because it may clarify why many people and organizations in the international development or global health space take the positions that they do. I will be drawing heavily on the work of Amartya Sen, but I will often not be citing specific texts because I’m an academic and getting to write without careful citations is thrilling.&lt;br/&gt;&lt;br/&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/zy6jGPeFKHaoxKEfT/the-capability-approach'&gt;https://forum.effectivealtruism.org/posts/zy6jGPeFKHaoxKEfT/the-capability-approach&lt;br/&gt;&lt;br/&gt;&lt;/a&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;This post outlines the capability approach to thinking about human welfare. I think that this approach, while very popular in international development, is neglected in EA. While the capability approach has problems, I think that it provides a better approach to thinking about improving human welfare than approaches based on measuring happiness or subjective wellbeing (SWB) or approaches based on preference satisfaction. Finally, even if you disagree that the capability approach is best, I think this post will be useful to you because it may clarify why many people and organizations in the international development or global health space take the positions that they do. I will be drawing heavily on the work of Amartya Sen, but I will often not be citing specific texts because I’m an academic and getting to write without careful citations is thrilling.&lt;br/&gt;&lt;br/&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/zy6jGPeFKHaoxKEfT/the-capability-approach'&gt;https://forum.effectivealtruism.org/posts/zy6jGPeFKHaoxKEfT/the-capability-approach&lt;br/&gt;&lt;br/&gt;&lt;/a&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author/>
      <enclosure url="https://www.buzzsprout.com/2062493/12174785-the-capability-approach-to-human-welfare-by-ryan-c-briggs.mp3" length="13297669" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-12174785</guid>
      <pubDate>Fri, 03 Feb 2023 06:00:00 +0000</pubDate>
      <podcast:chapters url="https://feeds.buzzsprout.com/2062493/12174785/chapters.json" type="application/json"/>
      <itunes:duration>1107</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>EA Forum Weekly Summaries – Episode 15 (Jan. 23 - 29, 2023)</itunes:title>
      <title>EA Forum Weekly Summaries – Episode 15 (Jan. 23 - 29, 2023)</title>
      <description>&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Original article:&lt;br/&gt;&lt;/b&gt;&lt;a href='https://forum.effectivealtruism.org/posts/hzc26vGa4RLns7TvK/ea-and-lw-forum-weekly-summary-23rd-29th-jan-23'&gt;https://forum.effectivealtruism.org/posts/hzc26vGa4RLns7TvK/ea-and-lw-forum-weekly-summary-23rd-29th-jan-23&lt;/a&gt;&lt;/p&gt;&lt;p&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;/p&gt;&lt;p&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;).&lt;/p&gt;&lt;p&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Original article:&lt;br/&gt;&lt;/b&gt;&lt;a href='https://forum.effectivealtruism.org/posts/hzc26vGa4RLns7TvK/ea-and-lw-forum-weekly-summary-23rd-29th-jan-23'&gt;https://forum.effectivealtruism.org/posts/hzc26vGa4RLns7TvK/ea-and-lw-forum-weekly-summary-23rd-29th-jan-23&lt;/a&gt;&lt;/p&gt;&lt;p&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;/p&gt;&lt;p&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;).&lt;/p&gt;&lt;p&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author/>
      <enclosure url="https://www.buzzsprout.com/2062493/12172317-ea-forum-weekly-summaries-episode-15-jan-23-29-2023.mp3" length="14508656" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-12172317</guid>
      <pubDate>Thu, 02 Feb 2023 21:00:00 +0000</pubDate>
      <itunes:duration>1208</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>EA Forum Weekly Summaries – Episode 14 (Jan. 16 - 22, 2023)</itunes:title>
      <title>EA Forum Weekly Summaries – Episode 14 (Jan. 16 - 22, 2023)</title>
      <description>&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Original article:&lt;br/&gt;&lt;/b&gt;&lt;a href='https://forum.effectivealtruism.org/posts/6Ezg8HgHib9bpWCFr/ea-and-lw-forum-weekly-summary-16th-22nd-jan-23'&gt;https://forum.effectivealtruism.org/posts/6Ezg8HgHib9bpWCFr/ea-and-lw-forum-weekly-summary-16th-22nd-jan-23&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;br/&gt;&lt;br/&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;).&lt;br/&gt;&lt;br/&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Original article:&lt;br/&gt;&lt;/b&gt;&lt;a href='https://forum.effectivealtruism.org/posts/6Ezg8HgHib9bpWCFr/ea-and-lw-forum-weekly-summary-16th-22nd-jan-23'&gt;https://forum.effectivealtruism.org/posts/6Ezg8HgHib9bpWCFr/ea-and-lw-forum-weekly-summary-16th-22nd-jan-23&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;br/&gt;&lt;br/&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;).&lt;br/&gt;&lt;br/&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author/>
      <enclosure url="https://www.buzzsprout.com/2062493/12136260-ea-forum-weekly-summaries-episode-14-jan-16-22-2023.mp3" length="13913064" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-12136260</guid>
      <pubDate>Sat, 28 Jan 2023 18:00:00 +0000</pubDate>
      <itunes:duration>1158</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>"Rethink Priorities’ Welfare Range Estimates" by Bob Fischer</itunes:title>
      <title>"Rethink Priorities’ Welfare Range Estimates" by Bob Fischer</title>
      <description>&lt;p&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;We offer welfare range estimates for 11 farmed species: pigs, chickens, carp, salmon, octopuses, shrimp, crayfish, crabs, bees, black soldier flies, and silkworms.&lt;/li&gt;&lt;li&gt;These estimates are, essentially, estimates of the differences in the possible intensities of these animals&amp;apos; pleasures and pains relative to humans&amp;apos; pleasures and pains. Then, we add a number of controversial (albeit plausible) philosophical assumptions (including hedonism, valence symmetry, and others discussed here) to reach conclusions about animals&amp;apos; welfare ranges relative to humans&amp;apos; welfare range.&lt;/li&gt;&lt;li&gt;Given hedonism and conditional on sentience, we think (credence: 0.7) that none of the vertebrate nonhuman animals of interest have a welfare range that’s more than double the size of any of the others. While carp and salmon have lower scores than pigs and chickens, we suspect that’s largely due to a lack of research.&lt;/li&gt;&lt;li&gt;Given hedonism and conditional on sentience, we think (credence: 0.65) that the welfare ranges of humans and the vertebrate animals of interest are within an order of magnitude of one another.&lt;/li&gt;&lt;li&gt;Given hedonism and conditional on sentience, we think (credence: 0.6) that all the invertebrates of interest have welfare ranges within two orders of magnitude of the vertebrate nonhuman animals of interest. Invertebrates are so diverse and we know so little about them; hence, our caution.&lt;/li&gt;&lt;li&gt;Our view is that the estimates we’ve provided should be seen as placeholders—albeit, we submit, the best such placeholders available. We’re providing a starting point for more rigorous, empirically-driven research into animals’ welfare ranges. At the same time, we’re offering guidance for decisions that have to be made long before that research is finished.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;br/&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/Qk3hd6PrFManj8K6o/rethink-priorities-welfare-range-estimates'&gt;https://forum.effectivealtruism.org/posts/Qk3hd6PrFManj8K6o/rethink-priorities-welfare-range-estimates&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;We offer welfare range estimates for 11 farmed species: pigs, chickens, carp, salmon, octopuses, shrimp, crayfish, crabs, bees, black soldier flies, and silkworms.&lt;/li&gt;&lt;li&gt;These estimates are, essentially, estimates of the differences in the possible intensities of these animals&amp;apos; pleasures and pains relative to humans&amp;apos; pleasures and pains. Then, we add a number of controversial (albeit plausible) philosophical assumptions (including hedonism, valence symmetry, and others discussed here) to reach conclusions about animals&amp;apos; welfare ranges relative to humans&amp;apos; welfare range.&lt;/li&gt;&lt;li&gt;Given hedonism and conditional on sentience, we think (credence: 0.7) that none of the vertebrate nonhuman animals of interest have a welfare range that’s more than double the size of any of the others. While carp and salmon have lower scores than pigs and chickens, we suspect that’s largely due to a lack of research.&lt;/li&gt;&lt;li&gt;Given hedonism and conditional on sentience, we think (credence: 0.65) that the welfare ranges of humans and the vertebrate animals of interest are within an order of magnitude of one another.&lt;/li&gt;&lt;li&gt;Given hedonism and conditional on sentience, we think (credence: 0.6) that all the invertebrates of interest have welfare ranges within two orders of magnitude of the vertebrate nonhuman animals of interest. Invertebrates are so diverse and we know so little about them; hence, our caution.&lt;/li&gt;&lt;li&gt;Our view is that the estimates we’ve provided should be seen as placeholders—albeit, we submit, the best such placeholders available. We’re providing a starting point for more rigorous, empirically-driven research into animals’ welfare ranges. At the same time, we’re offering guidance for decisions that have to be made long before that research is finished.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;br/&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/Qk3hd6PrFManj8K6o/rethink-priorities-welfare-range-estimates'&gt;https://forum.effectivealtruism.org/posts/Qk3hd6PrFManj8K6o/rethink-priorities-welfare-range-estimates&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author>TYPE III AUDIO</itunes:author>
      <enclosure url="https://www.buzzsprout.com/2062493/12121027-rethink-priorities-welfare-range-estimates-by-bob-fischer.mp3" length="24842440" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-12121027</guid>
      <pubDate>Thu, 26 Jan 2023 02:00:00 +0000</pubDate>
      <podcast:chapters url="https://feeds.buzzsprout.com/2062493/12121027/chapters.json" type="application/json"/>
      <itunes:duration>2069</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>"On Living Without Idols" by Rockwell</itunes:title>
      <title>"On Living Without Idols" by Rockwell</title>
      <description>&lt;p&gt;For many years, I&amp;apos;ve actively lived in avoidance of idolizing behavior and in pursuit of a nuanced view of even those I respect most deeply. I think this has helped me in numerous ways and has been of particular help in weathering the past few months within the EA community. Below, I discuss how I think about the act of idolizing behavior, some of my personal experiences, and how this mentality can be of use to others.&lt;br/&gt;&lt;br/&gt;Note: I want more people to post on the EA Forum and have their ideas taken seriously regardless of whether they conform to Forum stylistic norms. I&amp;apos;m perfectly capable of writing a version of this post in the style typical to the Forum, but this post is written the way I actually like to write. If this style doesn’t work for you, you might want to read the first section “Anarchists have no idols” and then skip ahead to the section “Living without idols, Pt. 1” toward the end. You’ll lose some of the insights contained in my anecdotes, but still get most of the core ideas I want to convey here.&lt;br/&gt;&lt;br/&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/jgspXC8GKA7RtxMRE/on-living-without-idols'&gt;https://forum.effectivealtruism.org/posts/jgspXC8GKA7RtxMRE/on-living-without-idols&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;br/&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;For many years, I&amp;apos;ve actively lived in avoidance of idolizing behavior and in pursuit of a nuanced view of even those I respect most deeply. I think this has helped me in numerous ways and has been of particular help in weathering the past few months within the EA community. Below, I discuss how I think about the act of idolizing behavior, some of my personal experiences, and how this mentality can be of use to others.&lt;br/&gt;&lt;br/&gt;Note: I want more people to post on the EA Forum and have their ideas taken seriously regardless of whether they conform to Forum stylistic norms. I&amp;apos;m perfectly capable of writing a version of this post in the style typical to the Forum, but this post is written the way I actually like to write. If this style doesn’t work for you, you might want to read the first section “Anarchists have no idols” and then skip ahead to the section “Living without idols, Pt. 1” toward the end. You’ll lose some of the insights contained in my anecdotes, but still get most of the core ideas I want to convey here.&lt;br/&gt;&lt;br/&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/jgspXC8GKA7RtxMRE/on-living-without-idols'&gt;https://forum.effectivealtruism.org/posts/jgspXC8GKA7RtxMRE/on-living-without-idols&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;br/&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author/>
      <enclosure url="https://www.buzzsprout.com/2062493/12120124-on-living-without-idols-by-rockwell.mp3" length="7902451" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-12120124</guid>
      <pubDate>Wed, 25 Jan 2023 23:00:00 +0000</pubDate>
      <podcast:chapters url="https://feeds.buzzsprout.com/2062493/12120124/chapters.json" type="application/json"/>
      <itunes:duration>657</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>EA Forum Weekly Summaries – Episode 13 (Jan. 9 - 15, 2023)</itunes:title>
      <title>EA Forum Weekly Summaries – Episode 13 (Jan. 9 - 15, 2023)</title>
      <description>&lt;p&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/DNWpFLrtrJXe4mted/ea-and-lw-forum-summaries-9th-jan-to-15th-jan-23'&gt;https://forum.effectivealtruism.org/posts/DNWpFLrtrJXe4mted/ea-and-lw-forum-summaries-9th-jan-to-15th-jan-23&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;br/&gt;&lt;br/&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;).&lt;br/&gt;&lt;br/&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/DNWpFLrtrJXe4mted/ea-and-lw-forum-summaries-9th-jan-to-15th-jan-23'&gt;https://forum.effectivealtruism.org/posts/DNWpFLrtrJXe4mted/ea-and-lw-forum-summaries-9th-jan-to-15th-jan-23&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;br/&gt;&lt;br/&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;).&lt;br/&gt;&lt;br/&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author>TYPE III AUDIO</itunes:author>
      <enclosure url="https://www.buzzsprout.com/2062493/12073104-ea-forum-weekly-summaries-episode-13-jan-9-15-2023.mp3" length="17989781" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-12073104</guid>
      <pubDate>Wed, 18 Jan 2023 23:00:00 +0000</pubDate>
      <itunes:duration>1496</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>"What you can do to help stop violence against women and girls" by Akhil</itunes:title>
      <title>"What you can do to help stop violence against women and girls" by Akhil</title>
      <description>&lt;p&gt;I previously wrote an entry for the Open Philanthropy Cause Exploration Prize on why preventing violence against women and girls is a global priority. For an introduction to the area, I have written a brief summary below. In this post, I will extend that work, diving deeper into the literature and the landscape of organisations in the field, as well as creating a cost-effectiveness model for some of the most promising preventative interventions. Based on this, I will offer some concrete recommendations for different stakeholders - from individuals looking to donate, to funders, to charity evaluators and incubators.&lt;br/&gt;&lt;br/&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/uH9akQzJkzpBD5Duw/what-you-can-do-to-help-stop-violence-against-women-and'&gt;https://forum.effectivealtruism.org/posts/uH9akQzJkzpBD5Duw/what-you-can-do-to-help-stop-violence-against-women-and&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;I previously wrote an entry for the Open Philanthropy Cause Exploration Prize on why preventing violence against women and girls is a global priority. For an introduction to the area, I have written a brief summary below. In this post, I will extend that work, diving deeper into the literature and the landscape of organisations in the field, as well as creating a cost-effectiveness model for some of the most promising preventative interventions. Based on this, I will offer some concrete recommendations for different stakeholders - from individuals looking to donate, to funders, to charity evaluators and incubators.&lt;br/&gt;&lt;br/&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/uH9akQzJkzpBD5Duw/what-you-can-do-to-help-stop-violence-against-women-and'&gt;https://forum.effectivealtruism.org/posts/uH9akQzJkzpBD5Duw/what-you-can-do-to-help-stop-violence-against-women-and&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author/>
      <enclosure url="https://www.buzzsprout.com/2062493/12066110-what-you-can-do-to-help-stop-violence-against-women-and-girls-by-akhil.mp3" length="24051359" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-12066110</guid>
      <pubDate>Wed, 18 Jan 2023 03:00:00 +0000</pubDate>
      <podcast:chapters url="https://feeds.buzzsprout.com/2062493/12066110/chapters.json" type="application/json"/>
      <itunes:duration>1980</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>"AGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years" by Trevor Chow, Basil Halperin, &amp; J. Zachary Mazlish</itunes:title>
      <title>"AGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years" by Trevor Chow, Basil Halperin, &amp; J. Zachary Mazlish</title>
      <description>&lt;p&gt;In this post, we point out that short AI timelines would cause real interest rates to be high, and would do so under expectations of either unaligned or aligned AI. However, 30- to 50-year real interest rates are low. We argue that this suggests one of two possibilities:&lt;/p&gt;&lt;ol&gt;&lt;li&gt; Long(er) timelines. Financial markets are often highly effective information aggregators (the “efficient market hypothesis”), and therefore real interest rates accurately reflect that transformative AI is unlikely to be developed in the next 30-50 years.&lt;/li&gt;&lt;li&gt; Market inefficiency. Markets are radically underestimating how soon advanced AI technology will be developed, and real interest rates are therefore too low. There is thus an opportunity for philanthropists to borrow while real rates are low to cheaply do good today; and/or an opportunity for anyone to earn excess returns by betting that real rates will rise.&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;In the rest of this post we flesh out this argument.&lt;br/&gt;&lt;b&gt;&lt;br/&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/8c7LycgtkypkgYjZx/agi-and-the-emh-markets-are-not-expecting-aligned-or'&gt;https://forum.effectivealtruism.org/posts/8c7LycgtkypkgYjZx/agi-and-the-emh-markets-are-not-expecting-aligned-or&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;In this post, we point out that short AI timelines would cause real interest rates to be high, and would do so under expectations of either unaligned or aligned AI. However, 30- to 50-year real interest rates are low. We argue that this suggests one of two possibilities:&lt;/p&gt;&lt;ol&gt;&lt;li&gt; Long(er) timelines. Financial markets are often highly effective information aggregators (the “efficient market hypothesis”), and therefore real interest rates accurately reflect that transformative AI is unlikely to be developed in the next 30-50 years.&lt;/li&gt;&lt;li&gt; Market inefficiency. Markets are radically underestimating how soon advanced AI technology will be developed, and real interest rates are therefore too low. There is thus an opportunity for philanthropists to borrow while real rates are low to cheaply do good today; and/or an opportunity for anyone to earn excess returns by betting that real rates will rise.&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;In the rest of this post we flesh out this argument.&lt;br/&gt;&lt;b&gt;&lt;br/&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/8c7LycgtkypkgYjZx/agi-and-the-emh-markets-are-not-expecting-aligned-or'&gt;https://forum.effectivealtruism.org/posts/8c7LycgtkypkgYjZx/agi-and-the-emh-markets-are-not-expecting-aligned-or&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author>TYPE III AUDIO</itunes:author>
      <enclosure url="https://www.buzzsprout.com/2062493/12056440-agi-and-the-emh-markets-are-not-expecting-aligned-or-unaligned-ai-in-the-next-30-years-by-trevor-chow-basil-halperin-j-zachary-mazlish.mp3" length="49063532" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-12056440</guid>
      <pubDate>Mon, 16 Jan 2023 23:00:00 +0000</pubDate>
      <podcast:chapters url="https://feeds.buzzsprout.com/2062493/12056440/chapters.json" type="application/json"/>
      <itunes:duration>4085</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>EA Forum Weekly Summaries – Episode 12 (Dec. 19, 2022 to Jan. 8, 2023)</itunes:title>
      <title>EA Forum Weekly Summaries – Episode 12 (Dec. 19, 2022 to Jan. 8, 2023)</title>
      <description>&lt;p&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/JZuCg7TtfzzaX9bBY/ea-and-lw-forum-summaries-holiday-edition-19th-dec-8th-jan'&gt;https://forum.effectivealtruism.org/posts/JZuCg7TtfzzaX9bBY/ea-and-lw-forum-summaries-holiday-edition-19th-dec-8th-jan&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;br/&gt;&lt;br/&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;).&lt;br/&gt;&lt;br/&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/JZuCg7TtfzzaX9bBY/ea-and-lw-forum-summaries-holiday-edition-19th-dec-8th-jan'&gt;https://forum.effectivealtruism.org/posts/JZuCg7TtfzzaX9bBY/ea-and-lw-forum-summaries-holiday-edition-19th-dec-8th-jan&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;br/&gt;&lt;br/&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;).&lt;br/&gt;&lt;br/&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author>TYPE III AUDIO</itunes:author>
      <enclosure url="https://www.buzzsprout.com/2062493/12052304-ea-forum-weekly-summaries-episode-12-dec-19-2022-to-jan-8-2023.mp3" length="9618918" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-12052304</guid>
      <pubDate>Mon, 16 Jan 2023 13:00:00 +0000</pubDate>
      <itunes:duration>798</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>"Why Anima International suspended the campaign to end live fish sales in Poland" by Jakub Stencel &amp; Weronika Zurek</itunes:title>
      <title>"Why Anima International suspended the campaign to end live fish sales in Poland" by Jakub Stencel &amp; Weronika Zurek</title>
      <description>&lt;p&gt;At Anima International, we recently decided to suspend our campaign against live fish sales in Poland indefinitely. After a few years of running the campaign, we are now concerned about the effects of our efforts, specifically the possibility of a net negative result for the lives of animals. We believe that by writing about it openly we can help foster a culture of intellectual honesty, information sharing and accountability. Ideally, our case can serve as a good example of reflecting on potential unintended consequences of advocacy interventions.&lt;br/&gt;&lt;br/&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/snnfmepzrwpAsAoDT/why-anima-international-suspended-the-campaign-to-end-live'&gt;https://forum.effectivealtruism.org/posts/snnfmepzrwpAsAoDT/why-anima-international-suspended-the-campaign-to-end-live&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;This is a linkpost for &lt;a href='https://animainternational.org/blog/why-anima-international-suspended-the-campaign-to-end-live-fish-sales-in-poland'&gt;https://animainternational.org/blog/why-anima-international-suspended-the-campaign-to-end-live-fish-sales-in-poland&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;At Anima International, we recently decided to suspend our campaign against live fish sales in Poland indefinitely. After a few years of running the campaign, we are now concerned about the effects of our efforts, specifically the possibility of a net negative result for the lives of animals. We believe that by writing about it openly we can help foster a culture of intellectual honesty, information sharing and accountability. Ideally, our case can serve as a good example of reflecting on potential unintended consequences of advocacy interventions.&lt;br/&gt;&lt;br/&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/snnfmepzrwpAsAoDT/why-anima-international-suspended-the-campaign-to-end-live'&gt;https://forum.effectivealtruism.org/posts/snnfmepzrwpAsAoDT/why-anima-international-suspended-the-campaign-to-end-live&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;This is a linkpost for &lt;a href='https://animainternational.org/blog/why-anima-international-suspended-the-campaign-to-end-live-fish-sales-in-poland'&gt;https://animainternational.org/blog/why-anima-international-suspended-the-campaign-to-end-live-fish-sales-in-poland&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author>TYPE III AUDIO</itunes:author>
      <enclosure url="https://www.buzzsprout.com/2062493/12021203-why-anima-international-suspended-the-campaign-to-end-live-fish-sales-in-poland-by-jakub-stencel-weronika-zurek.mp3" length="22714210" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-12021203</guid>
      <pubDate>Wed, 11 Jan 2023 06:00:00 +0000</pubDate>
      <podcast:chapters url="https://feeds.buzzsprout.com/2062493/12021203/chapters.json" type="application/json"/>
      <itunes:duration>1889</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>"StrongMinds should not be a top-rated charity (yet)" by Simon_M</itunes:title>
      <title>"StrongMinds should not be a top-rated charity (yet)" by Simon_M</title>
      <description>&lt;p&gt;This is a linkpost for &lt;a href='https://forum.effectivealtruism.org/out?url=https%3A%2F%2Fsimonm.substack.com%2Fp%2Fstrongminds-should-not-be-a-top-rated'&gt;https://simonm.substack.com/p/strongminds-should-not-be-a-top-rated&lt;/a&gt;&lt;/p&gt;&lt;p&gt;&lt;a href='https://www.givingwhatwecan.org/charities/strongminds'&gt;GWWC&lt;/a&gt; lists StrongMinds as a “top-rated” charity. Their reason for doing so is because Founders Pledge has determined they &lt;a href='https://founderspledge.com/stories/mental-health-report-summary'&gt;are cost-effective&lt;/a&gt; in their report into mental health.&lt;/p&gt;&lt;p&gt;I could say here, “and that report was written in 2019 - either they should update the report or remove the top rating” and we could all go home. In fact, most of what I’m about to say does consist of “the data really isn’t that clear yet”.&lt;/p&gt;&lt;p&gt;I think the strongest statement I can make (which I doubt StrongMinds would disagree with) is:&lt;/p&gt;&lt;p&gt;“StrongMinds have made limited effort to be quantitative in their self-evaluation, haven’t continued monitoring impact after intervention, haven’t done the research they once claimed they would. 
They have not been vetted sufficiently to be considered a top charity, and only one independent group has done the work to look into them.”&lt;/p&gt;&lt;p&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/ffmbLCzJctLac3rDu/strongminds-should-not-be-a-top-rated-charity-yet'&gt;https://forum.effectivealtruism.org/posts/ffmbLCzJctLac3rDu/strongminds-should-not-be-a-top-rated-charity-yet&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;This is a linkpost for &lt;a href='https://forum.effectivealtruism.org/out?url=https%3A%2F%2Fsimonm.substack.com%2Fp%2Fstrongminds-should-not-be-a-top-rated'&gt;https://simonm.substack.com/p/strongminds-should-not-be-a-top-rated&lt;/a&gt;&lt;/p&gt;&lt;p&gt;&lt;a href='https://www.givingwhatwecan.org/charities/strongminds'&gt;GWWC&lt;/a&gt; lists StrongMinds as a “top-rated” charity. Their reason for doing so is because Founders Pledge has determined they &lt;a href='https://founderspledge.com/stories/mental-health-report-summary'&gt;are cost-effective&lt;/a&gt; in their report into mental health.&lt;/p&gt;&lt;p&gt;I could say here, “and that report was written in 2019 - either they should update the report or remove the top rating” and we could all go home. In fact, most of what I’m about to say does consist of “the data really isn’t that clear yet”.&lt;/p&gt;&lt;p&gt;I think the strongest statement I can make (which I doubt StrongMinds would disagree with) is:&lt;/p&gt;&lt;p&gt;“StrongMinds have made limited effort to be quantitative in their self-evaluation, haven’t continued monitoring impact after intervention, haven’t done the research they once claimed they would. 
They have not been vetted sufficiently to be considered a top charity, and only one independent group has done the work to look into them.”&lt;/p&gt;&lt;p&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/ffmbLCzJctLac3rDu/strongminds-should-not-be-a-top-rated-charity-yet'&gt;https://forum.effectivealtruism.org/posts/ffmbLCzJctLac3rDu/strongminds-should-not-be-a-top-rated-charity-yet&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author>TYPE III AUDIO</itunes:author>
      <enclosure url="https://www.buzzsprout.com/2062493/12005647-strongminds-should-not-be-a-top-rated-charity-yet-by-simon_m.mp3" length="11438441" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-12005647</guid>
      <pubDate>Mon, 09 Jan 2023 04:00:00 +0000</pubDate>
      <podcast:chapters url="https://feeds.buzzsprout.com/2062493/12005647/chapters.json" type="application/json"/>
      <itunes:duration>950</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>"Let’s think about slowing down AI" by Katja Grace</itunes:title>
      <title>"Let’s think about slowing down AI" by Katja Grace</title>
      <description>&lt;p&gt;If you fear that someone will build a machine that will seize control of the world and annihilate humanity, then one kind of response is to try to build further machines that will seize control of the world even earlier without destroying it, forestalling the ruinous machine’s conquest. An alternative or complementary kind of response is to try to avert such machines being built at all, at least while the degree of their apocalyptic tendencies is ambiguous.&lt;br/&gt;&lt;br/&gt;The latter approach seems to me like the kind of basic and obvious thing worthy of at least consideration, and also in its favor, fits nicely in the genre ‘stuff that it isn’t that hard to imagine happening in the real world’. Yet my impression is that for people worried about extinction risk from artificial intelligence, strategies under the heading ‘actively slow down AI progress’ have historically been dismissed and ignored (though ‘don’t actively speed up AI progress’ is popular).&lt;br/&gt;&lt;br/&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/vwK3v3Mekf6Jjpeep/let-s-think-about-slowing-down-ai-1'&gt;https://forum.effectivealtruism.org/posts/vwK3v3Mekf6Jjpeep/let-s-think-about-slowing-down-ai-1&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;If you fear that someone will build a machine that will seize control of the world and annihilate humanity, then one kind of response is to try to build further machines that will seize control of the world even earlier without destroying it, forestalling the ruinous machine’s conquest. An alternative or complementary kind of response is to try to avert such machines being built at all, at least while the degree of their apocalyptic tendencies is ambiguous.&lt;br/&gt;&lt;br/&gt;The latter approach seems to me like the kind of basic and obvious thing worthy of at least consideration, and also in its favor, fits nicely in the genre ‘stuff that it isn’t that hard to imagine happening in the real world’. Yet my impression is that for people worried about extinction risk from artificial intelligence, strategies under the heading ‘actively slow down AI progress’ have historically been dismissed and ignored (though ‘don’t actively speed up AI progress’ is popular).&lt;br/&gt;&lt;br/&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/vwK3v3Mekf6Jjpeep/let-s-think-about-slowing-down-ai-1'&gt;https://forum.effectivealtruism.org/posts/vwK3v3Mekf6Jjpeep/let-s-think-about-slowing-down-ai-1&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author>TYPE III AUDIO</itunes:author>
      <enclosure url="https://www.buzzsprout.com/2062493/11982641-let-s-think-about-slowing-down-ai-by-katja-grace.mp3" length="54036762" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-11982641</guid>
      <pubDate>Thu, 05 Jan 2023 02:00:00 +0000</pubDate>
      <podcast:chapters url="https://feeds.buzzsprout.com/2062493/11982641/chapters.json" type="application/json"/>
      <itunes:duration>4499</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>"High-level hopes for AI alignment" by Holden Karnofsky</itunes:title>
      <title>"High-level hopes for AI alignment" by Holden Karnofsky</title>
      <description>&lt;p&gt;&lt;/p&gt;&lt;p&gt;In previous pieces, I argued that there&amp;apos;s a real and large risk of AI systems&amp;apos; &lt;a href='https://www.cold-takes.com/why-would-ai-aim-to-defeat-humanity/'&gt;aiming&lt;/a&gt; to defeat all of humanity combined - and &lt;a href='https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/'&gt;succeeding&lt;/a&gt;.&lt;br/&gt;&lt;br/&gt;I first argued that this sort of catastrophe would be likely without specific countermeasures to prevent it. I then argued that countermeasures could be challenging, due to some &lt;a href='https://www.cold-takes.com/ai-safety-seems-hard-to-measure/'&gt;key difficulties of AI safety research.&lt;/a&gt;&lt;b&gt;&lt;br/&gt;&lt;br/&gt;&lt;/b&gt;But while I think misalignment risk is serious and presents major challenges, I don’t agree with sentiments along the lines of “We haven’t figured out how to align an AI, so if transformative AI comes soon, we’re doomed.” Here I’m going to talk about some of my &lt;b&gt;high-level hopes for how we might end up avoiding this risk.&lt;br/&gt;&lt;br/&gt;Original article:&lt;br/&gt;&lt;/b&gt;&lt;a href='https://forum.effectivealtruism.org/posts/rJRw78oihoT5paFGd/high-level-hopes-for-ai-alignment'&gt;https://forum.effectivealtruism.org/posts/rJRw78oihoT5paFGd/high-level-hopes-for-ai-alignment&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated by Holden Karnofsky for the &lt;a href='https://www.cold-takes.com/high-level-hopes-for-ai-alignment/'&gt;Cold Takes&lt;/a&gt; blog.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;&lt;/p&gt;&lt;p&gt;In previous pieces, I argued that there&amp;apos;s a real and large risk of AI systems&amp;apos; &lt;a href='https://www.cold-takes.com/why-would-ai-aim-to-defeat-humanity/'&gt;aiming&lt;/a&gt; to defeat all of humanity combined - and &lt;a href='https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/'&gt;succeeding&lt;/a&gt;.&lt;br/&gt;&lt;br/&gt;I first argued that this sort of catastrophe would be likely without specific countermeasures to prevent it. I then argued that countermeasures could be challenging, due to some &lt;a href='https://www.cold-takes.com/ai-safety-seems-hard-to-measure/'&gt;key difficulties of AI safety research.&lt;/a&gt;&lt;b&gt;&lt;br/&gt;&lt;br/&gt;&lt;/b&gt;But while I think misalignment risk is serious and presents major challenges, I don’t agree with sentiments along the lines of “We haven’t figured out how to align an AI, so if transformative AI comes soon, we’re doomed.” Here I’m going to talk about some of my &lt;b&gt;high-level hopes for how we might end up avoiding this risk.&lt;br/&gt;&lt;br/&gt;Original article:&lt;br/&gt;&lt;/b&gt;&lt;a href='https://forum.effectivealtruism.org/posts/rJRw78oihoT5paFGd/high-level-hopes-for-ai-alignment'&gt;https://forum.effectivealtruism.org/posts/rJRw78oihoT5paFGd/high-level-hopes-for-ai-alignment&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated by Holden Karnofsky for the &lt;a href='https://www.cold-takes.com/high-level-hopes-for-ai-alignment/'&gt;Cold Takes&lt;/a&gt; blog.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author/>
      <enclosure url="https://www.buzzsprout.com/2062493/11917018-high-level-hopes-for-ai-alignment-by-holden-karnofsky.mp3" length="17190387" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-11917018</guid>
      <pubDate>Thu, 22 Dec 2022 12:00:00 +0000</pubDate>
      <itunes:duration>1429</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>EA Forum Weekly Summaries – Episode 10 (Dec. 5 - 11, 2022)</itunes:title>
      <title>EA Forum Weekly Summaries – Episode 10 (Dec. 5 - 11, 2022)</title>
      <description>&lt;p&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/8bcPkqdLYG78YbnTh/ea-and-lw-forums-weekly-summary-5th-dec-11th-dec-22'&gt;https://forum.effectivealtruism.org/posts/8bcPkqdLYG78YbnTh/ea-and-lw-forums-weekly-summary-5th-dec-11th-dec-22&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;br/&gt;&lt;br/&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;).&lt;br/&gt;&lt;br/&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/8bcPkqdLYG78YbnTh/ea-and-lw-forums-weekly-summary-5th-dec-11th-dec-22'&gt;https://forum.effectivealtruism.org/posts/8bcPkqdLYG78YbnTh/ea-and-lw-forums-weekly-summary-5th-dec-11th-dec-22&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;br/&gt;&lt;br/&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;).&lt;br/&gt;&lt;br/&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author>TYPE III AUDIO</itunes:author>
      <enclosure url="https://www.buzzsprout.com/2062493/11889719-ea-forum-weekly-summaries-episode-10-dec-5-11-2022.mp3" length="17552796" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-11889719</guid>
      <pubDate>Sat, 17 Dec 2022 20:00:00 +0000</pubDate>
      <itunes:duration>1462</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>"The Welfare Range Table" by Bob Fischer</itunes:title>
      <title>"The Welfare Range Table" by Bob Fischer</title>
      <description>&lt;p&gt;&lt;b&gt;Key Takeaways&lt;/b&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Our objective: estimate the welfare ranges of 11 farmed species.&lt;/li&gt;&lt;li&gt;Given hedonism, an individual’s welfare range is the difference between the welfare level associated with the most intense positively valenced state that the individual can realize and the welfare level associated with the most intense negatively valenced state that the individual can realize.&lt;/li&gt;&lt;li&gt;Given some prominent theories about the functions of valenced states, we identified over 90 empirical proxies that might provide evidence of variation in the potential intensities of those states. &lt;/li&gt;&lt;li&gt;There are many unknowns across many species.&lt;/li&gt;&lt;li&gt;It’s rare to have evidence that animals lack a given trait.&lt;/li&gt;&lt;li&gt;We know less about the presence or absence of traits as we move from terrestrial vertebrates to most invertebrates.&lt;/li&gt;&lt;li&gt;Many of the traits about which we know the least are affective traits.&lt;/li&gt;&lt;li&gt;We do have information about some significant traits for many animals.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;br/&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/tnSg6o7crcHFLc395/the-welfare-range-table'&gt;https://forum.effectivealtruism.org/posts/tnSg6o7crcHFLc395/the-welfare-range-table&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;&lt;b&gt;Key Takeaways&lt;/b&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Our objective: estimate the welfare ranges of 11 farmed species.&lt;/li&gt;&lt;li&gt;Given hedonism, an individual’s welfare range is the difference between the welfare level associated with the most intense positively valenced state that the individual can realize and the welfare level associated with the most intense negatively valenced state that the individual can realize.&lt;/li&gt;&lt;li&gt;Given some prominent theories about the functions of valenced states, we identified over 90 empirical proxies that might provide evidence of variation in the potential intensities of those states. &lt;/li&gt;&lt;li&gt;There are many unknowns across many species.&lt;/li&gt;&lt;li&gt;It’s rare to have evidence that animals lack a given trait.&lt;/li&gt;&lt;li&gt;We know less about the presence or absence of traits as we move from terrestrial vertebrates to most invertebrates.&lt;/li&gt;&lt;li&gt;Many of the traits about which we know the least are affective traits.&lt;/li&gt;&lt;li&gt;We do have information about some significant traits for many animals.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;br/&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/tnSg6o7crcHFLc395/the-welfare-range-table'&gt;https://forum.effectivealtruism.org/posts/tnSg6o7crcHFLc395/the-welfare-range-table&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author/>
      <enclosure url="https://www.buzzsprout.com/2062493/11861511-the-welfare-range-table-by-bob-fischer.mp3" length="15167310" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-11861511</guid>
      <pubDate>Tue, 13 Dec 2022 02:00:00 +0000</pubDate>
      <podcast:chapters url="https://feeds.buzzsprout.com/2062493/11861511/chapters.json" type="application/json"/>
      <itunes:duration>1260</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>"Some observations from an EA-adjacent charitable effort" by Patrick McKenzie</itunes:title>
      <title>"Some observations from an EA-adjacent charitable effort" by Patrick McKenzie</title>
      <description>&lt;p&gt;Hiya folks! I&amp;apos;m Patrick McKenzie, better known on the Internets as patio11. (Proof.) Long-time-listener, first-time-caller; I don&amp;apos;t think I would consider myself an EA but I&amp;apos;ve been reading y&amp;apos;all, and adjacent intellectual spaces, for some time now.&lt;br/&gt;&lt;br/&gt;Epistemic status: Arbitrarily high confidence with regards to facts of the VaccinateCA experience (though speaking only for myself), moderately high confidence with respect to inferences made about vaccine policy and mechanisms for impact last year, one geek&amp;apos;s opinion with respect to implicit advice to you all going forward.&lt;br/&gt;&lt;b&gt;&lt;br/&gt;A Thing That Happened Last Year&lt;/b&gt;&lt;br/&gt;&lt;br/&gt;As some of the California-based EAs may remember, the rollout of the covid-19 vaccines in California and across the U.S. was... not optimal. I accidentally ended up founding a charity, VaccinateCA, which ran the national shadow vaccine location information infrastructure for 6 months.&lt;br/&gt;&lt;br/&gt;The core product at the start of the sprint, which some of you may be familiar with, was a site which listed places to get the vaccine in California, sourced by a volunteer-driven operation to conduct an ongoing census of medical providers by calling them. 
Importantly, that was not our primary vector for impact, though it was very important to our trajectory.&lt;br/&gt;&lt;br/&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/NkPghabDd54nkG3kX/some-observations-from-an-ea-adjacent-charitable-effort'&gt;https://forum.effectivealtruism.org/posts/NkPghabDd54nkG3kX/some-observations-from-an-ea-adjacent-charitable-effort&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;Hiya folks! I&amp;apos;m Patrick McKenzie, better known on the Internets as patio11. (Proof.) Long-time-listener, first-time-caller; I don&amp;apos;t think I would consider myself an EA but I&amp;apos;ve been reading y&amp;apos;all, and adjacent intellectual spaces, for some time now.&lt;br/&gt;&lt;br/&gt;Epistemic status: Arbitrarily high confidence with regards to facts of the VaccinateCA experience (though speaking only for myself), moderately high confidence with respect to inferences made about vaccine policy and mechanisms for impact last year, one geek&amp;apos;s opinion with respect to implicit advice to you all going forward.&lt;br/&gt;&lt;b&gt;&lt;br/&gt;A Thing That Happened Last Year&lt;/b&gt;&lt;br/&gt;&lt;br/&gt;As some of the California-based EAs may remember, the rollout of the covid-19 vaccines in California and across the U.S. was... not optimal. I accidentally ended up founding a charity, VaccinateCA, which ran the national shadow vaccine location information infrastructure for 6 months.&lt;br/&gt;&lt;br/&gt;The core product at the start of the sprint, which some of you may be familiar with, was a site which listed places to get the vaccine in California, sourced by a volunteer-driven operation to conduct an ongoing census of medical providers by calling them. 
Importantly, that was not our primary vector for impact, though it was very important to our trajectory.&lt;br/&gt;&lt;br/&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/NkPghabDd54nkG3kX/some-observations-from-an-ea-adjacent-charitable-effort'&gt;https://forum.effectivealtruism.org/posts/NkPghabDd54nkG3kX/some-observations-from-an-ea-adjacent-charitable-effort&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author/>
      <enclosure url="https://www.buzzsprout.com/2062493/11860744-some-observations-from-an-ea-adjacent-charitable-effort-by-patrick-mckenzie.mp3" length="11070029" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-11860744</guid>
      <pubDate>Mon, 12 Dec 2022 23:00:00 +0000</pubDate>
      <podcast:chapters url="https://feeds.buzzsprout.com/2062493/11860744/chapters.json" type="application/json"/>
      <itunes:duration>919</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>EA Forum Weekly Summaries – Episode 9 (Nov. 28 - Dec. 4, 2022)</itunes:title>
      <title>EA Forum Weekly Summaries – Episode 9 (Nov. 28 - Dec. 4, 2022)</title>
      <description>&lt;p&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/LdEPDqyZvucQkxhWH/ea-and-lw-forums-weekly-summary-28th-nov-4th-dec-22'&gt;https://forum.effectivealtruism.org/posts/LdEPDqyZvucQkxhWH/ea-and-lw-forums-weekly-summary-28th-nov-4th-dec-22&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;br/&gt;&lt;br/&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;). &lt;br/&gt;&lt;br/&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/LdEPDqyZvucQkxhWH/ea-and-lw-forums-weekly-summary-28th-nov-4th-dec-22'&gt;https://forum.effectivealtruism.org/posts/LdEPDqyZvucQkxhWH/ea-and-lw-forums-weekly-summary-28th-nov-4th-dec-22&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;br/&gt;&lt;br/&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;). &lt;br/&gt;&lt;br/&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author>TYPE III AUDIO</itunes:author>
      <enclosure url="https://www.buzzsprout.com/2062493/11842750-ea-forum-weekly-summaries-episode-9-nov-28-dec-4-2022.mp3" length="18274105" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-11842750</guid>
      <pubDate>Fri, 09 Dec 2022 14:00:00 +0000</pubDate>
      <itunes:duration>1519</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>"Why Neuron Counts Shouldn't Be Used as Proxies for Moral Weight" by Adam Shriver</itunes:title>
      <title>"Why Neuron Counts Shouldn't Be Used as Proxies for Moral Weight" by Adam Shriver</title>
      <description>&lt;p&gt;&lt;b&gt;&lt;br/&gt;&lt;/b&gt;Key Takeaways:&lt;br/&gt;&lt;br/&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Several influential EAs have suggested using neuron counts as rough proxies for animals’ relative moral weights. We challenge this suggestion.&lt;/li&gt;&lt;li&gt;We take the following ideas to be the strongest reasons in favor of a neuron count proxy:&lt;ul&gt;&lt;li&gt;neuron counts are correlated with intelligence and intelligence is correlated with moral weight,&lt;/li&gt;&lt;li&gt;additional neurons result in “more consciousness” or “more valenced consciousness,” and&lt;/li&gt;&lt;li&gt;increasing numbers of neurons are required to reach thresholds of minimal information capacity required for morally relevant cognitive abilities.&lt;/li&gt;&lt;/ul&gt;&lt;/li&gt;&lt;li&gt;However:&lt;ul&gt;&lt;li&gt;with regard to intelligence, we can question both the extent to which more neurons are correlated with intelligence and whether more intelligence in fact predicts greater moral weight; &lt;/li&gt;&lt;li&gt;many ways of arguing that more neurons result in more valenced consciousness seem incompatible with our current understanding of how the brain is likely to work; and&lt;/li&gt;&lt;li&gt;there is no straightforward empirical evidence or compelling conceptual argument indicating that relative differences in neuron counts within or between species reliably predict welfare-relevant functional capacities.&lt;/li&gt;&lt;/ul&gt;&lt;/li&gt;&lt;li&gt;Overall, we suggest that neuron counts should not be used as a sole proxy for moral weight, but cannot be dismissed entirely. Rather, neuron counts should be combined with other metrics in an overall weighted score that includes information about whether different species have welfare-relevant capacities. 
&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/Mfq7KxQRvkeLnJvoB/why-neuron-counts-shouldn-t-be-used-as-proxies-for-moral'&gt;https://forum.effectivealtruism.org/posts/Mfq7KxQRvkeLnJvoB/why-neuron-counts-shouldn-t-be-used-as-proxies-for-moral&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;This is a linkpost for &lt;a href='https://forum.effectivealtruism.org/out?url=https%3A%2F%2Fdocs.google.com%2Fdocument%2Fd%2F1p50vw84-ry2taYmyOIl4B91j7wkCurlB%2Fedit%3Frtpof%3Dtrue%26sd%3Dtrue'&gt;https://docs.google.com/document/d/1p50vw84-ry2taYmyOIl4B91j7wkCurlB/edit?rtpof=true&amp;amp;sd=true&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;br/&gt;&lt;br/&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;&lt;b&gt;&lt;br/&gt;&lt;/b&gt;Key Takeaways:&lt;br/&gt;&lt;br/&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Several influential EAs have suggested using neuron counts as rough proxies for animals’ relative moral weights. We challenge this suggestion.&lt;/li&gt;&lt;li&gt;We take the following ideas to be the strongest reasons in favor of a neuron count proxy:&lt;ul&gt;&lt;li&gt;neuron counts are correlated with intelligence and intelligence is correlated with moral weight,&lt;/li&gt;&lt;li&gt;additional neurons result in “more consciousness” or “more valenced consciousness,” and&lt;/li&gt;&lt;li&gt;increasing numbers of neurons are required to reach thresholds of minimal information capacity required for morally relevant cognitive abilities.&lt;/li&gt;&lt;/ul&gt;&lt;/li&gt;&lt;li&gt;However:&lt;ul&gt;&lt;li&gt;with regard to intelligence, we can question both the extent to which more neurons are correlated with intelligence and whether more intelligence in fact predicts greater moral weight; &lt;/li&gt;&lt;li&gt;many ways of arguing that more neurons result in more valenced consciousness seem incompatible with our current understanding of how the brain is likely to work; and&lt;/li&gt;&lt;li&gt;there is no straightforward empirical evidence or compelling conceptual argument indicating that relative differences in neuron counts within or between species reliably predict welfare-relevant functional capacities.&lt;/li&gt;&lt;/ul&gt;&lt;/li&gt;&lt;li&gt;Overall, we suggest that neuron counts should not be used as a sole proxy for moral weight, but cannot be dismissed entirely. Rather, neuron counts should be combined with other metrics in an overall weighted score that includes information about whether different species have welfare-relevant capacities. 
&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/Mfq7KxQRvkeLnJvoB/why-neuron-counts-shouldn-t-be-used-as-proxies-for-moral'&gt;https://forum.effectivealtruism.org/posts/Mfq7KxQRvkeLnJvoB/why-neuron-counts-shouldn-t-be-used-as-proxies-for-moral&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;This is a linkpost for &lt;a href='https://forum.effectivealtruism.org/out?url=https%3A%2F%2Fdocs.google.com%2Fdocument%2Fd%2F1p50vw84-ry2taYmyOIl4B91j7wkCurlB%2Fedit%3Frtpof%3Dtrue%26sd%3Dtrue'&gt;https://docs.google.com/document/d/1p50vw84-ry2taYmyOIl4B91j7wkCurlB/edit?rtpof=true&amp;amp;sd=true&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;br/&gt;&lt;br/&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author/>
      <enclosure url="https://www.buzzsprout.com/2062493/11793032-why-neuron-counts-shouldn-t-be-used-as-proxies-for-moral-weight-by-adam-shriver.mp3" length="13289550" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-11793032</guid>
      <pubDate>Fri, 02 Dec 2022 00:00:00 +0000</pubDate>
      <podcast:chapters url="https://feeds.buzzsprout.com/2062493/11793032/chapters.json" type="application/json"/>
      <itunes:duration>1104</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>"Notes on effective altruism" by Michael Nielsen</itunes:title>
      <title>"Notes on effective altruism" by Michael Nielsen</title>
      <description>&lt;p&gt;Long and rough notes on Effective Altruism (EA). Written to help me get to the bottom of several questions: what do I like and think is important about EA? Why do I find the mindset so foreign? Why am I not an EA? And to start me thinking about: what do alternatives to EA look like? The notes are not aimed at effective altruists, though they may perhaps be of interest to EA-adjacent people. Thoughtful, informed comments and corrections welcome (especially detailed, specific corrections!) - see the comment area at the bottom.&lt;br/&gt;&lt;br/&gt;&amp;quot;Using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis&amp;quot;: that&amp;apos;s the idea at the foundation of the Effective Altruism (EA) ideology and movement. Over the past two decades it has gone from being an idea batted about by a few moral philosophers to being a core part of the life philosophy of thousands or tens of thousands of people, including several of the world&amp;apos;s most powerful and wealthy individuals. These are my rough working notes on EA. 
The notes are long and quickly written: disorganized rough thinking, not a polished essay.&lt;br/&gt;&lt;br/&gt;&lt;strong&gt;Original article:&lt;/strong&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/JBAPssaYMMRfNqYt7/michael-nielsen-s-notes-on-effective-altruism'&gt;https://michaelnotebook.com/eanotes/&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;br/&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;Long and rough notes on Effective Altruism (EA). Written to help me get to the bottom of several questions: what do I like and think is important about EA? Why do I find the mindset so foreign? Why am I not an EA? And to start me thinking about: what do alternatives to EA look like? The notes are not aimed at effective altruists, though they may perhaps be of interest to EA-adjacent people. Thoughtful, informed comments and corrections welcome (especially detailed, specific corrections!) - see the comment area at the bottom.&lt;br/&gt;&lt;br/&gt;&amp;quot;Using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis&amp;quot;: that&amp;apos;s the idea at the foundation of the Effective Altruism (EA) ideology and movement. Over the past two decades it has gone from being an idea batted about by a few moral philosophers to being a core part of the life philosophy of thousands or tens of thousands of people, including several of the world&amp;apos;s most powerful and wealthy individuals. These are my rough working notes on EA. 
The notes are long and quickly written: disorganized rough thinking, not a polished essay.&lt;br/&gt;&lt;br/&gt;&lt;strong&gt;Original article:&lt;/strong&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/JBAPssaYMMRfNqYt7/michael-nielsen-s-notes-on-effective-altruism'&gt;https://michaelnotebook.com/eanotes/&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;br/&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author/>
      <enclosure url="https://www.buzzsprout.com/2062493/11668244-notes-on-effective-altruism-by-michael-nielsen.mp3" length="40314101" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-11668244</guid>
      <pubDate>Wed, 30 Nov 2022 06:00:00 +0000</pubDate>
      <podcast:chapters url="https://feeds.buzzsprout.com/2062493/11668244/chapters.json" type="application/json"/>
      <itunes:duration>3356</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>"Population ethics without axiology: A framework" by Lukas_Gloor</itunes:title>
      <title>"Population ethics without axiology: A framework" by Lukas_Gloor</title>
      <description>&lt;p&gt;This post introduces a framework for thinking about population ethics: “population ethics without axiology.” In its last section, I sketch the implications of adopting my framework for evaluating the thesis of longtermism. Before explaining what’s different about my proposal, I’ll describe what I understand to be the standard approach it seeks to replace, which I call “axiology-focused.”&lt;br/&gt;&lt;br/&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/dQvDxDMyueLyydHw4/population-ethics-without-axiology-a-framework'&gt;https://forum.effectivealtruism.org/posts/dQvDxDMyueLyydHw4/population-ethics-without-axiology-a-framework&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;This post introduces a framework for thinking about population ethics: “population ethics without axiology.” In its last section, I sketch the implications of adopting my framework for evaluating the thesis of longtermism. Before explaining what’s different about my proposal, I’ll describe what I understand to be the standard approach it seeks to replace, which I call “axiology-focused.”&lt;br/&gt;&lt;br/&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/dQvDxDMyueLyydHw4/population-ethics-without-axiology-a-framework'&gt;https://forum.effectivealtruism.org/posts/dQvDxDMyueLyydHw4/population-ethics-without-axiology-a-framework&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author/>
      <enclosure url="https://www.buzzsprout.com/2062493/11622305-population-ethics-without-axiology-a-framework-by-lukas_gloor.mp3" length="53146950" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-11622305</guid>
      <pubDate>Wed, 30 Nov 2022 06:00:00 +0000</pubDate>
      <podcast:chapters url="https://feeds.buzzsprout.com/2062493/11622305/chapters.json" type="application/json"/>
      <itunes:duration>4425</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>"How bad could a war get?" by Stephen Clare &amp; Rani Martin</itunes:title>
      <title>"How bad could a war get?" by Stephen Clare &amp; Rani Martin</title>
      <description>&lt;p&gt;In “How Likely is World War III?”, Stephen suggested the chance of an extinction-level war occurring sometime this century is just under 1%. This was a simple, rough estimate, made in the following steps:&lt;br/&gt;&lt;br/&gt;&lt;/p&gt;&lt;ol&gt;&lt;li&gt;Assume that wars, i.e. conflicts that cause at least 1,000 battle deaths, continue to break out at their historical average rate of about one every two years. &lt;/li&gt;&lt;li&gt;Assume that the distribution of battle deaths in wars follows a power law. &lt;/li&gt;&lt;li&gt;Use parameters for the power law distribution estimated by Bear Braumoeller in Only the Dead to calculate the chance that any given war escalates to 8 billion battle deaths.&lt;/li&gt;&lt;li&gt;Work out the likelihood of such a war given the expected number of wars between now and 2100.&lt;/li&gt;&lt;/ol&gt;&lt;div&gt;Not everybody was convinced. I (Stephen) have to admit that some skepticism is justified. An extinction-level war would be 30-to-100 times larger than World War II, the most severe war humanity has experienced so far. Is it reasonable to just assume number go up? Would the same escalatory dynamics that shape smaller wars apply at this scale? 
&lt;br/&gt;&lt;br/&gt;&lt;strong&gt;Original article:&lt;/strong&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/PyZCqLrDTJrQofEf7/how-bad-could-a-war-get'&gt;https://forum.effectivealtruism.org/posts/PyZCqLrDTJrQofEf7/how-bad-could-a-war-get&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;br/&gt;&lt;br/&gt;&lt;/div&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;In “How Likely is World War III?”, Stephen suggested the chance of an extinction-level war occurring sometime this century is just under 1%. This was a simple, rough estimate, made in the following steps:&lt;br/&gt;&lt;br/&gt;&lt;/p&gt;&lt;ol&gt;&lt;li&gt;Assume that wars, i.e. conflicts that cause at least 1,000 battle deaths, continue to break out at their historical average rate of about one every two years. &lt;/li&gt;&lt;li&gt;Assume that the distribution of battle deaths in wars follows a power law. &lt;/li&gt;&lt;li&gt;Use parameters for the power law distribution estimated by Bear Braumoeller in Only the Dead to calculate the chance that any given war escalates to 8 billion battle deaths.&lt;/li&gt;&lt;li&gt;Work out the likelihood of such a war given the expected number of wars between now and 2100.&lt;/li&gt;&lt;/ol&gt;&lt;div&gt;Not everybody was convinced. I (Stephen) have to admit that some skepticism is justified. An extinction-level war would be 30-to-100 times larger than World War II, the most severe war humanity has experienced so far. Is it reasonable to just assume number go up? Would the same escalatory dynamics that shape smaller wars apply at this scale? 
&lt;br/&gt;&lt;br/&gt;&lt;strong&gt;Original article:&lt;/strong&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/PyZCqLrDTJrQofEf7/how-bad-could-a-war-get'&gt;https://forum.effectivealtruism.org/posts/PyZCqLrDTJrQofEf7/how-bad-could-a-war-get&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;br/&gt;&lt;br/&gt;&lt;/div&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author>Peter</itunes:author>
      <enclosure url="https://www.buzzsprout.com/2062493/11712009-how-bad-could-a-war-get-by-stephen-clare-rani-martin.mp3" length="19078864" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-11712009</guid>
      <pubDate>Wed, 30 Nov 2022 05:00:00 +0000</pubDate>
      <podcast:chapters url="https://feeds.buzzsprout.com/2062493/11712009/chapters.json" type="application/json"/>
      <itunes:duration>1586</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>"Are you really in a race? The cautionary tales of Szilárd and Ellsberg" by Haydn Belfield</itunes:title>
      <title>"Are you really in a race? The cautionary tales of Szilárd and Ellsberg" by Haydn Belfield</title>
      <description>&lt;p&gt;In both the 1940s and 1950s, well-meaning and good people – the brightest of their generation – were convinced they were in an existential race with an expansionary, totalitarian regime. Because of this belief, they advocated for and participated in a ‘sprint’ race: the Manhattan Project to develop a US atomic bomb (1939-1945); and the ‘missile gap’ project to build up a US ICBM capability (1957-1962). Both were based on a mistake, however: the Nazis decided against a Manhattan Project in 1942, and the Soviets decided against an ICBM build-up in 1958. The main consequence of both was to unilaterally speed up dangerous developments and increase existential risk. Key participants, such as Albert Einstein and Daniel Ellsberg, described their involvement as the greatest mistake of their life.&lt;br/&gt;&lt;br/&gt;Our current situation with AGI shares certain striking similarities, and certain lessons suggest themselves: make sure you’re actually in a race (information on whether you are is very valuable), be careful when secrecy is emphasised, and don’t give up your power as an expert too easily.&lt;br/&gt;&lt;br/&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/cXBznkfoPJAjacFoT/are-you-really-in-a-race-the-cautionary-tales-of-szilard-and'&gt;https://forum.effectivealtruism.org/posts/cXBznkfoPJAjacFoT/are-you-really-in-a-race-the-cautionary-tales-of-szilard-and&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;br/&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;In both the 1940s and 1950s, well-meaning and good people – the brightest of their generation – were convinced they were in an existential race with an expansionary, totalitarian regime. Because of this belief, they advocated for and participated in a ‘sprint’ race: the Manhattan Project to develop a US atomic bomb (1939-1945); and the ‘missile gap’ project to build up a US ICBM capability (1957-1962). Both were based on a mistake, however: the Nazis decided against a Manhattan Project in 1942, and the Soviets decided against an ICBM build-up in 1958. The main consequence of both was to unilaterally speed up dangerous developments and increase existential risk. Key participants, such as Albert Einstein and Daniel Ellsberg, described their involvement as the greatest mistake of their life.&lt;br/&gt;&lt;br/&gt;Our current situation with AGI shares certain striking similarities, and certain lessons suggest themselves: make sure you’re actually in a race (information on whether you are is very valuable), be careful when secrecy is emphasised, and don’t give up your power as an expert too easily.&lt;br/&gt;&lt;br/&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/cXBznkfoPJAjacFoT/are-you-really-in-a-race-the-cautionary-tales-of-szilard-and'&gt;https://forum.effectivealtruism.org/posts/cXBznkfoPJAjacFoT/are-you-really-in-a-race-the-cautionary-tales-of-szilard-and&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;br/&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author>Peter</itunes:author>
      <enclosure url="https://www.buzzsprout.com/2062493/11644157-are-you-really-in-a-race-the-cautionary-tales-of-szilard-and-ellsberg-by-haydn-belfield.mp3" length="28753121" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-11644157</guid>
      <pubDate>Wed, 30 Nov 2022 05:00:00 +0000</pubDate>
      <podcast:chapters url="https://feeds.buzzsprout.com/2062493/11644157/chapters.json" type="application/json"/>
      <itunes:duration>2393</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>"My take on What We Owe the Future" by Eli Lifland</itunes:title>
      <title>"My take on What We Owe the Future" by Eli Lifland</title>
      <description>&lt;p&gt;What We Owe The Future (WWOTF) by Will MacAskill has recently been released with much fanfare. While I strongly agree that future people matter morally and we should act based on this, I think the book isn’t clear enough about MacAskill’s views on longtermist priorities, and, to the extent it is, it presents a mistaken view of the most promising longtermist interventions.&lt;br/&gt;&lt;br/&gt;I argue that MacAskill:&lt;br/&gt;&lt;br/&gt;&lt;/p&gt;&lt;ol&gt;&lt;li&gt;Underestimates risk of misaligned AI takeover. &lt;/li&gt;&lt;li&gt;Overestimates risk from stagnation. &lt;/li&gt;&lt;li&gt;Isn’t clear enough about longtermist priorities. &lt;/li&gt;&lt;/ol&gt;&lt;div&gt;I highlight and expand on these disagreements in part to contribute to the debate on these topics, but also to make a practical recommendation.&lt;br/&gt;&lt;br/&gt;&lt;strong&gt;Original article:&lt;/strong&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/9Y6Y6qoAigRC7A8eX/my-take-on-what-we-owe-the-future'&gt;https://forum.effectivealtruism.org/posts/9Y6Y6qoAigRC7A8eX/my-take-on-what-we-owe-the-future&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;br/&gt;&lt;br/&gt;&lt;/div&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;What We Owe The Future (WWOTF) by Will MacAskill has recently been released with much fanfare. While I strongly agree that future people matter morally and we should act based on this, I think the book isn’t clear enough about MacAskill’s views on longtermist priorities, and, to the extent it is, it presents a mistaken view of the most promising longtermist interventions.&lt;br/&gt;&lt;br/&gt;I argue that MacAskill:&lt;br/&gt;&lt;br/&gt;&lt;/p&gt;&lt;ol&gt;&lt;li&gt;Underestimates risk of misaligned AI takeover. &lt;/li&gt;&lt;li&gt;Overestimates risk from stagnation. &lt;/li&gt;&lt;li&gt;Isn’t clear enough about longtermist priorities. &lt;/li&gt;&lt;/ol&gt;&lt;div&gt;I highlight and expand on these disagreements in part to contribute to the debate on these topics, but also to make a practical recommendation.&lt;br/&gt;&lt;br/&gt;&lt;strong&gt;Original article:&lt;/strong&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/9Y6Y6qoAigRC7A8eX/my-take-on-what-we-owe-the-future'&gt;https://forum.effectivealtruism.org/posts/9Y6Y6qoAigRC7A8eX/my-take-on-what-we-owe-the-future&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;br/&gt;&lt;br/&gt;&lt;/div&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author>Peter</itunes:author>
      <enclosure url="https://www.buzzsprout.com/2062493/11734472-my-take-on-what-we-owe-the-future-by-eli-lifland.mp3" length="50315250" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-11734472</guid>
      <pubDate>Tue, 29 Nov 2022 06:00:00 +0000</pubDate>
      <podcast:chapters url="https://feeds.buzzsprout.com/2062493/11734472/chapters.json" type="application/json"/>
      <itunes:duration>4189</itunes:duration>
      <itunes:keywords>curated</itunes:keywords>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>"Effective altruism in the garden of ends" by tyleralterman</itunes:title>
      <title>"Effective altruism in the garden of ends" by tyleralterman</title>
      <description>&lt;p&gt;This essay is a reconciliation of moral commitment and the good life. Here is its essence in two paragraphs:&lt;br/&gt;&lt;br/&gt;Totalized by an ought, I sought its source outside myself. I found nothing. The ought came from me, an internal whip toward a thing which, confusingly, I already wanted – to see others flourish. I dropped the whip. My want now rested, commensurate, amidst others of its kind – terminal wants for ends-in-themselves: loving, dancing, and the other spiritual requirements of my particular life. To say that these were lesser seemed to say, “It is more vital and urgent to eat well than to drink or sleep well.” No – I will eat, sleep, and drink well to feel alive; so too will I love and dance as well as help.&lt;br/&gt;&lt;br/&gt;Once, the material requirements of life were in competition: If we spent time building shelter it might jeopardize daylight that could have been spent hunting. We built communities to take the material requirements of life out of competition. For many of us, the task remains to do the same for our spirits. Particularly so for those working outside of organized religion on huge, consuming causes. 
I suggest such a community might practice something like “fractal altruism,” taking the good life at the scale of its individuals out of competition with impact at the scale of the world.&lt;br/&gt;&lt;br/&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/AjxqsDmhGiW9g8ju6/effective-altruism-in-the-garden-of-ends'&gt;https://forum.effectivealtruism.org/posts/AjxqsDmhGiW9g8ju6/effective-altruism-in-the-garden-of-ends&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;This essay is a reconciliation of moral commitment and the good life. Here is its essence in two paragraphs:&lt;br/&gt;&lt;br/&gt;Totalized by an ought, I sought its source outside myself. I found nothing. The ought came from me, an internal whip toward a thing which, confusingly, I already wanted – to see others flourish. I dropped the whip. My want now rested, commensurate, amidst others of its kind – terminal wants for ends-in-themselves: loving, dancing, and the other spiritual requirements of my particular life. To say that these were lesser seemed to say, “It is more vital and urgent to eat well than to drink or sleep well.” No – I will eat, sleep, and drink well to feel alive; so too will I love and dance as well as help.&lt;br/&gt;&lt;br/&gt;Once, the material requirements of life were in competition: If we spent time building shelter it might jeopardize daylight that could have been spent hunting. We built communities to take the material requirements of life out of competition. For many of us, the task remains to do the same for our spirits. Particularly so for those working outside of organized religion on huge, consuming causes. 
I suggest such a community might practice something like “fractal altruism,” taking the good life at the scale of its individuals out of competition with impact at the scale of the world.&lt;br/&gt;&lt;br/&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/AjxqsDmhGiW9g8ju6/effective-altruism-in-the-garden-of-ends'&gt;https://forum.effectivealtruism.org/posts/AjxqsDmhGiW9g8ju6/effective-altruism-in-the-garden-of-ends&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author/>
      <enclosure url="https://www.buzzsprout.com/2062493/11622061-effective-altruism-in-the-garden-of-ends-by-tyleralterman.mp3" length="44638440" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-11622061</guid>
      <pubDate>Tue, 29 Nov 2022 04:00:00 +0000</pubDate>
      <podcast:chapters url="https://feeds.buzzsprout.com/2062493/11622061/chapters.json" type="application/json"/>
      <itunes:duration>3716</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>"Measuring Good Better" by Michael Plant, GiveWell, Jason Schukraft, Matt Lerner, and Innovations for Poverty Action</itunes:title>
      <title>"Measuring Good Better" by Michael Plant, GiveWell, Jason Schukraft, Matt Lerner, and Innovations for Poverty Action</title>
      <description>&lt;div&gt;&lt;strong&gt;Excerpt:&lt;/strong&gt;&lt;br/&gt;At EA Global: San Francisco 2022, the following organisations held a joint session to discuss their different approaches to measuring ‘good’: &lt;br/&gt;&lt;br/&gt;&lt;/div&gt;&lt;ul&gt;&lt;li&gt;GiveWell&lt;/li&gt;&lt;li&gt;Open Philanthropy&lt;/li&gt;&lt;li&gt;Happier Lives Institute&lt;/li&gt;&lt;li&gt;Founders Pledge&lt;/li&gt;&lt;li&gt;Innovations for Poverty Action&lt;/li&gt;&lt;/ul&gt;&lt;div&gt;A representative from each organisation gave a five-minute lightning talk summarising their approach before the audience broke out into table discussions. &lt;br/&gt;&lt;br/&gt;&lt;br/&gt;&lt;/div&gt;&lt;div&gt;&lt;strong&gt;Original article:&lt;/strong&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/8whqn2GrJfvTjhov6/measuring-good-better-1'&gt;https://forum.effectivealtruism.org/posts/8whqn2GrJfvTjhov6/measuring-good-better-1&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Edited for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;br/&gt;&lt;br/&gt;&lt;br/&gt;&lt;/div&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;div&gt;&lt;strong&gt;Excerpt:&lt;/strong&gt;&lt;br/&gt;At EA Global: San Francisco 2022, the following organisations held a joint session to discuss their different approaches to measuring ‘good’: &lt;br/&gt;&lt;br/&gt;&lt;/div&gt;&lt;ul&gt;&lt;li&gt;GiveWell&lt;/li&gt;&lt;li&gt;Open Philanthropy&lt;/li&gt;&lt;li&gt;Happier Lives Institute&lt;/li&gt;&lt;li&gt;Founders Pledge&lt;/li&gt;&lt;li&gt;Innovations for Poverty Action&lt;/li&gt;&lt;/ul&gt;&lt;div&gt;A representative from each organisation gave a five-minute lightning talk summarising their approach before the audience broke out into table discussions. &lt;br/&gt;&lt;br/&gt;&lt;br/&gt;&lt;/div&gt;&lt;div&gt;&lt;strong&gt;Original article:&lt;/strong&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/8whqn2GrJfvTjhov6/measuring-good-better-1'&gt;https://forum.effectivealtruism.org/posts/8whqn2GrJfvTjhov6/measuring-good-better-1&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Edited for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;br/&gt;&lt;br/&gt;&lt;br/&gt;&lt;/div&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author>Peter</itunes:author>
      <enclosure url="https://www.buzzsprout.com/2062493/11788655-measuring-good-better-by-michael-plant-givewell-jason-schukraft-matt-lerner-and-innovations-for-poverty-action.mp3" length="20682894" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-11788655</guid>
      <pubDate>Mon, 28 Nov 2022 15:00:00 +0000</pubDate>
      <itunes:duration>1720</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>"AGI and lock-in" by Lukas Finnveden, Jess Riedel, &amp; Carl Shulman</itunes:title>
      <title>"AGI and lock-in" by Lukas Finnveden, Jess Riedel, &amp; Carl Shulman</title>
      <description>&lt;p&gt;The long-term future of intelligent life is currently unpredictable and undetermined. In the linked document, we argue that the invention of artificial general intelligence (AGI) could change this by making extreme types of lock-in technologically feasible. In particular, we argue that AGI would make it technologically feasible to (i) perfectly preserve nuanced specifications of a wide variety of values or goals far into the future, and (ii) develop AGI-based institutions that would (with high probability) competently pursue any such values for at least millions, and plausibly trillions, of years.&lt;/p&gt;&lt;p&gt;The rest of this post contains the summary (6 pages), with links to relevant sections of &lt;a href='https://docs.google.com/document/d/1mkLFhxixWdT5peJHq4rfFzq4QbHyfZtANH1nou68q88/edit#'&gt;the main document&lt;/a&gt; (40 pages) for readers who want more details.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/KqCybin8rtfP3qztq/agi-and-lock-in'&gt;https://forum.effectivealtruism.org/posts/KqCybin8rtfP3qztq/agi-and-lock-in&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;The long-term future of intelligent life is currently unpredictable and undetermined. In the linked document, we argue that the invention of artificial general intelligence (AGI) could change this by making extreme types of lock-in technologically feasible. In particular, we argue that AGI would make it technologically feasible to (i) perfectly preserve nuanced specifications of a wide variety of values or goals far into the future, and (ii) develop AGI-based institutions that would (with high probability) competently pursue any such values for at least millions, and plausibly trillions, of years.&lt;/p&gt;&lt;p&gt;The rest of this post contains the summary (6 pages), with links to relevant sections of &lt;a href='https://docs.google.com/document/d/1mkLFhxixWdT5peJHq4rfFzq4QbHyfZtANH1nou68q88/edit#'&gt;the main document&lt;/a&gt; (40 pages) for readers who want more details.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/KqCybin8rtfP3qztq/agi-and-lock-in'&gt;https://forum.effectivealtruism.org/posts/KqCybin8rtfP3qztq/agi-and-lock-in&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author/>
      <enclosure url="https://www.buzzsprout.com/2062493/11621743-agi-and-lock-in-by-lukas-finnveden-jess-riedel-carl-shulman.mp3" length="17080990" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-11621743</guid>
      <pubDate>Mon, 28 Nov 2022 03:00:00 +0000</pubDate>
      <podcast:chapters url="https://feeds.buzzsprout.com/2062493/11621743/chapters.json" type="application/json"/>
      <itunes:duration>1420</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>"Counterarguments to the basic AI risk case" by Katja_Grace</itunes:title>
      <title>"Counterarguments to the basic AI risk case" by Katja_Grace</title>
      <description>&lt;p&gt;This is going to be a list of holes I see in the basic argument for existential risk from superhuman AI systems. &lt;br/&gt;&lt;br/&gt;To start, here’s an outline of what I take to be the basic case:&lt;br/&gt;I. If superhuman AI systems are built, any given system is likely to be ‘goal-directed’&lt;br/&gt;II. If goal-directed superhuman AI systems are built, their desired outcomes will probably be about as bad as an empty universe by human lights&lt;br/&gt;III. If most goal-directed superhuman AI systems have bad goals, the future will very likely be bad &lt;br/&gt;&lt;br/&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/zoWypGfXLmYsDFivk/counterarguments-to-the-basic-ai-risk-case'&gt;https://forum.effectivealtruism.org/posts/zoWypGfXLmYsDFivk/counterarguments-to-the-basic-ai-risk-case&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;This is going to be a list of holes I see in the basic argument for existential risk from superhuman AI systems. &lt;br/&gt;&lt;br/&gt;To start, here’s an outline of what I take to be the basic case:&lt;br/&gt;I. If superhuman AI systems are built, any given system is likely to be ‘goal-directed’&lt;br/&gt;II. If goal-directed superhuman AI systems are built, their desired outcomes will probably be about as bad as an empty universe by human lights&lt;br/&gt;III. If most goal-directed superhuman AI systems have bad goals, the future will very likely be bad &lt;br/&gt;&lt;br/&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/zoWypGfXLmYsDFivk/counterarguments-to-the-basic-ai-risk-case'&gt;https://forum.effectivealtruism.org/posts/zoWypGfXLmYsDFivk/counterarguments-to-the-basic-ai-risk-case&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author>Peter</itunes:author>
      <enclosure url="https://www.buzzsprout.com/2062493/11620448-counterarguments-to-the-basic-ai-risk-case-by-katja_grace.mp3" length="54071116" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-11620448</guid>
      <pubDate>Sun, 27 Nov 2022 23:00:00 +0000</pubDate>
      <podcast:chapters url="https://feeds.buzzsprout.com/2062493/11620448/chapters.json" type="application/json"/>
      <itunes:duration>4502</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>"Does economic growth meaningfully improve well-being? An optimistic re-analysis of Easterlin’s research: Founders Pledge" by Vadim Albinsky</itunes:title>
      <title>"Does economic growth meaningfully improve well-being? An optimistic re-analysis of Easterlin’s research: Founders Pledge" by Vadim Albinsky</title>
      <description>&lt;p&gt;Understanding the relationship between wellbeing and economic growth is a topic that is of key importance to Effective Altruism (e.g. see Hillebrandt and Hallstead, Clare and Goth). In particular, a key disagreement regards the Easterlin Paradox; the finding that happiness varies with income across countries and between individuals, but does not seem to vary significantly with a country’s income as it changes over time. Michael Plant recently wrote an excellent post summarizing this research. He ends up mostly agreeing with Richard Easterlin’s latest paper arguing that the Easterlin Paradox still holds; suggesting that we should look to approaches other than economic growth to boost happiness. I agree with Michael Plant that life satisfaction is a valid and reliable measure, that it should be a key goal of policy and philanthropy, and that boosting income does not increase it as much as we might naively expect. In fact, we at Founders Pledge highly value and regularly use Michael Plant’s and Happier Lives Institute’s (HLI) research; and we believe income is only a small part of what interventions should aim at. 
However, my interpretation of the practical implications of Easterlin’s research differs from Easterlin’s in three ways, which I argue in this post.&lt;br/&gt;&lt;br/&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/coryFCkmcMKdJb7Pz/does-economic-growth-meaningfully-improve-well-being-an'&gt;https://forum.effectivealtruism.org/posts/coryFCkmcMKdJb7Pz/does-economic-growth-meaningfully-improve-well-being-an&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;Understanding the relationship between wellbeing and economic growth is a topic that is of key importance to Effective Altruism (e.g. see Hillebrandt and Hallstead, Clare and Goth). In particular, a key disagreement regards the Easterlin Paradox; the finding that happiness varies with income across countries and between individuals, but does not seem to vary significantly with a country’s income as it changes over time. Michael Plant recently wrote an excellent post summarizing this research. He ends up mostly agreeing with Richard Easterlin’s latest paper arguing that the Easterlin Paradox still holds; suggesting that we should look to approaches other than economic growth to boost happiness. I agree with Michael Plant that life satisfaction is a valid and reliable measure, that it should be a key goal of policy and philanthropy, and that boosting income does not increase it as much as we might naively expect. In fact, we at Founders Pledge highly value and regularly use Michael Plant’s and Happier Lives Institute’s (HLI) research; and we believe income is only a small part of what interventions should aim at. 
However, my interpretation of the practical implications of Easterlin’s research differs from Easterlin’s in three ways, which I argue in this post.&lt;br/&gt;&lt;br/&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/coryFCkmcMKdJb7Pz/does-economic-growth-meaningfully-improve-well-being-an'&gt;https://forum.effectivealtruism.org/posts/coryFCkmcMKdJb7Pz/does-economic-growth-meaningfully-improve-well-being-an&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author>Peter</itunes:author>
      <enclosure url="https://www.buzzsprout.com/2062493/11578692-does-economic-growth-meaningfully-improve-well-being-an-optimistic-re-analysis-of-easterlin-s-research-founders-pledge-by-vadim-albinsky.mp3" length="20201873" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-11578692</guid>
      <pubDate>Sat, 26 Nov 2022 11:00:00 +0000</pubDate>
      <podcast:chapters url="https://feeds.buzzsprout.com/2062493/11578692/chapters.json" type="application/json"/>
      <itunes:duration>1680</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>"What happens on the average day?" by rosehadshar</itunes:title>
      <title>"What happens on the average day?" by rosehadshar</title>
      <description>&lt;p&gt;I want to know what’s going on in the world. I’m a human; I’m interested in what other humans are up to; I value them, care about their triumphs and mourn their deaths.&lt;br/&gt;But:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;There’s far too much going on for me to keep track of all of it&lt;/li&gt;&lt;li&gt;I think that some parts of what’s going on are likely far more important than others&lt;/li&gt;&lt;li&gt;I don’t think that regular news providers are picking the important bits to report on&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;I would really like there to be a scope-sensitive news provider making a good-faith attempt to report on the things which most matter in the world. But as far as I know, this doesn’t exist.&lt;br/&gt;&lt;br/&gt;In the absence of such a provider, I’ve spent a small amount of time trying to find out some basic context on what happens in the world on the average day. I think of this as a bit like a cheat sheet: some information to have in the back of my mind when reading whatever regular news stories are coming at me, to ground me in something that feels a bit closer to what’s actually going on.&lt;br/&gt;&lt;br/&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/rXYW9GPsmwZYu3doX/what-happens-on-the-average-day'&gt;https://forum.effectivealtruism.org/posts/rXYW9GPsmwZYu3doX/what-happens-on-the-average-day&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;I want to know what’s going on in the world. I’m a human; I’m interested in what other humans are up to; I value them, care about their triumphs and mourn their deaths.&lt;br/&gt;But:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;There’s far too much going on for me to keep track of all of it&lt;/li&gt;&lt;li&gt;I think that some parts of what’s going on are likely far more important than others&lt;/li&gt;&lt;li&gt;I don’t think that regular news providers are picking the important bits to report on&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;I would really like there to be a scope-sensitive news provider making a good-faith attempt to report on the things which most matter in the world. But as far as I know, this doesn’t exist.&lt;br/&gt;&lt;br/&gt;In the absence of such a provider, I’ve spent a small amount of time trying to find out some basic context on what happens in the world on the average day. I think of this as a bit like a cheat sheet: some information to have in the back of my mind when reading whatever regular news stories are coming at me, to ground me in something that feels a bit closer to what’s actually going on.&lt;br/&gt;&lt;br/&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/rXYW9GPsmwZYu3doX/what-happens-on-the-average-day'&gt;https://forum.effectivealtruism.org/posts/rXYW9GPsmwZYu3doX/what-happens-on-the-average-day&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author>Peter</itunes:author>
      <enclosure url="https://www.buzzsprout.com/2062493/11577926-what-happens-on-the-average-day-by-rosehadshar.mp3" length="16790496" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-11577926</guid>
      <pubDate>Fri, 25 Nov 2022 06:00:00 +0000</pubDate>
      <podcast:chapters url="https://feeds.buzzsprout.com/2062493/11577926/chapters.json" type="application/json"/>
      <itunes:duration>1396</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>“500 million, but not a single one more” by jai</itunes:title>
      <title>“500 million, but not a single one more” by jai</title>
      <description>&lt;p&gt;We will never know their names. The first victim could not have been recorded, for there was no written language to record it. They were someone’s daughter, or son, and someone’s friend, and they were loved by those around them. And they were in pain, covered in rashes, confused, scared, not knowing why this was happening to them or what they could do about it — victims of a mad, inhuman god. There was nothing to be done — humanity was not strong enough, not aware enough, not knowledgeable enough, to fight back against a monster that could not be seen.&lt;br/&gt;&lt;br/&gt;&lt;strong&gt;Original article:&lt;/strong&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/jk7A3NMdbxp65kcJJ/500-million-but-not-a-single-one-more'&gt;https://forum.effectivealtruism.org/posts/jk7A3NMdbxp65kcJJ/500-million-but-not-a-single-one-more&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;br/&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;We will never know their names. The first victim could not have been recorded, for there was no written language to record it. They were someone’s daughter, or son, and someone’s friend, and they were loved by those around them. And they were in pain, covered in rashes, confused, scared, not knowing why this was happening to them or what they could do about it — victims of a mad, inhuman god. There was nothing to be done — humanity was not strong enough, not aware enough, not knowledgeable enough, to fight back against a monster that could not be seen.&lt;br/&gt;&lt;br/&gt;&lt;strong&gt;Original article:&lt;/strong&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/jk7A3NMdbxp65kcJJ/500-million-but-not-a-single-one-more'&gt;https://forum.effectivealtruism.org/posts/jk7A3NMdbxp65kcJJ/500-million-but-not-a-single-one-more&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;br/&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author>Peter</itunes:author>
      <enclosure url="https://www.buzzsprout.com/2062493/11711808-500-million-but-not-a-single-one-more-by-jai.mp3" length="4248258" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-11711808</guid>
      <pubDate>Fri, 25 Nov 2022 04:00:00 +0000</pubDate>
      <itunes:duration>351</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>"Biological Anchors external review" by Jennifer Lin</itunes:title>
      <title>"Biological Anchors external review" by Jennifer Lin</title>
      <description>&lt;p&gt;In this note I’ll summarize the bio-anchors report, describe my initial reactions to it, and take a closer look at two disagreements that I have with background assumptions used by (readers of) the report. &lt;br/&gt;&lt;br/&gt;This report attempts to forecast the year when the amount of compute required to train a transformative AI (TAI) model will first become available, as the year when a forecast for the amount of compute required to train TAI in a given year will intersect a forecast for the amount of compute that will be available for a training run of a single project in a given year.&lt;br/&gt;&lt;br/&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://docs.google.com/document/d/1_GqOrCo29qKly1z48-mR86IV7TUDfzaEXxD3lGFQ8Wk/edit#'&gt;https://docs.google.com/document/d/1_GqOrCo29qKly1z48-mR86IV7TUDfzaEXxD3lGFQ8Wk/edit#&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;br/&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;In this note I’ll summarize the bio-anchors report, describe my initial reactions to it, and take a closer look at two disagreements that I have with background assumptions used by (readers of) the report. &lt;br/&gt;&lt;br/&gt;This report attempts to forecast the year when the amount of compute required to train a transformative AI (TAI) model will first become available, as the year when a forecast for the amount of compute required to train TAI in a given year will intersect a forecast for the amount of compute that will be available for a training run of a single project in a given year.&lt;br/&gt;&lt;br/&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://docs.google.com/document/d/1_GqOrCo29qKly1z48-mR86IV7TUDfzaEXxD3lGFQ8Wk/edit#'&gt;https://docs.google.com/document/d/1_GqOrCo29qKly1z48-mR86IV7TUDfzaEXxD3lGFQ8Wk/edit#&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;br/&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author/>
      <enclosure url="https://www.buzzsprout.com/2062493/11672648-biological-anchors-external-review-by-jennifer-lin.mp3" length="38162167" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-11672648</guid>
      <pubDate>Thu, 24 Nov 2022 22:00:00 +0000</pubDate>
      <podcast:chapters url="https://feeds.buzzsprout.com/2062493/11672648/chapters.json" type="application/json"/>
      <itunes:duration>3177</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>"Case for emergency response teams" by Gavin, Jan_Kulveit</itunes:title>
      <title>"Case for emergency response teams" by Gavin, Jan_Kulveit</title>
      <description>&lt;p&gt;So far, long-termist efforts to change the trajectory of the world focus on far-off events. This is on the assumption that we foresee some important problem and influence its outcome by working on the problem for longer. We thus start working on it sooner than others, we lay the groundwork for future research, we raise awareness, and so on. &lt;br/&gt;&lt;br/&gt;Many longtermists propose that we now live at the “hinge of history”, usually understood on the timescale of critical centuries, or critical decades. But “hinginess” is likely not constant: some short periods will be significantly more eventful than others. It is also possible that these periods will present even more leveraged opportunities for changing the world’s trajectory.&lt;br/&gt;&lt;br/&gt;These “maximally hingey” moments might be best influenced by sustained efforts long before them (as described above). But it seems plausible that in many cases, the best realistic chance to influence them is “while they are happening”, via a concentrated effort at that moment.&lt;br/&gt;&lt;br/&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/sgcxDwyD2KL6BHH2C/case-for-emergency-response-teams'&gt;https://forum.effectivealtruism.org/posts/sgcxDwyD2KL6BHH2C/case-for-emergency-response-teams&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;So far, long-termist efforts to change the trajectory of the world focus on far-off events. This is on the assumption that we foresee some important problem and influence its outcome by working on the problem for longer. We thus start working on it sooner than others, we lay the groundwork for future research, we raise awareness, and so on. &lt;br/&gt;&lt;br/&gt;Many longtermists propose that we now live at the “hinge of history”, usually understood on the timescale of critical centuries, or critical decades. But “hinginess” is likely not constant: some short periods will be significantly more eventful than others. It is also possible that these periods will present even more leveraged opportunities for changing the world’s trajectory.&lt;br/&gt;&lt;br/&gt;These “maximally hingey” moments might be best influenced by sustained efforts long before them (as described above). But it seems plausible that in many cases, the best realistic chance to influence them is “while they are happening”, via a concentrated effort at that moment.&lt;br/&gt;&lt;br/&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/sgcxDwyD2KL6BHH2C/case-for-emergency-response-teams'&gt;https://forum.effectivealtruism.org/posts/sgcxDwyD2KL6BHH2C/case-for-emergency-response-teams&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author>Peter</itunes:author>
      <enclosure url="https://www.buzzsprout.com/2062493/11577911-case-for-emergency-response-teams-by-gavin-jan_kulveit.mp3" length="10133376" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-11577911</guid>
      <pubDate>Thu, 24 Nov 2022 06:00:00 +0000</pubDate>
      <podcast:chapters url="https://feeds.buzzsprout.com/2062493/11577911/chapters.json" type="application/json"/>
      <itunes:duration>841</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>"What matters to shrimps? Factors affecting shrimp welfare in aquaculture" by Lucas Lewit-Mendes &amp; Aaron Boddy</itunes:title>
      <title>"What matters to shrimps? Factors affecting shrimp welfare in aquaculture" by Lucas Lewit-Mendes &amp; Aaron Boddy</title>
      <description>&lt;p&gt;&lt;a href='https://www.shrimpwelfareproject.org/'&gt;Shrimp Welfare Project&lt;/a&gt; produced this report to guide our decision making on funding for further research into shrimp welfare and on which interventions to allocate our resources. We are cross-posting this on the forum because we think it may be useful to share the complexity of understanding the needs of beneficiaries who cannot communicate with us. We also hope it will be useful for other organisations working on shrimp welfare, and it’s also hopefully an interesting read!&lt;br/&gt;&lt;br/&gt;&lt;strong&gt;Original article:&lt;/strong&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/nGrmemHzQvBpnXkNX/what-matters-to-shrimps-factors-affecting-shrimp-welfare-in'&gt;https://forum.effectivealtruism.org/posts/nGrmemHzQvBpnXkNX/what-matters-to-shrimps-factors-affecting-shrimp-welfare-in&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;br/&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;&lt;a href='https://www.shrimpwelfareproject.org/'&gt;Shrimp Welfare Project&lt;/a&gt; produced this report to guide our decision making on funding for further research into shrimp welfare and on which interventions to allocate our resources. We are cross-posting this on the forum because we think it may be useful to share the complexity of understanding the needs of beneficiaries who cannot communicate with us. We also hope it will be useful for other organisations working on shrimp welfare, and it’s also hopefully an interesting read!&lt;br/&gt;&lt;br/&gt;&lt;strong&gt;Original article:&lt;/strong&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/nGrmemHzQvBpnXkNX/what-matters-to-shrimps-factors-affecting-shrimp-welfare-in'&gt;https://forum.effectivealtruism.org/posts/nGrmemHzQvBpnXkNX/what-matters-to-shrimps-factors-affecting-shrimp-welfare-in&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;Narrated for the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt; by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt;.&lt;br/&gt;&lt;br/&gt;&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author>Peter</itunes:author>
      <enclosure url="https://www.buzzsprout.com/2062493/11668143-what-matters-to-shrimps-factors-affecting-shrimp-welfare-in-aquaculture-by-lucas-lewit-mendes-aaron-boddy.mp3" length="45408015" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-11668143</guid>
      <pubDate>Wed, 23 Nov 2022 06:00:00 +0000</pubDate>
      <podcast:chapters url="https://feeds.buzzsprout.com/2062493/11668143/chapters.json" type="application/json"/>
      <itunes:duration>3780</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>EA Forum Weekly Summaries – Episode 8 (Nov. 7 - 13, 2022)</itunes:title>
      <title>EA Forum Weekly Summaries – Episode 8 (Nov. 7 - 13, 2022)</title>
      <description>&lt;p&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN/p/cwa5m5pJQh857GE7C'&gt;https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN/p/cwa5m5pJQh857GE7C&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;br/&gt;&lt;br/&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;).&lt;br/&gt;&lt;br/&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN/p/cwa5m5pJQh857GE7C'&gt;https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN/p/cwa5m5pJQh857GE7C&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;br/&gt;&lt;br/&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;).&lt;br/&gt;&lt;br/&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author>Peter</itunes:author>
      <enclosure url="https://www.buzzsprout.com/2062493/11796894-ea-forum-weekly-summaries-episode-8-nov-7-13-2022.mp3" length="20586533" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-11796894</guid>
      <pubDate>Sun, 13 Nov 2022 17:00:00 +0000</pubDate>
      <itunes:duration>1715</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>EA Forum Weekly Summaries – Episode 7 (Oct. 31 - Nov. 6, 2022)</itunes:title>
      <title>EA Forum Weekly Summaries – Episode 7 (Oct. 31 - Nov. 6, 2022)</title>
      <description>&lt;p&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN/p/tm3RMfxetLsmcwftQ'&gt;https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN/p/tm3RMfxetLsmcwftQ&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;br/&gt;&lt;br/&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;).&lt;br/&gt;&lt;br/&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN/p/tm3RMfxetLsmcwftQ'&gt;https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN/p/tm3RMfxetLsmcwftQ&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;br/&gt;&lt;br/&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;).&lt;br/&gt;&lt;br/&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author>TYPE III AUDIO</itunes:author>
      <enclosure url="https://www.buzzsprout.com/2062493/11802944-ea-forum-weekly-summaries-episode-7-oct-31-nov-6-2022.mp3" length="19150566" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-11802944</guid>
      <pubDate>Sun, 06 Nov 2022 17:00:00 +0000</pubDate>
      <itunes:duration>1592</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>EA Forum Weekly Summaries – Episode 6 (Oct. 24 - 30, 2022)</itunes:title>
      <title>EA Forum Weekly Summaries – Episode 6 (Oct. 24 - 30, 2022)</title>
      <description>&lt;p&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN/p/YxiXZcddn4kEqGdr9'&gt;https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN/p/YxiXZcddn4kEqGdr9&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;br/&gt;&lt;br/&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;).&lt;br/&gt;&lt;br/&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN/p/YxiXZcddn4kEqGdr9'&gt;https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN/p/YxiXZcddn4kEqGdr9&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;br/&gt;&lt;br/&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;).&lt;br/&gt;&lt;br/&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author>TYPE III AUDIO</itunes:author>
      <enclosure url="https://www.buzzsprout.com/2062493/11802937-ea-forum-weekly-summaries-episode-6-oct-24-30-2022.mp3" length="17846517" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-11802937</guid>
      <pubDate>Sun, 30 Oct 2022 16:00:00 +0000</pubDate>
      <itunes:duration>1486</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>EA Forum Weekly Summaries – Episode 5 (Oct. 17 - 23, 2022)</itunes:title>
      <title>EA Forum Weekly Summaries – Episode 5 (Oct. 17 - 23, 2022)</title>
      <description>&lt;p&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN/p/Hi5z6tm9d2keHALgv'&gt;https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN/p/Hi5z6tm9d2keHALgv&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;br/&gt;&lt;br/&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;).&lt;br/&gt;&lt;br/&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN/p/Hi5z6tm9d2keHALgv'&gt;https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN/p/Hi5z6tm9d2keHALgv&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;br/&gt;&lt;br/&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;).&lt;br/&gt;&lt;br/&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author>TYPE III AUDIO</itunes:author>
      <enclosure url="https://www.buzzsprout.com/2062493/11802930-ea-forum-weekly-summaries-episode-5-oct-17-23-2022.mp3" length="12142001" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-11802930</guid>
      <pubDate>Sun, 23 Oct 2022 17:00:00 +0100</pubDate>
      <itunes:duration>1011</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>EA Forum Weekly Summaries – Episode 4 (Oct. 10 - 16, 2022)</itunes:title>
      <title>EA Forum Weekly Summaries – Episode 4 (Oct. 10 - 16, 2022)</title>
      <description>&lt;p&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN/p/pmJRXG3cTgrt779Ep'&gt;https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN/p/pmJRXG3cTgrt779Ep&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;br/&gt;&lt;br/&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;).&lt;br/&gt;&lt;br/&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN/p/pmJRXG3cTgrt779Ep'&gt;https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN/p/pmJRXG3cTgrt779Ep&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;br/&gt;&lt;br/&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;).&lt;br/&gt;&lt;br/&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author>TYPE III AUDIO</itunes:author>
      <enclosure url="https://www.buzzsprout.com/2062493/11802881-ea-forum-weekly-summaries-episode-4-oct-10-16-2022.mp3" length="17418953" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-11802881</guid>
      <pubDate>Sun, 16 Oct 2022 17:00:00 +0100</pubDate>
      <itunes:duration>1448</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>EA Forum Weekly Summaries – Episode 3 (Sept. 26 - Oct. 9, 2022)</itunes:title>
      <title>EA Forum Weekly Summaries – Episode 3 (Sept. 26 - Oct. 9, 2022)</title>
      <description>&lt;p&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN/p/qFqNaLAkMdmwKNBbs'&gt;https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN/p/qFqNaLAkMdmwKNBbs&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;br/&gt;&lt;br/&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;).&lt;br/&gt;&lt;br/&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN/p/qFqNaLAkMdmwKNBbs'&gt;https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN/p/qFqNaLAkMdmwKNBbs&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;br/&gt;&lt;br/&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;).&lt;br/&gt;&lt;br/&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author>TYPE III AUDIO</itunes:author>
      <enclosure url="https://www.buzzsprout.com/2062493/11802859-ea-forum-weekly-summaries-episode-3-sept-26-oct-9-2022.mp3" length="15953799" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-11802859</guid>
      <pubDate>Sun, 09 Oct 2022 17:00:00 +0100</pubDate>
      <itunes:duration>1328</itunes:duration>
      <itunes:keywords/>
      <itunes:season>1</itunes:season>
      <itunes:episode>3</itunes:episode>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>EA Forum Weekly Summaries – Episode 2 (Sept. 19 - 25, 2022)</itunes:title>
      <title>EA Forum Weekly Summaries – Episode 2 (Sept. 19 - 25, 2022)</title>
      <description>&lt;p&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/tokGikSg3fSJun4Lw/ea-and-lw-forums-weekly-summary-19-25-sep-22'&gt;https://forum.effectivealtruism.org/posts/tokGikSg3fSJun4Lw/ea-and-lw-forums-weekly-summary-19-25-sep-22&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;br/&gt;&lt;br/&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;).&lt;br/&gt;&lt;br/&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/tokGikSg3fSJun4Lw/ea-and-lw-forums-weekly-summary-19-25-sep-22'&gt;https://forum.effectivealtruism.org/posts/tokGikSg3fSJun4Lw/ea-and-lw-forums-weekly-summary-19-25-sep-22&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;br/&gt;&lt;br/&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;).&lt;br/&gt;&lt;br/&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author>TYPE III AUDIO</itunes:author>
      <enclosure url="https://www.buzzsprout.com/2062493/11802841-ea-forum-weekly-summaries-episode-2-sept-19-25-2022.mp3" length="20090960" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-11802841</guid>
      <pubDate>Sun, 25 Sep 2022 17:00:00 +0100</pubDate>
      <itunes:duration>1673</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
    <item>
      <itunes:title>EA Forum Weekly Summaries – Introduction &amp; Episode 1 (Sept. 12 - 18, 2022)</itunes:title>
      <title>EA Forum Weekly Summaries – Introduction &amp; Episode 1 (Sept. 12 - 18, 2022)</title>
      <description>&lt;p&gt;&lt;b&gt;Note from Coleman Snell:&lt;br/&gt;&lt;/b&gt;Thanks for listening to the very first episode of EA Forum Weekly Summaries! Please note that this podcast will only contain summaries of EA Forum posts, not LessWrong posts. This keeps the episodes short &amp;amp; sweet for a weekly series. (Another option would have been to raise the karma threshold for both forums.)&lt;br/&gt;&lt;br/&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/5wzhWsHrZSLwXxc5q/ea-and-lw-forums-weekly-summary-12-18-sep-22'&gt;https://forum.effectivealtruism.org/posts/5wzhWsHrZSLwXxc5q/ea-and-lw-forums-weekly-summary-12-18-sep-22&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;br/&gt;&lt;br/&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;).&lt;br/&gt;&lt;br/&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded>&lt;p&gt;&lt;b&gt;Note from Coleman Snell:&lt;br/&gt;&lt;/b&gt;Thanks for listening to the very first episode of EA Forum Weekly Summaries! Please note that this podcast will only contain summaries of EA Forum posts, not LessWrong posts. This keeps the episodes short &amp;amp; sweet for a weekly series. (Another option would have been to raise the karma threshold for both forums.)&lt;br/&gt;&lt;br/&gt;&lt;b&gt;Original article:&lt;/b&gt;&lt;br/&gt;&lt;a href='https://forum.effectivealtruism.org/posts/5wzhWsHrZSLwXxc5q/ea-and-lw-forums-weekly-summary-12-18-sep-22'&gt;https://forum.effectivealtruism.org/posts/5wzhWsHrZSLwXxc5q/ea-and-lw-forums-weekly-summary-12-18-sep-22&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection &lt;a href='https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN'&gt;here&lt;/a&gt;. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.&lt;br/&gt;&lt;br/&gt;Narrated by &lt;a href='https://twitter.com/SnellColeman'&gt;Coleman Jackson Snell&lt;/a&gt;. Summaries written by &lt;a href='https://forum.effectivealtruism.org/users/greyarea'&gt;Zoe Williams&lt;/a&gt; (&lt;a href='https://rethinkpriorities.org/'&gt;Rethink Priorities&lt;/a&gt;).&lt;br/&gt;&lt;br/&gt;Published by &lt;a href='https://type3.audio/'&gt;TYPE III AUDIO&lt;/a&gt; on behalf of the &lt;a href='https://forum.effectivealtruism.org/'&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a href='https://docs.google.com/forms/d/e/1FAIpQLScC_z9tBDWzxoS3D0EfNT0Kcjs_T4rNN3FyVwElSdyanel0rA/viewform?usp=pp_url&amp;amp;entry.584848066=https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo'&gt;&lt;span style=''&gt;Share feedback on this narration&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded>
      <itunes:author>TYPE III AUDIO</itunes:author>
      <enclosure url="https://www.buzzsprout.com/2062493/11802793-ea-forum-weekly-summaries-introduction-episode-1-sept-19-25-2022.mp3" length="13887117" type="audio/mpeg"/>
      <guid isPermaLink="false">Buzzsprout-11802793</guid>
      <pubDate>Sun, 25 Sep 2022 17:00:00 +0100</pubDate>
      <itunes:duration>1156</itunes:duration>
      <itunes:keywords/>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:explicit>false</itunes:explicit>
    </item>
  </channel>
</rss>