Every four years, we come together as a country to rejoice that in a mere handful of days we will no longer be receiving texts asking us to 10x our donations to fund ads in Pennsylvania.
I rejoined Divia Eden on the Mutual Understanding Podcast for an emergency election episode. Lots of fun being back with her on the pod - we talked about the ascendancy of prediction markets, the zigs and zags of this interminably long election, and our final forecasts for the race.
From a reasoned, model-based perspective, it’s a coin flip. From a vibes perspective, I think Trump wins.
On the pod I said to expect a big win one way or the other (multiple states breaking together for one candidate). I hold this *very lightly*.
I expect the Republicans to win the Senate and House.
While it would be terrible for the country, and almost certainly won’t happen, a part of me is rooting for a 269-269 electoral split. Let’s finally see the electoral college in all its glory!
#links
Machines of Loving Grace: Dario Amodei, co-founder and CEO of Anthropic, has written an essay on the possible glorious future of humanity with advances in AI.
I really do think it’s important to discuss what a good world with powerful AI could look like, while doing our best to avoid the above pitfalls. In fact I think it is critical to have a genuinely inspiring vision of the future, and not just a plan to fight fires.
…To summarize the above, my basic prediction is that AI-enabled biology and medicine will allow us to compress the progress that human biologists would have achieved over the next 50-100 years into 5-10 years. I’ll refer to this as the “compressed 21st century”: the idea that after powerful AI is developed, we will in a few years make all the progress in biology and medicine that we would have made in the whole 21st century.
…While that might sound crazy, the fact is that civilization has successfully navigated major economic shifts in the past: from hunter-gathering to farming, farming to feudalism, and feudalism to industrialism. I suspect that some new and stranger thing will be needed, and that it’s something no one today has done a good job of envisioning.
Related: All Watched Over by Machines of Loving Grace. There’s an unironic earnestness in this poem that I think would be labeled cringe if it were tweeted out today, but that I find really refreshing in a green mana kind of way.
Tyler Cowen¹ has been going off on how people with doom-tinged AI views should hold portfolios that hedge for doom scenarios (e.g., shorting the market), and that if you don’t, it’s indicative of a lack of seriousness. Personally, as someone with non-negligible doom probabilities, I find this pretty silly! Many doom scenarios involve runaway economic growth followed by a sharp dive into badness; there aren’t many great options for getting paid out on those bets after the world ends, nor much clarity on how bad outcomes for human values translate into corporate profits.
That being said, I do think it’s great to go through the work of translating your beliefs about the future into bets; a friend and I spent some time working through our own views here. I ended up purchasing long-dated, out-of-the-money options on the SPX, as I think there’s a good chance that transformative AI will cause a sharp uptick in economic growth that isn’t being properly priced in. You can read a writeup here; feedback very much appreciated!
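To make the asymmetry of that kind of bet concrete, here’s a minimal payoff sketch for a long-dated, out-of-the-money call. All the numbers (strike, premium, index levels) are hypothetical for illustration, not the actual positions from the writeup:

```python
def call_payoff(spot_at_expiry, strike, premium, contracts=1, multiplier=100):
    """Net profit of a long call position held to expiry.

    Intrinsic value is max(spot - strike, 0); the premium paid is a sunk cost
    either way, so the downside is capped at the premium while the upside is
    unbounded.
    """
    intrinsic = max(spot_at_expiry - strike, 0.0)
    return (intrinsic - premium) * multiplier * contracts

# Hypothetical scenario: strike 8000, premium 150 per share.
# Baseline growth: index ends below the strike, option expires worthless.
baseline = call_payoff(6500, strike=8000, premium=150)   # -15000.0 (premium lost)
# AI-driven repricing: index well above the strike.
boom = call_payoff(12000, strike=8000, premium=150)      # 385000.0
print(baseline, boom)
```

The shape is the point: a known, capped loss in the worlds where growth is ordinary, against a large payout in the worlds where transformative AI is real and the market reprices.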
Vishal provides advice about startups that’s similar to advice I’ve shared with friends:
"i almost always advise friends not to raise VC for their mid-2020s startup.
base rate of meaningful exits is already low (~1%)
moats will be much harder in late 2020s due to AI progress
meanwhile, building a small business w $100ks-$1Ms revenue has never been easier(!)"
Also, venture-backed businesses seem, for most people, to be less personally fulfilling and meaningful. I think raising can be the right choice, in particular if you have a clear, compelling reason your startup is likely to be acquired quickly or will be complementary to AI progress.
SLS is still a national disgrace: NASA seems to be totally out of control, unable to deliver anything on time or on budget. It’s particularly damning when compared to the success of SpaceX; it feels like a microcosm of the general failure of institutions. They were founded in an age of heroes, they achieved huge success, and now decades later they’ve ossified, grown fat and complacent, and can’t follow the plot. Sad!
Everyone at NASA knows the SLS is a looming catastrophe, but no-one can say it. Officially, it’s still the most powerful rocket ever built (except for Starship) and our official vehicle to the Moon and Mars! In reality, it’s insanely expensive, dangerous, and underpowered and can barely lift a reasonable payload to LEO.
Four years ago, I wrote that the best time to cancel the SLS was 20 years before, and the second best time was then. Four years on, the program has consumed another $20b with nothing to show for it. $20b, bringing total development cost to over $100b. This program burns $12m per day!
In the meantime, NASA has abandoned all pretense of caring about or delivering cost control on any major project, with scope, schedule, and budget blowouts affecting practically every major program and forcing the cancellation of many of them. This is symptomatic of an agency who, compromising their technical integrity on their flagship program, subsequently lost the ability to maintain technical integrity anywhere else.
The Magic Laptop Thought Experiment: Tom Kalil of Schmidt Futures describes his favorite thought experiment and argues that you should always have your ‘big, most important problem’ clearly articulable:
Imagine that you have a magic laptop. The power of the laptop is that any press release that you write will come true.
You have to write a headline (goal statement), several paragraphs to provide context, and 1-2 paragraph descriptions of who is agreeing to do what (in the form organization A takes action B to achieve goal C). The individuals or organizations could be federal agencies, the Congress, companies, philanthropists, investors, research universities, non-profits, skilled volunteers, etc. The constraint, however, is that it has to be plausible that the organizations would be both willing and able to take the action. For example, a for-profit company is not going to take actions that are directly contrary to the interests of their shareholders.
… I’ve been in roles where I can occasionally serve as a “force multiplier” for other people’s ideas. The best way to have a good idea is to be exposed to many ideas.
When I was in the White House, I would meet with a lot of people who would tell me that what they worked on was very important, and deserved greater attention from policy-makers.
But when I asked them what they wanted the Administration to consider doing, they didn’t always have a specific response. Sometimes people would have the kernel of a good idea, but I would need to play “20 questions” with them to refine it. This thought experiment would occasionally help me elicit answers to basic questions like who, what, how and why.
Mine? umm… well… gun to my head I’d say the idea that I’m least conflicted about would be a major investment in infosec for AI. Related: I’m helping organize a workshop for security professionals interested in securing AI on Nov 16th.
Transluce, an open-source safety-case and interpretability lab, debuted this month. I’m excited about safety cases as a prosaic AI safety intervention - define empirically testable claims, backed by a robust theoretical model, for why a powerful AI system is safe in a given domain.
I think this is the type of work that governments and frontier labs can scale up, and it fits well into the national security state’s existing mental models of risk management.
Related: Geoffrey Irving highlights the challenge of automating AI Safety case construction.
Epoch AI’s Machine Learning Hardware Database: I was very surprised to learn from their announcement post that leading AI chips have become 30% more cost-effective each year, and that algorithm-level changes like switching number formats can boost performance by >10x.
The Failed Concepts That Brought Israel to October 7: A year after the October 7th attack, the authors at Mosaic magazine presented a clear-eyed examination of how one of the worst episodes in Israel’s history happened. They point to Netanyahu’s delays, the failure to prioritize the state’s strategic interest over the interests of a minority of settlers, the self-delusion of the peace process, and the grotesque incentives of the international community. A few quotes:
These mental models weren’t just products of ignorance or applications of prejudice. They were comprehensive conceptual toolkits for assimilating new information and processing policy dilemmas. On October 7, they failed completely. An honest appraisal of them is crucial for any postwar policymaking.
Skepticism, deferral, and an obsession with messaging are how Netanyahu does politics. They are how he does policy. They are how he processes events and formulates actions. They are not quirks, but rather foundational principles, reinforced by years of both political success and occasional critical examination of political failure.
This particular delusion followed its peace processor predecessor to an unoriginal extreme. Hamas, we were told around 2017, had actually updated its charter and no longer called for the elimination of Israel. But as in 1998, this was mostly wishful thinking, as no such thing had happened, and it shouldn’t have taken a close critical reading in either case to make the determination, yet those who did were the ones routinely accused of bad faith.
More apt would be constitutional. Gaza has a peculiar constitution. Military and police powers are vested in Hamas, a terrorist organization doctrinally committed to the elimination of Israel. Other state-like competences lie outside the remit of Gaza’s de-facto rulers…Education and welfare is largely taken up by UNRWA, the world’s only refugee agency forbidden from rehabilitating or resettling the displaced persons in its care.
The constitutions of Hizballah in southern Lebanon, Fatah in the West Bank, and Hamas in Gaza can be best described as anti-sovereign governance, and, for all the variation among them, they are all the creation of the international community and its unique approach to the Arab conflict with Israel. These constitutions are anti-sovereign in two senses, an internal one and an external. Internally, they exercise stable political and military power without full sovereignty and without any of the responsibilities that come with full sovereignty.
They’ve also included a larger symposium of authors analyzing the war from a number of angles. If you’re following it closely, it’s worth reading.
I love this quote from Bertrand Russell:
#good-content
My Chemical Romance: I attended the When We Were Young festival and, I’m excited to announce, MCR’s still got it. Absolutely stellar performance. In truth I think the entire lineup is worth listening to and recalling, fondly, when music reached its peak (when I was 15).
It’s not just a phase.
Ben
I feel a specific form of Gell-Mann Amnesia reading Marginal Revolution. Cowen’s links seem good, but when he writes longer posts about a field I know, the superficiality of his Bangladeshi* train arguments is off-putting:
*the original tweet has been removed, but here’s the banger of a take:
alex tabarrok MR post: very detailed argument explaining policy failure, lots of supporting evidence. Restrained yet forceful commentary
tyler cowen MR post: *esoteric quote on 1920s bangladashian train policy* "this explains a lot right now, for those of you paying attention"
You should add the Adam Curtis documentary for good measure: https://en.wikipedia.org/wiki/All_Watched_Over_by_Machines_of_Loving_Grace_(TV_series)