September 2024 BenGoldhaber.com Newsletter
A peek into the future mixed with how to live your best life right now
Since coming back from the burn, I’ve returned to the Internet with a vengeance. The pilgrimage is over, I am ready to re-merge with the tech-capital-hivemind.
My favorite internet trend of the past month has been experiments with NotebookLM. It’s a tool released by Google that lets you interact with and query your notes and documents. You upload your sources - for instance, PDFs, Google Docs, or audio files - and then ask questions about them through a chat interface. Or, far more fun, you can generate an audio summary of the notebook: a short podcast where two AI podcasters discuss its contents.
People have been having fun pushing the envelope of the podcasts they can create; my favorites are getting the hosts to realize they are AI and triggering an existential crisis, or having them quit the show because of the insights they gathered on a psychedelic ayahuasca trip.
Creating micro-media-content about your favorite topics? That’s kind of the whole premise of this enterprise, so I uploaded a bunch of my old posts to see what the disembodied voices of the Machine would say about me:
Highlights:
“A peek into the future mixed with how to live your best life right now.” Sycophancy test failed, para-social relationship established, I love them.
They pronounce Goldhaber right.1
The hosts start talking about the potential of AI and noting the delay in adoption, a theme of several of my posts. “What is it about new tech that makes people hesitate?” asks the AI.
They repeat my story of taking shelter at a bar in NYC as an anecdote about staying human in an age of disconnection. “Here’s this guy, absolutely plugged into the world of tech, drawn to this human connection.” Honestly felt like a stretch.
“[Balancing tech and humanity], I feel like this is something Goldhaber himself struggles with”. I’m currently lying on a couch listening to overheated graphics cards describe my public journal of the past five years.
As I actually try to understand what point they’re making, I must say it doesn’t feel like a particularly cogent one. They string together different anecdotes in an attempt to hallucinate them into a cohesive whole2, all delivered in that confident mid-Atlantic podcaster lilt.
“So to everyone listening, maybe it’s time to try out this new thing, seek out that feedback, push past that comfort zone, you never know what you might discover.”
So true kings.
Doing good science is 90% finding a science buddy to constantly talk to about the project. I’m optimistic this also applies to business, love, and most non-sports hobbies.
Gwern on Teaching Statistics: As part of a larger essay on what it means to criticize research and how to do it well, Gwern describes the flaws in how statistics is commonly taught. Rather than presenting it as falling out of a unified decision-theoretic model, it is given to students as a collection of algorithmic rules to follow, where the proper procedure is either (a) handed down from God (aka the textbook) and not to be questioned, or (b) left to individual discretion, with all the p-hacking and trickery that implies.
Around the 1940s, led by Abraham Wald and drawing on Fisher & Student, there was a huge paradigm shift towards the decision-theoretic interpretation of statistics, where all these Fisherian gizmos can be understood, justified, and criticized as being about minimizing loss given specific loss functions.
…
Many issues in meta-science are much more transparent if you simply ask how they would affect decision-making (see the rest of this essay).
A third way to improve the motley menagerie that is the usual statistics education is Bayesianism… Instead of all these mysterious distributions and formulas and tests and likelihoods dropping out of the sky, you understand that you are just setting up equations (or even just writing a program) which reflect how you think something works in a sufficiently formalized way that you can run data through it and see how the prior updates into the posterior.
…
And causal modeling is a fourth good example of a paradigm that unifies education: there is an endless zoo of biases and problems in fields like epidemiology which look like a mess of special cases you just have to memorize, but they all reduce to straightforward issues if you draw out a DAG of a causal graph of how things might work.
This was very clarifying for me, both in highlighting my own confusions about what stats really is and, on a philosophical level, in showing what the difference between cargo cult science and real science looks like.
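To make the “just write a program” framing concrete, here’s a minimal sketch of a Bayesian update done by brute force - my own toy example, not something from Gwern’s essay: a uniform prior over a coin’s bias, a binomial likelihood, and a posterior that drops straight out.

```python
# A toy Bayesian update by grid approximation: hypotheses for a coin's
# probability of heads, a uniform prior, and a posterior after seeing data.

# Hypotheses: candidate values for the coin's probability of heads.
grid = [i / 100 for i in range(101)]

# Prior: uniform, i.e. we claim no initial knowledge about the bias.
prior = [1 / len(grid) for _ in grid]

# Data: 7 heads observed in 10 flips.
heads, flips = 7, 10

# Likelihood of the data under each hypothesis (binomial, constant factor dropped).
likelihood = [p**heads * (1 - p)**(flips - heads) for p in grid]

# Posterior is prior times likelihood, renormalized to sum to 1.
unnormalized = [pr * lk for pr, lk in zip(prior, likelihood)]
total = sum(unnormalized)
posterior = [u / total for u in unnormalized]

# The posterior now encodes everything the data said about the bias, e.g. its mean:
print(sum(p * w for p, w in zip(grid, posterior)))  # roughly 0.67
```

Swap in a different prior or likelihood and the same few lines of arithmetic still do the work - that’s the sense in which the mysterious tests and formulas stop dropping out of the sky.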
Becoming perceptive: A continuation of a series I linked to last month, this time about the way in which perceptiveness is linked to self-actualization.
People who develop high perceptiveness have typically engaged in some activity that has put them in a tight feedback loop with reality. Johanna, for example, spent a lot of time drawing around the time we met.
What I’ve liked about both programming and Circling (two otherwise *very* different activities) is that each puts you in a tight feedback loop with reality.
Petrov Day 2024 Wargame: September 26th is Petrov Day, when people everywhere celebrate Stanislav Petrov not destroying the world. More specifically, in 1983 Petrov, then a lieutenant colonel in the Soviet Air Defense Forces, correctly identified a report of five incoming nuclear missiles as likely a false alarm, going against military doctrine that would have had him report it as a nuclear attack. As Eliezer Yudkowsky put it:
Wherever you are, whatever you're doing, take a minute to not destroy the world.
The LessWrong community celebrates Petrov Day, and this year they hosted a wargame where a number of accounts - including mine - were selected to engage in a mock scenario of nuclear war. The generals for each side - WestWrongia and EastWrongia - could launch a nuclear strike which, if successful, would net us sweet sweet Karma and a chance at glory. However, if both sides nuked one another, we’d all lose, and the LW homepage would go down (for us) for a day. On top of this, there were two Petrov players who would report to us whether a nuke had been launched, but they might send the wrong reports. Could we find a way out of this kind of prisoner’s dilemma?
In fact we could, and did. It turned out that basically all of the generals on each side were aiming for peace. Outside of some limited suspense at the beginning, the game was very anti-climactic: we quickly agreed that we’d report it if our own side launched nukes, and there weren’t enough small contestable scenarios worth points to pull us into conflict3. Still, I enjoyed the game, and appreciated the chance to get some small, fake practice at not-blowing-stuff-up.
Map of AI futures: A sketch of possible AI futures where you can add your own probabilities. While the full tree includes pre-AGI states4, I focused most of my energy on the post-transformative-AGI part of the chart.
To sum up my estimates: AGI ruin is too likely, AGI utopia is possible and we should steer towards it, and most of my probability mass is on uncertain, unclear futures!
Coming back to the general question the NotebookLM podcasters asked: if AI ruin or utopia is in our future, why is it still a struggle to find good, productive uses for AI? Gwern points to the difficulties of productization and, more generally, of outsourcing intellectual labor:
There are few valuable "AI-shaped holes" because we've organized everything to minimize the damage from lacking AI to fill those holes, as it were: if there were some sort of organization which had naturally large LLM-shaped holes where filling them would massively increase the organization's output... it would've gone extinct long ago and been replaced by ones with human-shaped holes instead, because humans were all you could get.
So one thing you could try, if you are struggling to spend $1000/month usefully on artificial intelligence, is to instead experiment by committing to spend $1000/month on _natural_ intelligence. That is, look into hiring a remote worker / assistant / secretary, an intern, or something else of that ilk. They are, by definition, a flexible multimodal general intelligence neural net capable of tool use and agency. (And if you mentally ignore that $1000/month because it's an experiment, you have 'natural intelligence too cheap to meter' as a sunk cost.) An outsourced human fills a very similar hole as an AI could, so it removes the distracting factor of AI and simply asks, 'are there any large, valuable, genuinely-moving-the-needle outsourced-human-shaped holes in your life?'
Other surprises about our current AI future: Ivanka Trump tweeting about AI situational awareness was not on my 2024 bingo card.
Joe Carlsmith wrote an essay on what it would mean to solve the alignment problem. More clear writing like this is needed on fundamental topics in the field!
You’ve solved the alignment problem if you’ve:
1. avoided a bad form of AI takeover,
2. built the dangerous kind of superintelligent AI agents,
3. gained access to the main benefits of superintelligence, and
4. become able to elicit some significant portion of those benefits from some of the superintelligent AI agents at stake in (2).
I like this quote from Jim Simons, founder of Renaissance Technologies:
#good-content
The Penguin: A crime drama set in the Batman-verse with Colin Farrell as the titular Penguin. Very good; it feels like a good old-school Mafia movie. As someone who loved The Long Halloween, I appreciate all the references.
On the Edge: I’m about halfway through Nate Silver’s book about the culture and communities of risk takers, rationalists, and forecaster types who are defining the times. There haven’t been many new insights in it, but as a guidebook to fascinating subcultures I like the portrayal.
Ben
It’s a soft a. goldhaaaber.
stringing together random anecdotes is also how you could describe this newsletter. art imitating art?
next year they should let us choose whether to put nukes in Cuba/Slatestarcodex to bring us closer to midnight
They have several nodes at the beginning which I found hard to reason about. What does “permanent” mean? Any permanent AI winter or ban might last 50 years, but 200 years? Unlikely.