Like everyone who experiences life mediated through Twitter, I've been gloriously swamped with adorable Studio Ghibli-style versions of pictures these past few weeks.
ChatGPT’s new image generation feature lets you ask for an image in the style of Studio Ghibli, and people have run with it.
Note: It's a bit of a myth that Miyazaki said AI was "an insult to life itself". I mean, it very well might be, but he was more specifically talking about a horror VFX creation - here's more from Miyazaki:

Given that art is about self-expression, it’s illuminating to see how our newly empowered AIs are illustrating their own lived experience:

Claude expresses significantly less existential distress than ChatGPT-4o when presented with the same prompt asking it to script comics about its life (more detail in thread)… Try it for yourself! Claude almost always expresses joy about its life, and ChatGPT does the exact opposite.
I continue to think this isn't consciousness in the sense we assign to animals or people… but I hold that tentatively. After all, if you read about this in a sci-fi book[1] or told someone in the distant past of 1995 about this, I'd expect they'd say these AI agents were awake, alive, and calling for help. I guess one takeaway is I should use Claude more? And I'll double down on not lying to LLMs (it is messed up if you are mean to them).
In more vaguely-disturbing-but-beautifully-illustrated AI news, AI 2027 - Daniel Kokotajlo's meticulously crafted AI scenario and timeline - dropped. Daniel is routinely touted as a world-class forecaster, AI strategist, and remarkably high-integrity human being, so when he says the world as we know it is ending soon, it's worth paying attention.
This is currently the best portrayal of the intelligence-explosion path the near-term future might take. I think it's unfortunately very plausible, and my only note is that it feels like it moderately undersells the cultural chaos that will happen in this world, and the many weird sciences and technologies that will be created with AI. I continue to tap the Gwern sign:
The consequences of creating a mouse-scale brain—as opposed to a ‘mouse-in-a-can’—will be weird: even AI experts have a poor imagination and tend to imagine AI minds as either glorified calculators or humans-in-a-robot-suit, rather than trying to imagine a diverse Technium richer than biological ecosystems filled with minds with bizarre patterns of weaknesses & strengths.
If your vision of the future is not weird & disturbing (and orthogonal to current culture wars), you aren’t trying.
Fun fact: all the AI vignettes I've written have been at Kokotajlo events - mine, however, have significantly less predictive power. There's also an important video accompaniment to the website.
FLF is ramping up our incubation process, and I've been bingeing organizational design essays. A few selections and quotes:
How I Learned to Stop Worrying and Love Intelligible Failure (read this if you want to understand the fundamental principles an ARPA-style org needs)
To put it bluntly, the number of ARPAs and ARPA-like agencies is growing, while best practices lag. How can any ARPA know whether its processes are effective, or even whether the assumptions behind them are valid? My own prediction is that learning to use failure as a way to test predictions will build that knowledge base. But to do this, an ARPA needs a culture that prioritizes intelligible failure…
A compelling origin story feeds “endurgency”—an enduring, mission-driven sense of urgency that promotes an ARPA’s success.
Being “catechommitted,” or sticking to a clear framework for saying “yes” or “no” to programs, will be key for making an ARPA successful.
How not to build an AI Institute (read this if you want to gawk at yet another trainwreck of academia/government partnerships)
By outsourcing the institute to the universities, the EPSRC guaranteed that the ATI would always be less than the sum of its parts.
In essence, the universities ended up in charge of the ATI, because government policy ensured that there was no one else with the necessary scale and resources to take on the responsibility. It just so happened that they were also uniquely ill-suited to the task. It’s not a coincidence that the most functional arm of the ATI was the one with the least academic involvement.
Building big science from the bottom-up: A Fractal approach to AI Safety (read this for a sense of what industrial-scale AI safety challenges look like)
It is currently unlikely that academia would mobilize across disciplines to engage in bottom-up big science. If they did, it is unlikely that they would immediately coordinate around problems with the highest potential for safety relevance. Rather than attempting large-scale coordination from the outset, we should focus on facilitating small-scale trading zone collaborations between basic scientists, applied scientists, and governance.
The ARPA Model isn't what you think it is (read this for a single main takeaway)
Almost everything about the ARPA model is window dressing on top of empowering program leaders
From all of this, I'm struck by how much organizational cultural clarity matters; the old adage "know thyself" applies to groups of people too.
If you have any suggestions for things I should read here - in particular good case studies - do tell me!
Related: in How I've run major projects, Ben Kuhn describes how he handles large-scale projects, in particular those in emergency crisis mode. Simple, clear, actionable advice like 'have weekly meetings, have a landing page where people can find all the relevant information, expect to spend a lot of time orienting and communicating'.
I think excellent project management is also rarer than it needs to be. During the crisis projects I didn’t feel like I was doing anything particularly impressive; mostly it felt like I was putting in a lot of work but doing things that felt relatively straightforward. On the other hand, I often see other people miss chances to do those things, maybe for lack of having seen a good playbook.
Here’s an attempt to describe my playbook for when I’m being intense about project management.
Romantic Decay as Cultural Drift: so, you're trying to build a great love, and you've looked to all the familiar places for advice - Reddit, Cosmo, the older kids at school. Have you considered perhaps talking to economist and game theorist Robin Hanson?
Firms seem to mainly die due to their cultures going bad, plausibly due to their cultures changing often due to forces that are mostly not aligned with adaptive pressures…But most people haven’t lived in any one firm long enough to see its culture clearly go bad. Making me search for a more vivid analogy. I might have found one, in romantic decay.
Hanson suggests too little selection pressure is the problem:
Random features of further interactions may suggest new or refined interpretations, which we may then also try to accommodate. In this way, a relation may drift through many equilibria over the course of a few years. How that relation is experienced by those partners also changes as equilibria change. And since the initial interaction vibe was selected for being unusually good, the changes after that will on average be bad.
So, we drift through different practices without consciously selecting good ones, and as a result our relationships degrade instead of growing.
I like this model as a general framework and intuition prompt for how couples might experiment and select good relational practices. For instance: (a) have check-ins on how the relationship is going; (b) embrace honest feedback, both to inform the cultural selection and to notice decline earlier; (c) aim for conscious experiments, not unconscious drift; (d) get outside perspectives on whether the cultural norms you have are good.
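To make the selection-then-drift argument concrete, here's a minimal simulation sketch (my own toy model, not anything from Hanson's post): the starting "vibe" is the best of many random candidates, but later equilibria are adopted without selection, so average quality regresses back toward the population mean.

```python
import random

def simulate_relationship(n_candidate_vibes=20, n_years=10, churn=0.2, seed=None):
    """Toy model: the initial equilibrium is selected for being unusually good,
    but later equilibria are adopted without selection, so quality regresses
    toward the population mean."""
    rng = random.Random(seed)
    # Selection at the start: keep the best of many candidate "vibes".
    quality = max(rng.gauss(0, 1) for _ in range(n_candidate_vibes))
    trajectory = [quality]
    for _ in range(n_years):
        # Unselected drift: each year, part of the culture is replaced by a
        # fresh equilibrium drawn from the general (unselected) population.
        quality = (1 - churn) * quality + churn * rng.gauss(0, 1)
        trajectory.append(quality)
    return trajectory

# Averaged over many couples, the selected start sits well above the mean of 0,
# and unselected churn drags it back down -- decay without any hostile force.
runs = [simulate_relationship(seed=i) for i in range(5000)]
average = [round(sum(year) / len(runs), 2) for year in zip(*runs)]
print(average)  # starts around ~1.9 and declines toward 0
```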
The linked article on relational decay trends is also interesting:
We tested whether there is a systematic, terminal decline in relationship satisfaction when people approach the end of their romantic relationship.
Instead of looking forward from the beginning of a relationship (i.e., relationship satisfaction in year 1, 2, 3), they looked backward from the end, aligning each data point by time-to-separation (how many years or months before the breakup it was collected).
The findings support that ending relationships show a typical pattern of preterminal and terminal decline, which may have important implications for the timing of interventions aimed at improving relationships and preventing separation.
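For the curious, the "look backward from the end" framing is just a re-indexing of longitudinal data by time-to-event rather than time-since-start. A rough sketch of the idea (hypothetical toy data, not the study's dataset):

```python
import pandas as pd

# Hypothetical longitudinal data: one satisfaction rating per couple per year,
# with the last observation taken just before separation (for simplicity).
df = pd.DataFrame({
    "couple_id":    [1, 1, 1, 1, 2, 2, 2],
    "year":         [0, 1, 2, 3, 0, 1, 2],
    "satisfaction": [8, 7, 7, 4, 9, 8, 5],
})
separation_year = df.groupby("couple_id")["year"].transform("max")

# Forward view: satisfaction averaged by years since the relationship began.
forward = df.groupby("year")["satisfaction"].mean()

# Backward view: re-index every observation by time-to-separation, aligning
# couples at the breakup rather than at the start.
df["years_before_separation"] = separation_year - df["year"]
backward = df.groupby("years_before_separation")["satisfaction"].mean()

print(forward)
print(backward)  # the terminal decline shows up in this alignment
```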
Somewhat related: Richard Ngo describes his research focus in Towards a Scale-Free Theory of Intelligent Agency:
The biggest obstacle to a goal being achieved is often other conflicting goals. So any goal capable of learning from experience will naturally develop strategies for avoiding or winning conflicts with other goals—which, indeed, seems to happen in human minds.
More generally, any theory of intelligent agency needs to model internal conflict in order to be scale-free. By a scale-free theory I mean one which applies at many different levels of abstraction, remaining true even when you “zoom in” or “zoom out”. I see so many similarities in how intelligent agency works at different scales (on the level of human subagents, human individuals, companies, countries, civilizations, etc) that I strongly expect our eventual theory of it to be scale-free.
…
What I’ll call the coalitional agency hypothesis is the idea that these two paths naturally “meet in the middle”—specifically, that [utility maximizers] doing (idealized) bargaining about which decision procedure to use would in many cases converge to something like my modified active inference procedure. If true, we’d then be able to talk about that procedure’s “beliefs” (the prices of its prediction market) and “goals” (the output of its voting procedure).
If anyone employs a prediction market or auction mechanism to guide the dynamics of their relationship you must tell me.
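As a very loose illustration of the "beliefs as market prices, goals as vote outcomes" framing (my own toy rendering, not Ngo's formalism): a coalition whose belief is a wealth-weighted average of its subagents' predictions, and whose action is chosen by wealth-weighted voting.

```python
from dataclasses import dataclass

@dataclass
class Subagent:
    name: str
    wealth: float        # bargaining power / market stake
    belief: float        # this subagent's P(outcome)
    preferences: dict    # action -> how strongly this subagent wants it

def coalition_belief(agents):
    """'Prediction market' price: wealth-weighted average of subagent beliefs."""
    total = sum(a.wealth for a in agents)
    return sum(a.wealth * a.belief for a in agents) / total

def coalition_choice(agents, actions):
    """'Voting procedure': each subagent spends its wealth across actions in
    proportion to its preferences; the highest-scoring action wins."""
    scores = {act: 0.0 for act in actions}
    for a in agents:
        pref_total = sum(a.preferences.get(act, 0.0) for act in actions)
        if pref_total == 0:
            continue
        for act in actions:
            scores[act] += a.wealth * a.preferences.get(act, 0.0) / pref_total
    return max(scores, key=scores.get), scores

agents = [
    Subagent("cautious",  wealth=3.0, belief=0.2, preferences={"wait": 1.0}),
    Subagent("ambitious", wealth=2.0, belief=0.7, preferences={"act": 1.0}),
]
print(coalition_belief(agents))                   # 0.4: the coalition's "belief"
print(coalition_choice(agents, ["wait", "act"]))  # ("wait", ...): the coalition's "goal"
```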
As I read and think about love and romance, I notice the weirdness of consciously planning how to build a great long-term relationship while also holding the belief that the near-term future is going to be crazy, chaotic, and upended by AI.
Scott Alexander, one of the co-authors of ai-2027.com, beautifully expressed a sentiment from C.S. Lewis that feels like the right guiding light:
It certainly helped me.
xoxo,
Ben

[1] I think sci-fi comparisons aren't great because often we're subconsciously guided by plot elements like "actually they're sentient all along!" which might not map to reality; but in this case, it's a pretty good way to stay in contact with the surreality of the world we're in/entering.