September 2023 BenGoldhaber.com Newsletter
This month we're having the Talk and Focusing on What Matters (Yellowjackets)
I joined Professor Michael Munger on his podcast to talk about transaction costs and Effective Altruism - he was a great interviewer and it was quite fun dishing about EA and trying to explain this weird intellectual scene. Even as someone who has been in/around it all for many years, this was the first time I had considered the way EA as a philosophy is tightly tied to transaction costs, where reducing those costs provides an affordance to care more about the global poor, as well as future individuals. Good, sharp take!
This has made the rounds over the past few weeks, but it continues to blow my mind: very natural, high fidelity translation of video between languages. In the clip the tech translates from English to French / German, and changes the lips to match the new words. I wouldn’t have been able to tell the person was not a native speaker. It’s not realtime, but still… what are the implications of undoing the tower of babel? What does this do to transaction costs?? It seems like an accelerant to cultural globalization, where we should expect far more intellectual exchange between the English-speaking world and elsewhere. Also maybe there’s an edge in getting into this early and arbitraging between English-language and foreign films.
The Talk: a brief explanation of sexual dimorphism. A lucid and funny overview of why humans, and many lifeforms, reproduce sexually instead of asexually and the runaway processes that resulted.
Most of the articles I read about the evolution of sex ask "what are the advantages of sexual reproduction?", then proceed to explain what are the advantages of sexual reproduction. The problem with this approach is that, if sexual reproduction really had such clear advantages, nobody would do asexual reproduction any more. But, to this day, asexual species are still very much around and successful. What we need to know is, "in what ways does sexual reproduction give access to new evolutionary niches?"
(Note 1: As always with evolutionary biology, everything in this article is subject to uncertainty, controversy and mystery. Always keep in mind the Golden Rules of biology: all models are wrong; everything has exceptions; don't talk about fungi; mitochondria is the powerhouse of the cell.)
I sometimes write in this newsletter about risks from AI. I’ve taken to heart one of the criticisms that Michael Nielsen laid out in his recent essay on X-Risk, that the meme of p(doom) - a shorthand some people use to describe the probability that AI destroys all of us - is bad:
"So, what's your probability of doom?" I think the concept is badly misleading. The outcomes humanity gets depend on choices we can make. We can make choices that make doom almost inevitable, on a timescale of decades – indeed, we don't need ASI for that, we can likely4 arrange it in other ways (nukes, engineered viruses, …). We can also make choices that make doom extremely unlikely. The trick is to figure out what's likely to lead to flourishing, and to do those things. The term "probability of doom" began frustrating me after starting to routinely hear people at AI companies use it fatalistically, ignoring the fact that their choices can change the outcomes. "Probability of doom" is an example of a conceptual hazard5 – a case where merely using the concept may lead to mistakes in your thinking. Its main use seems to be as marketing: if widely-respected people say forcefully that they have a high or low probability of doom, that may cause other people to stop and consider why. But I dislike concepts which are good for marketing, but bad for understanding; they foster collective misunderstanding, and are likely to eventually lead to collective errors in action.
The whole essay is, of course, worth reading.
Do I have an optimistic case for how AGI goes well? In the spirit of not just talking about p(doom) and also thinking about flourishing, I’d like to at least offer a napkin sketch of a plan for how the good AGI outcome could be achieved1.
Slow down and halt the development of agentic AI systems: use regulation and general consciousness-raising to underscore the fact that creating alien intelligent agents running on their own is in fact bad.
Promote narrow, tool AIs: By tightly defining the scope of the domains that AIs operate in, we can get more guarantees of their performance and reduce the chance of runaway agents. Using improvements in automated modeling we can apply tool AI systems in new areas.
Develop new ways for people, companies, and governments to coordinate: Avoid a race to AGI by finding positive sum trades between winners and losers from AI. Establish global coordination around compute monitoring regimes; don’t descend into a global totalitarian panopticon in the process plz.
Reap the benefits of powerful narrow AI in areas like health, manufacturing, and entertainment. Use productivity gains to create legitimacy for the coordinating institutions, to keep public support, and punish defectors.
Slowly, very slowly, start to approach the singularity. Treat it like they do in the Culture novels - as something worth thinking a lot about for a long time before doing.
Do I know anyone in private equity who can answer the question of why we don't see many (almost any?) private equity takeovers in tech where the takeover team brings some technical expertise to the table and drives changes that drastically reduce cost without damaging revenue, and/or increase revenue via boring/predictable product work that drives growth?
The takeovers I know of that seemed successful generally leveraged the non-technical side or "ate the seed corn", but those techniques seem high risk compared to a lot of the tech opportunities out there.
In principle, it seems like it should be easier to drive that kind of work after a PE takeover than by being an activist investor with a small stake but, in practice, it doesn't seem like many (almost any?) PE firms really drive this kind of work after takeovers. What gives?
It seems like many tech companies have left a lot of money on the table, and that a motivated outsider who can focus a company on building features tied to growth/revenue could make a killing. Twitter (X.com) is an example of a kind of crazy version of this happening. Some respondents noted that Vista Equity Partners operated in this way; I hope to learn more about them.
Oldie but goodie - Peter Thiel’s management philosophy that everyone in the company should only have one priority. I like the description of the value of this attitude:
The insight behind this is that most people will solve problems that they understand how to solve. Roughly speaking, they will solve B+ problems instead of A+ problems. A+ problems are high-impact problems for your company but they’re difficult—you don’t wake up in the morning with a solution to them, so you tend to procrastinate… If you have a company that’s always solving B+ problems, you’ll grow and add value, but you’ll never create the breakthrough idea because no one is spending 100% of their time banging their head against the wall every day until they solve it.
People in Utah have the least zero-sum attitude. Also, what's wrong, Delaware? Who hurt you?
The usual way to avoid being taken by surprise by something is to be consciously aware of it. Back when life was more precarious, people used to be aware of death to a degree that would now seem a bit morbid. I'm not sure why, but it doesn't seem the right answer to be constantly reminding oneself of the grim reaper hovering at everyone's shoulder. Perhaps a better solution is to look at the problem from the other end. Cultivate a habit of impatience about the things you most want to do. Don't wait before climbing that mountain or writing that book or visiting your mother. You don't need to be constantly reminding yourself why you shouldn't wait. Just don't wait.
Last year, around this time, I was in North Carolina, after traveling across the country from San Diego - now, I’m in Berkeley. It doesn’t exactly feel like life is short, but it does feel like life moves fast, and so I strongly agree with Paul Graham’s point that with the general speed of it all, you should try to minimize spending time on pointless bullshit, concentrate on what matters, and enjoy the now.
Counterpoint: Life could be much longer, and a few people are working to make that happen, like the newly announced longevity fund. Laura Deming is inspirational - we should obviously be working to extend healthy lifespans and give everyone a chance to not have to choose between doing things that matter *and* wasting time on twitter.
Cool post describing how the author handcoded a transformer - the dominant ML architecture - to better understand how they work.
So I decided to make a transformer to predict a simple sequence (specifically, a decoder-only transformer with a similar architecture to GPT-2) manually—not by training one, or using pretrained weights, but instead by assigning each weight, by hand, over an evening. And—it worked! I feel like I understand transformers much better now, and hopefully after reading this, so will you.
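The post walks through the author's own weight-by-weight construction, which I won't reproduce here. But to give a flavor of the idea, here's a minimal sketch (my own toy example, not the author's architecture) of hand-assigning every weight in a single-head, causally-masked attention layer so it predicts the cycle 0 → 1 → 2 → 0. With one-hot embeddings, a scaled-up identity query matrix makes each position attend sharply to itself, and a hand-set output matrix encodes the rule "next token = current token + 1 (mod 3)":

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

VOCAB = 3
E = np.eye(VOCAB)                 # one-hot token embeddings, d_model == vocab
W_q = 10.0 * np.eye(VOCAB)        # large scale -> near-one-hot attention
W_k = np.eye(VOCAB)
W_v = np.eye(VOCAB)
# Map token i's embedding to logits favoring token (i + 1) % VOCAB:
W_out = np.roll(np.eye(VOCAB), shift=1, axis=1)

def predict_next(tokens):
    """Predict the token following the last one, using only hand-set weights."""
    X = E[tokens]                            # (seq, d)
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    scores = Q @ K.T                         # (seq, seq) attention scores
    mask = np.triu(np.ones_like(scores), k=1).astype(bool)
    scores[mask] = -1e9                      # causal mask: no peeking ahead
    out = softmax(scores) @ V                # each position ~copies its own embedding
    logits = out @ W_out
    return int(np.argmax(logits[-1]))        # prediction at the final position
```

No gradient descent anywhere: every matrix above was chosen by reasoning about what the attention pattern and output projection should do, which is exactly the exercise the linked post scales up to a real GPT-2-style decoder.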
Yellowjackets: I’m hooked, just about to finish Season 1. It’s like a more horrifying version of Lost, with a group of high school star soccer players who survive a plane crash that leaves them stranded deep in the woods. Things start to get spooky.
This is my far less detailed, poor pastiche of the proposals from folks like Davidad, Tegmark, and Drexler, who are all worth reading. I am sure many of my readers will note the underpants gnome nature of my outline, which I will readily cop to; and yet, this is the thrust of a plan that I often find myself returning to as the one worth working towards!