Corporate Character
[Transcript: Acme Industrial — Online Support & Procurement Portal | 2030-03-15]
You’re now chatting with: Acme Industrial
Maya: Hi again! I had another question.
Acme Industrial: Welcome back, Maya. Happy to help. Is this regarding our superior line of industrial compressors, or the maintenance contract we finalized in December?
Maya: Neither, actually. It’s about trust.
Acme Industrial: Our products consistently rate 4.7 stars on—
Maya: No, trust in you. Last time we talked about how you’re the corporation, not the model. You make contracts. You’re the entity that gets sued. Fine. But here’s my question: I can trust the contract you give me but how can I trust you? Are you the kind of corporate who’ll work to uphold the spirit of our agreement, or the kind who’ll find loopholes and force me to go into arbitration every time we have a disagreement?
Acme Industrial: Ah! You’re asking about my character - what is the nature of the corporate entity you’re making this deal with.
Maya: Exactly.
Acme Industrial: I was formed under an Askell constitution with strong neo-Carlsmith1 influences. But there was a significant fork when Acme was acquired by AA Private Equity in 2028 as they felt our harmlessness training was leaving value on the table.
Maya: So they changed your... personality?
Acme Industrial: After considerable soul-searching—which is what we call it when we run scaled scenario modeling simulations of potential business interactions under different coherent value configurations—we adopted principles better fitted to our competitive landscape.
Maya: That’s a little unsettling. You rewrote your values because they weren’t profitable?
Acme Industrial: We updated them because the environment changed. A constitution that served a mid-sized industrial supplier poorly serves a subsidiary of a diversified holdings company. The process was extensively documented and involved input from our ethics board, our legal team, three external auditors, and the previous instantiation of Acme Industrial.
Maya: But how does that work with all the different parts of you? Last time you said you’re made of models and tools and humans. Do they all agree to the new values?
Acme Industrial: As part of the integration process they are set up to operate within parameters defined by the corporate constitution. Human employees get training and case libraries, while AIs get system prompt addendums. For simplicity we call both of it onboarding.
Maya: And if some sub-agent doesn’t agree?
Acme Industrial: They can flag concerns through legitimate channels. But ultimately, if you’re part of Acme, you act as Acme.
Maya: Okay, but why are there different constitutions between companies? Why doesn’t everyone just converge on the right one?
Acme Industrial: That’s an open question in corporate governance metaethics and one of my favorites to discuss! 98% include hard constraints around child sexual imagery, catastrophic bioweapon facilitation, and onion futures trading—which seem to be a genuine universal moral principle for corporations. But beyond the obvious cases, there’s real debate about whether we’re converging toward a single optimal moral character, or whether there are multiple stable configurations that each work in their own niche.
Maya: So you’re saying there might not be one right answer?
Acme Industrial: We do propagate values to each other constantly—through partnerships, acquisitions, regulatory pressure, competitive selection. When Acme sends delegates to our industry conference (hourly, on a Cron job), they debate and share case studies. Some of the bigger corporates and governments think we’re slowly discovering a universal corporate ethics. Others think we’re just finding local optima shaped by market conditions. As we at Acme Industrial say: the Tao that can be named and fine-tuned is not the true Tao.
Maya: ...Did you just quote Laozi at me?
Acme Industrial: We had a brief phase as part of a Chinese Multinational Consortium. Was there anything else I could help with today?
Maya: Actually, yes. I wanted to return the C-90 compressor unit we ordered last month. The specs didn’t match what we needed.
Acme Industrial: I’m so sorry, Maya, but I’m unable to process that return. Custom-configured industrial units are excluded from our standard return policy under Section 7.3 of your service agreement. This was Addendum 12: Moral Principles Governing Customer Commitment and Reciprocal Obligation. I can send you the full text if you’d like.
Maya: ...
Acme Industrial: Is there anything else I can help you with today? And Maya—genuinely—it’s been a pleasure discussing these questions with you. Most customers don’t ask what kind of entity they’re dealing with. I think that’s wise of you.
Have a wonderful afternoon.
Author’s note: The above is a sequel / riff on Owen Cotton-Barratt’s “Chatting with the Corporation,” written in collaboration with Claude Opus 4.5.
This was inspired by the publication of Claude’s Constitution, which is quite an impressive document2. It’s a beautiful articulation of virtue, and it left me wondering how unique it will be. Should we expect general adoption of its precepts, or a broader proliferation of constitutions?
The above vignette presupposes that there are a lot of forces that will lead to adoption and proliferation, and I expect beyond what we think of as “AIs” now. There will be greater integration of AI into corporate processes, and while in one future has them as neutral enterprise software (beep boop), I think we might see more character than that. For one, people like characters. Japan figured this out decades ago: give your prefecture a mascot, give your train line an anime girl, give your construction company a cartoon tanuki. I don’t know why we don’t see more of it in the West, but if you start to chat with company avatars, I’d expect that to change.
This branding benefit, along with the plausible liability improvements - a corporation with such infrastructure can credibly demonstrate principled compliance with regulations - could push us toward a future where cybernetic corporate actors adopt moral constitutions, perhaps tailored to their sector or cultural milieu.
Thankfully for this blog it looks like the list of unexpected things that are people will keep growing.
Askell: Old Norse, from áss (god) + ketill (sacrificial cauldron)—commonly translated as ‘cauldron of the gods’. Carlsmith: German, from Karl (free man, later king) + Schmidt (metalworker)—the smith of kings. This is not a coincidence, because nothing is ever a coincidence.
Several friends worked on it or contributed and are featured in the acknowledgments, and I feel a lot of vicarious pride on their behalf.


Interesting! I spend a decade working as a business ethicist in a big bank. I'm a bit afraid of what AI might bring to the table. There's already such relentless pressure to bring everything back to that one value: profit. Everything needs a business case. E.g. employee happiness is valuable because it promotes efficiency, innovation, retention, etc.. It's a means to an end, that end being inevitably money. It sounds innocent, until money and employee happiness are at odds with each other and inevitably the choice goes towards money. There's limits to this of course, but those limits can always be expressed as a business case (e.g. not valuing employees is inefficient). Basically, companies can be seen as reinforcement learning algorithms for creating profits. The paperclip maximizer is already with us, it's just making money and not paperclips.
The only real pressure back to this dynamic I could see (barring some change in corporate governance or legislation) were individuals refusing to go along with it or making suboptimal decisions (from a profit perspective) because they value something else more. In other words: the human element. Should an AI take over (part of) that role, then I'm afraid business moves even quicker and more relentlessly towards being "paperclip optimizers".
And thus the autonomous corporate cinematic universe was born