Discussion about this post

Herman Dolk:

Interesting! I spent a decade working as a business ethicist at a big bank, and I'm a bit afraid of what AI might bring to the table. There's already relentless pressure to bring everything back to that one value: profit. Everything needs a business case. E.g. employee happiness is valuable because it promotes efficiency, innovation, retention, etc. It's a means to an end, and that end is inevitably money. It sounds innocent, until money and employee happiness are at odds with each other, and then the choice inevitably goes toward money. There are limits to this, of course, but those limits can always be expressed as a business case too (e.g. not valuing employees is inefficient). Basically, companies can be seen as reinforcement learning algorithms for generating profit. The paperclip maximizer is already with us; it's just making money instead of paperclips.

The only real counter-pressure to this dynamic I could see (barring some change in corporate governance or legislation) was individuals refusing to go along with it, or making suboptimal decisions (from a profit perspective) because they value something else more. In other words: the human element. Should an AI take over (part of) that role, I'm afraid businesses will move even more quickly and relentlessly toward becoming "paperclip optimizers".

Bryant:

And thus the autonomous corporate cinematic universe was born

