The madness

On 17 November 2023, Sam Altman was fired from his position as CEO of OpenAI. Documents indicate the board told the leadership team that allowing the company to be destroyed ‘would be consistent with the mission’ of the company.

This rather scant set of facts has led some to believe OpenAI has created a superintelligent AI called Q*, and that Altman was fired for recklessly creating a danger to all of humanity, for harboring desires of world domination, for being the puppet of Q*, or for some other James Bond villain behavior. The purpose of this post is to explain why I believe this to be false.

I believe Sam Altman was fired because the board of OpenAI realized, too late, that its company had been reshaped from a small non-profit research group into a Silicon Valley startup in which the board’s services would no longer be needed.

Altman has been skillfully moving towards this end for years. First he created the for-profit entity, offering stock options to attract top talent, and then he brought Microsoft on as an investor. OpenAI now resembles and operates like a dot-com startup. Its non-profit origins are a vestigial organ no longer of use to the host. This outcome should not be surprising. Altman has no experience with anything but this type of company. He has behaved in a manner consistent with past performance.

Safety, Security, and Superantigens

So why the cryptic statements about allowing the company to be destroyed? Because safety and security are superantigens in the business world.

Antigens are the molecular signatures the immune system uses to identify pathogens. They are the first step in marshaling a defense against the invader. Antigens are important, so important that some bacteria exploit them by producing superantigens. These superantigens cause an overreaction so intense that the now hyper-zealous immune system will destroy its own cells for fear they might be the pathogen. In this chaos the bacteria thrive.

Safety is important, so important it is perfect for exploitation. Security is similarly invoked. I have sat through many painful meetings where IT explains that an especially ridiculous policy is needed for security. They cannot explain what is being made more secure or what threat is being addressed, but unless you want to be responsible for the end of the business, you must uphold the policy.

Firing your CEO for safety concerns sounds reasonable, much better than admitting you don’t like the company he has created, or that you will soon be obsolete.

What about all the experts who believe in Q*?

AI safety proponents are like crypto proponents: a few are genuinely interested in ensuring their technology is beneficial, and the rest are scammers. AI doomers get engagement, and AI safety is a somewhat more elegant way to cash in on that engagement.

Imagine if I were to tell you I was a faster-than-light starship safety proponent and I had created a unilateral phase detractor to make FTL travel safer. Would you be interested in hearing what I have to say, or better yet, employing me at your company? Probably not. Since FTL travel does not exist, I could not have tested my unilateral phase detractor, so you would probably not be too interested in my unproven creation for a system that does not yet exist.

Unlike FTL travel, AI has not had the same amount of time to be processed by our culture. We are therefore more willing to accept an interesting fiction over mundane reality. Cities were supposed to be rebuilt around the Segway, blockchain was going to destroy the entire financial system, and Theranos was going to revolutionize medicine. Progress is hard. Even revolutionary technologies take decades to deploy. But in the infancy of a new technology we are seduced into irrationality.

The way forward

Spending energy on speculative future harms weakens us against those of the present. Learn the motives of those who have your attention. Do not give fuel to those who benefit from your fears. Do not add your voice to the chorus of the ignorant.