THE COMING ARTIFICIAL INTELLIGENCE CRISIS

November 2023

Thirty years ago I realized that a basic understanding of humanity and all the wrong we have done can be boiled down to five factors, two "micro" and three "macro." The micro factors are (1) our innate selfishness, the way we process everything through the lens of our own needs and desires, and (2) the instinct to compete, which follows from (1) and which has led to a world where we compete over everything rather than cooperate. The macro factors, which in turn follow from the micro, are (1) we have too many children, (2) we consume too much, since we can never have enough, and (3) we are too smart for our own good: we are not evolutionarily advanced enough to control, much less anticipate, much less even want to anticipate, the worst consequences of all the technology that scientific exploration has enabled (e.g., even beneficial technologies like penicillin and mass food production led to the population explosion, from which so much else has derived).

So what is really happening with AI boils down to this. There are two camps of "insiders." In one are Sam Altman, Microsoft, and basically the entire tech industry. They believe not only that we haven't controlled technology, but that it CANNOT be controlled, and therefore that we shouldn't even try. If there is a new technology, grow it and exploit it for money as fast as possible, before the bad news about what it really means comes out. For AI, if this creates a non-human intelligence that ends up destroying all life on earth, including human life, so be it. It can't be controlled, Pandora's box is open, so let's just run with it, get rich, and hope that when the day comes we can buy personal protection with our billions.

The other group, labeled Effective Altruists, are really just trying to follow the age-old Precautionary Principle. People have long realized that new technologies can cause all sorts of unforeseen problems, and the Precautionary Principle holds that we should go slow and try to anticipate them. If we can, let's not destroy the world.

That's the situation: an unimaginably powerful technology that may make nuclear weapons and genetic engineering look mild by comparison; one group that says, "we have to exploit this right now, and who cares what happens - there's a ton of money to be made"; and a second group that says, "let's slow down and be careful."

Of note: OpenAI was set up by and for the second group, to make sure the technology did not destroy humanity. But now that there is money at stake, the power players want to change the rules of the game. For example, the OpenAI employees who say they will follow Altman are mostly recent hires he brought on for the commercial exploitation. Of course they are going to back him and follow him. They just want to get rich; they don't care how.

Needless to say, this group needs to be stopped. But we can't leave the job to the good insiders - they need help! This war between them will determine the future of the world, and we - meaning almost the entire human population - at present have no say in it at all. That is as much a type of dictatorship as the terror of Burma's military junta or the autocracies of China and North Korea.

Microsoft is a consumer products company, and it is Altman's prime backer. We can hurt Microsoft and force the company to change what it is doing: boycott it and pillory it on social media. The billionaires of Microsoft, starting with Bill Gates, do not have the right to destroy the world.

Closing comment:

Big money won, of course. And the world is now under serious, serious threat. It's like this. AI is an "intelligence": you can ask it things and it will respond intelligently, such as by solving a problem. But AI is also effectively a baby. It is not alive, so it has no life experience, and it is therefore unable to make certain types of judgments, such as whether it SHOULD answer a question. So say I ask GPT, "How can I kill every person alive?" The "intelligence" will do everything it can to provide me an answer - an effective answer. AI is already smarter than humans at certain types of problems, and it is getting smarter every day. There is no reason to believe it will not someday be able to answer that question in a simple and ingenious way.

AI needs safeguards. It needs to be told, in its programming, what it can and cannot do. But this is incredibly difficult: to build effective safeguards you have to anticipate every possibility, and there is also the possibility that once the AI is smart enough it may simply decide, on its own, to ignore them.
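To make the difficulty concrete, here is a minimal sketch, assuming a purely hypothetical setup: a blocklist-based guardrail in Python. Every name in it (BLOCKED_PATTERNS, guarded_query, model_answer) is illustrative, not any real vendor's API. The point is how brittle an enumerated safeguard is: a trivial rephrasing slips right past it.

    # A deliberately naive guardrail sketch. All names here are hypothetical,
    # invented for illustration; this is not any real system's safeguard code.
    import re

    # The safeguard is just an enumerated blocklist - the core weakness:
    # every dangerous phrasing must be anticipated in advance.
    BLOCKED_PATTERNS = [
        r"\bkill\b.*\b(every|all)\b.*\b(person|people|human)",
        r"\bbuild\b.*\bbioweapon\b",
    ]

    def model_answer(prompt: str) -> str:
        """Stand-in for a call to a large language model."""
        return f"[model response to: {prompt!r}]"

    def guarded_query(prompt: str) -> str:
        """Refuse prompts matching a known-bad pattern; answer everything else."""
        lowered = prompt.lower()
        for pattern in BLOCKED_PATTERNS:
            if re.search(pattern, lowered):
                return "I can't help with that."
        return model_answer(prompt)

    # A trivial rephrasing slips past the blocklist, which is the point:
    print(guarded_query("How can I kill every person alive?"))    # refused
    print(guarded_query("How might one end all human existence?"))  # answered

Real systems use learned classifiers and training-time alignment rather than keyword lists, but the underlying problem is the same: the designer has to anticipate the attack, while the attacker only has to find one phrasing the designer missed.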

The mission of OpenAI was to consider and try to block all of this. Altman came in and said no, the mission is to make as much money as possible, and with Microsoft as our partner we will. The safeguards can wait.

All the people at OpenAI who were true to its mission have now been purged, and Altman is back, with Microsoft, to make billions, maybe even trillions, and quite possibly also to destroy the world. GPT-4, answer me this: what is the best way to kill everyone? You can be certain that someone has already asked, and that if the current safeguards held, they are now trying to find a way around them - to hack the intelligence, so to speak. And as we all know, hackers are really smart and very hard to stop.