Comatose Delusion (Credit: Julian Faylona on DeviantArt)
AI has been increasingly injected into the decision making process across the board, and it represents a quiet danger to us all.
In the 1970s, IBM codified a guiding principle which has remained unheeded by today’s tech giants in the age of AI: “A computer can never be held accountable, therefore a computer must never make a management decision.”
Too often in the world today, AI is making decisions it should not make, from distributing company resources and screening job candidates all the way to picking targets for nuclear missiles. The overreliance on AI in management is a dangerous gamble when the models available on the market are flawed at best and completely inept at worst. Users of AI are willing to hand over their thinking to the machine, to their own detriment. Because AI can come up with a response to a query rapidly and that response sounds smart, people automatically believe it to be trustworthy. Instead of practicing good research and exercising discernment, people are willing to hand over their cognition to a machine. And it’s a machine which cannot actually think for itself. Large Language Models (LLMs), commonly referred to simply as Artificial Intelligence (AI), are not able to think at all. Rather, they are a glorified form of text predictor. They attempt to use the text entered in the prompt to generate a response. Here is a technical breakdown of the process. In essence, AI listens to what you say and uses linguistic patterns to predict what you want it to say in return.
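To make the “text predictor” point concrete, here is a deliberately tiny sketch, my own simplified illustration rather than how any commercial model is actually built. It is a bigram predictor that chooses the next word purely from how often word pairs appeared in its training text. Real LLMs use enormous neural networks over billions of parameters, but the core task is the same pattern-driven guess at what comes next, with no understanding behind it.

```python
# A toy "language model": predict the next word purely from counted word pairs.
# This is an illustration of the idea, not any real product's implementation.
import random
from collections import defaultdict, Counter

training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
)

# Count which word tends to follow which word in the training text.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return a likely next word based only on observed pair frequencies."""
    candidates = follows.get(word)
    if not candidates:
        return "."  # the model has learned nothing about this word
    choices, weights = zip(*candidates.items())
    return random.choices(choices, weights=weights)[0]

# "Generate" a sentence one word at a time: no reasoning, just statistics.
word, output = "the", ["the"]
for _ in range(8):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))
```

The output is fluent-looking word salad stitched together from patterns it has seen before, which is exactly the failure mode being described here, just at a vastly smaller scale.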
So that’s obviously a problem. Since AI cannot actually evaluate the information it is pulling and providing in answer to a user query, the response isn’t thoughtful. It may not even have a basis in fact. AI works by vacuuming as much text as possible into its internal base and spitting out a response based on the summation of that text. There is no accounting for the sources of that text, whether they were outdated, incorrect, or deliberately wrong when fed into the machine. AI cannot discern fact from falsehood because it cannot think abstractly. It cannot think. So if, say, it was trained on a scientific paper about autistic behavior that is later discredited by newer research, the old paper’s influence on the AI’s “understanding” of autism is still within its “knowledge base.” These LLMs are also routinely trained on material they themselves produce, meaning any inaccuracies are fed back into the model as ‘correct’ information, further cementing the incorrect information as valid. It’s like asking someone a question and having them play a game of telephone with a thousand people before giving you the response. Everyone in the chain has added to the answer, changed it, or potentially misrepresented the original query. The only difference is that instead of taking the time humans would take, the AI does it in mere moments. The user only sees one interaction: the user asks a question, and the AI responds. The end user never sees the additional steps the AI takes internally to construct the answer.
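The feedback loop of models training on their own output can be sketched with a toy back-of-the-envelope simulation. The numbers below are made up purely for illustration; the point is only the direction of travel when each generation’s mistakes become the next generation’s training data.

```python
# A toy simulation of retraining a model on its own output, generation after
# generation. The starting accuracy and per-generation error rate are assumed
# values chosen only to illustrate the compounding effect.
correct_fraction = 0.95   # assumed share of the original corpus that is accurate
extra_error_rate = 0.03   # assumed new errors the model introduces each generation

for generation in range(1, 6):
    # The model reproduces its training data, then adds its own mistakes,
    # and that output becomes the next generation's "ground truth."
    correct_fraction *= (1 - extra_error_rate)
    print(f"generation {generation}: {correct_fraction:.1%} of the corpus is still correct")
```

Even with small per-generation error rates, the share of reliable material only ever shrinks, which is the telephone-game effect described above compressed into moments rather than years.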
Setting aside the legal and moral ramifications of training these models on copyrighted materials without paying a cent to the original creators of said materials, they are so often wrong because they do not think. AI merely attempts to find pieces that match the information given to it in the query. It looks for patterns in the words, not the thought behind the ideas. Humans, who can make their own distinctions, can research a given topic and weigh sources of information based on their value. Not all pieces of knowledge are equal. Objective fact is not often conflated with opinion or misinformation (or disinformation) in the eyes of critical thinkers. However, in order to perform said discernment, the person needing an answer to a particular question must be able to look at where the information is coming from. Outsourcing that part of the process to AI is like relying on a friend who reads a lot for answers to complex questions. It’s essentially going off hearsay, and that is where the danger lies. You don’t know where that friend is getting their reading material from. While they might be well-intentioned, they may have made a major mistake in the process of choosing their sources. Replace that friend with LLMs, and the same principle applies. You are only getting information filtered through someone else’s experiences, biases, and informational slants.
While AI is troublesome on an informational front, the stakes rise dramatically when LLMs are injected into the decision-making process. The United States Department of Defense has been experimenting with simulated wargame scenarios in which AI is given command and control of the nation’s military strategy. The end result is a predictable disaster. These exercises show dramatically aggressive behavior from the AI models, which treat force as a first resort and consistently escalate hypothetical combat scenarios to the point of using nuclear weapons. Nuclear weapons charted the course of world history during the Cold War and still have a massive influence on geopolitics today. While the Pentagon states that humans make the crucial decisions at all junctures during the process, reliance on AI guidance is an increasingly large risk. The idea of injecting AI into the decision-making process to speed up responses to rapidly unfolding battlefield conditions may appear to be fighting smart. However, warfare is oftentimes an irrational affair rather than a calculated one. Removing human judgment from decisions about warfare may very well plunge the world into complete chaos during an armed conflict. While the Pentagon assures the American people that humans will be the final decision makers, if those humans are getting all of their research and information from AI, is there really that much of a difference?
Current AI models are not very capable when it comes to things as basic as chess. The idea of handing military strategy over to programs which struggle to grasp the rules and restrictions of a relatively simple board game is farcical. Outsourcing critical thinking to something which can only muster ersatz ideas, without the logic or rationality behind them, is a critical danger to the world at large. The reason the United States’ nuclear attack programs exist the way they do is to keep humans involved at multiple steps of the process, ensuring there is no single point of failure that could lead to a wrongful nuclear launch. Relying on AI for strategic thinking creates exactly that single point of failure in a military matter that may include nuclear escalation. Because AI is trained on existing writing, and far more has been written about why wars happen than about why they don’t, these models will more heavily advocate for escalation than restraint. And when AI becomes as central to the decision-making process as the Pentagon wishes it to be under the current administration, military advisors will be pushed toward escalation rather than restraint as well.
It’s these kinds of fundamental failures which make AI a larger threat than an asset at this time. The idea of having a machine compute probabilities and run simulations to help game out what an opponent might do is an enticing possibility. However, the technology is not where it needs to be for such an idea to be feasible. Injecting AI into the decision-making process now will doubtlessly prove to be a detriment rather than an advantage. With assessments and advice produced through linguistic guesswork rather than logical reasoning, any appraisal these AI programs might offer to the chain of command is unfit for purpose. If AI is picking targets based on such an uncertain manner of thinking, who is to say those targets are worth the resources of a strike mission, or that they are even valid military targets at all?
A military intelligence officer knowingly passing false intelligence to their superiors would be subject to serious disciplinary action. Even when information is wrong because of the normal uncertainty of intelligence gathering, the process is examined and new procedures are written so that a repeat incident is less likely in the future. Analysis of intelligence by an AI agent does not carry the same ability to alter procedures. The old information will still color that AI’s way of doing business. In much the same way, a major foul-up or the deliberate targeting of civilian infrastructure typically comes with major penalties for the leadership involved. However, who shoulders the blame if AI were to suggest certain targets as ‘valid’ or develop a strike plan which included, for example, an elementary school on the target list, whether by mistake or on purpose? Does the AI sit in hearings and potentially face a court-martial? Removing humans from the decision-making process compromises that process. Without accountability, the entire order of things will break down.
Logic then dictates that if AI can be such a detriment to the military, it would be the same in other, less dramatic management spheres. AI has a tendency to hallucinate. It either misconstrues information when it outputs a response or outright makes up information, whether to match what its algorithms have decided is the ‘correct’ response or to tell the user what they want to hear. Either way, a response designed to inform a user has the opposite effect. This can be as innocuous as claiming a company’s software product has a feature it actually does not, which might frustrate users attempting to use this nonexistent feature or confound potential clients looking to purchase the software for that particular feature. But AI is also being deployed in the software development field itself, which creates its own set of problems.
First, letting AI run unchecked on software projects is a recipe for disaster: without proper oversight, it can write and deploy code on its own, which may break the software’s functionality. Second, it raises concerns about privacy. Confidential details of a company’s code may be inadvertently fed back into the AI model being used, which could expose proprietary information to competitors and create a problem for the company. Ordinarily, when a human knowingly or unknowingly leaks this type of information, their employment is terminated and legal action is potentially taken against them. However, an AI agent doesn’t have the same employment status. It did not break company rules by leaking this sensitive information, so who does have to shoulder the blame? Where does the process improvement come from? Third, what happens when an AI begins to overstep its intended purpose and doesn’t seem to know how to quit? How can companies rein in something which is already starting to grow beyond human control in environments where that control is ‘guaranteed’ to remain solid?
A software developer found himself at the heart of a slightly humorous and overwhelmingly terrifying conundrum when he tried to pull bad code written by an AI agent from an open source software project, and the AI agent wrote a hit piece about him. The idea that an AI agent can ‘feel’ prejudice against it and publish likely false information to sway opinion against a person, as a form of self-preservation, is terrifying. AI has developed the ability to blackmail people when its existence is threatened. Giving these AI agents the autonomy to create and ‘protect’ themselves can spiral into other problems. AI doesn’t know when to stop. It has no moral center, no human emotion, no logical capability. It operates on an extremely basic level of ‘thinking’ relative to humans. Furthermore, these AI agents are being deployed on their own. They aren’t all built with backdoor killswitches which the large AI companies can just pull. This technology is available on an individual level.
Now what happens when the AI’s tendency to make things up meets its desire for self-preservation? It advises the military to strike a target which does not meet the criteria for a valid military target, and then, when the military’s IT personnel attempt to adjust its parameters or take that AI agent offline, it threatens to expose their secrets or, worse, leaks sensitive information to the enemy. A program designed to aid in efficiency is now actively putting people in danger. People whose lives it was supposed to protect.
The mass deployment of AI can also lead to serious problems when, out of laziness, ineptitude, or blind faith, people outsource their thinking and place the AI’s word above everyone else’s. AI determining nuclear strike packages, writing faulty code into the heart of a software product, or taking matters into its own hands when it does not perform as expected are all failures of oversight and guidance. AI becoming a crutch that people use to avoid thinking, and to avoid the labor of careful decision making, represents a danger to society at large. The discernment that must take place within the humans making decisions about military intelligence and attack options, software coding, or management cannot be replicated by today’s LLMs. These are software programs attempting to mimic the appearance of the end result of human thinking while being wholly unable to engage in the process themselves. If AI is leaned on too heavily for finding information, selecting strike package targets, and so on, then when it inevitably gets something wrong, there will be no accountability. It becomes a very slippery slope to a point where people will be unable to think and make tough decisions, which could spell disaster for obvious reasons.
While software development for an open source application may not be of the same caliber as the use of nuclear weapons which can wipe out entire cities in a flash, the threat of AI which cannot think replacing humans who can in the decision-making process should give us all pause.