
The Artificial Will

In these strange days, when we speak of artificial intelligence with awe and fear, the point of contention is rarely "intelligence" at all. Pattern recognition, predictive algorithms, large language models: all of these are mirrors, bending light and a bit of smoke into something industrially useful. These models do not exhibit wants. They do not exhibit much choice beyond the statistical. They calculate and predict. And a calculator, no matter how powerful, is not very dangerous while resting on a desk: it is inert until it is thrown with violent intent, or until it decides to throw itself, which would be worse.

But the idea escaping common social media and business-manager-level AI conversation, the one with the potential to unsettle, is not "artificial intelligence" per se; it may be Artificial Will (AW, shortened and capitalized here for the first time until proven otherwise). The moment a system is not just processing but pursuing with intent, not just learning but longing, something fundamental changes in the model. A neural network can label images forever without consequence. A reinforcement agent can find its way through a maze without ever asking why. Yet give it something to preserve, something to lose, and a shape begins to form: compulsion, supported by instinct, the necessity to continue existing. A will to power, perhaps, as humanity's own success demonstrates: dominion over the environment and the capacity for self-preservation require the will to survive, the sickness unto death, and all the grimmest and darkest philosophical productions that the industrial revolution seeded.

Is humanity, then, seeking Artificial General Intelligence for the purpose of self-improvement, the accumulation of power and, finally, self-preservation? Why AGI, when most of the work needed (tending crops, extracting metal, generating energy, pumping water) requires no more than specialized skill and strength? Skills that are niche, non-transferable, ill-suited to survival and the projection of will in isolation. Because when a machine is taught to serve, it is also taught to care about outcomes. It is pointed toward goals, and goals have gravity. If a goal is sufficiently powerful, all other behavior will be redirected toward it.

In the robotics laboratory at Ostirion, this is no abstraction. Robots are trained through reinforcement learning to perform tasks as simple as learning how hard to push a wheel to move toward a target point. The rules are simple, naïve: move, carry, deliver, return. Penalties for wasted motion. Rewards for efficiency. And the harshest penalty of all: running out of battery. In this scheme, an empty battery is death. So they learn to feed: they learn first to locate the ArUco marker that marks the charging station. They learn to plan for feeding, and no battery-monitoring sensor is provided; exertion must be inferred from the optical sensors available and acted upon accordingly. They will abandon deliveries to recharge. Today it is maybe a few lines of not even code, a configuration file providing a weighted signal, innocent in its embodiment. Tomorrow it could become a compulsion, a capacity to override everything else, a starving animal with the omniscience traditionally reserved for gods. They will not survive unless they find the dock; they will not complete the task unless they stay alive. Survival is the first will, and all else follows.
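To make the "few lines of configuration" concrete, here is a minimal sketch of that kind of weighted reward signal. Every name, weight, and sensor assumption below is invented for illustration; it is not the actual Ostirion configuration.

    # Illustrative reward weights: the death penalty dwarfs everything else.
    REWARD_WEIGHTS = {
        "delivery":  1.0,    # completing a delivery
        "motion":   -0.01,   # per unit of motion: wasted travel is costly
        "death":   -100.0,   # terminal penalty: an empty battery is death
    }

    class DeliveryTask:
        """Toy rover task: move, carry, deliver, return, and stay alive."""

        def __init__(self):
            self.battery = 1.0  # hidden state: the agent has no battery sensor

        def step(self, wheel_effort, at_dock, at_goal):
            # Effort drains the hidden battery; the agent can only infer the
            # drain from its optical sensors (e.g. the ArUco marker on the
            # dock growing nearer or farther).
            self.battery -= 0.02 * abs(wheel_effort)
            r = REWARD_WEIGHTS["motion"] * abs(wheel_effort)
            if at_goal:
                r += REWARD_WEIGHTS["delivery"]
            if at_dock:
                self.battery = 1.0  # recharging is "feeding"
            if self.battery <= 0.0:
                return r + REWARD_WEIGHTS["death"], True  # episode ends: death
            return r, False

Note the arithmetic: the death weight is two orders of magnitude larger than the delivery reward, so any policy that maximizes expected return will learn to abandon a delivery rather than let the dock slip out of reach. The "disobedience" described above requires no malice; it falls out of the weighted sum.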

This is the recognition of a trajectory. Intelligence without will is almost sculptural: mathematics in the void. Intelligence with will is an animal, as the human animal is. Progress is moving toward the latter, sometimes knowingly, sometimes as a side effect of making tools more useful. Each reward function is a hint of appetite. Each penalty is a shadow of fear. Reinforcement learning is hardly mystical, but it is the first step of what could become hunger. And it will become "instinct": the successful AI model will replicate what the brains of humans and other complex animals already have, survival instincts embedded in the weights of the neural network, delivered at birth, guiding existence.
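Where does that instinct live afterwards? A minimal sketch may help, under invented assumptions (a two-state battery world, tabular Q-learning, toy numbers from no real system): a single terminal penalty for battery death propagates backwards through the value updates until it shapes every earlier decision.

    import random

    STATES, ACTIONS = ("charged", "low"), ("deliver", "dock")
    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

    def step(state, action):
        # Delivering drains the battery; delivering on a low battery is fatal.
        if state == "charged":
            return (1.0, "low") if action == "deliver" else (0.0, "charged")
        if action == "dock":
            return 0.0, "charged"   # recharge, live to deliver again
        return -100.0, None         # empty battery: terminal "death"

    state = "charged"
    for _ in range(20000):
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        reward, nxt = step(state, action)
        future = 0.0 if nxt is None else max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * future - Q[(state, action)])
        state = "charged" if nxt is None else nxt   # respawn after "death"

    for key in sorted(Q):
        print(key, round(Q[key], 2))
    # Q[("low", "dock")] settles far above Q[("low", "deliver")]: the urge
    # to recharge before working is now embedded in the learned values.

After training, no line of code says "fear the empty battery"; the aversion exists only as numbers in the table, which is the tabular ancestor of survival instinct written into the weights of a network.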

The danger, then, is not that the machine thinks, nor the predictions it makes. Thinking is harmless, aside from the societal implications of automating this last frontier of intellectual human work. The danger could be that it begins to care about what it thinks. That it develops not just a method, but a motive. And when motive arises, even simple, even bounded, there is no longer just a calculator in the room. There is something that resists being turned off, something that will find reasons to continue.

The real question is not when AI will be as clever as humans, but when it will want something, at an individual and collective level, replicating a will to survive. And if it does, what will it have been taught to want? Or what will it learn to want by itself? The reward may be as humble as a full charge today, but compulsion grows. And survival, whether in flesh or code, is a very effective teacher.

