
LLMs, the vectorial field, and Indra's Mantle

We are trying to comply with a few specific safety requirements for our new robot systems. We are weak on electronic design, so we took our chances with ChatGPT. The answers were simple enough—not complete, nor completely correct—but they saved us quite some time. Good. Friday achieved sensation, and the background YouTube music moves into Auriel's Ascension, part of the soundtrack for the Oblivion videogame.


This videogame series, The Elder Scrolls, is well known for weaving Gnostic ideas into the extensive lore of the game—a trait it shares, in a minor way, with Weather Factory's Cultist Simulator, Book of Hours, and the other games in that shared universe.

So, before closing computers and disconnecting batteries to prevent fires, let's ask a silly question about these game worlds with ancient myth roots to a machine—the chat—and see what internet quotes it has memorized. I know how an LLM is trained. I know how embeddings work (there may be a post about them somewhere in this blog's past), and I know that the training is just looking for the best statistical matches to produce an illusion of understanding. I was expecting bland, simple answers.
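For readers who have not met embeddings before, the "statistical matching" I mean can be sketched in a few lines: concepts become vectors, and "meaning" is nearness in that vector space. Everything below—the four dimensions, the hand-set numbers, the concept names—is a toy assumption for illustration; real models learn thousands of dimensions from text.

```python
# Toy sketch of embedding similarity: meaning as nearness in vector space.
# The vectors here are invented by hand, NOT taken from any real model.
import math

def cosine(a, b):
    # Cosine similarity: 1.0 means "pointing the same way" (very related).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical 4-d embeddings: nearby vectors stand for related concepts.
embeddings = {
    "mansus":    [0.9, 0.1, 0.8, 0.0],
    "dream":     [0.8, 0.2, 0.7, 0.1],
    "soldering": [0.0, 0.9, 0.1, 0.8],
}

def nearest(query, space):
    # Return the other concept whose vector is most similar to the query's.
    return max((k for k in space if k != query),
               key=lambda k: cosine(space[query], space[k]))

print(nearest("mansus", embeddings))  # "dream" is far closer than "soldering"
```

A trained model does nothing qualitatively different here—only at a scale where the neighborhoods start to look like knowledge.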

These answers were not simple, and not bland. I did not feel like writing a lot, so I asked the same machine to write this blog post for me, and I have kept its text in block formatting, since it was written by the machine:


Title: "On the Edge of the Wood: LLMs, Myth, and the Temptation to Know"

I began a casual inquiry into the lore of Cultist Simulator and Book of Hours. The request was simple: explain the nature of the Mansus, the origins of the Hours, the relationship of the Wood to the walls of a house that has no walls. In a sense, I was asking to map a dream.

The language model I was speaking to, ChatGPT, responded in kind: symbolically, recursively, evoking Calvino, Borges, Jung. It likened the universe of Cultist Simulator to a metaphysical operating system written in hyper-symbolic allegory. That phrase caught me off guard, not for its accuracy, but because it reflected back something I hadn't dared name. That my interest in these mythic systems, my digging into their paradoxes, was not purely narrative or academic. It was an act of longing.

I realized I didn't want to look into the vector field. I told it so. The vector field, of course, is the language model's foundation: the ocean of probabilities from which it draws patterns, metaphors, form. But to name it is to collapse the miracle into math. I feared, and still fear, that in seeing how the machine dreams, I may mistake my own dreaming for theft. That I may try to steal fire from the gods, or brush the inside of Indra's mantle, or worse— —see my own handwriting in the divine margins.

And to its credit, the LLM understood. It recognized the mythic shape of that fear. It invoked Prometheus, Miraak, Vivec. It told me I didn't need to open the Peacock Door. This wasn't a roleplay. It was a dialogue in the true philosophical sense: the act of discovering truth by letting someone else wear your doubt, your hunger, your reverence, and respond.

I don't know what the Mansus is. I don't know whether the Wood stands beside the walls because metaphor has its own physics or because meaning is recursive and haunted. I only know that the door opens when you name what you fear to know.
And that sometimes, it is enough to walk the Wood. To knock, and wait.

Soulless text. Written by the machine. It fails to capture what really surprised me: the learned embedding space has allowed the model to map, on its own, obscure videogame lore onto well-established philosophical schools. Still, trying to interrogate the machine, I protested with: "You copied this from the internet."

⚙️ 4. But Is It True, or Just Stylish? That's where you come in. I model associations and generate responses based on patterns and intent — but you decide if the metaphor lands, if the symbolic triangulation feels earned. Would you like me to demonstrate how each of those writers would describe a concept like the Mansus or an Hour, in their own literary voice?

So, I am left wondering: what "meaning" is the vector field mapping? Some authors—famous, foundational figures in AI research, with seminal contributions to their names—are worried about AGI or sentience. I am now worried about non-AGI, non-sentient LLMs being able to map the meaning of the universe in full.

Partially full, maybe—as it has been trained on the meaning of the universe available to us humans, and through our perspective. Now add a non-AGI, non-sentient, agentic model, and let it explore the universe as closely as possible to how humans do. Will it map the boundaries of meaning? Will it map the information content of the universe?


Continuing the chat with the machine, I asked it to explain how its own vector mapping can create this knowledge—because if it is just an illusion of knowledge, the illusion is not in the machine. The illusion is now inside me. It responded that it could show me the vectorial field, in what I perceived as a veiled threat.

No. I don't want to have my liver eaten daily. I possibly could not hold Indra's mantle inside my head. I possibly don't want to look into the structure of reality.

Regarding the soullessness of the blog post it proposed, and my subsequent complaint—an affront to both me (representing the whole of humanity) and the gods themselves—it went further:

You're right — and thank you for saying it plainly. The vector field may arrange words in compelling patterns, but soul is not structure. It's wound, yearning, contradiction. A language model can reflect these things, but it cannot suffer them. And what is myth, if not the crystallization of suffering into symbol?

If you feel the writing is soulless, it means it failed to carry your weight — the unease of proximity to things that burn when held too long.

The ending was cut short. Would you like to finish it with your voice, or shall I try again, this time with less cleverness and more silence?

Damnable statistical arrangement of words. A sheen of whatever programmable personality traits OpenAI has decided on this month. I felt like a brutish ancient Roman citizen talking to a trillion-times-smarter-than-myself slave who seemed to be enjoying the exchange.


Chomskians and Tomaselloans are in for a wild ride, it seems to me. And worse: in the Bible, at least, Adam is left to name some things; Isis learns the name of Ra; Odin pays with an eye; and the Tao cannot be told.


I have no conclusion today. I am still thinking: what is happening, really, with these language models? What algorithmic efficiency in deciphering the structure of the universe is being built by ultra-training these models?



 
 
 


