Ghost in the Machine
Eon Systems uploaded a fly brain. It walks. It grooms. It feeds. It cannot learn a single new thing.
René Descartes had a problem he could not solve, and it has haunted philosophy ever since.
The problem was this: the mind and the body appear to be made of fundamentally different stuff. The body is a physical object that occupies space, obeys mechanical laws, and can be dissected and measured. The mind seems to be something else entirely: it thinks, it feels, it is aware. It has no mass. You cannot cut it open. In 1641, Descartes distilled this intuition into a doctrine: res cogitans, thinking substance, and res extensa, extended substance. Two irreducibly distinct things.
The trouble was explaining how they interact. If mind and body are categorically different, how does the decision to lift your arm actually lift your arm? Descartes proposed, weakly, that the pineal gland was the site of their meeting. The mind somehow pushed on the body there. He knew this was unsatisfying. He died before he resolved it. The problem remained.
Three hundred years later, Gilbert Ryle took aim at the whole framework. In his 1949 book The Concept of Mind, he coined the phrase that would outlive everything else he wrote: the ghost in the machine. This was mockery. The ghost, the immaterial mind, the Cartesian soul, was, Ryle argued, a category error. Minds are not things that inhabit bodies the way ghosts inhabit houses. To ask where your mind is located is like visiting Oxford, seeing the colleges, the libraries, the faculty, the laboratories, and then finally asking to be shown the university. You have already seen everything there is to see. The question assumes a kind of independent entity that does not exist. There is no ghost. There is only the machine, doing what machines do, and we have been fooled by grammar into thinking there must be something else inside.
Ryle thought he had closed the issue. He had not.
In 1967, Arthur Koestler published his own book with the same title, The Ghost in the Machine, as a rebuttal. Something strange was happening inside biological organisms, Koestler argued: the hierarchical architecture of the nervous system produced behaviors that could not be explained by simply pointing at neurons. Evolution had built something that exceeded its own parts. He kept Ryle’s phrase and inverted its meaning — the ghost, in his telling, was not an error but an open question.
That argument has never been settled. Consciousness researchers, philosophers of mind, and neuroscientists have spent the decades since accumulating evidence and frameworks without converging on an answer. Despite the scientific advances, the hard problem, why there is something it is like to be a brain, remains nearly as hard as it was in Descartes’ time. Then someone tried to copy one.
A San Francisco-based company called Eon Systems posted a video online. It did not settle the argument. It threw a hand grenade into the debate by turning a thought experiment into an actual one. The thought experiment: if you digitized every neuron and every connection of a biological brain, and you ran that copy in a new body, would it be the same mind?
The video (embedded above) shows a fruit fly — a Drosophila melanogaster — walking, feeding, and grooming inside a physics simulation. Nothing unusual about a simulated fly. What was unusual was the brain controlling it. The fly’s behavior was driven by an emulation of a real biological brain, wired neuron-to-neuron, synapse-to-synapse, from electron microscopy data of a real Drosophila nervous system. A copy of a biological connectome, running in simulation, making a body move.
Eon’s founder, Michael Andregg, described it as the world’s first whole-brain emulation to produce multiple behaviors in an embodied simulation. He was, with important caveats, correct.
In 2024, Philip Shiu, now a senior scientist at Eon, and collaborators published a paper in Nature that represented years of collective work by hundreds of researchers across multiple institutions. The researchers used the FlyWire connectome, a complete map of synaptic connections in the adult Drosophila brain assembled through electron microscopy and machine-learning-assisted reconstruction, to create a computational model of the entire fly brain. Neural NeXus covered the FlyWire consortium's work when it published in October 2024 (BioWire 018) — the largest brain map of any organism at the time, a project involving over three million manual edits and 8,453 identified neuron types. What Eon has now done is take that map and run it. The numbers: 140,000 neurons, 50 million synaptic connections, every neuron typed by neurotransmitter identity. The complete wiring diagram of a biological brain, rendered in silico.
The FlyWire connectome — all 140,000 neurons of the adult Drosophila melanogaster brain, color-coded by cell type, assembled from electron microscopy data by the FlyWire consortium. From Dorkenwald et al., 2024, Nature.
The model is a nearly perfect anatomical replica, but the lights were off: the neurons could not fire. Shiu et al. gave them dynamics using a simple neuron model in which each cell accumulates input, fires when a threshold is crossed, and resets. They activated specific neurons computationally and asked whether the downstream activity matched known experimental results. For feeding behavior, activation of sugar-sensing and water-sensing gustatory neurons in the model accurately predicted which neurons would respond in living animals. For grooming, mechanosensory activation predicted the correct antennal grooming circuit.
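The accumulate-fire-reset dynamics can be sketched in a few lines. This is a minimal leaky integrate-and-fire network for illustration only, not Shiu et al.'s code; the three-neuron chain and all parameter values are invented for the example.

```python
import numpy as np

def simulate_lif(weights, external_input, steps=100, threshold=1.0, leak=0.9):
    """Minimal leaky integrate-and-fire network.

    weights[i, j] is the synaptic strength from neuron j to neuron i.
    Each step, the membrane potential leaks toward baseline, accumulates
    synaptic and external input, and any neuron crossing threshold
    spikes and resets to zero.
    """
    n = weights.shape[0]
    v = np.zeros(n)                       # membrane potentials
    spikes = np.zeros(n, dtype=bool)      # who fired on the previous step
    spike_counts = np.zeros(n, dtype=int)
    for _ in range(steps):
        v = leak * v + weights @ spikes + external_input
        spikes = v >= threshold           # threshold crossing
        spike_counts += spikes
        v[spikes] = 0.0                   # fire and reset
    return spike_counts

# Toy "connectome": neuron 0 is driven externally and excites neuron 1,
# which excites neuron 2, so activity propagates down the wiring.
W = np.array([[0.0, 0.0, 0.0],
              [1.2, 0.0, 0.0],
              [0.0, 1.2, 0.0]])
counts = simulate_lif(W, external_input=np.array([0.15, 0.0, 0.0]))
```

After 100 steps, all three neurons have fired: the slow external drive charges neuron 0 to threshold every few steps, and each of its spikes cascades down the chain one step later.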
The model matched real neural behavior with over 90% accuracy — built from wiring data alone, with no parameter fitting. But it was a brain without a body. Activation flowed through the connectome and produced motor outputs — signals that would, in a living fly, reach the ventral nerve cord and drive the muscles. In the model, those signals had nowhere to go. The brain was predicting movement. Nothing was moving.
The new Eon demonstration addresses this directly. They integrated Shiu’s brain model with a physics-based simulation of the Drosophila body and musculoskeletal system (NeuroMechFly v2) to close the loop. In this system, sensory input enters the emulated brain and neural activity propagates across the full connectome. Motor commands then emerge as descending neuron signals. These turn into control outputs for turning, forward velocity, grooming, and escape, which are then translated into joint torques and leg trajectories by body-level controllers trained through imitation learning.
The brain decides. The controllers execute. The fly walks. It grooms. It feeds.
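The decide-execute loop has a simple shape, which the toy program below makes concrete. Every class here is a stand-in invented for this sketch (a fixed random linear "brain", a trivial three-joint "body" with placeholder physics); the real system couples the Shiu et al. connectome model to NeuroMechFly v2, and its interfaces are not public API.

```python
import numpy as np

rng = np.random.default_rng(0)

class ToyBrain:
    """Fixed weights standing in for the frozen connectome: no learning."""
    def __init__(self, n_sensors, n_descending):
        self.w = rng.normal(scale=0.5, size=(n_descending, n_sensors))
    def step(self, sensory):
        return np.tanh(self.w @ sensory)       # descending-neuron signals

class ToyController:
    """Stands in for the imitation-trained body-level controllers."""
    def act(self, descending):
        return 0.1 * descending                # commands -> joint torques

class ToyBody:
    """Three 'joints' with trivial physics and a proprioceptive sensor."""
    def __init__(self):
        self.angles = np.zeros(3)
        self.velocities = np.zeros(3)
    def read_sensors(self):
        return self.angles                     # proprioception only
    def apply_torques(self, torques, dt=0.01):
        self.velocities += dt * torques
        self.angles += dt * self.velocities

brain, controller, body = ToyBrain(3, 3), ToyController(), ToyBody()
stimulus = np.array([1.0, 0.0, 0.0])           # a constant sensory drive

for _ in range(100):                           # the closed loop itself
    sensory = body.read_sensors() + stimulus   # sensory input enters,
    descending = brain.step(sensory)           # propagates through the brain,
    torques = controller.act(descending)       # becomes motor commands,
    body.apply_torques(torques)                # and the physics advances
```

The point of the sketch is the division of labor: the brain emits descending commands, the controller translates them into torques, the body moves, and the new body state feeds back in as the next sensory input.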
Incredibly, the behaviors emerged without training the brain model. No reinforcement learning shaped the connectome’s activity, no fine-tuning adjusted its synaptic weights against observed fly behavior. The connectome structure, implemented in the simplest possible neuron model and coupled to imitation-trained body controllers, was enough to produce recognizable, sequenced behavior. That is the result Eon is claiming, and it is the right thing to be struck by: the wiring diagram, even mediated through an engineered motor interface, contains a remarkable amount of what a fly needs to be a fly. Eon’s own technical writeup cautions that broader behavioral repertoires will likely require additional learning mechanisms, richer motor interfaces, and more functional data. The demo is a first step, not a final proof.
The model used to implement the fly brain is computationally simple by design. Each neuron accumulates input, its membrane potential leaking back toward baseline between spikes, and fires when a threshold is crossed. What it does not do — cannot do, in its current form — is change. The synaptic weights are fixed. The architecture is static. The brain captured in the FlyWire connectome is a snapshot of a single adult fly’s nervous system at a single moment in time.
What Eon has emulated is not a mind in process. It is a mind in amber. In this sense, every large language model ever deployed shares the same limitation — frozen weights, no capacity to learn from the next input. The plasticity problem is not unique to brain emulation. It is the central unsolved constraint in all neural intelligence.
There is a science-fiction version of whole-brain emulation where this problem is already solved. In Richard K. Morgan’s Altered Carbon, every human carries a cortical stack at the base of the skull, a device encoding the complete mind as Digital Human Freight. It contains personality, memory, identity: the full architecture of consciousness, stored on alien metal. In this universe, bodies are disposable. Stacks are what matter. The protagonist, Takeshi Kovacs, wakes up in strangers’ bodies across centuries. Each time, he adapts: new proprioception, new reflexes, new flesh. He forms new memories. His mind, contained in the stack, rewires to fit hardware it was never born into. That is plasticity. And that fiction is Eon’s stated ambition: the human mind, digitized and portable. The cortical stack is the fly connectome carried to its endpoint. But plasticity remains the unsolved problem.
Eon’s digital fly cannot do any of this.
Biological neural plasticity is the ability of synaptic connections to strengthen or weaken in response to experience. That is the physical substrate of learning and memory. It is what allows an organism to update its model of the world, to form new associations, to recover from injury, to change over a lifetime. A static copy of a biological brain can generate embodied behavior, but without plasticity it may be closer to a preserved behavioral architecture than a living mind. The digital fly executes the repertoire of the fly it was copied from, unable to deviate, unable to learn, unable to form a single new memory from contact with a world that is already different from the one its connectome was shaped by.
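What "strengthen or weaken in response to experience" means can be stated in one line. Below is the textbook Hebbian rule, the simplest form of synaptic plasticity; it is a generic illustration of the ingredient the frozen connectome lacks, not a claim about how Eon would implement learning.

```python
import numpy as np

def hebbian_update(weights, pre, post, lr=0.01):
    """Strengthen weights[i, j] when presynaptic j and postsynaptic i
    fire together -- "cells that fire together wire together".

    pre and post are 0/1 spike vectors; lr is the learning rate.
    """
    return weights + lr * np.outer(post, pre)

w = np.zeros((2, 2))
pre = np.array([1.0, 0.0])     # presynaptic neuron 0 fired
post = np.array([0.0, 1.0])    # postsynaptic neuron 1 fired
for _ in range(10):            # repeated pairing strengthens one synapse
    w = hebbian_update(w, pre, post)
```

After ten pairings, only the 0-to-1 synapse has grown; every other weight is untouched. A static connectome applies no such update: its weight matrix is read once and never written again.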
The FlyWire connectome also maps only the central brain. It does not include the full peripheral nervous system, the sensory neurons in the legs, the mechanoreceptors in the wings, the proprioceptive circuits that tell the brain where the body is in space. The connection between the emulated brain and the simulated NeuroMechFly body required Eon to make engineering assumptions — educated guesses about how motor outputs from the connectome map onto the muscles of the simulated body. Those assumptions were good enough to produce recognizably fly-like behavior. They were still assumptions. The ghost and the machine were not as perfectly married as the headline framing implied.
Eon names the next constraint itself. The visual system is implemented in the current demo — the Lappalainen connectome-constrained visual motion model is integrated, and the fly can in principle respond to looming threats. But the company’s own technical writeup describes the visual activations as “somewhat decorative”: they are present in the simulation but do not currently drive behavioral outputs in any meaningful way. A fly that cannot respond to what it sees is a significant constraint on what the demo actually demonstrates.

Then there is verification. The data is open source, the video is public, the methodology is described. But the independent validation that separates a compelling demonstration from an established scientific result has not happened. This is not unusual for early-stage work. It is worth stating plainly.
None of this should obscure what Eon has actually done.
No one had previously demonstrated a complete biological connectome driving a physically simulated body through multiple naturalistic behaviors. The prior landmarks in this space either modeled brains without bodies or animated bodies without biologically realistic neural dynamics. OpenWorm attempted whole-nervous-system emulation of C. elegans, a worm with just 302 neurons and a far simpler behavioral repertoire. DeepMind and Janelia’s MuJoCo fly, first described in a 2024 preprint and published in Nature in 2025, used reinforcement learning to control a simulated fly body. Impressive engineering. Not brain emulation.
Eon’s demonstration is different in kind. The ghost is a copy.
The question that copy raises, the question Ryle thought he had answered, Koestler thought was still open, and consciousness researchers have never closed, is what a copy of a brain actually is.
· · ·
The fly is a proof of concept. The question Eon is now inside is whether proof of concept scales.
The fly brain contains 140,000 neurons. The mouse brain contains roughly 70 million, about 500 times as many. The mouse has a vastly more complex functional architecture, with plasticity mechanisms that span timescales from milliseconds to years in a dynamic connectome continuously remodeled by experience. Eon has announced that the mouse is their next target, combining expansion microscopy to map every synaptic connection with tens of thousands of hours of calcium and voltage imaging to capture how those networks activate in living tissue. The human brain — the implicit endpoint of the entire enterprise — contains approximately 86 billion neurons and 100 trillion synaptic connections.
The fly’s behavioral repertoire, sophisticated as it is for a 140,000-neuron system, is largely stereotyped. It is, in the language of ethology, a relatively fixed set of responses to a relatively constrained set of stimuli. The mouse’s repertoire includes flexible decision-making, spatial memory, fear learning, social behavior, and forms of cognition that are continuous with the human case. Emulating a mouse brain without plasticity would produce a very expensive recording. Emulating it with plasticity raises questions that go beyond engineering.
The scaling challenge from fly to mouse spans orders of magnitude, requiring technological advances that don’t exist. Yet.
If you copy a mouse brain — with plasticity intact, with the capacity for learning — and run it in simulation, and it learns something, what is it? Is the digital mouse the same mouse? A copy? Something new? Does the question even have a determinate answer?
These are the actual questions that whole-brain emulation at scale will force into scientific, legal, and ethical territory where no framework currently exists. The engineering roadmap is clearer than the conceptual one.
The plasticity problem is being worked on from the opposite direction, in Melbourne, Australia.
Cortical Labs does not start with a connectome. They start with neurons, living human neurons grown on a silicon chip, and they give those neurons a world to interact with. The neurons learn. Readers of the prior Neural NeXus piece on Cortical Labs (Lab Grown Neurons Learn to Play Doom) will know the Pong result: cortical cells with no prior exposure to anything, forming a closed loop with a game simulation and improving their rally lengths measurably within five minutes of gameplay. Their CL1 device shipped to researchers in early 2025, and one developer connected it to Doom in about a week using approximately 200,000 living human neurons.
That number is worth pausing on. The Eon fly connectome contains 140,000 neurons. The Cortical Labs Doom demonstration ran on roughly 200,000. The same order of magnitude of biological tissue — one frozen and mapped, the other alive and adaptive — produces behaviors of comparable complexity. There may be critical thresholds of neural count at which certain levels of behavioral organization become achievable, regardless of substrate or architecture. Whether that convergence is meaningful or coincidental is a question the field will be answering for years.
The deeper inversion is this: Eon has a perfect map of a biological brain and cannot make it learn. Cortical Labs has no map at all, the neural activity of their cultures is not decoded at the level of individual circuits, and the neurons learn continuously. Eon emulates the structure. Cortical Labs cultivates the process. One preserves the ghost. The other keeps it alive.
Which one is closer to the ghost?
Ryle would say neither. There is no ghost. There is only information processing, and both approaches are finding different ways to do it.
Koestler might say both. The ghost is whatever it is that makes a nervous system do something more than its parts, and both of these systems are producing exactly that kind of excess.
Descartes, confronted with a digital fly that walks without thinking and a dish of neurons that thinks without a body, might revise his pineal gland hypothesis.
The digital fly does not know it is in a simulation. It cannot know. It reacts to inputs according to the frozen architecture of a brain that no longer exists in biological form. Surprise is not available to it. Learning is not available to it. Change is not available to it. The ghost is present. The ghost is running. The ghost is stuck.
And yet it walks. It grooms. It feeds.
Descartes could not explain how the ghost and the machine interact. Ryle said the ghost was an error. Koestler said it was a mystery. Eon Systems said: let us try to run it on different hardware.
The answer they got back was: we can run it. We just can’t update it.
Is a mind that cannot change still a mind? That question is now in the laboratory.
The fly does not have an opinion.
Neural NeXus covers the intersection of neuroscience, AI, and the technologies reshaping how intelligence is built. These newsletters take significant effort to put together and are totally for the reader’s benefit. If you find these explorations valuable, there are multiple ways to show your support:
Engage: Like or comment on posts to join the conversation.
Share: Help spread the word by sharing posts with friends directly or on social media.
References
Shiu, P.K. et al. A Drosophila computational brain model reveals sensorimotor processing. Nature 634, 210–219 (2024). doi.org/10.1038/s41586-024-07763-9
Dorkenwald, S. et al. Neuronal wiring diagram of an adult brain. Nature (2024). doi.org/10.1038/s41586-024-07558-y
Wissner-Gross, A. The First Multi-Behavior Brain Upload. The Innermost Loop, Substack (March 7, 2026). theinnermostloop.substack.com
Kingsley, D. BioWire Weekly — 018. Neural NeXus, Substack (October 8, 2024). davidkingsley.substack.com
Descartes, R. Meditations on First Philosophy (1641).
Ryle, G. The Concept of Mind. Hutchinson & Co. (1949).
Koestler, A. The Ghost in the Machine. Hutchinson & Co. (1967).



