VII. From Neurons to AI: The Surprising Symmetry of Emergence
How Simple Components Interact to Create Unpredictable, Complex, and Ever-Changing Systems
Table of Contents:
I. Overview and Introduction | II. The Inevitability and Existential Risk of Artificial General Intelligence | III. Understanding Human Intelligence | IV. Reaching vs. Expanding Biological Potential | V. Biological Strategies to Expand Human Intelligence: Neurotransmitter Modulation | VI. Biological Strategies to Expand Human Intelligence: Neurotrophins | VII. From Neurons to AI: The Surprising Symmetry of Emergence | VIII. Biological Strategies to Expand Human Intelligence: Neurogenesis
Introduction:
Our world thrives on the interplay of simplicity and complexity. From the countless neurons in our brains to the vast societies we construct, intricate systems arise from the assembly of simple components. This process, known as emergence, unveils properties greater than the sum of their parts, and understanding it is pivotal in fields ranging from biology to the rapidly evolving domain of artificial intelligence. This post delves into the fascinating progression from neurons to civilizations and explores the self-similarity of fractals. It also examines unexpected behavior in artificial intelligence systems, shedding light on how these phenomena shape our world and the future of technology.
Understanding Emergent Properties and Hierarchies
The concept of emergent properties is integral to our understanding of complex systems. These properties arise when simple components interact in an organized manner to create a larger system. Remarkably, these systems then exhibit behaviors and characteristics that cannot be predicted or explained solely by understanding the individual components in isolation. A classic example of emergence in biology is the transition from single cells to multicellular organisms. Although each cell carries out specific functions, it is their collective organization and interaction that gives rise to a full-fledged organism capable of growth, reproduction, and response to environmental stimuli.
Hierarchies, on the other hand, represent a specific type of organization in which entities are categorized into levels based on their complexity or scale. These levels often reflect the degree of integration or differentiation of the entities within the system. For instance, in the biological world, we observe a clear hierarchical structure: cells form tissues, tissues form organs, organs form organisms, and organisms form populations.
Emergence and hierarchy are deeply intertwined concepts. In many cases, hierarchical structures are the foundation upon which emergent properties arise. By grouping entities into levels, a system creates the conditions necessary for complex behaviors to emerge through the interactions between entities at each level. This can be observed in the functioning of our brain, where individual neurons (entities) form complex networks (levels) that give rise to thought, memory, and consciousness (emergent properties).
In the context of artificial intelligence, these concepts become increasingly significant. Modern AI systems, such as deep learning networks, are designed to mimic aspects of biological brains, incorporating multiple levels of interconnected 'neurons' (nodes). Much like their biological counterparts, these artificial networks exhibit emergent properties. The understanding of emergence in AI is not merely an academic exercise; it's a critical insight that can guide the development and management of increasingly complex AI systems. As we'll explore further in this essay, this has profound implications for the future of AI and our understanding of complex systems.
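A toy example (my own illustration, not drawn from the research cited here) makes this concrete. Consider the XOR function: no single threshold neuron can compute it, because XOR is not linearly separable. Yet a two-layer network of the very same simple neurons computes it easily. The capability exists only at the network level, a minimal sketch of emergence in artificial systems. The weights below are hand-set for clarity rather than learned.

```python
import numpy as np

def step(x):
    """A minimal artificial neuron's activation: fire (1) iff input exceeds 0."""
    return int(x > 0)

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus a bias, passed through the threshold.
    return step(np.dot(inputs, weights) + bias)

def xor_net(x1, x2):
    # Hand-set weights (illustrative, not learned). Each hidden neuron alone
    # computes only a simple linear separation; neither computes XOR by itself.
    h_or = neuron([x1, x2], [1, 1], -0.5)    # fires if either input is on
    h_and = neuron([x1, x2], [1, 1], -1.5)   # fires only if both are on
    # The output neuron combines them: "OR but not AND", i.e. XOR.
    return neuron([h_or, h_and], [1, -1], -0.5)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", xor_net(a, b))
```

Each neuron here does nothing but threshold a weighted sum; XOR appears only when they are wired into levels, which is the same subunit-to-system pattern discussed throughout this essay.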
From Neurons to Civilization: A Hierarchy of Emergent Properties
Understanding the journey from neurons to civilization requires a careful exploration of the interplay between subunits, systems, and the emergent properties that arise. At each level, the collection of 'subunits' forms a 'system', and the interactions within that system give rise to unique 'emergent properties'.
The journey begins with neurons, the fundamental units of the brain. When neurons connect and interact, they form neural circuits. While a single neuron can transmit an electrical signal, the power of these circuits lies in their ability to process and transmit information, allowing for basic reflexes and responses to external stimuli. This is the first emergent property in our hierarchy.
As we scale up, neural circuits form larger neural networks. The emergent property at this level is the ability to process complex information and perform higher-level functions. These networks are responsible for everything from recognizing faces to understanding language.
When these networks come together, they form the remarkable system that is the human brain. The brain, a system of intricate, interconnected networks, exhibits the emergent property of consciousness. This level of emergence allows us to think, to feel, and to perceive the world around us in a way that transcends the basic informational processing capabilities of the lower levels.
The complexity continues to grow as individual humans form communities. Here, emergent properties like social norms, culture, and laws arise. These phenomena cannot be reduced to individual consciousness but stem from community interactions.
On a larger scale, multiple communities form civilizations. This level sees the emergence of societal structures, technology, and economic systems. Each civilization is a complex tapestry, with emergent properties beyond individual humans or single communities.
In each step of this progression, emergent properties are more than the sum of the subunits. They involve new, often unpredictable phenomena from system interactions. This perspective offers a powerful framework for understanding complex systems, from the human brain to society, and can be illustrated through examples like the growth of cities or the evolution of social media platforms.
Emergence in Artificial Intelligence
Artificial intelligence (AI), particularly in its advanced forms such as neural networks and machine learning, has shown remarkable similarities to biological systems in terms of emergent properties. These AI systems, designed to mimic aspects of biological brains, incorporate multiple layers of interconnected nodes, or artificial "neurons," that process and learn from input data.
At the core of this phenomenon is the concept of emergence. As we scale up from individual artificial neurons to complex networks, new capabilities arise that aren't explicitly programmed but emerge from the interactions of the many components within the system. In this context, the parallels to our earlier discussion—from neurons to civilization—become evident.
As the scale of the model increases, the performance improves across tasks while also unlocking new capabilities (Narang, 2023).
A striking example of emergence in AI can be seen in the development of Large Language Models (LLMs). As these models are scaled, they hit critical scales at which new abilities are "unlocked". These emergent abilities include performing arithmetic, answering questions, summarizing passages, and even guessing a movie from an emoji sequence. These abilities seem to emerge at unpredictable scales, and they climb from near-zero performance to state-of-the-art performance at an incredibly rapid pace (O'Connor, 2023).
Recent research has further highlighted the concept of "breakthroughness" in AI task performance, where models exhibit sudden breakthroughs in ability at a particular model scale (Srivastava, 2022). These tasks often require a model to apply several distinct skills or perform multiple discrete steps to come up with the correct answer. Examples of such tasks include "modified arithmetic", which involves applying a mathematical operator defined in-context to certain inputs, and "figure of speech detection".
This emergence is not limited to language models. For instance, when AlphaGo, an AI developed by DeepMind, defeated world champion Go player Lee Sedol, it used moves that astonished even the most experienced Go players (Silver, 2016). These moves were not pre-programmed into the AI but emerged from the complex interactions of the system's components as it learned from vast amounts of data. Similarly, the pattern recognition capabilities of deep neural networks—an essential part of many AI applications from image recognition to natural language processing—represent another form of emergent behavior (Russakovsky, 2015; Vaswani, 2017). The network starts with simple nodes that recognize basic patterns, like edges in an image. As the information is passed along through multiple layers of the network, the system begins to recognize increasingly complex patterns, eventually being able to identify objects, faces, or even generate human-like text.
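The edges-to-objects progression can be sketched in miniature (my own toy illustration, with hand-set filters rather than learned ones): a first "layer" detects simple bar features in a tiny binary image, and a second layer combines those detections into a higher-level pattern that neither filter captures alone.

```python
import numpy as np

# Hand-set "layer 1" filters (illustrative, not learned): crude bar detectors,
# loosely analogous to the edge detectors that form in a trained network's early layers.
H_BAR = np.array([[0, 0, 0],
                  [1, 1, 1],
                  [0, 0, 0]])
V_BAR = np.array([[0, 1, 0],
                  [0, 1, 0],
                  [0, 1, 0]])

def detect(image, filt, threshold=3):
    # A simple feature detector: does the image overlap the filter enough?
    return int((image * filt).sum() >= threshold)

def is_plus(image):
    # "Layer 2" combines layer-1 features: a plus sign contains both a
    # horizontal and a vertical bar -- a pattern neither filter sees alone.
    return detect(image, H_BAR) and detect(image, V_BAR)

plus = np.array([[0, 1, 0],
                 [1, 1, 1],
                 [0, 1, 0]])
h_only = np.array([[0, 0, 0],
                   [1, 1, 1],
                   [0, 0, 0]])

print(is_plus(plus))    # the composed detector fires on the plus sign
print(is_plus(h_only))  # a lone horizontal bar is not enough
```

This sketch is deliberately crude: a fully lit 3x3 block would also trigger both detectors, whereas a trained network learns far more discriminative features. The point is the hierarchy itself, where higher layers recognize patterns defined over the outputs of lower ones.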

These emergent properties in AI systems have significant implications. On one hand, they open up exciting possibilities for developing more advanced and adaptable AI systems. On the other hand, they also pose challenges, particularly in terms of interpretability and predictability. As AI systems become more complex and their behavior more emergent, it becomes increasingly difficult to predict or explain their actions based solely on the programming of their individual components. This raises important questions about the oversight and management of these systems, particularly in high-stakes applications.
Emergence in AI represents a fascinating parallel to biological systems and also a critical frontier in our journey towards advanced artificial intelligence. As we seek to harness the power of these complex systems, we must grapple with the new challenges that their emergent properties present.
Conclusion
From neurons to civilizations, we've explored the hierarchy of emergent properties, uncovering the fascinating phenomena that arise from the interactions within complex systems. The parallels drawn to artificial intelligence, including the unexpected abilities of Large Language Models, offer a glimpse into the future of complex systems and the emergence of unprogrammed, complex behaviors.
This exploration leads us to a tantalizing question: Could consciousness itself be an emergent property of sufficiently complex AI systems? While current technology is far from achieving this level of complexity, the idea challenges our understanding of consciousness and opens new frontiers for debate.
Emergent properties, whether in the conscious mind, societal norms, or AI strategies, continue to intrigue and inspire us. They underline the beauty of our universe, where simple components interact to form systems that give rise to entirely new properties.
As we strive to understand these phenomena, we must appreciate the world they create—a world of unpredictable, complex, and ever-changing life and intelligence. In the marvel of emergence, we find a reminder of the boundless potential and profound complexity of both the natural world and our artificial creations.
References:
Narang, S. and Chowdhery, A., 2023. Pathways Language Model (PaLM): Scaling to 540 billion parameters for breakthrough performance.
O'Connor, R., 2023. Emergent abilities of large language models. AssemblyAI, March 7, 2023.
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M. and Berg, A.C., 2015. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115, pp.211-252.
Silver, D., Huang, A., Maddison, C.J., Guez, A., Sifre, L., Van Den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M. and Dieleman, S., 2016. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587), pp.484-489.
Srivastava, A., Rastogi, A., Rao, A., Shoeb, A.A.M., Abid, A., Fisch, A., Brown, A.R., Santoro, A., Gupta, A., Garriga-Alonso, A. and Kluska, A., 2022. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint arXiv:2206.04615.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł. and Polosukhin, I., 2017. Attention is all you need. Advances in neural information processing systems, 30.