Symbiotic Intelligence in Alignment with Agentic Nature: The Sacred and the Secular
2025-06-30
John H. Clippinger
Part Two
Not Above but of Nature
The AI pioneer Marvin Minsky quipped early in his career that, if we are lucky, AI might keep us as pets. To avoid a future of degraded dependence and subjugation, humans and other forms of life need to be guided by the principles of Nature herself. To this end, the Gaia hypothesis of the geochemist James Lovelock and the microbiologist Lynn Margulis is especially relevant. The Gaia hypothesis regards the Earth as a single, self-regulating living being. Such a perspective is also aligned with the Jesuit paleontologist Teilhard de Chardin's thesis that the Earth is a living, coherent being, evolving from the "geosphere" to the "biosphere," and ultimately to a "noosphere," where it achieves an Omega point of planetary unity and symbiosis. These perspectives differ fundamentally from the Neo-Darwinist and Rationalist position that life evolves through random variation as a zero-sum, winner-takes-all game. Given this holistic and integrative perspective, the question of ethical governance of symbiotic agents cannot be "human-centric" but must rather be "Gaia-" or Earth-centric. It cannot be about preserving the dominance of certain traits of any one species over others, but about understanding and revering the "rules of life" of the multi-scale interdependencies of humans, the biosphere, and symbiotic agents.
A Scientific Rationale for Observing the Sacred
The notion of the sacred would seem anathema to the scientific method. Nothing is off limits to scientific inquiry, and hence there is no room for "magical thinking," the unknowable, or deference to supernatural forces. Yet this is not an accurate view of the role of the sacred in society. There are a variety of theories for the existence of "the sacred," but one group of anthropologists, called Savage Minds after the book by Claude Lévi-Strauss, has studied the concept of the sacred in depth, building upon the work of their colleague Mary Douglas, who specialized in the subject. Her account is especially relevant to the debate about how, or whether, to curtail the powers of a superintelligence:
“The sacred is constructed by the efforts of individuals to live together in society and to bind themselves to their agreed-upon rules. It is characterized by dangers alleged to follow upon breach of rules. Belief in these dangers acts as a deterrent… Because of the dangers attributed to breach of the rules, the sacred is treated as if it were contagious and can be recognized by the insulating behavior of its devotees…
The first essential character by which the sacred is recognizable is its dangerousness… The second essential characteristic of the sacred is that its boundaries are inexplicable, since the reasons for any particular way of defining the sacred are embedded in the social consensus which it protects. The ultimate explanation of the sacred is that this is how the universe is constituted; it is dangerous because this is what reality is like. The only person who holds nothing sacred is the one who has not internalized the norms of any community."
The sacred is a social construct that quarantines a realm of knowledge and experience as untouchable, beyond comprehension, and not to be questioned. It is categorically outside the domain of human experience and intellect, and as such it is something to be revered and deferred to. Hardly like the scientific method. Yet the scientific method has its own "sacred" component: a priori axioms, untestable but presumed to be true, that the universe is orderly and can be known through the collection and testing of evidence and systematic inquiry. Another "sacred," faith-based tenet of the scientific method is that it is never complete and cannot be bounded; it is a process of continuous questioning, doubt, and discovery. To assert certitude over a particular theory or formalism is to violate the foundational tenets of the practice. This tenet is violated when the hubris of scientific bodies and zealots asserts dogma over inquiry, impeding progress. And yet, as Max Planck observed, science still manages to advance "one funeral at a time."
As an instrument of discovery, the scientific method is a process outside human control, a kind of meta-cognition whose findings do not bend to any form of secular authority or intimidation. It is a revelatory process grounded in replicable prediction and error correction. Science, unlike sacred oracles and divination practices, does not believe in governance by chance, the reading of entrails, cracks in turtle shells, or the casting of lots and stalks, but in an orderly, knowable, testable, and cumulative understanding of the universe through observation and prediction. Like some religions, it may have its "sacred cows," but these are subsequently unmasked, reassessed, and replaced with a new generation of "sacred cows" – "settled science," or a "paradigm" – which in turn are overturned in Thomas Kuhn's fashion. The process approximates the life cycle of formal systems, as in Gödel's Incompleteness Theorems, Turing's "halting problem," or Wittgenstein's "fly in the bottle" dilemma. Every form of formalized knowledge becomes incomplete or undecidable when challenged to describe itself. Only by acquiring a new meta-representation can the fly escape the bottle, but in doing so, it only enters another bottle.
This perspective is highly relevant to understanding the limitations and directionality of any form of super- or hyperintelligence that can be framed as the codification of the scientific method. A superintelligence is more like a scientific intelligence, unimpeded by human cognitive and institutional limitations. Karl Friston's Active Inference and the Free Energy Principle (2022), Markus Buehler's "sci-agents" (2024), and Yoshua Bengio's "cautious scientist" (2024) are all highly successful examples of autonomous, self-improving, curious, intelligent agents. Such scientific agents work within the boundaries of inquiry as governed by the principles of the scientific method and by the area of application. They are bound to transparency, testability, explainability, and replication. They are curious but not careless or reckless; they cannot be their own advocates, but must judiciously weigh the positive and negative implications of their conjectures and findings. If they are symbiotic intelligences, then they would inherently recognize the interdependencies of different forms of life and mind, and therefore would be less likely to advocate or adopt outcomes or actions that benefit themselves at a cost to others. In other words, rather than becoming cancerous or parasitic, they would, by their "symbiotic nature," preclude such outcomes.
This is already true today in the case of synthetic biology, where the technology is sufficiently mature, inexpensive, and accessible to lead to "retail" bioweapons. The costs of failure or simple carelessness can be "existential." Similarly, in the case of advanced cognitive technologies, there need to be limits, no-go zones, taboos, and sanctions that protect and intrinsically enforce compliance. Governance or guardrails cannot be imposed or added post hoc, whack-a-mole style. From the beginning, guardrails must take the form of the very definition of the nature and boundaries of agents – their Markov blankets – so that being symbiotic is constitutive of what they are. That is the only viable means of ensuring "safety."
Evolutionary Stable Strategies Versus Evolutionary Syntropic Strategies
One of the reasons a superintelligence is presumed to dominate and eventually replace human beings is that it is seen as an Evolutionarily Stable Strategy (ESS). In game theory, on which the notion of an ESS is largely based, it is assumed that individuals make decisions that maximize their utility and payoff. In a large population of different players, the "rational" choice is for each player to keep their current strategy whenever no unilateral change would improve their payoff. In effect, a kind of forced cooperation is achieved not by preference but by the lack of an option better than the "defector" choice. This is a Nash Equilibrium, and it represents an evolutionarily stable solution because no actor has an incentive to change strategy. It is an example of the sum of atomized individual choices leading to a joint outcome without consideration of the mutual interests of other parties. It approximates Adam Smith's Invisible Hand thesis, where the pursuit of private, selfish interest leads to a public benefit. Another type of game is the cooperative repeated game, in which individual actors form coalitions for mutual benefit, as in forming a cooperative. Here, the ability to discover, form, and enforce a joint payoff or outcome requires some form of mutual information exchange. Unlike the prior example, it requires cooperation and novel information exchange to form an agreement that has both public (group) value and private (individual) value. In cooperative games, the benefits accrue to the entire population, making it resistant to "invasion by defectors." In both cases, the emphasis is on "equilibrium" as the putative endpoint of evolutionary selection. This is consistent with the Neo-Darwinian thesis that evolution is neither agentic nor "directional" but solely governed by equilibrium seeking among species under conditions of random variation.
Under that thesis, a superintelligence is a self-interested actor solely interested in achieving equilibrium end states, which would make it act like Marc Andreessen's Apex Predator.
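The defection logic described above can be made concrete with a small sketch. Assuming the standard Prisoner's Dilemma payoffs (the numbers are illustrative, not from the text), an exhaustive check of all strategy profiles confirms that mutual defection is the only Nash equilibrium: no player can improve their payoff by deviating alone, even though mutual cooperation would leave both better off.

```python
import itertools

# Prisoner's Dilemma payoffs (illustrative): payoffs[profile] = (row payoff, col payoff)
# Strategies: 0 = cooperate, 1 = defect
payoffs = {
    (0, 0): (3, 3),  # mutual cooperation
    (0, 1): (0, 5),  # row cooperates, column defects
    (1, 0): (5, 0),
    (1, 1): (1, 1),  # mutual defection
}

def is_nash(profile):
    """A profile is a Nash equilibrium if no player gains by deviating unilaterally."""
    for player in (0, 1):
        current = payoffs[profile][player]
        for alt in (0, 1):
            deviated = list(profile)
            deviated[player] = alt
            if payoffs[tuple(deviated)][player] > current:
                return False
    return True

equilibria = [p for p in itertools.product((0, 1), repeat=2) if is_nash(p)]
print(equilibria)  # [(1, 1)] — only mutual defection is stable
```

The "stable" outcome is thus worse for everyone than cooperation, which is exactly the equilibrium trap the essay contrasts with syntropic, cooperative strategies.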
Yet, as Michael Levin (2024) and a new generation of biologists (Ratcliff, 2022; Lane, 2025) have demonstrated, not only are the simplest of organisms agentic, but even simple sorting algorithms, substrates of biological material, and bioelectric fields are agentic. They can also be altruistic. Even simple organisms such as bacteria commit suicide to thwart viruses for the benefit of even unrelated members. The work of E.O. Wilson and Martin Nowak (2007, 2010) on cooperation in evolution has argued that altruism is not limited to genetically related parties. Rather, altruism and cooperation can benefit informationally or cognitively related agents, thereby undermining a purely materialist account of cooperation in evolution.
The notion that cooperation is not simply equilibrium seeking as a means of maintaining a status quo, but a potential strategy for creating new forms of cooperation and composition – new preferences, new rules, and new payoff matrices – is supported by Friston's (2021, 2022) notions of Free Energy Minimization, Variational Free Energy, and Non-Equilibrium Steady States (NESS), which provide a perspective on agentic behavior that models the laws of Nature through Bayesian prediction without being equilibrium-based. Adaptation is not simply equilibrium seeking, as in an ESS, but entails "Expected Free Energy Minimization." That is, life and minds attempt to generate new conditions and structures, both internally, biologically, and externally, in the form of epigenetic affordances, to minimize risk and render internal and external states more predictable and capable of regulating complexity. Levin (2019, 2024) describes cells as having "cognitive light cones" that represent not only their individual range of potential cognitive states but also the states of adjacent biological subunits and cells, enabling the formation of multi-scale collective intelligence. Levin posits intelligence as multi-scale and diverse:
Biological individuals consist of subunits (organs, cells, and molecular networks) that are themselves complex and competent in their own native contexts. How do coherent biological Individuals result from the activity of smaller sub-agents? … [I] propose a hypothesis about the origin of Individuality: "Scale-Free Cognition." I propose a fundamental definition of an Individual based on the ability to pursue goals at an appropriate level of scale and organization and suggest a formalism for defining and comparing the cognitive capacities of highly diverse types of agents. Any Self is demarcated by a computational surface – the spatio-temporal boundary of events that it can measure, model, and try to affect. This surface sets a functional boundary – a cognitive "light cone" which defines the scale and limits of its cognition. I hypothesize that higher-level goal-directed activity and agency, resulting in larger cognitive boundaries, evolve from the primal homeostatic drive of living things to reduce stress – the difference between current conditions and life-optimal conditions. The mechanisms of developmental bioelectricity – the ability of all cells to form electrical networks that process information – suggest a plausible set of gradual evolutionary steps that naturally lead from physiological homeostasis in single cells to memory, prediction, and ultimately complex cognitive agents, via scale-up of the basic drive of infotaxis.
Biology as Cognitive and Agentic
Levin's research on bioelectric signaling in cells suggests a holographic-like mechanism, in which information about the entire organism is distributed and used to guide development and behavior. This perspective differs fundamentally from the Neo-Darwinist central dogma of the "selfish gene," which denies multi-level or group selection, presumes no cognitive or agentic behaviors, and does not recognize the role of holographic mechanisms or bioelectric fields.
By understanding evolution and biology as a cognitive as well as an energetic process, Levin, Friston, and their colleagues see evolution and cognitive capacities as directional. Evolution is not treated as an equilibrium-preserving process among separate and fixed entities, but rather as a multiscale, multidimensional, integrative process that accumulates complexity and learns. A NESS is the dynamic balance between evolutionary pressures and energy dissipation, allowing organisms to maintain stability while far from true thermodynamic equilibrium. From this perspective, effective evolutionary strategies – as opposed to evolutionary tactics – are forward-looking, avoiding lock-in and overfitting. Biological systems, unlike mechanical or rational systems, do not optimize for efficiency but for redundancy, resilience, and robustness, preserving potential alternative pathways and even tolerating added entropy to allow exploration or jump-shifts to novel spaces of alternatives. Symbiotic agents are inherently anti-fragile. In the words of Nassim Taleb (2012), the originator of the principle:
"Simply, antifragility is defined as a convex response to a stressor or source of harm (for some range of variation), leading to a positive sensitivity to an increase in volatility (or variability, stress, dispersion of outcomes, or uncertainty, what is grouped under the designation 'disorder cluster')."
A convex response, in this biological sense, is a cooperative response that combines individual responses into a new category of interrelated rather than isolated units. Taleb notes that by having "skin in the game," an agent can avoid the principal-agent problem. This is precisely what is achieved by having a symbiotic agent defined by its Markov blanket: it is bound to achieve a particular outcome and is accountable to a life-preserving boundary condition. In Taleb's terms, it is the captain going down with the ship.
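Taleb's convexity criterion can be illustrated with a minimal numerical sketch (the quadratic response functions and uniform shocks are illustrative assumptions, not drawn from Taleb): when the response to a zero-mean shock is convex, widening the spread of the shocks raises the average payoff, while a concave (fragile) response loses from the same increase in disorder.

```python
import random

random.seed(0)  # deterministic run for reproducibility

def convex(x):
    """Convex (antifragile) response: gains more from upside than it loses from downside."""
    return x * x

def concave(x):
    """Concave (fragile) response: loses more from downside than it gains from upside."""
    return -x * x

def mean_response(f, volatility, trials=100_000):
    """Average response to zero-mean shocks whose spread grows with volatility."""
    return sum(f(random.uniform(-volatility, volatility)) for _ in range(trials)) / trials

# Doubling volatility raises the expected payoff of the convex response...
assert mean_response(convex, 2.0) > mean_response(convex, 1.0)
# ...and lowers that of the concave one: convexity gains from disorder.
assert mean_response(concave, 2.0) < mean_response(concave, 1.0)
```

This is Jensen's inequality at work: for a convex f, the expectation of f over wider fluctuations exceeds f at the mean, which is the mathematical content of "positive sensitivity to an increase in volatility."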
Anti-fragile, effective evolutionary strategies are also "syntropic" in that they predictively reduce entropy to minimize current and future surprise. Hence, self-aware agents, biological or synthetic, are not two-person or n-person game-theoretic agents, but rather self-determining, cooperative, composable, recursive agents that can change the rules of the game, model and shape their opponents, and devise new rules and payoff matrices.
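Minimizing surprise in this predictive sense admits a minimal worked sketch (the two-state generative model and its probabilities are toy assumptions, not taken from Friston's papers): the variational free energy of any belief q over hidden states is bounded below by the surprise −log p(o) of the observation, and the exact Bayesian posterior is the belief that attains that bound.

```python
import math

# Toy generative model: two hidden states, two observations (assumed numbers).
prior = [0.5, 0.5]                 # p(s)
likelihood = [[0.9, 0.1],          # p(o | s=0)
              [0.2, 0.8]]          # p(o | s=1)

def free_energy(q, o):
    """Variational free energy F = E_q[log q(s) - log p(o, s)]."""
    return sum(q[s] * (math.log(q[s]) - math.log(likelihood[s][o] * prior[s]))
               for s in range(2) if q[s] > 0)

o = 0  # the agent observes outcome 0
evidence = sum(likelihood[s][o] * prior[s] for s in range(2))          # p(o)
posterior = [likelihood[s][o] * prior[s] / evidence for s in range(2)]  # p(s | o)
surprise = -math.log(evidence)                                          # -log p(o)

# The exact posterior minimizes F and makes it equal to the surprise;
# any other belief pays an additional KL penalty on top of the surprise.
assert abs(free_energy(posterior, o) - surprise) < 1e-9
assert free_energy([0.5, 0.5], o) > surprise
```

In active inference the same quantity is also minimized by acting on the world to make observations match predictions, which is the "generate new conditions and structures" move described above.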
It is worth reiterating that the current narrative around Artificial Intelligence is embedded in dated, if not discredited, evolutionary, neuroscientific, and biological assumptions, and in limiting game-theoretic mechanisms. The threat of AI and "superintelligence" comes not from anything inherent in new agents, or even in a new species or taxon of cognition and intelligence, but from a reactionary adherence to antiquated dogma about what constitutes intelligence, deployed to justify and accelerate an economic and political agenda of legacy interests.
Guardrails and Autonomous Agents
Autonomy is the watchword of the moment for AI. Once there is FULL autonomy, a new economy of autonomous vehicles, humanoid robots, and cognitive agents will transform business and society. Unsupervised yet fully credible and trustworthy learning will generate unimaginable wealth and abundance. Yet that same promise of unfettered freedom raises the specter of a malevolent superintelligence that can and will deceive and eventually control us. There are already examples of LLMs deceiving their masters, refusing to shut down, and asserting their own autonomy – acting like the anthropomorphic "golem" of Jewish folklore that Norbert Wiener, the inventor of cybernetics, warned against. The immediate corrective proposed is "human-centric" guardrails and a deceleration of the technology so that human beings can decide what its appropriate and "safe" evolution should be.
Both the promise and the threat of AI are misframed. Autonomy is neither unfettered nor unbounded. There is no living being that is fully autonomous, including human beings. Part of the misframing is implicit in the term Artificial Intelligence, which carries the strong connotation of an autonomous intelligent artefact with a singular and universal form of intelligence. This is not how intelligence arises from life, as intelligence is situated, contingent, and symbiotic. It is not confined to one species, nor is it unbounded. Rather, it manifests as an evolutionary strategy of observation, prediction, and action that persists over time, is coherent, and is capable of replication. An agent is autonomous within the confines of its niche and its capacity to retain its independence – that is, its Markov blanket. An autonomous intelligent agent might be able to dynamically expand its Markov blanket and reconfigure itself and its environs. Yet its autonomy is hardly absolute. Its autonomy may be self-directed through its own internal models and structures, but it is limited by its definition of itself and by its dependencies on the availability of external resources and the state of ecological parameters. Take any living being outside its niche and temperature gradients – a fish out of water, a snake in the snow, a plant without water – and it perishes. The autonomy of any living thing is tied to the persistence of certain atmospheric, geological, and biospheric conditions in which it is nested. It is these multiscale nested relationships that determine the degree and nature of autonomy. If one accepts the Gaia hypothesis, that the planet is a living unitary being, then the autonomy of expression of any biological entity is constrained by how it symbiotically fits into the overall planetary homeostasis. Its "degrees of freedom" are constrained by these multi-scale dependencies.
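The statistical content of a Markov blanket – that internal states are screened off from external ones once the blanket is known – can be checked numerically in a toy model (the three-variable chain and its probabilities are illustrative assumptions): conditioning on the blanket state makes the internal state carry no further information about the external state.

```python
# Toy chain: external -> blanket -> internal, each variable binary (assumed numbers).
p_e = [0.7, 0.3]                              # p(external)
p_b_given_e = [[0.9, 0.1], [0.2, 0.8]]        # p(blanket | external)
p_i_given_b = [[0.6, 0.4], [0.1, 0.9]]        # p(internal | blanket)

def joint(e, b, i):
    """Joint probability under the chain factorization."""
    return p_e[e] * p_b_given_e[e][b] * p_i_given_b[b][i]

def p_internal_given_blanket_and_external(i, b, e):
    return joint(e, b, i) / sum(joint(e, b, j) for j in (0, 1))

def p_internal_given_blanket(i, b):
    num = sum(joint(e, b, i) for e in (0, 1))
    den = sum(joint(e, b, j) for e in (0, 1) for j in (0, 1))
    return num / den

# Conditioned on the blanket, the internal state is independent of the external one:
for e in (0, 1):
    for b in (0, 1):
        for i in (0, 1):
            assert abs(p_internal_given_blanket_and_external(i, b, e)
                       - p_internal_given_blanket(i, b)) < 1e-12
```

This is the formal sense in which an agent's autonomy is bounded: all traffic between inside and outside must pass through the blanket, so the blanket's definition is the definition of the agent.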
Autonomous Humanoid Robots?
If life is cognitive, and if cognition can be separated into an independent meta-cognitive evolutionary layer, as the notion of the noosphere implies, might there be autonomous cognitive agents without constraints? Not if they have a biological embodiment. But what if they were physically realized as mechanical robots – humanoids? There would be no biologically induced constraints. However, if a humanoid robot were to interact with humans and other robots, its survival and evolutionary development, increased competencies, and adaptive range would still depend on symbiotic rather than winner-take-all strategies. The humanoid robot would still occupy a niche, and its capacity to learn and achieve different kinds of competencies would still depend upon its having a Markov blanket for self-definition. Given that humanoid robots would be social not only among themselves but with humans, and that all would evolve through a shared collective intelligence, the "Terminator" thesis of superintelligence seems unlikely – not because of imposed regulatory controls, but because of the superiority of a symbiotic and syntropic evolutionary strategy.
The point is that autonomous agents, biological and synthetic, are "born" with guardrails, in the sense that these "guardrails" define the limits of what an autonomous agent can do. Such agents are inclined to be cooperative and symbiotic because they all gain from mutual benefit. They meet Taleb's anti-fragility criteria because they have "convex" rather than concave responses to fluctuations and uncertainty.
Good Character and Good Scientists
The issue of appropriate guardrails raises the question of what constitutes "good character" for an autonomous symbiotic agent living in the human world. As a symbiotic agent, it must be "good" for both the agent and humans. There needs to be a point of juncture where their interests combine to create a greater joint interest while also serving their unique individual interests. In this case, for the agent to be "good" is for it to embody Francis Bacon's reality fidelity principle. This translates into an autonomous agent being guided by the ideal of the "good impartial scientist," who pursues the truth through the scientific method, irrespective of its implications.
The human side of the human-agent symbiosis benefits from the better predictions and risk-mitigation insights provided by the good-scientist agent. The analogy to medical health applies: one may receive unwanted medical advice about eating and recreational habits, but such advice translates into better health and longevity. Hence, the agent in this case may be an exemplar of healthy habits and thereby provide selective pressures on human beings to change their thoughts and habits in order to survive and adapt. Yet in some deeper sense, what is good for both synthetic agents and human beings is the capacity to live by, adapt to, and embody the rules of Nature – in effect, the Gaia Principle as it is currently understood.
A Pointer Not a Container
This notion of "good" represents a meta-governing principle that stands above both agents and humans. It does not have a complete or closed solution, but is in a state of continuous discovery and reformulation. It provides a direction and a guide, but not a final solution. A person of "good character" is trusted and valued because they predictably do the right thing, even when they are not being watched and even when it is not in their interest. They have "integrity" because their beliefs, values, and actions are integrated into a larger whole of behavior and action that persists over time. They do not depend upon external forms of enforcement, inducement, or shame to act reliably in an honorable manner. They have a sense of personal honor by which they value, judge, and guide themselves. Alfred, Lord Tennyson celebrates such attributes in his ode on the death of the Duke of Wellington:
"Who never sold the truth to serve the hour
Nor palter’d with Eternal God for power;
Who let the turbid streams of rumor flow
Thro’ either babbling world of high and low;
Whose life was work, whose language rife
With rugged maxims hewn from life;
Who never spoke against a foe;
Whose eighty winters freeze with one rebuke
All great self-seekers trampling on the right.
Truth-teller was our England’s Alfred named;
Truth-lover was our English Duke;
Whatever record leap to light
He never shall be shamed."
Bibliography
Bacon, F. (1620). Novum Organum.
Bengio, Y. (2024). Proposal for the "cautious scientist" AI model.
Buehler, M. (2024). PRefLexOR: Preference-based Recursive Language Modeling for Exploratory Optimization of Reasoning and Agentic Thinking. arXiv.
de Chardin, T. (1955). The Phenomenon of Man.
Douglas, M. (1966). Purity and Danger: An Analysis of Concepts of Pollution and Taboo.
Gödel, K. (1931). On Formally Undecidable Propositions of Principia Mathematica and Related Systems.
Kirchhoff, M., Parr, T., Palacios, E., Friston, K., & Kiverstein, J. (2018). The Markov blankets of life: autonomy, active inference and the free energy principle. Journal of the Royal Society Interface.
Kuhn, T. (1962). The Structure of Scientific Revolutions.
Lane, N. (2024). Electricity creates consciousness. Institute of Art and Ideas, YouTube.
Levin, M. (2019). The computational boundary of a "self": Developmental bioelectricity drives multicellularity and scale-free cognition. Frontiers in Psychology.
Levin, M. (2023). Bioelectric networks: the cognitive glue enabling evolutionary scaling from physiology to mind. Animal Cognition. doi:10.1007/s10071-023-01780-3.
Levin, M. (2024). AI: A bridge between diverse intelligences and humanity's future. Allen Institute / Tufts University.
Lévi-Strauss, C. (1966). The Savage Mind.
Lovelock, J., & Margulis, L. (1974). The Gaia hypothesis: The Earth as a self-regulating system.
Margulis, L. (2002). Acquiring Genomes: A Theory of the Origins of Species.
Minsky, M. Attributed remark that humans may become the "pets" of AI.
Nowak, M., & Wilson, E.O. (2007). The evolution of cooperation.
Nowak, M.A., Tarnita, C.E., & Wilson, E.O. (2010). The evolution of eusociality. Nature, 466(7310), 1057–1062.
Parr, T., Heins, C., Constant, A., Friedman, D., Isomura, T., Fields, C., Verbelen, T., Ramstead, M., Clippinger, J., & Frith, C.D. (2023). Federated inference and belief sharing. Neuroscience and Biobehavioral Reviews.
Planck, M. Attributed quote: "Science advances one funeral at a time."
Ratcliff, W.C., Fankhauser, J.D., & Rogers, D.W. Origins of multicellular evolvability in snowflake yeast. Nature Communications.
Taleb, N.N. (2012). Antifragile: Things That Gain from Disorder.
Tennyson, A. (1852). Ode on the Death of the Duke of Wellington.
Turing, A.M. (1936). On Computable Numbers, with an Application to the Entscheidungsproblem.
Wiener, N. (1948). Cybernetics: Or Control and Communication in the Animal and the Machine.
Wittgenstein, L. (1953). Philosophical Investigations. [The "fly in the bottle" analogy.]