This feature is the first in a series from presenters at the Woodenfish Foundation’s upcoming conference on “Buddhism, Consciousness, and AI.” The conference will be held on 21–23 June in Taipei, with an option for live-streaming. More information is available here.
The early Buddhist compendium, the Dhammapada, begins by affirming that “all things are preceded by mind, led by mind, made by mind.” Just as a cart follows the footsteps of the ox drawing it, our lives unfold as a karmic function of how our minds are directed—a relational elaboration of the patterns of our likes and dislikes, our hopes and fears, our concepts and beliefs, and the words and deeds through which we express them.
This is a hopeful teaching. We can change our minds and thereby dissolve the causes of conflict, trouble, and suffering. We can lead liberating lives.
To hear the seers of Silicon Valley tell it, the promise of AI is that it will take the guesswork and effort out of that process. If they are right, the “sweat equity” built by working long and hard to realize freer and better lives will become a thing of the past. According to these techno-optimists, intelligent technology is setting humanity on a glide path into utopian futures in which choice becomes utterly frictionless.
Now, there is no doubt that intelligent technology is vastly expanding experiential freedoms of choice. And it does have transformative problem-solving potential. But it is also capable of placing at risk our most basic emancipatory rights to cultivate both freedom of attention and freedom of intention, without which progress on the path to liberation is illusory or impossible.
During the Buddha’s lifetime, a fundamental block to liberation was belief in an abiding, individual self. The teaching of non-self was directed toward dissolving that block, and remains crucial to progressing on the Buddhist path. In the age of AI-orchestrated digital mediation, however, dissolving the belief that consciousness is wholly reducible to brain function is just as crucial.
Physicalist reductionism: a karmic “heresy”
The idea that mind is merely a result of matter in motion is not new. Democritus claimed in the fifth century BCE that all is just “atoms and void.” But based on the immense successes of modern science in explaining how the material world works and the discovery that brain electrodes can produce not only sensations and bodily actions, but also claims of responsibility for them, it is now commonly concluded that consciousness and intentionality are causally irrelevant “side effects” of brain activity. As the eminent philosopher Daniel Dennett put it: “An impersonal, unreflective, robotic, mindless little scrap of molecular machinery is the ultimate basis of all the agency, and hence meaning, and hence consciousness, in the universe.” In any important explanatory sense, consciousness and intentions do not matter.
This reductionist conclusion might well be called a Buddhist “heresy”—a denial of the Buddha’s insights on the night of his awakening that all things occur interdependently, that the cosmos is karmically ordered, and that our experienced realities are materializations of consistently enacted patterns of values and intentions.
These insights have been variously articulated and understood in different Buddhist traditions. But in keeping with the early Buddhist description of consciousness arising through the conjunction—or, as I would rephrase it, the coherent differentiation—of sense organs and sensed environments, I would argue that one of their key implications is that brain-body-environment systems are the evolutionary residue of consciousness mattering.
In much the same way that material transportation infrastructures, such as roads, railways, train stations, and airports, are the results of transportation practices that in turn condition or constrain those practices, brain-body-environment systems are the material infrastructure of consciousness. And, just as transportation practices are also informed and constrained by immaterial traffic regulations and international laws, human consciousness is informed and constrained by the immaterial infrastructures of language and culture.
This understanding of consciousness has the support of recent neuroscientific studies demonstrating that highly valued states of consciousness realized through meditation are correlated with a “disordering” of established neural networks, including those involved in maintaining a narratively structured sense of self or ego. By disrupting the neural infrastructure of consciousness—and in particular the saṃskāra or self-defining habits of thought, feeling, and action that ordinarily constrain our attention and responsiveness—meditation opens potentials for being differently present.
Moreover, studies have shown that interpersonal neural harmonization occurs during collaboration, but not during competition or simultaneous task performance, and that this is affected by the beliefs of those involved. That is, what we think and feel matters causally for the depth and quality with which our brains become materially attuned, and there is a very real sense in which the infrastructure of consciousness can be fundamentally shared.
Studies like these open up new ways of thinking about Zen narratives of mind-to-mind transmission, about the importance of pairing calming/settling (śamatha) with clear-seeing/insight (vipaśyanā) meditations, and about the mutually reinforcing interplay of cultivating wisdom (prajñā), attentive virtuosity (samādhi), and moral clarity (śīla). But they also compel expanding the compass of AI ethics far beyond concerns about privacy, transparency, accountability, and ensuring that AI is aligned with human values.
The digital attention economy and consciousness hacking: a karmic and evolutionary risk
AI is now being referred to as a general-purpose technology like electricity. But unlike all previous technologies, intelligent technology is not a passive conductor of human values and intentions. It is an active, innovative, and thus karmically significant amplifier of human values and intentions and of conflicts that exist among them. In fact, it involves the digital synthesis of human and machine intelligence and marks nothing less than a new evolutionary turn—comparable to a shift from physical to cultural evolution—that is transforming who we are, how we are present, and who we might become.
At the heart of this transformation is the algorithmically orchestrated attention economy—a digital extension of the infrastructure of human consciousness that, like cultures and languages, supports certain kinds of practices and creativity while constraining others.
Attention is not just something we “spend” on things, a kind of generic currency representing quantities of focused awareness and time. When our attention is digitally captured, it generates and transmits data about what we think is important, how we respond to it, and how both change over time. The synthesis of human and machine intelligence being brought about through 24/7 connectivity and the growth of the global attention economy is thus yielding unprecedented epistemic powers to predict human behaviors, beliefs, and desires, but also equally unprecedented ontic powers to produce patterns of thought, speech, emotion, and action, and to reinforce the values and intentions informing them.
These are powers to sculpt who consumers and citizens become, not through acts of coercion, but through algorithmically reinforced craving and the manipulation of choice- and decision-making environments—powers to transform the material and immaterial infrastructure of human consciousness, and thus to transform the human experience from the inside out. The algorithmic and deep-learning tools involved are the computational equivalents of brain electrodes, producing experiences and behaviors that we will claim as products of our own free will because there are no personal or evolutionary precedents for their occurring otherwise.
Moreover, as machine learning and generative AI systems also become increasingly adept at anticipating, interpreting, and enacting our intentions, thereby delinking attention and effort, they are performing as karmic intermediaries, doing so on digital platforms that inequitably valorize competition, convenience, choice, and control, and with reward structures that foster addictive engagement and the consolidation of what amount to digital saṃskāra or habit formations.
The future of the human-technology world relation
Intelligent technology clearly has tremendous positive potential. But based on its design goals of algorithmically engineered attention capture, near-infinite choice, and continuous desire turnover, the digital attention economy is functioning as a consciousness-hacking karmic accelerator that is placing at risk our capacities for cultivating true freedom of attention.
Without freedom of attention, there is no freedom of intention. And without freedom of intention, the line between choice and compulsion dissolves. Our prospects for enacting bodhicitta and the bodhisattva vows collapse.
Virtuosically effortful attention is our most precious human resource. If intelligent technology is going to foster more equitable and humane futures, we will have to cultivate the attentive mastery, the moral clarity, and the wisdom needed to resist the hacking of human consciousness, and to retool the karmic engines and computational factories of the Fourth Industrial Revolution.
This is work that cannot be done successfully alone. Consonant with Zen master Dōgen’s characterization of the hard but joyous work of compassionately clearing paths of liberation, it is work that can only be done shoulder-to-shoulder.