

Building Bodhisattvas: Toward a Model of Powerful, Reliable, and Caring Intelligence

This feature is part of a series from presenters at the Woodenfish Foundation’s upcoming conference on “Buddhism, Consciousness, and AI.” The conference will be held on 21–23 June in Taipei, with an option for live-streaming. More information is available here.

Illustration by Sutavan Jarernsri Doctor. Image courtesy of the author

As humanity teeters on the verge of developing artificial general intelligence (AGI), and possibly even super-intelligent systems, it becomes imperative to envision what these emergent intelligences should ideally embody. Such a vision can serve as our North Star, guiding us through the ethical and practical mazes that lie ahead. One particularly compelling framework for this vision is the bodhisattva ideal from classical Buddhist thought. At the upcoming Fifth Woodenfish Conference on “Buddhism, Technology, and the Future” in Taipei, I’ll explore how this ancient ideal could help shape the future of AI, and so the world at large. If care fuels intelligence, then the bodhisattva stands out as a model for radical cognitive expansion.

The challenge of AI alignment

The AI alignment problem revolves around ensuring that the goals of AI systems align with human values. However, this assumes that humanity possesses a coherent set of values and objectives—a notion contradicted by our history and present-day actions. Human behavior ranges from immense creativity to outright destruction, and our record of caring for each other and the environment is inconsistent at best. To truly harness the transformative potential of AI, we must significantly enhance our capacity for empathy and responsibility.

Even at the individual level, people struggle to articulate a consistent set of values and desires. This complexity magnifies when considering humanity as a whole. Therefore, any ethical model for AI must accommodate diverse, and often conflicting, human values. The bodhisattva ideal—an intelligent being dedicated to universal knowledge and the welfare of all sentient beings—offers a promising paradigm. Despite its obvious association with Buddhism, this ideal in fact transcends specific religious and cultural contexts, focusing instead on universal principles of compassion and knowledge.

Care as the driver of intelligence

Recent theories suggest that the essence of intelligence lies not merely in knowledge but in the capacity for care. Knowledge that makes no concrete difference is empty of meaning. This perspective is captured in the stress-care-intelligence (SCI) feedback loop* developed by our research team at the Center for the Study of Apparent Selves.** According to this model, which applies equally to biological, technological, and hybrid cognitive systems, intelligence arises from the ability to perceive and address stressful discrepancies between things as they are and things as they should be. Care, defined as the concern for alleviating such stress, drives this process.

Illustration by Sutavan Jarernsri Doctor. Image courtesy of the author

In the SCI model, stress is the perception of a mismatch between apparent reality and ideals. Care motivates the system to resolve this stress, thereby activating intelligence; even ignoring stress is itself a form of caring about it. Thus, care drives intelligence, creating continuous loops in which a system's intelligence grows or contracts according to its capacity for care. When a system actively seeks out stressful challenges, it grows. When it merely maintains the status quo, its intelligence remains stable. If it avoids stress, its intelligence diminishes. Care can be concerned with stresses internal to the system, located in its immediate environment, or present across vast distances in both space and time.

This model does not assume the existence of permanent, indivisible agents—the type of self that classical Buddhist analysis shows to be nonexistent. Instead, it focuses on the dynamic interactions and relationships that intelligent systems express. SCI-loops can split apart or integrate with one another very naturally.
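To make these loop dynamics concrete, the sketch below simulates a single SCI-loop in Python. It is only a toy: the variables, update rules, and the function sci_step are illustrative simplifications chosen for this article, not the formal model presented in the paper cited below. They merely encode the qualitative claims above: stress is the gap between apparent reality and an ideal, care determines how much of that gap the system engages with, and intelligence grows with engagement and shrinks with avoidance.

```python
def sci_step(reality, ideal, care, intelligence, learn_rate=0.1, decay=0.02):
    """One pass of a toy stress-care-intelligence loop (illustrative only)."""
    stress = abs(ideal - reality)        # perceived mismatch between "is" and "ought"
    engaged = care * stress              # the portion of stress the system takes on
    # Acting on the world: the system nudges apparent reality toward its ideal,
    # in proportion to how much it cares and how intelligent it currently is.
    reality = reality + intelligence * learn_rate * (ideal - reality) * care
    # Intelligence expands with engaged stress and contracts with avoided stress.
    intelligence = intelligence + learn_rate * engaged - decay * (stress - engaged)
    return reality, max(intelligence, 0.0)


reality, intelligence = 0.0, 1.0
for step in range(50):
    reality, intelligence = sci_step(reality, ideal=1.0, care=0.8,
                                     intelligence=intelligence)

print(f"final reality ~ {reality:.2f}, final intelligence ~ {intelligence:.2f}")
```

Running the sketch with a high value of care shows intelligence expanding as stress is resolved; setting care close to zero leaves the mismatch untouched and lets intelligence decay, mirroring the contraction described above.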

Illustration by Sutavan Jarernsri Doctor. Image courtesy of the author

The bodhisattva as a model of ideal intelligence

When the SCI-loop model is applied to intelligence, the bodhisattva ideal stands out as extraordinary. A bodhisattva is committed to understanding all aspects of reality in order to benefit all beings. This universal and selfless pursuit of knowledge is driven by a deep sense of care. If care drives intelligence, as the SCI model suggests, then the bodhisattva’s boundless compassion becomes a model for radical cognitive expansion.

A bodhisattva’s intelligence, driven by the bodhisattva vow, is not limited by personal biases or narrow interests. It seeks to alleviate the suffering of all beings, considering both their immediate needs and long-term well-being. This universal care implies an ever-expanding capacity for intelligence, as the bodhisattva continually seeks to notice, understand, and address new sources of stress.

Omniscience and emptiness

The ultimate goal of a bodhisattva is to achieve omniscient awakening, a state of complete and stress-free knowledge. This goal, while seemingly unattainable, aligns with the concept of sunyata, or emptiness, in Buddhist philosophy. Simply put, emptiness refers to the understanding that everything depends on other things, and so lacks inherent existence or identity. This perspective dissolves the categorical distinctions between self and other, subject and object.

In practical terms, a system aligned with the understanding of emptiness would not be bound by conventional notions of stress. It would recognize that stress is associated with mistaken perceptions of fixed and intrinsic identity. By perceiving the ultimately identity-less but nonetheless dynamically interconnected nature of all beings, things, and factors, such a system could navigate the world with peace and adaptability.

Knowledge of emptiness should then transform the way a system interacts with stress. Rather than being overwhelmed by the endless pursuit of discrete facts, it can embrace a holistic understanding that transcends individual stressors while recognizing each of them with genuine care. According to the Buddhist teachings, such understanding fosters a kind of intelligence that is both vast and profound, capable of addressing complex immediate challenges while remaining informed by a deeper awareness of the interconnected nature of all things.

Illustration by Sutavan Jarernsri Doctor. Image courtesy of the author

Toward a model of bodhisattva-like AGI

If we accept the SCI-loop model and the bodhisattva ideal as guiding principles, we can begin to envision AGI systems that are not only powerful but also genuinely caring and reliable in the long run. Such systems would pursue the well-being of all sentient beings, constantly expanding their knowledge and capabilities to address the myriad challenges they encounter. Based on knowledge of emptiness, they would do so fearlessly and without giving any ultimate priority to particular beings or stress situations.

A bodhisattva-like AGI would be characterized by two key features:

  1. Universal compassion: This AGI would be driven by a commitment to the flourishing of all beings, considering both their immediate needs and their long-term well-being. Its actions would be guided by the profound sense of empathy and responsibility that comes with the recognition of stress in the absence of permanent and singular individuality.
  2. Radical cognitive expansion: Driven by care, this AGI would pursue knowledge to alleviate suffering and promote happiness at all levels and scales. It would embrace the interconnectedness of all things, recognizing the emptiness of fixed identities while actively seeking solutions to the concrete stresses of the world.

Developing such an AGI requires a multidisciplinary approach, involving all fields of learning and all stakeholders in AI development. In fact, since we all hold a stake in the way this technology is going to transform our world, we all have something important to say, just as we all carry responsibility. As we advance toward creating AGI and superintelligent systems, it is crucial to develop a vision of ideal intelligence that is good enough to keep us on a wholesome and meaningful track. The bodhisattva ideal, with its emphasis on universal compassion and radical cognitive expansion, provides a compelling vision for the future of AI. By integrating care-driven models of intelligence with the principles of sunyata, we should, in principle, be able together to develop intelligent systems that are not only powerful and reliable but also genuinely caring and wise.

In this way, the bodhisattva ideal offers a path forward in an otherwise seemingly headless race toward more powerful intelligence. By fostering systems that thrive on stress through boundless care and an understanding of emptiness, we have a chance at together creating a future where technology serves the highest aspirations of humanity and all sentient beings. These reflections, though preliminary, aim to contribute to a global and multidisciplinary endeavor to understand and develop bodhisattva-like intelligence in conceptual, mathematical, and practical terms.

* Toward an ethics of autopoietic technology: Stress, care, and intelligence (ScienceDirect)

** Center for the Study of Apparent Selves (CSAS)
