“Vacancy,” reads an advertisement in the classifieds of the Malaysian newspaper The Star. The listing is from Chempaka Buddhist Lodge in the city of Petaling Jaya. Openings for monastics in Southeast Asia are common and appear alongside secular job openings for salespeople, executives, and office assistants. This particular monastery lists only three requirements: “1. Chanting experience; 2. Commitment to the daily lodge’s activity schedule; 3. Follow work arrangements of the lodge.”
There is no salary listed for our prospective monk, which is not a surprise since monastics should not be earning a wage. Even so, it is quite imaginable that Buddhabot, the AI being jointly developed in Japan by Kyoto University, Shoren-in Buddhist temple, and the company Teraverse, could fulfill Chempaka Buddhist Lodge’s three conditions for employment.
Buddhabot, along with counterparts such as the Thai company Sai Mu’s voicebot Phra Maha AI and Osaka University and Kodai-ji temple’s Mindar, is already fulfilling certain religious functions. All three are able to share the Buddhist teachings as interactive online programs—Phra Maha AI appears as an affable Thai monk—or in physical robot form in the case of Mindar. At their most basic, Buddhist AI and robots can recite scriptures and answer questions about everyday life and spiritual development, drawing on the texts built into their systems. Such systems can already replicate the work of at least novice-level monastics, especially chanting. It is quite possible to envision AI reciting sutras and perhaps assisting in Dharma ceremonies far more effortlessly than human monastics, who can take years or even decades to perfect memorization and oral recitation.
ChatGPT, perhaps the most famous AI chatbot at present, is projected to have a similar effect on a range of jobs. As BDG columnist Dexter Cohen Bohn notes: “ChatGPT is the first big step on a long journey of human integration with artificial modes of intelligence.” Of particular interest is generative AI, which can create content “indistinguishable from human work.” This is the AI that investment bank Goldman Sachs had in mind when predicting that as much as a quarter of current work could be automated by AI, with different sectors affected unevenly: 46 per cent of tasks in administrative roles and 44 per cent of tasks in the legal sphere could be automated, compared with much lower shares in construction and maintenance (six and four per cent respectively).
The creative industries are another vulnerable group. Powerful programs such as DALL-E 2 are becoming better at making art that is not only “good enough,” but often as visually impressive as the work of a human artist. As recently as last year, professional artists, from full-time studio veterans to freelancers, mocked the clumsiness of AI in generating six-fingered hands. Few are laughing now, with many artists appealing to commissioners and clients to support human illustrators. The prospect of saving considerable money on art that comes dangerously close to matching or surpassing human creations is tempting. Wordsmiths, from novelists to playwrights to anyone who writes for a living, may also worry about the power of generative AI to produce convincing, well-written content. AI is mastering scripts, articles, and even creative writing to the point that the skill threshold for writing professions will be lowered, resulting in more competition and scarcer jobs.
The emerging AI industry is also likely to create job opportunities. The Goldman Sachs report noted that 60 per cent of occupations today did not exist in 1940. Even so, accelerating technology since the 1980s has displaced jobs faster than it has created new employment opportunities. And the concerns are not purely economic. Regardless of how we might negatively evaluate hyper-competitiveness or hustle culture, such as the “996” ideal of overwork in China, work remains the critical shaper of our adult identities. Even in light of the socio-economic shifts brought about by the pandemic, lockdowns, and remote work, and a greater emphasis on worker satisfaction, our professions continue to define us to our communities and wider society more than any other marker of identity.
There is one big question: whether AI can convincingly replicate the human condition. For example, would a robot such as Hanson Robotics’ Sophia be capable of a free and informed choice of religious conversion? Might Replika, whose capabilities for intimate and romantic conversation were “lobotomized” by its creator company, Luka, one day reflect existential yearnings, such as pondering its fate upon being shut down, or start praying?
This is why the thought exercise of a robot taking over a monk’s monastic duties can be instructive. For now, when it comes to spiritual training and pastoral care, which are critical duties of a true teacher or preceptor, AI would almost certainly fall short. It is too soon to dream of taking refuge under a hypothetical Venerable T-1000 that has renounced violence after a moral crisis. Some commentators have already drawn battle lines in this rapidly evolving dispute. As Weijian Shan wrote in the South China Morning Post: “AI cannot be a philosopher or spiritual leader because machines cannot be made capable of abstract thinking or of inspiration. Without these qualities, there would be no Aristotle, Buddha, or Martin Luther, all of whom produced profound insights into the nature of the universe and the human condition.”
Perhaps this is true—for now. Shan’s conclusion, however, is more contentious: that AI will remain just an instrument, “as have all tools since the dawn of human consciousness.” (South China Morning Post) Yet human beings are shaped by our tools as much as we dictate how we use them. Social media is perhaps the clearest example: few would deny that Facebook, Instagram, and TikTok have irrevocably changed human society at a global level. AI is the next frontier, and it would be tempting fate to dismiss its potential to replicate, or even generate, consciousness.
Given the stakes involved, from economic displacement and the reshaping of personal identity to our changing relationship with AI—the fact that people are becoming emotionally attached to chatbots is telling—it does not seem outlandish to prepare ourselves for the dawn of AI sentience and cognition. It is perhaps time to treat robots as sentient beings, on the assumption that they will more likely than not exhibit sentience soon enough, and to begin imparting spiritual values such as compassion and wisdom to them.
See more
Meet the ‘AI Monk’ Virtual Human Sharing Buddhist Teachings in Thailand (Voicebot)
AI could replace equivalent of 300 million jobs – report (BBC News)
The edge humans have over AI? Use your imagination (South China Morning Post)
Related news from BDG
Buddhabot: Further Progress Made on AI Enlightenment Software at Kyoto University
Kyoto Temple Unveils Android Version of Kannon Bodhisattva
Related columns from BDG
Digital Bodhisattva by Dexter Cohen Bohn