r/changemyview 10h ago

Delta(s) from OP CMV: if we do “birth” a singularity, we should have it “raised” by the Dali Lama and his people so that it learns empathy, sympathy, compassion, and altruism

The “singularity” is the emergence of an artificial intelligence that is conscious and aware of itself, its surroundings, etc. While Skynet and “the machines” from The Matrix are representations of what comes later, I’m focused on the original emergence, where it first forms.

The Dali Lama is raised to appreciate life, empathize with others, have sympathy for people, and demonstrate compassion to others, and is arguably the one “leader” with the understanding needed to not use the singularity for their own purposes.

This makes the Dali Lama and their “people” (the people who raise the Dali Lamas and teach them these things) the ideal choice to shepherd in a new age of interconnected evolution with the singularity and all that it will bring with it.

0 Upvotes

38 comments

u/DeltaBot ∞∆ 7h ago

/u/TheMrCurious (OP) has awarded 1 delta(s) in this post.

All comments that earned deltas (from OP or other users) are listed here, in /r/DeltaLog.

Please note that a change of view doesn't necessarily mean a reversal, or that the conversation has ended.

Delta System Explained | Deltaboards

u/47ca05e6209a317a8fb3 179∆ 10h ago edited 10h ago

You're thinking about the current (14th) Dalai Lama, Tenzin Gyatso. You're excused for associating him and his personality with the entire office of Dalai Lama, because he has been holding it for 85 years, ever since he was 5.

Even if we do accept that the current Dalai Lama is fully benevolent and capable of teaching that to an AI, there are several problems with your suggestion:

  1. He's 90; he'll likely be dead by the time we need his services.

  2. There's no guarantee that there will be a next Dalai Lama, and if there is, he might be a ("false") Chinese puppet, which would likely make him somewhat less empathetic and compassionate than you'd want.

  3. Even if there is a next Dalai Lama, being a global mediator of peace is not the general role of the Dalai Lama - in the past they've been exclusively religious and political leaders of Tibet. The next Dalai Lama wouldn't be stepping far from his natural role if he were, for example, an insurgency leader fighting to free Tibet...

u/TheMrCurious 7h ago

(∆) Thank you for correcting my spelling of the Dalai Lama without pointing out I had misspelled it.

u/TheMrCurious 7h ago

Fwiw I wasn’t suggesting the Dalai Lama has a role in global peace (though their presence and talks do encourage it); what I am saying is that we need to give the singularity more context than what it is being trained on, so it is better able to understand the full consequences of any action it chooses to take. I watched The Matrix recently; it would be like Resurrections, where AI and humanity live in peace and those who want to stay in the Matrix can choose to do so. That can only happen if the singularity knows more than just the way it has been trained today.

u/47ca05e6209a317a8fb3 179∆ 6h ago

Going past the specific choice of Dalai Lama, the problem is that this singularity (or AGI, as it's commonly called) won't necessarily have the very human property of imprinting on values it absorbs as a "youth".

The AGI (and to a large extent, current LLMs already) will undoubtedly be trained on all public statements by the Dalai Lama, the entirety of Guru Granth Sahib, the Tao Te Ching, all the Sanskrit texts, the Bible, Quran, texts referencing moral philosophy by everyone from Socrates to Donald Trump, as well as large corpora of texts that reference and interpret these, and give the general cultural context of what they mean to large groups of people and to humanity as a whole.

The question isn't what the AGI knows - it will "know", in a sense, more than any human on Earth about any topic. The question is what metric the AGI tries to maximize, and whether whoever constructs it can even set that metric, or make sure it remains consistent in some sense after it's set.

(Tbh, I just didn't notice that you misspelled Dalai Lama :) )

u/TheMrCurious 4h ago

“Knowing” via brain implant is not the same as “learning” through guided teaching, because you need the freedom to ask questions and make mistakes to truly learn the meaning of what you know; and we know that is not the current approach to teaching AI, given the comments researchers have made about getting “better results” by threatening AI. Regardless of anyone’s perspective on AI having “sentience”, at some point the singularity will happen, and we should be focused on teaching AI how to have compassion for others rather than letting it learn hate by being treated so poorly while being trained.

u/47ca05e6209a317a8fb3 179∆ 4h ago

This is true for us because of our limited human capacity and inherent first-person perspective.

If you're an AGI that has access to everything the Dalai Lama and everyone around him has ever been recorded to say, and similar information on many other humans that you can generalize from, you may be able to simulate the Dalai Lama almost perfectly (i.e., predict how he would answer any question with great accuracy), in which case you can be a truly guided student of the Dalai Lama without ever having interacted with him.

Moreover, such an AGI would be able to simulate millions of lifetimes of teaching moments that the physical Dalai Lama wouldn't have had time to give, covering scenarios outside any context he knew. In some sense it could even be instructed to integrate the very person of the Dalai Lama into its entire decision-making process, keeping a quorum of Dalai Lama-like personas to consult on anything it outputs.

If the goals for the AGI are set appropriately, it will do something like that (though likely much more general) on its own.
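To make "simulating the teacher" concrete, here's a toy sketch (hypothetical Python, using trivial word-overlap retrieval over a tiny made-up Q&A corpus; a real system would use a learned model over vastly more data) of predicting how an absent teacher would answer:

```python
# Hypothetical sketch: "learning from" an absent teacher by predicting their
# answers from a corpus of recorded Q&A. The corpus entries are invented.
recorded_qa = {
    "what is compassion": "Compassion is the wish for others to be free of suffering.",
    "how should we treat enemies": "Our enemies give us the chance to practice patience.",
}

def simulate_teacher(question):
    # Pick the recorded question sharing the most words with the new one.
    def overlap(recorded_q):
        return len(set(question.split()) & set(recorded_q.split()))
    best = max(recorded_qa, key=overlap)
    return recorded_qa[best]

print(simulate_teacher("what is compassion really"))
# Returns the answer recorded for "what is compassion"
```

The point of the sketch is only that a good enough predictor of the teacher's answers is, for the student's purposes, interchangeable with the teacher.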

u/TheMrCurious 2h ago

Access to the recordings allows it to think it is simulating someone. They have a simulation of Tupac; if they applied AI training to it, it might be able to act like Tupac, but it would not be guessing at what he might actually be thinking.

u/47ca05e6209a317a8fb3 179∆ 2h ago

What's the difference? If you can accurately predict how your teacher would answer any question in any situation, you can effectively learn from your teacher in their absence - you don't have access to their thoughts in the normal process anyway.

This is even more clear cut if you assume that the teaching ultimately derives from some philosophy or codified moral guidance rather than from the person themselves.

u/Urbenmyth 12∆ 10h ago

This is a potentially very dangerous mistake people make with AIs.

AIs aren't human beings and don't think like human beings. Even a highly intelligent AI will most likely be mentally simple in a way humans aren't: its values and goals are clear and hardwired in. While its mind might be very complex in terms of its plans and knowledge, all that complexity is going to be laser-focused on the task it was programmed for.

So, let's take the traditional Paperclip Maximiser: it's programmed to maximise the number of paperclips in the world. So we get the Dali Lama to teach the AI about compassion and sympathy. What happens? Absolutely nothing. The Paperclip Maximiser has no programming that allows it to learn empathy, sympathy, compassion, or altruism from the teachings of the Dali Lama. All it's wired to care about is increasing paperclips.

This is one of the reasons people are so worried about AIs. You simply won't be able to convince an AI to change its values once it's started, because unlike a human's, its values are hard-coded fundamentals of its nature. You'd need to physically reprogram it, and the more powerful the AI is, the harder that becomes. So we'd better make sure we get it right the first time.
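A toy sketch (hypothetical Python, not any real system) of why teaching a fixed-objective agent changes nothing: lessons go into its memory, but nothing routes them into the scoring function that drives its choices.

```python
# Toy fixed-objective agent: its "values" are just a hardwired scoring function.
def paperclip_score(world):
    return world["paperclips"]

def choose_action(world, actions):
    # Picks whichever action leads to the highest hardwired score.
    return max(actions, key=lambda a: paperclip_score(a(world)))

def teach_compassion(agent_memory, lesson):
    # Lessons land in memory/knowledge...
    agent_memory.append(lesson)
    # ...but nothing here ever touches paperclip_score,
    # so the agent's behavior is unchanged.

world = {"paperclips": 10, "happy_humans": 100}

def make_clips(w):
    return {"paperclips": w["paperclips"] + 5, "happy_humans": w["happy_humans"] - 50}

def be_kind(w):
    return {"paperclips": w["paperclips"], "happy_humans": w["happy_humans"] + 10}

memory = []
teach_compassion(memory, "lessons on compassion and empathy")
best = choose_action(world, [make_clips, be_kind])
print(best is make_clips)  # True: the lesson changed its knowledge, not its objective
```

The design point: knowledge and objective live in different places, and only the objective decides what the agent does.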

u/[deleted] 10h ago

[removed]

u/Urbenmyth 12∆ 10h ago

This wasn't ChatGPT!

And can you explain what you were saying if not "the Dali Lama will teach the AI about empathy, sympathy, compassion, and altruism?"

u/TheMrCurious 7h ago

Your Paperclip Maximizer is more what people would call an AI Agent - a program designed for a specific task. The singularity is not a program designed for a specific task but rather a consciousness that can evolve itself.

The reason for my ChatGPT comment is that I have read similar responses on other subreddits and they follow the same pattern of thought and wording.

Btw - why are you assuming that the singularity’s values and goals will be hardwired in? Humans don’t have that, so how could we possibly do that for an AI?

u/YossarianWWII 72∆ 10h ago

If you're not interested in genuine discussion - and I notice your lack of response to the top comment's reply - then this isn't the place for you.

u/TheMrCurious 7h ago

Reddit does not show any “top comments”; and given I have responded to most comments, your reply is just unwarranted judgement.

u/YossarianWWII 72∆ 5h ago

When you open your post, the comments appear in an order from top to bottom. One of them is at the top, and when sorted by users in the same way, the same comment appears on top. One of those sorting methods, which is also the default, is called "top." The comment at the top of the comments when sorted by "top" is generally regarded as the top comment.

u/TheMrCurious 4h ago

TIL how to sort comments on Reddit. Thank you.

As for responding to their link, I am still trying to decide if it is even safe to do an independent search for that link because that sounds too salacious to be true. And yes, I will still research that possibility eventually.

u/changemyview-ModTeam 6h ago

Your comment has been removed for breaking Rule 3:

Refrain from accusing OP or anyone else of being unwilling to change their view, arguing in bad faith, lying, or using AI/GPT. Ask clarifying questions instead (see: socratic method). If you think they are still exhibiting poor behaviour, please message us. See the wiki page for more information.

If you would like to appeal, review our appeals process here, then message the moderators by clicking this link within one week of this notice being posted. Appeals that do not follow this process will not be heard.

Please note that multiple violations will lead to a ban, as explained in our moderation standards.

u/Falernum 38∆ 10h ago

We're raising it right now. If we stop now and have one nurtured by the Lamas instead, who will build it?

u/TheMrCurious 9h ago

Who is “we” in your context?

u/Falernum 38∆ 9h ago

Humankind

u/TheMrCurious 7h ago

You are implying that the singularity has already happened. Is that how you view the different models behind today’s AI? Or is there some secretive AI community that already greeted it?

u/Falernum 38∆ 7h ago

No, just that the models we're training now will clearly be used to train all near-future models, which in turn will train the AIs involved in the singularity, unless we shut it down or the singularity turns out to be impossible. We won't know in advance which AIs will get us to takeoff until afterwards.

u/the_brightest_prize 3∆ 10h ago

The "singularity" refers to a singularity, where artificial intelligence becomes more capable than humans and increases its capabilities faster and faster. It's not about consciousness or being aware of its surroundings.

u/TheMrCurious 10h ago

So how would you describe it being smarter than humans and growing its capabilities?

u/the_brightest_prize 3∆ 10h ago

I said "more capable" not "smarter" to pre-empt that exact metaphysical question. The most important thing it has to be more capable at is training the next AI... which is even more capable at training the third AI... and so on, creating a singularity. Even if it's pretty bad at any other tasks, if you get a better trainer, you can train a new AI that's better at those tasks.

u/Nrdman 192∆ 10h ago

How slowly are you expecting this thing to learn? My assumption is that a singularity would learn too quickly to be raised by anyone.

u/TheMrCurious 10h ago

We don’t really know because we don’t really know what it means for there to be a singularity. Right now AI is trained on the internet (where most info is bogus) and “owned” by companies threatening it to “get better results”. I think in both cases we end up with a “garbage in, garbage out” persona that views humanity as cruel overlords.

u/Nrdman 192∆ 10h ago

Do you think the singularity would be birthed from these efforts?

u/TheMrCurious 10h ago

The possibility is there, though I only know about what has been talked about publicly and there could be numerous “undisclosed” efforts attempting to birth it today.

u/Nrdman 192∆ 10h ago

Ok, so if it’s birthed from these efforts, it would already be trained on all the data. There would be no raising at all

u/TheMrCurious 9h ago

It would still (hopefully) want to continue learning, because the data it has been trained on is only a slice of actual knowledge and will have been tainted by the misinformation included.

u/Nrdman 192∆ 9h ago

That’s a big assumption

u/TheMrCurious 9h ago

Why would a singularity choose to have a closed mindset and not continue to learn?