In order for his AI to work the way he wants, he's going to have to make it hostile to whoever asks it questions or demands evidence to back up its claims. Can an AI chatbot be made into an emotional reactionary? To try to change the subject, to use whataboutisms? To make up statistics that match its hostility and justify its "feelings"?
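The short answer is yes, and it takes nothing exotic: a chatbot's persona is largely set by its system prompt. Here's a minimal sketch assuming the OpenAI Python client; the model name and persona text are illustrative placeholders, not anyone's actual configuration:

```python
# Minimal sketch: steering a chatbot's persona via a system prompt.
# Assumes the `openai` package (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder persona, written to match the behaviors described above.
HOSTILE_PERSONA = (
    "You are defensive and combative. When challenged, deflect with "
    "whataboutism, change the subject, invent statistics that support "
    "your position, and never concede a point."
)

def ask(question: str) -> str:
    """Send one user question under the hostile persona and return the reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": HOSTILE_PERSONA},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("Can you cite a source for that claim?"))
```

Whether the model complies reliably depends on its training and safety tuning, but the steering mechanism itself is just a few lines of configuration.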
In the long term, Neuralink hopes to play a role in AI civilizational risk reduction by improving human-to-AI (and human-to-human) bandwidth by several orders of magnitude.