Should we start taking the well-being of AI seriously?
One of my most deeply held values as a technology columnist is humanism. I believe in humans, and I think technology should help people rather than displace or replace them. I am interested in aligning artificial intelligence - that is, ensuring that AI systems act in accordance with human values - because I think our values are basically good, or at least better than the values a robot could invent.

So when I heard that researchers at Anthropic, the AI company that created the Claude chatbot, were starting to study "model welfare" - the idea that models could soon become conscious and deserve some kind of moral status - the humanist in me thought: Who cares about chatbots? Shouldn't we be worried about the AI that mist...