This is a pretty good article outlining the views of Sven Nyholm, a professor of ethics of AI at Ludwig Maximilian University of Munich. The argument is a two-parter: first, we determine what is meaningful to us, and second, we determine whether AI is taking away from this. As a philosopher myself I have resisted being drawn into 'meaning of life' arguments because they are a chimera. Nothing, not even life, is inherently meaningful (indeed, 'meaning', properly so called, is a property of words and sentences, not people, and 'value', properly so called, is a property of things, not people). We decide what is meaningful; that is, we decide what stands for what. Including life. If AI leaves us with nothing to do but ride the bike or wash the bowl, then these are what is meaningful. The idea that we must develop this or that skill rests mostly on the idea that we must work, that we must repair society, or at the very least, repair ourselves. I value being able to do hard things, but I see no reason why I should force this bit of personal psychology on others. If AI makes people's lives easier, I'm fine with that. I'm more than fine.