Seeking moral advice from large language models comes with risk of hidden biases

More and more people are turning to large language models like ChatGPT for life advice and free therapy, as these systems are sometimes perceived as a space free from human biases. A new study published in the Proceedings of the National Academy of Sciences finds otherwise and warns people against relying on LLMs to resolve their moral dilemmas, as the responses show significant cognitive bias.