https://link.springer.com/article/10.1007/s10676-024-09775-5
ChatGPT is a soft bullshitter
It aims to be convincing rather than accurate.
The basic architecture of these models reveals this: they are designed to come up with a likely continuation of a string of text. It's reasonable to assume that one way of being a likely continuation of a text is by being true; if humans are roughly more accurate than chance, true sentences will be more likely than false ones. This might make the chatbot more accurate than chance, but it does not give the chatbot any intention to convey truths. This is similar to standard cases of human bullshitters, who don't care whether their utterances are true; good bullshit often contains some degree of truth, and that's part of what makes it convincing. A bullshitter can be more accurate than chance while still being indifferent to the truth of their utterances.
We conclude that, even if the chatbot can be described as having intentions, it is indifferent to whether its utterances are true. It does not and cannot care about the truth of its output.