Be Careful What You Wish For

So, a lawyer decided to use ChatGPT to “help” write some papers he filed in court, but got in trouble when the citations it included turned out to be bogus.

The naive interpretation of this is that the computer program is broken, that it’s a danger to society and ought to be regulated, and actually, I don’t disagree. However, there’s more going on here, and it shows up below the fold in the story.

The lawyer, Steven Schwartz, claims that he was unaware that ChatGPT just makes stuff up. Never mind that this has been talked about openly in non-technical forums.

So here’s a flaw in the program:

• ChatGPT is utterly ignorant of and indifferent to the notions of “true” and “false.”

And here’s a flaw in the lawyer:

• Mr. Schwartz is fully cognizant of the notions of “true” and “false” and doesn’t care about them in and of themselves, but only insofar as getting caught telling falsehoods gets him in trouble.

And here’s a flaw in society:

• When a student submits a paper with false citations, that student suffers (gets a bad grade). When a person tells a lie in court, that person suffers (gets charged with perjury). When ChatGPT makes some shit up, ChatGPT does not, in fact, suffer. There is no negative consequence, so we shouldn’t expect its behavior to change.

“But what about fiction? It’s not always wrong to make stuff up!” I hear you say, echoing my own objection. And that’s true. But the thing that makes fiction okay is that we have worked out larger contextual cues that signal when it’s acceptable to tell lies and when it is not.

But that’s a really hard problem to solve! It’s far easier to program the thing either to tell only the truth or not to care about the truth at all than to make it switch back and forth between the two. And guess which of those two strategies is harder to implement? Right: it’s much harder to care than to be indifferent. So here’s another flaw:

• The programmers chose to implement a chatbot that emulates human writing and is utterly indifferent to the truth.

…which is not actually a problem all by itself, but then:

• The company took this program that writes prose like a human and is utterly immune to the consequences of its speech and turned it loose on the world.

The way I see it, there has to be feedback. Punish the system, and remember that the system includes the humans responsible for it. If you want to participate in society, then you have to really participate, and that means learning, and trying not to do harm.
