Understand the limits (and consequences).
First, it’s important to understand how the technology works so that you know exactly what you’re doing with it.
ChatGPT is essentially a more powerful, fancier version of the predictive-text system on our phones: as we type, it suggests words to complete a sentence, drawing on what it has learned from vast amounts of data scraped from the web.
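To make the predictive-text analogy concrete, here is a toy sketch of the idea: a bigram model that suggests the next word based on which word most often followed the current one in a sample of text. (The training text and function names here are invented for illustration; ChatGPT itself uses a neural network trained on vastly more data.)

```python
from collections import Counter, defaultdict

# A tiny, made-up "training" corpus standing in for the web-scale
# data a real system would learn from.
training_text = (
    "the fog rolls in the fog chills the city "
    "the wind chills the bone"
)

# For each word, count which words follow it and how often.
following = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    following[current][nxt] += 1

def suggest(word):
    """Suggest the word most often seen after `word` in the training text."""
    if word not in following:
        return None
    return following[word].most_common(1)[0][0]

print(suggest("the"))  # → "fog", because "fog" follows "the" most often
```

Note that the model has no idea what fog or wind *are*; it only knows which words tend to follow which, which is why such systems can sound fluent while being wrong.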
It also can’t check if what it’s saying is true.
If you use a chatbot to write a program, it draws on how similar code was written in the past. Because code is constantly updated to address security vulnerabilities, the code written with a chatbot could be buggy or insecure, Mr. Christian said.
Likewise, if you’re using ChatGPT to write an essay about a classic book, chances are that the bot will construct seemingly plausible arguments. But if others published a faulty analysis of the book on the web, that may also show up in your essay. If your essay was then posted online, you would be contributing to the spread of misinformation.
“They can fool us into thinking that they understand more than they do, and that can cause problems,” said Melanie Mitchell, an A.I. researcher at the Santa Fe Institute.
In other words, the bot doesn’t think independently. It can’t even count.
A case in point: I was stunned when I asked ChatGPT to compose a haiku about the cold weather in San Francisco. It spat out lines that broke the form’s 5-7-5 syllable pattern:
Fog blankets the city,
Brisk winds chill to the bone,
Winter in San Fran.
OpenAI, the company behind ChatGPT, declined to comment for this column.
Similarly, A.I.-powered image-editing tools like Lensa train their algorithms on existing images from the web. If women are depicted in more sexualized contexts in those images, the machines will reproduce that bias, Ms. Mitchell said.