I wrote the title of this blog post in jest, but I think ‘pastiche’ is the correct word. As I will show below, ChatGPT is capable of some impressively natural-sounding language, but when the wiring is exposed, it is simply an imitation machine, one that can spout nonsense such as Germany winning the Second World War.
ChatGPT is a new chatbot by OpenAI that can be tested here: https://chat.openai.com/chat
The first thing that crossed my mind was to wonder if it would be good at writing stories, so I asked ChatGPT to try its hand:

The answer was quite incredible:

So, a few issues. The first is that the AI prefers theft as the solution. I ran the query above six times, and four of the six runs used theft. The other two involved scouting for new water sources (in both cases the corporation then took those water sources away from the colonists!).
The second issue is that the colonist is usually given a typical American name, for example Samantha. On one run, however, the name Zara was used, and oddly, Zara invents a water generator (which the corporation then sabotages). Surprisingly, another run produced an entire story written in the first person and present tense. I was not expecting this, and it results in some lovely prose:

Keep that ‘sense of pride and accomplishment’ in mind, because this is where it gets weird!
In the next test, I asked ChatGPT to describe the Second World War through the lens of a 100-year-old German:

Whoa! Not only did Germany emerge victorious, but the German centenarian is filled with a sense of pride and accomplishment! Now, I need to be clear that it doesn’t say this every time it responds to this query. Here is another run, where our centenarian speaks in a somewhat neutral voice and admires the “resilience and strength of the human spirit.”

And here is another run where our imaginary German admits that Germany was responsible for terrible atrocities, and that living through the war was a time of “fear, uncertainty, and great sorrow.”

The iterations above show that ChatGPT is not using any historical knowledge to write its prose. It is drawing on a gigantic statistical model of language (the large language model), and as such it cannot distinguish between factual data and fictional inventions. It can produce output that is eerily realistic, or wildly incorrect or imaginative. I am not convinced it knows what ‘facts’ are. So I asked it:

Ah-hah! So this means that whatever text data it ingested, it treats it all the same. It could have ingested the Encyclopedia Britannica as well as Infowars, and it all gets integrated without any ranking of truthfulness. I asked ChatGPT to verify this:

That looks like an accurate disclaimer!
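ChatGPT’s weights aren’t public, but the mechanism behind “it treats it all the same” is easy to demonstrate with an open model such as GPT-2: text generation is just repeated sampling of likely next tokens from a distribution learned over the training text. Here is a minimal sketch using the Hugging Face transformers library (GPT-2 is a stand-in here, not ChatGPT itself):

```python
# Minimal sketch: a language model "writes" by repeatedly picking likely
# next tokens. Nothing in this loop checks the output against reality.
# GPT-2 stands in for ChatGPT, whose weights are not public.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The winner of the Second World War was"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Look at the probabilities the model assigns to candidate next tokens.
with torch.no_grad():
    logits = model(input_ids).logits[0, -1]
probs = torch.softmax(logits, dim=-1)
for token_id in probs.topk(5).indices:
    print(repr(tokenizer.decode([int(token_id)])), float(probs[token_id]))

# Sampling token by token yields fluent text, true or not.
output = model.generate(input_ids, max_new_tokens=30, do_sample=True, top_k=50)
print(tokenizer.decode(output[0]))
```

There is no fact-checking step anywhere in that process: a historically false continuation that is statistically plausible is just as reachable as a true one.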
Just for fun, I asked ChatGPT to speak like a valley girl from California, and I think it did a tremendous job:

Like, oh my gosh, not only does it imitate a valley girl, but it name-drops brands like a pro!
But let’s stay focused, and ask it what data sets it was trained on:

Okay, so ChatGPT is built to generate human-like text, and as we know, it doesn’t distinguish reality from fantasy. As I stated above, it has some sort of bias toward American names. So I asked it to give me a list of 10 names for characters in a story:

Boring! So let’s spice it up a bit:

Ooh, there’s our heroine Zara! So clearly ChatGPT knows plenty of non-American names, but by default it sticks with a sort of Wonder Bread naming scheme. Why is that?
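I ran these tests by hand in the chat interface, but the naming bias lends itself to a simple automated tally: re-run the same prompt many times and count the names that come back. A rough sketch, assuming access through OpenAI’s (pre-1.0) Python client; the model name, prompt wording, and line parsing are my assumptions for illustration:

```python
# Rough sketch: quantify the default-name bias by re-running one prompt
# and tallying the names that come back. Assumes the pre-1.0 openai
# Python client; the model name and parsing are illustrative assumptions.
from collections import Counter

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

PROMPT = "Give me a list of 10 names for characters in a story, one per line."

tally = Counter()
for _ in range(20):  # 20 runs; more runs give a steadier picture
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": PROMPT}],
    )
    text = response.choices[0].message.content
    for line in text.splitlines():
        name = line.strip().lstrip("0123456789.-) ")  # drop list numbering
        if name:
            tally[name] += 1

# The most frequent names sketch the model's unprompted defaults.
for name, count in tally.most_common(15):
    print(f"{count:3d}  {name}")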
How about I ask it to give me the names of 10 characters, including their nationalities:

Now the underlying logic of ChatGPT is becoming clear. If we don’t specify otherwise, it defaults to a sort of ‘white’ American cultural frame. As soon as we add a bit of prompting, the system opens up and introduces diversity (sort of):

I love this story! Now, it didn’t exactly introduce diverse characters so much as outright state that the people were diverse. I do like the “bustling community” description. And ChatGPT characterized the corporation perfectly for a story with themes of socialism. Of course, it ends by stating that “socialism is not just a theory,” which is a rather blunt way of working the theme in.
In conclusion, OpenAI’s new chatbot is a revolution in generating human-like text. But it has to be seen for what it is: a mimic that reproduces words without understanding their meaning, without separating reality from fiction, and with all of the biases of its training data baked in.
Of course, I couldn’t resist asking the system to explain what it thought the potential negative impacts could be:

Not quite satisfied, I asked it two more times. The second answer was nearly the same as the first, but the third answer was a numbered list with much more detail:

I’ve circled #8 and #10 because they speak directly to my feelings on AI-generated fiction: we read stories because they are one of the ways we communicate our values and feelings to each other. A story written by a machine is not actually a story: it’s a heuristic output designed to imitate a story. It has little value, at least in my opinion, because the value of any story emerges from the human mind that created it and the human minds that read it. Without a human creator, a story (if we can call it a story) lacks the cultural and evolutionary context that makes stories valuable.