No, LLMs can’t write good novels, and they won’t be able to in the future. Yes, LLM-produced text is going to show up in all kinds of places where it doesn’t belong. Having an LLM write an entire novel from a single prompt and then selling it for money through self-publishing is a kind of asset-flipping, and it is a scam. I think this is the use people are most upset about.
This is an interesting article because it features not a scammer but a professional writer who has decided to use LLMs to assist her work, including incorporating LLM-produced passages into her original writing. This is much more of a gray zone to me, and not actually something I’m necessarily opposed to.
One of the first things I did with an LLM was have it convert my silly ideas into iambic pentameter, and it was a lot of fun. I didn’t tell it, “write a poem in iambic pentameter about X” and then try to sell whatever it gave me. I had it convert my descriptions one passage at a time, and often I had to generate several different versions of the same passage before I even had something to clean up. Some of what ended up in the finished product came directly from the machine, but most of it I had to rewrite. It was interesting to experiment with.
The difference between scamming people and using an LLM as a tool is reflected in the finished product. A writer has to know what works and what doesn’t, because most of what the machine gives is not going to be usable as-is, and when it is usable it is by happenstance, because it happened to conform to what the writer was trying to express at that moment. A human mind is still absolutely necessary to write something someone would want to read, especially if they are choosing whether to read it, and I don’t see that changing with what these models are capable of or could become capable of. That being the case, there are probably going to be some distinctive traits of LLM-produced text that people will pick up on and get tired of. I’m interested to see how art develops in directions that distinguish it from what LLMs can produce, much as painting diverged significantly after the invention of the camera.
LLMs are really crappy at writing books right now.
However, there is zero evidence they will not get better; in fact, they are getting exponentially better all the time at every task that gets measured.
My bet is on LLMs soon being able to put out mediocre fiction, and then not much later great fiction, indistinguishable from the best authors out there.
I know I’m more confident than most that LLMs are incapable of producing art. You are correct that it has not been disproven that LLMs might have the potential to produce art, but there is also no current evidence that they could. We’ll see how it ultimately plays out, but allow me to explain why I don’t find it likely that LLMs are the technology we can ever expect art to come from.
Inherent Limitations
LLMs are fascinating and useful for a lot of things, but they are not intelligent. A “neural net” which is “learning” through exposure to training data is more sophisticated than any other method of text and image generation we have yet invented, but compared to the system it’s meant to resemble, it is hopelessly outclassed. We can’t currently make something that resembles a human brain because we don’t have a firm grasp of how one works at all. What little we do know indicates a level of complexity that might literally be beyond human comprehension. A brain is made of billions of neurons connected to one another at trillions of points by branches. At any given moment, these trillions of branches are sending and receiving signals not in binary but through various combinations of neurotransmitters. At a basic level, we know that the result of this neurotransmitter activity (which differs by the area of the brain it occurs in and is highly variable even between different brains) is a mind made up of some kind of consciousness, subconsciousness, and instinct. This system was not designed by human minds but is the result of eons of natural selection. We have no idea how to even begin replicating something like this, although neural nets could be a step forward. We would need a much larger system working in a fundamentally different way, which we may not be able to replicate with our limited faculties.
The Quality of Art
In my opinion, of all intellectual processes, the production and appreciation of art is probably the most demanding of the system I described above. There is a continuum from concrete to abstract, and while computers are excellent tools for storing and processing concrete data, art falls on the furthest end of abstraction. Mathematics and the natural sciences are often clearly quantifiable. The social sciences, built on social constructs which change depending on variables we are not fully aware of, including the interactions of billions of the above systems with one another, are significantly more difficult to quantify, although it is still possible. It is not possible to quantify the quality of art. What makes good art? We have no idea. We have never had any idea. Art is not quantifiable and may often be appreciated on a level beyond our ability to describe or even understand. There is absolutely no guide to making good art, and there can’t be. Every attempt to define art has been defied in a way considered more expressive and more artistic than the limited products a definite process can produce. At the highest level, art is the pure expression of intentional and/or unintentional meaning from one mind to another, in many cases on levels we aren’t even aware of. A machine running sophisticated word-association algorithms on a tiny fraction of the computing power a typical brain has is just not powerful enough to accomplish what a human can.
The Human Element
LLMs are not aware in the way a human is aware, and they couldn’t be. Although I think it’s possible to create a true Artificial Intelligence, and LLMs may be a step in that direction, no AI is going to be able to understand human experience because it can’t have one. LLMs don’t have needs or desires, they don’t have relationships or a reason to form relationships, and they don’t even have the basic requirement of life: maintaining a system against entropy. These are things most animals with a nervous system more developed than a worm’s can act on. Building on these animal needs, our neocortices additionally allow us to have thoughts, rationally solve problems, make plans, and form and store memories. Computers have an easier time with some of those things because they have fewer biases, but we have biases for reasons good and bad, and this is relevant to art. An artificial mind which has not itself had to survive and seek satisfaction in this world, and which lacks even the basis to do so, is never going to be able to create something meaningful to a human mind except by sheer accident. If a true AI does produce art, that art will be most meaningful to other AIs rather than to us. LLMs are mindless machines which can only imitate; they don’t have the foundation to produce art themselves. The best they could ever do is challenge the kind of writing done with the least effort, the kind most reliant on common tropes and clichés. The best they could be is a shadow of what we are capable of.
Conclusion
With all of that considered, I actually do think LLMs will become better at applying human language and may even be capable of replicating writing styles we find appealing when they are used to tell the stories we enjoy. They may even be able to generate ideas we find appealing as well. However, just as we might see or read something we thought we wanted and be left feeling hollow by it, I think the most important things will always be missing from AI-produced text and images when compared to art from any human.