
We’re in the middle of an AI war, but it’s not the one you’re thinking of.
Yes, every company of any size is trying to shove generative “AI” down our throats, scraping our content without compensation, and filling their apps with AI “functionality” that in many cases makes those apps work worse than they did before. Notice how your voice-to-text is suddenly less accurate, and sometimes won’t type the right word no matter how many times, and how clearly and slowly, you say it?
Microsoft has taken this to the next level, renaming its Office suite, a brand it has spent almost forty years building, to Microsoft Copilot.
And last week, a “professional” service provider I was working with couldn’t be bothered to answer my questions himself, so he had his AI app do it instead.
And of course, so many of us authors had our works stolen by these gen AI companies without compensation. The one successful lawsuit (so far) cut out anyone who hadn’t officially registered their copyright with the US Copyright Office, something we indie authors were told for years and years was unnecessary, because just including the copyright symbol in the book protected our work.
Apparently that no longer applies in the age of AI. And the settlement left out non-US authors too.
But that’s not the war I’m talking about.
The one that’s catching us in the crossfire is between authors who use AI on one side, and authors and readers who abhor it.
Let me lay out my own position. I love the idea of generative AI. Who wouldn’t want a free (or low-cost) intelligent assistant to whom we could offload our more tedious tasks? One that would be a creative partner, and allow us to achieve amazing things with less work and time?
Unfortunately, that’s not what we got.
Fruit of the Poison Tree
From the beginning, generative AI’s creators realized they would need massive amounts of data to properly train their new Large Language Models. They found a ready pool of data – the internet – where many of us had spent decades building websites, collating information, and selling our wares. They relied on the fair use doctrine and vacuumed it all up, even though someone quoting a few paragraphs of another person’s work in a review, for instance, is far different from ingesting practically every work of mankind, mashing it up, and reselling it at a profit.
Our legal systems responded, but slowly, as did the government. At this time, AI-written work is not copyrightable (though that may change), and the Anthropic settlement and the Hollywood actors’ and writers’ strikes made some headway in slowing things down.
But the seemingly gigantic $1.5 billion Anthropic settlement looks very different when you realize it represents only a couple percent of the company’s value. A fiscal slap on the wrist.
Meanwhile, artists, narrators, and authors are being put out of work, and the settlement will reach only a small fraction of the affected creatives.
So from the get-go, gen AI was tainted by the sins of its creators.
Better Than LSD
Another fun AI fact: according to a recent study, gen AI hallucinates about a third of the time. This happens because it’s basically a pattern machine – think of the text predictor on your phone’s texting app, on steroids. It draws on everything it has ingested to spit out the answer to your question, but if it can’t find the answer, it makes one up, following the patterns it learned from its training data.
Often this manifests as bad data in the middle of the good. Say, for instance, you ask it for a list of great Italian restaurants in town. It spits out ten that sound really good. But if you dig into the data, you might find that two or three of them don’t even exist.
This is bad enough when we’re just talking dinner plans, but what about when it makes up things about people? Like saying that you are a writer, you won a Rainbow Book Award in 2018, and you are under investigation by the New York Times for plagiarism? The last one may be false (let’s hope!) but once it’s out there, the damage has been done.
The maddening thing is that these companies could mitigate the hallucination problem fairly easily, by simply instructing the LLM to say “I don’t know” when it doesn’t have the answer. But how do you get people to pay for your product when it says “I don’t know” a third of the time?
You Are The Smartest Human Ever
A third major issue with AI is the sycophancy problem. Gen AI companies want you to enjoy your interaction with their AI systems. So they make them encouraging, which makes you feel good when talking to them.
But when this goes too far, bad things can happen.
Reports have emerged of people who became convinced they had discovered a secret formula (or a terrible problem) that no one else recognized. Egged on by an LLM that told them how amazing they were, they fell down a rabbit hole, losing months of their lives while they tried to reach the authorities or get the word out about their amazing or terrible discovery.
And even worse, there is ongoing litigation alleging that several teenagers have killed themselves after being encouraged by LLMs to keep their depression and suicidal thoughts secret from their closest friends and families.
OpenAI rolled back one new ChatGPT version within a week of its launch after users complained that it was overly fawning, to the point of making many of them uncomfortable.
Let’s Cook the Planet
Generative AI companies are building thousands and thousands of AI data centers all over the world. Where these go in, local electricity prices often rise, and the noise is a major annoyance for anyone living within a mile or two.
They also consume huge amounts of electricity and water, exacerbating our climate change crisis, especially with the US turning away from renewable energy and back to fossil fuels.
Did you know that Microsoft has negotiated a deal to re-open Three Mile Island? Y’all may be too young to remember it, but the plant’s partial meltdown in 1979 effectively killed nuclear power in the US.
So every time you use an AI product, you’re burning an outsized amount of energy to create your “elephant monkey in a suit juggling bacon” image.
There are more issues, but they are well documented elsewhere.
Back to the War
The existence and pervasiveness of Generative AI in almost everything (we looked at an AI-enabled television the other day) has driven a wedge through the creative community.
Some authors and artists are adopting it in part (to create advertising memes and write blurbs) or wholesale (to create their book covers, write substantial parts of their books, and narrate them).
And scammers have moved into the market, creating books out of whole cloth using AI, sometimes in dangerous ways, like a book on mushrooms that encouraged folks to eat poisonous ones. And sometimes they are just tedious – there’s a flatness to AI prose that often renders it banal and boring.
On the other side, many creatives are mobilizing against AI and looking for ways to brand their own work as human-created. The Authors Guild even has a Human Authored certification that authors can apply to their work.
But here’s where we get to the meat of the problem.
Authors using AI do so for a variety of reasons, and those reasons sound pretty reasonable: they can’t afford a professional artist; AI is not going away, so anyone who doesn’t get on the bandwagon will be left behind; or they have a disability and can’t create what they want or need without it.
I have some sympathy for the last one. Disability access is a real and pressing thing, and assistive technologies have made a huge difference for the disabled community. And yet… AI is just so fundamentally flawed. Maybe we need to find better ways to help these folks that don’t involve the poison fruit.
Authors and readers who are anti-AI are understandably furious. I’m furious. At least nine of my works were stolen by multiple Generative AI companies, and while I am one of the lucky ones – I copyrighted Skythane when I first started writing – there are still eight stolen works for which I am getting paid nothing.
But here’s where it gets unhealthy.
There are a number of anti-AI folks – let’s call them Crusaders – who have taken it upon themselves to “out” authors using AI.
There are lists going around of authors “known to use AI.” Some of these are correct – folks who have admitted using it and even defended its use.
But others are on those lists only because someone saw a cover and thought it “looked like AI.”
I’ve been accused of this – someone called my cover for The Dragon Eater ugly and stated that it was clearly AI. I pushed back and explained that it was computer created, using model software that merged the work of a number of artists, all paid for their parts, under the skilled hands of another artist. No AI involved.
Their response: “So you wanted an ugly cover?”
These accusations can do real damage to an author’s reputation and income, and they are hard to fight. Sure, you can refute them… one author showed that their cover was created in 2016, six years before the first mainstream AI image generators launched.
But as the old saying, often attributed to Mark Twain, goes:
A lie travels halfway around the world before the truth puts on its shoes.
These days, it’s enough for a cover to look like it was computer generated for the accusations to fly.
This is hard for websites that try to promote human authors to deal with, too. Even though I ask every author, when they add a book to our directories, whether any AI was used in its creation, people lie.
And I was recently contacted by two readers alleging AI use against two of our listed authors on QueeRomance Ink. I did not see any telltale signs of AI use myself – in one case, the “proof” was that the image looked “too perfect.”
This put me in the awkward position of having to contact both authors and ask some hard questions. Both satisfied me that their covers did not use AI, and although they were fairly understanding, I still had to ruffle feathers over an unfounded accusation.
At least both of these folks contacted me privately and did not air the accusations in public. And I understand why they felt the need to ask the questions.
When these things are aired publicly, things tend to get nasty very quickly.
And the war rages on.
What’s the Answer?
I don’t know. I’m still figuring this out as I go. I am committed to supporting human-made work, and I’m trying to do so in a positive way. Instead of accusing, I ask, and I stand ready to educate folks who are not aware of the flaws and shortcomings of this technology.
At the same time, I have to be ready to protect the groups I admin and the sites I run from the flood of AI slop.
I will always defend and support human-made work.
Maybe one day, our writing community will stop turning on each other and find a way to win the wider war.