On May 18, writers and software developers James Yu and Amit Gupta announced the launch of a new feature on Sudowrite, their creative-writing AI platform. The feature, Story Engine, was described by Yu as “an AI tool for writing long-form stories”.

Sudowrite uses GPT-3, GPT-3.5 and GPT-4 – the same models that power ChatGPT, the AI chatbot created by OpenAI, an artificial intelligence research company whose early backers include Elon Musk. Its function, according to its supporters, is to “enhance” the experience of writing. In concrete terms, Sudowrite was built to assist writers with writing, editing and brainstorming ideas, plot points, characters, dialogue and more. Yu and Gupta pitched it as a tool to help writers get around impediments such as writer’s block and produce larger volumes of content faster – or to simply function as a one-stop “reading circle”.

On their respective websites, Yu and Gupta both describe themselves as speculative fiction writers as well as tech entrepreneurs. Yu’s mobile-app development company, Parse, was bought by Meta (then Facebook) in 2013. Gupta, meanwhile, founded Photojojo, which has been described as an online store for photo products. Their backgrounds place them closer to the intersection of AI and writing than most writers.

AI has been making inroads into the creative arts for a while now. In 2022, the Midjourney and OpenAI Dall-E art generators drew as much criticism as praise. AI-generated images have been making the rounds on social media, albeit with much debate about the number of fingers they attach to human hands. AI-generated music is now freely available as well.

Sudowrite is not the only AI writing tool. A simple Google search throws up a dozen results on the very first page – including Jasper and Rytr, and entire lists of “novel writing AI” – alongside debates about whether GPT is “the end of creative writing.”

A work in progress

To see how good it really is, I tried the AI tool myself.

Once you log in to the website – no credit card or payment required at sign-up, at least for the moment – a text box waits expectantly, cursor blinking. Here, you put in your text, and there are various options to transform it. You could try expanding it with the help of descriptions, which are subdivided by the five senses; you could ask it to rewrite the whole paragraph; you could even ask it to continue the story up to a point.

Like any large language model – the technology underlying generative AI text systems – Sudowrite spits back material that depends on what you have already put in. But the responses it generates also draw on the billions of web pages that were crawled and scraped to train the underlying models; the people who set up this particular tool may have also fed more specific data into it. The entire history of the internet has led up to the creation of this one tool.
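
Mechanically, what this looks like is a prompt sent to a model and a completion sent back. Below is a minimal sketch of how a tool built on OpenAI’s models might request a story continuation – my own illustration, with invented prompt wording and parameters, not Sudowrite’s actual code.

```python
# A minimal sketch of how a writing tool *might* call an underlying
# OpenAI model. The system prompt, model choice and parameters are
# illustrative assumptions, not Sudowrite's implementation.
import openai  # 2023-era OpenAI Python client (openai<1.0)

openai.api_key = "YOUR_API_KEY"

def continue_story(passage: str, max_tokens: int = 200) -> str:
    """Ask the model to continue a passage in the same voice."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You are a fiction co-writer. Continue the passage "
                        "in the same voice, tense and point of view."},
            {"role": "user", "content": passage},
        ],
        max_tokens=max_tokens,
        temperature=0.9,  # higher temperature produces more varied prose
    )
    return response.choices[0].message.content
```

Whatever the interface looks like, the output is, in the end, a statistical continuation of the prompt, shaped by the training data.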

But to put it bluntly, the results did not impress me. The generated content was repetitive, trite and riddled with cliches; there was no trace of an original voice – not even mine. The plot extension was uncalled for, absurd and not in the direction I wanted it to go. The physical descriptions were closer to word salad. At the moment, the content requires extensive editing to be usable – and that is before considering things such as citations, which are notoriously unreliable in ChatGPT. But remember, this is AI: the longer I use it and the more content I feed into it, the better it is supposed to understand what I’m looking for.

The part I enjoyed was the character name generator, partly because the names were so meaningless. When people choose names for characters, there is usually a logic to it – a justification based on the person’s motivations, characteristics and so on. Sudowrite has no such logic right now. It simply checks what you want, finds close examples and produces an imitation of them.

Another enjoyable bit was trying to get the AI to suggest ideas for a character arc – say, five quests a character would have to complete. This was a very clear prompt with no attempt at misdirection, and yet the results were inadequate. Then again, perhaps the suggestions are just that – suggestions, meant to prompt better ideas rather than be prescriptive.

The “article ideas” function was broad – too broad to be of any use. At most, its output could serve as a basis for researching specific topics and turning them into usable pitches or ideas. That may not be a bad thing: it offers a guide for an angle rather than a story in itself, which narrows the chances of plagiarism.

Sudowrite’s most controversial product is still Story Engine, which helps to create plots – as opposed to merely giving feedback or criticism, as writing groups or beta readers tend to do. Given the level of detail required as input for a coherent result, and the constant steering by the user, I found myself repeatedly thinking: why not just write the story yourself? Then again, other writers have mentioned writing an entire novella over the course of a weekend with the Engine, which would certainly be a difficult task for most.

So far, the AI model is clearly a work in progress.

Ethics, consent, plagiarism

But what is that progress based on?

Back in 2022, when Dall-E exploded onto the scene, there was an immediate outcry from artists. The backlash originated from the fact that AI models are trained with the help of existing data – which, so far, has been produced by human beings. So if an AI is mixing the styles of Van Gogh and Picasso, it can only have been trained to do so by being fed examples of their art. But did the companies behind these AI tools have permission to use their art in this specific manner?

At the moment, AI does not create – it only re-creates. So every piece of art generated by an AI is based on someone else’s creation – someone who may not have consented to this use of their art. Every piece of text or image ever posted to the internet has become fair game – works by actual human artists, and this very article, included.

So it is for Sudowrite. In his tweet, Yu mentioned that Sudowrite was the result of a collaboration with “hundreds of writers”. But with no list of writers forthcoming, it’s unclear who they were and how they contributed. Did they directly consent to their pieces being fed into the AI model? Or was their writing pulled out of the internet’s depths without their knowledge?

Sudowrite’s own website says that the underlying large language models are trained on the entire “crawl-able internet” from 2011 to 2019, representing “tens of thousands of books” – which may obviously include copyrighted material.

OpenAI has, after some pressure, revealed the sources used to train its GPT models – though at one point the company’s CEO, Sam Altman, threatened to pull out of the EU after the bloc drafted rules requiring companies to disclose the sources of their AI training datasets. (He later retracted the statement.)

On the other hand, when one uses Sudowrite, one of the most prominent warnings concerns plagiarism. The website’s landing page says that it functions on the principle of “plagiarism in, plagiarism out”: Sudowrite generates responses based on its dataset, so if the dataset contains plagiarised material, the responses will reproduce that plagiarised material without any real safeguards. But as far as I could tell, the AI did not appear to be running a plagiarism check in real time.

To verify this, I changed some terms and put in a sentence expressing an idea that was plainly not mine and had been used before. I wrote about an evil wizard named Tom stepping out into a “crooked street”. As most human beings will spot, the idea derives from the Harry Potter novels – but the AI did not ping, warn or otherwise notify me that it had spotted this attempt to pass off someone else’s published work as one’s own. This places the onus on the user to not cheat, and on whoever receives the user’s material to check for plagiarism. So while the website states that plagiarism is against its “Terms of Service”, it’s unclear how this will be enforced.
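
For what it’s worth, a crude version of such a check is easy to build. The sketch below – my own, not a Sudowrite feature – flags overlapping word sequences between a generated passage and a source text the user supplies.

```python
# An illustrative n-gram overlap check, written for this article -- not a
# Sudowrite feature. It only compares against source texts you supply, so
# it is a sanity check, not a real plagiarism service.
def ngrams(text: str, n: int = 5) -> set[str]:
    """All n-word sequences in the text, lowercased."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(candidate: str, source: str, n: int = 5) -> float:
    """Fraction of the candidate's n-grams that also appear in the source."""
    cand = ngrams(candidate, n)
    return len(cand & ngrams(source, n)) / len(cand) if cand else 0.0

# A score near 1.0 means the candidate lifts phrasing wholesale.
print(overlap_score("the evil wizard stepped out into the crooked street",
                    "he stepped out into the crooked street at midnight"))
```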

The flip side is the use of AI to generate writing for which an individual cannot rightfully take credit because they did not write it – essays, for example, or novels.

“It’s very difficult to detect AI-generated text,” said publisher Arpita Das, founder of Yoda Press and a visiting professor at Ashoka University. “It’s not like having a plagiarism checker, and youngsters are already adopting it. It’s better to teach them how to use it ethically and correctly rather than attempt to eliminate the usage entirely because generative AI technology is here to stay.”

“I would not accept prose generated by AI,” said literary agent Preeti Gill. “To my mind, it’s lacking in emotion, real creativity, real feelings and new ideas which only human writers can actually create.”

There is also the deeper issue of copyright. The team behind Sudowrite has said that the company does not claim copyright “over anything you put into Sudowrite, nor anything Sudowrite generates for you”. But if the AI models it uses were trained by crawling existing writers’ work, is the work they produce their own? Do we even have provisions to copyright such work? And if so, should the copyright go to the human who helped generate it, to the company that licensed the AI model, or to the company that created it?

“(This) could mean that ChatGPT output, and the output of tools like Sudowrite that tap ChatGPT as the underlying technology, is not copyrightable – obviously a big concern for anyone using the platform to create proprietary content,” said writer David Zweifler, who spoke to me over email.

Sudowrite’s privacy policy, meanwhile, says that user data will not be shared with a third party – or used to train either Sudowrite’s or OpenAI’s models. But large language models – especially those used for generative AI – work by learning from input data. Usually, they then apply that learning to all users’ commands, even after the specific content is deleted from the database. AI does not forget. How, then, does Sudowrite intend to enforce its promise not to use its users’ writing or ideas to train the underlying models?

Do we need AI?

One can see how AI tools such as Sudowrite might be useful for a frustrated content writer having to churn through hundreds of pages a day – they can suggest new ideas, summarise things quickly, extract quote-worthy quips from longer excerpts and, in general, do much of the rote work of content writing. It should be noted that no business can use fully AI-generated content yet – at least, none that cares about accuracy – because it requires extensive fact-checking and editing to be usable. And AI learns: the longer one uses it, the more refined the language becomes; the more people use it, the larger the dataset it has to draw from to generate more coherent responses.

It is also entirely possible that a content producer’s role will now evolve from someone who researches and writes into someone who scrapes data to provide AI with a dataset it can then use to generate summaries, social media posts, quotes and so on. But not everyone agrees that this is a good thing.

“There’s no need for this because it’s not solving a problem for anyone,” said sci-fi and fantasy author and screenwriter Samit Basu. “There is no shortage – rather, an oversupply of high-quality writing.” What he is referring to, of course, is the fact that this is not solving a problem that the writers have – it’s solving a problem that corporations have.

There are many creative writing jobs that are not in the limelight, such as content writing or copywriting – a livelihood for some, but an expendable cost for companies. Hiring human beings to churn out that copy is neither fast enough nor cheap enough – so to keep up with the demands of endless growth, companies turn to machines. “One can make the argument that AI is not supposed to replace writers, but only individuals who are secure in their careers can say that,” said Basu. “For the rest of us, those in more precarious positions, those in entry-level jobs in fields like journalism – they will absolutely be replaced.”

It’s already happening. Buzzfeed has famously started automating its quizzes with AI. NPR recently aired a podcast episode generated entirely with AI. Google is testing a tool that writes newspaper articles. What is coming, in short, is a massive transition period of job losses and career changes as people grapple with this new reality. “Other technologies have also been sold earlier as democratising but, ultimately, the power equations have not changed,” said Basu. “Something like YouTube started only 20 years ago and is already difficult without backing.”

It’s going to be especially tough in a country like India, where writers do not hold much bargaining power. There is no union like the WGA in the US – currently on strike, in part, to improve writers’ pay – and writers accept terrible contracts in the film and publishing industries all the time because they are easily replaceable; very few can get the terms they want unless they’re already very well-known. There’s a lot to be written about the sacredness of writing – but not all writing is sacred. There’s nothing sacred about writing blog posts about escape rooms and Astroturf, as I myself have done. What that work did do, however, was keep me afloat – at absolutely terrible rates that meant back-breaking work, but afloat nonetheless. Now, that will no longer be an option.

“AI technology such as Sudowrite will disenfranchise creatives further and inequality will worsen by creating massive job losses,” Basu said. “It will not help writers – only corporations will get richer, employing it as a cost-saving measure, and it will make writing efficient only for them, creating more material that no one is going to read.”

What about fiction and art?

“There are precious few paid jobs for fiction writers as it is, and, by reducing the number of paid fiction writing gigs even further, I think AI-generated writing has the potential to make high-quality literature the sole domain of the wealthy hobbyist,” Zweifler said. He also shared a concern echoed by many other writers: “The quality of AI-written stories doesn’t have to rival the quality of human writing to make human-written fiction not viable. If the quantity of bad or mediocre fiction increases exponentially, finding good fiction will become so difficult it will effectively destroy the art form.”

Basu does not think the supplemental tools – generating plot ideas, names of characters and so on, ostensibly to help get around writer’s block – are of much use either, and Zweifler agreed. “I would not use it,” said Basu. “Writers would know how to do these things. I do not know a single writer who loves writing who would want to use AI to write their book instead of going to the trouble of writing it themselves.”

“It’s important to clarify that beta readers, mentors, or editors can help with names, plot points, plot devices, dialogue, and descriptions, but they would not normally write them for the writer, as Sudowrite does,” said Zweifler, referring specifically to Sudowrite’s Story Engine. “Helping a writer is one thing. Doing the writing for the writer is replacing the need for a writer.”

Publisher Das added, “I’m constantly talking to writers about the excitement of writing on their own. Do you want a robot to do this creative or academic or critical writing when you want to go through the process of crafting words – the whole appeal of being a writer?”

There are writers who have used AI, though. On its “use case” page, Sudowrite links to a poem by Darusha Wehm, published in the literary magazine Strange Horizons in 2021. Sudowrite’s page says that it was written “in collaboration” with their product. (Wehm declined to be interviewed for this article, and Strange Horizons did not respond to requests for comment.)

Regulating AI

“What we need, urgently, are laws framed keeping the use of AI in mind,” said machine-learning specialist Anirban Saha. “We need to frame laws that legislate the use of AI for copyright issues and disinformation – the two most immediate dangers from the use of AI as a tool.” Some jurisdictions have already foreseen the danger – the EU has passed regulations around the usage of AI, something that OpenAI allegedly lobbied against. One of the major provisions is that the bloc now requires generative AI developers to submit their systems for legal review before releasing them commercially. The UK is holding a global summit on AI safety later this year.

In the absence of immediate, coordinated policy or legislation, private players have stepped in. One of them is Spawning, which allows users who sign up to opt out of their content being used to train AI models. The technology generates a text file that one can add to their website’s code to stop training models from using its content. Shutterstock and Artstation have already adopted it, and Stability’s CEO Emad Mostaque has tweeted about it. “Most people assume that the data problem with AI is intractable, and we disagree,” said co-founder Mathew Dryhurst over email. “We think that long-term, consenting data interactions are good for AI and the people it is trained on.”
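
The mechanism resembles robots.txt, the long-standing file that tells search crawlers what not to index. Below is a sketch of the idea; the directive names are my assumption, modelled on robots.txt, and Spawning’s actual ai.txt specification may differ.

```python
# A sketch of the opt-out mechanism: a robots.txt-style text file served
# from a site's root, which AI training crawlers are meant to honour.
# The directives below are illustrative assumptions; Spawning's actual
# ai.txt specification may differ.
OPT_OUT = """\
# ai.txt -- this site's content may not be used to train AI models
User-Agent: *
Disallow: *
"""

# Placing the file at the web root is all a site owner would need to do.
# Compliance, as with robots.txt, is voluntary on the crawler's side.
with open("ai.txt", "w", encoding="utf-8") as f:
    f.write(OPT_OUT)
```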

The tool is not perfect. Creatives of all kinds rely on social media websites to market their work – and they cannot edit those platforms’ source code to include this file. And for many, of course, the damage is done and the measure comes too late: companies such as OpenAI have already used the data that was out there. For these users, there may be hope in an alternate revenue model that allows them to retain some control over their creative output, such as the one created by the Indian AI music company Beatoven.

Beatoven has hired musicians and artists for its AI product, which allows users to create their own tunes based on a database of music produced by those musicians. “There are two main problems with using AI for creative purposes – attribution, and revenue,” said co-founder Mansoor Rahimat Khan, who comes from a family of sitar players. “Artists are not credited for their work and they lose revenue generated from their work. We aim to get around this by using data only from this database of artists who have opted to work with us.”

Beatoven also offers to help users sort out copyright issues. “If you are building an LLM (for creative fields), it is essential to have the consent of artists,” said Khan. “To behave the way that OpenAI has done is unethical; without providing people an option to opt in first and then ask for regulation is hypocrisy.” Khan, like Saha, believes that regulation is urgently required. But he also emphasises that alternative business models need to be developed to generate revenue for creatives. “Current models such as streaming only allow the top 1 per cent to make money,” he says. “AI needs to be utilised in such a way that it can change this.” With that in mind, Beatoven has created an artists’ fund to encourage musicians to join its catalogue.

Spawning has developed a tool for musicians as well – co-founder and musician Holly Herndon has chosen to copyright her own voice and turn it into an AI model. Those who use her voice will be able to save the resulting piece of music as an NFT, which, if sold, will allocate a percentage of the proceeds to her. It’s a way to get around something that Basu mentioned: “Various creators are facing contracts where people are buying rights to their digital images, voices and so on.”

Another option is analytical tools that help detect AI-generated text. These could be especially useful for outlets such as literary magazines that refuse “collaborations” with AIs, asserting that such work is plagiarism and that it severely undermines the human creative effort. One such tool is GPTZero, which detects AI-generated text by analysing word usage. OpenAI has also unveiled a tool that, in theory, can detect AI-generated text, but at the moment it catches only 26 per cent of it.
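
The statistical intuition behind detectors like GPTZero is that language models tend to produce text that a language model, in turn, finds highly predictable. The sketch below illustrates that idea with an openly available model and an invented threshold – real detectors are more sophisticated, and still unreliable.

```python
# A rough sketch of the idea behind perplexity-based detectors: text that
# a language model finds very predictable (low perplexity) is more likely
# to be machine-generated. The threshold below is invented for
# illustration; real detectors use more signals and are still unreliable.
import math

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token surprise of the model when reading the text."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per token
    return math.exp(loss.item())

sample = "The sun rose over the hills and the birds began to sing."
score = perplexity(sample)
print("possibly AI-generated" if score < 40 else "likely human", round(score, 1))
```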

Yet another option is something that is allegedly also being developed by OpenAI – a watermark or signature built into the text itself that identifies it as AI-generated. Researchers at the University of Maryland have developed a tool of this kind and made it freely available. The problem, of course, is that this only works if all generative AI embeds such a signature in the first place.
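
The Maryland approach is based on a “green list” scheme: at each step, generation is nudged towards a pseudorandom subset of the vocabulary, and a detector later counts how often that subset was hit. The toy sketch below compresses the idea drastically – it works on words rather than model tokens, and all the specifics are simplifications of the published method.

```python
# A toy sketch of green-list watermark *detection*. Real implementations
# (such as the University of Maryland work) operate on model tokens, seed
# the green list from the previous token, and use a proper statistical
# test; everything here is heavily simplified for illustration.
import hashlib

GREEN_FRACTION = 0.5  # share of the vocabulary that is "green" at each step

def is_green(prev_word: str, word: str) -> bool:
    """Pseudorandomly assign `word` to the green list, seeded by `prev_word`."""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION

def green_rate(text: str) -> float:
    """Share of adjacent word pairs that land on the green list."""
    words = text.lower().split()
    pairs = list(zip(words, words[1:]))
    return sum(is_green(a, b) for a, b in pairs) / len(pairs) if pairs else 0.0

# Unwatermarked text should hover near GREEN_FRACTION; text generated with
# the green-list bias should score significantly above it.
print(green_rate("an ordinary sentence with no watermark applied to it"))
```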

These alternate consent and revenue models, should they be widely adopted, could give artists, including writers, some control over their creative work, and go some way towards preventing the devastating consequences heralded by the operating methods of OpenAI and Stability. But the doubt remains – why should companies choose to adopt them when the alternative is so much cheaper? And how fast can we develop such measures, given how rapidly the technology is evolving?

And the larger questions remain unanswered – how is AI transforming the very idea of the human creative process, and how can we live in a world in which we can no longer trust what we read or hear or see? An immediate and major concern is the rampant dissemination of disinformation, especially in India. There have already been numerous instances of students using ChatGPT to cheat on papers, and of lawyers using ChatGPT to cite cases that don’t exist. In a bizarre turn of events, students at a Texas university were accused of using ChatGPT to write their papers because a professor ran the papers through ChatGPT and asked the AI whether it had generated them – to which the AI responded that yes, it had. Except that this was completely false: multiple studies have found that human beings can differentiate between AI writing and human writing at rates no better than random chance.

This lack of education on how to use text-based AI tools such as ChatGPT can be resolved through greater public awareness and use – but the longer-term consequences may be more deliberate, and more damaging. Anyone wishing to disseminate propaganda will no longer need human minds and hands to do so, and without checks on the content, disinformation could spread faster than at any point in human history, with potentially devastating consequences. India goes to the polls next year, and it already has one of the highest rates of disinformation in the world – troll armies, fake news and more, spread through untraceable, encrypted channels such as WhatsApp. What happens when generative AI ramps up the rate of production exponentially?

On May 30, 2023, just a fortnight after Story Engine was released, leading AI experts – including Sam Altman and Geoffrey Hinton, formerly Google’s AI lead – signed a statement declaring that “mitigating the risk of extinction from AI should be a global priority.” “It’s like what David Bowie said about the internet,” said Das. “We’re on the cusp of something exhilarating and terrifying.”

Interestingly, I could not find a way to deactivate my account (there’s a logout option, but no deactivate-account option that I could locate) or remove my email from the Sudowrite website (although, according to the FAQ, you can cancel a paid subscription at any time). It seems a good metaphor for the rise of AI – which, I have no doubt, Sudowrite itself will tell me soon.