
Ad Astra Artificial Intelligence Policy
No GenAI slop, no plagiarism-machine botshit. Here's why.

To succeed in this new world, we need to be more creative, more curious, and embrace lifelong learning, because machines can't replicate that (yet).
– Arianna Huffington

SF is the literature of change, and part of what's changing in our world the most today is what corporations tout as "artificial intelligence." However, AI today is not what SF has long speculated about, and it's not the magic pill so many promote as the path toward creative success. Anyone looking to become skilled in the art and craft of speculative fiction needs to stay away from generative AI (genAI). We at Ad Astra do, and we expect attendees in (and teachers of) our courses and workshops to avoid it, too.

You might be thinking, "Isn't the Ad Astra Institute all about speculative fiction? What’s more SFnal than AI?"

Yeah, well... let's take a closer look.

What is AI?

How AI Killed NaNoWriMo: (Opposing plagiarism machines isn't ableist)

Our policy on using AI

More AI resources

What is AI?

First of all, most of what the media discusses as "AI" - and what businesses push using that name - is far from it. The "positronic brain" of R Daneel Olivaw (from Asimov's robot stories and novels) gives this "humaniform" robot human-like intelligence and the capacity to think, reason, imagine, and do everything else humans do - only better. He's governed by Asimov's "Laws of Robotics," which also give these fictional robots ethics exceeding those of many humans. HAL 9000 (of 2001: A Space Odyssey) likewise possesses greater-than-human intelligence and the ability to think and possibly even feel, but it was programmed not so much with ethical rules as mission-related ones (you'll have to watch if you don't know 'em; no spoilers). Neuromancer, Wintermute, and the other artificial intelligences and digitized human minds in Gibson's touchstone novel Neuromancer are also superintelligent, thinking machines. The movie A.I. Artificial Intelligence delves into the potential emotional nature of AI. Iain M Banks' "Minds" from his Culture novels possess vast intellectual and other powers. Especially in recent years, SF offers many, many interesting and moving examples of machines that can think, create, feel, and do all the other things humans can.

But they do not exist in our world. (At least, not yet.)

Non-generative AI - often referred to as "machine learning" or "neural networks" - is, per NASA's definition (following the National Defense Authorization Act of 2019), technology that enables computer systems to simulate or mimic human learning, comprehension, problem-solving, decision-making, and autonomy in order to perform tasks too complex, fast-paced, or dangerous for humans.

Note that none of this involves creative, artistic, or otherwise imaginative or innovative cognition - or thinking at all. True(-ish) but "weak" AI systems today include weather-analysis algorithms that pore over vast amounts of data to identify patterns that help humans forecast upcoming weather; military drones that evade defenses and attack enemies without human intervention, processing geographical maps and imagery of threats and targets in real time on the device itself rather than relying on remote processing ("edge AI"); self-driving cars; pharmaceutical algorithms that model likely biological interactions with various compounds; medical systems that process patient symptoms and compare them against vast databases of known pathologies; and much more. These are all various forms of machine learning, neural networks, and so on - (misleadingly) classified in computer science as "artificial intelligence" - and are far closer to the SFnal concept of Strong AI than the "generative AI" (genAI) dominating today's corporate landscape.

Strong AI - Artificial General Intelligence (AGI) or, beyond it, Artificial Superintelligence (ASI) - would be something like a simulated human mind. The "general intelligence" part of the term refers to the human mind's ability to independently adapt and change based on our environment and input, so we can solve math problems as readily as create unique art. While weak AI relies on humans to define its learning algorithms and provide relevant training data, AGI would not require human assistance after its growth phase. In theory, AGI could develop human-like thought capacity or even consciousness rather than just simulating it (as with today's weak-AI chatbots and LLMs). And ASI is just what it sounds like - AGI whose abilities far surpass those of the human mind. This sort of AI has long lived in the realm of spec-fic, and we're not there... yet.

Though many believe it is coming, perhaps as soon as the 2040s. For more, see Vernor Vinge's seminal essay on the topic, "The Coming Technological Singularity."

However, genAI in today's world does nothing more than spit out statistically likely and expected output in response to the input it is given. That's why when someone asks ChatGPT, "Is 'Handyman' a palindrome?" it's likely to respond, "No, it's not" - but if you follow up with, "Are you sure? I think it is," it's likely to respond, "Actually it is a palindrome! Handyman is spelled the same backward as it is forward." This is because its prime directive is to sound as if it's providing a correct answer. Despite having been trained on essentially all the text humankind has produced, ChatGPT and other LLMs or genAI models don't actually understand what a palindrome is. They understand nothing, in the human sense of the term. The only thing these algorithms do is determine the most likely word to follow the words they've just used and then barf it into the series of words they offer. That's how they produce "stories" and other text. Visual genAI does something similar, but uses existing imagery to whip up a composite of visual cues that seem (to its model) like the kind of thing a human is requesting in their prompt.
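To make the "next most-likely word" idea concrete, here's a deliberately tiny sketch: a hypothetical bigram predictor built from a toy corpus. (Real LLMs use neural networks trained on billions of words, not a lookup table like this - the point is only that generation is statistics, not understanding.)

```python
from collections import Counter, defaultdict

# Toy corpus - every "fact" this model can ever know.
corpus = "the cat sat on the mat and the cat sat on the hat".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the word that most often followed `word` in the corpus."""
    return following[word].most_common(1)[0][0]

# "Generate" text by repeatedly emitting the statistically likeliest next word.
word, output = "the", ["the"]
for _ in range(5):
    word = most_likely_next(word)
    output.append(word)

print(" ".join(output))  # → "the cat sat on the cat"
```

Note that the output is grammatical-looking yet meaningless, and the model happily loops - it has no idea what a cat is, only which words tend to follow which.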

GenAI algorithms can't even reliably do math - the most basic of all machine operations. As the joke goes: "Look what you did - you took a perfectly good computer and made it worse!"

LLMs, genAI, chatbots, and similar plagiarism machines are not artificial intelligence by any real definition of the term; rather, they're just lazily (or deceptively) called "AI" because they grew out of the computer-science field that bears that name. Corporations likely glommed onto the term because of the popularity of robots in SF, which the public thinks of as cool, exciting, super-intelligent - or at least super-capable. This, despite these kinds of algorithms being simply unable to produce anything of value, because they do not understand, know, or create anything. (For more on this, check out "Difference Between Machine Learning and Artificial Intelligence," by Anusha Sharma.)

GenAI as it exists today is really just a marketing tool for businesses that want to eliminate human labor and increase profits, and for profiteers chasing the vast capital available from investors seeking quick, outsized returns.

How AI Killed NaNoWriMo:
(Opposing plagiarism machines isn't ableist)

In fall of 2024, the amateur writing-support organization NaNoWriMo destroyed its long-running program by coming out with the policy screenshotted here. Not only did they actively condone the use of genAI by their participants, they went so far as to frame criticism of AI as classist and ableist. This was a dangerous silencing tactic that distracted from the real problems of writers using genAI (and prompted us to launch Ad Astra's Writing Solidarity Month, now an ongoing set of activities offered through our Discord channel).

For the “anyone can write” organization that offered support and encouragement to both beginners and experienced writers in producing something that gives them the joy of creation in a community setting, this was a strange stance. What’s the point of using genAI to barf out word-salad stolen from people who actually took the time to consider their words in particular combinations? If you’re not actually writing something yourself, what’s the point of participating in the “doesn’t matter if you’re just doing this for your own pleasure or as practice to get better” exercise?

Saying, “We support using LLMs to do this thing that’s supposed to help writers and would-be writers, and if you don’t think it’s okay you’re ableist” is an even weirder and, frankly, unsettling attitude, especially as an official position from a writing organization.

We at Ad Astra are all for using AI to do actual work that humans need help with and don't want to or can't do: searching vast databases that would take us mortals years to comb through, running spellchecks and offering grammar suggestions, doing tedious tasks to save hours of suffering, conducting virtual experiments so we don't need to set off nukes or test on animals, and so on. Like most SF writers and people interested in what's to come, we're both excited and wary about the emergence of true AI as described above. But what marketeers are pushing under that term now is not actual AI, hence the use on this page of scare quotes around the corporate-appropriated acronym.

Creating art is a human activity as old as our species itself (and likely older), something that brings pleasure to those who create it as much as to those who receive it, so why use a machine that removes the joy and satisfaction of actually creating stuff?

One day (soon, we hope), we’ll have writing tools that benefit those who need them to fulfill creative desires without stealing (called "training on") other people's work. For example, I’ve (Chris here) long been in search of voice-to-text software that doesn’t require months to train, doesn’t require a mouse and keyboard to constantly correct errors and add punctuation, and uses a dictionary large enough to support writers who use technical language and foreign words - that is, allows us to easily add made-up SFnal or fantastic words.

This would save everyone’s wrists while vastly improving the lives of disabled people who wish to communicate using text. GenAI does nothing to help disabled people create their own new work; rather, it turns them into accomplices in the stolen-art world, and offers only the most sub-average assemblages of words that plagiarism machines can generate. One day soon, we hope true AI will help us create rather than spew degenerated content assembled from the effort of human creatives.

For those who need assistance in creating art due to a physical or mental disability, genAI is not your friend. Today there is some (actual but weak) AI-driven support, including programs that help with text-to-speech and speech-to-text, predictive text, social-skills training, emotion-recognition tools, personalized-learning algorithms, virtual-reality environments, scheduling assistants, and so on. These are all excellent and useful applications of both AI and "AI," and they don't steal from creators. But these programs don't actually create anything, and expecting algorithms to do the writing (or painting, or whatever) for you will only lead to disappointment. Maybe one day we'll be able to use Strong AI as a writing partner, but for now, using genAI is really just enabling an expensive, environmentally harmful tool that cheats and steals from others, and generates nothing more than lowest-common-denominator AI-slop botshit.

For those who cannot afford to take classes or workshops on the art they wish to create (which I assume is the root of the "classist" argument), check out workshops that offer scholarships (like those we offer through the Ad Astra Institute), dive into the mountain of free writing resources and writing tips you'll find online (including on our blogs and on this website), meet up with local writers to help one another out via solidarity groups (as with our alum Discord), and so forth.

For those who hate the process of creating art, some life advice: Do something else with your time that you actually enjoy doing, which brings you pleasure or satisfaction. Why tromp around in creatives’ spaces and steal their work to assemble shabby content you didn’t want to actually make in the first place? You can do a million other things to satisfy the need to make something - pottery, woodwork, gardening, bicycling, public speaking, building telescopes, and so on - and surely one of those will spark joy without perpetuating the harms that come with relying on genAI to do the work for you.

For those fascinated with the creative process but who feel intimidated by getting started, watch a documentary about writing (or painting or whatever art you want to do) instead, or take a writing workshop, or do something other than support corporate encroachment on artists' spaces - either accept that it’s not your bag, or put in the time and effort to do it yourself. Y'know, the things NaNoWriMo used to be all about supporting.

This nonsense of using LLMs to “create art” is nothing but appropriation of human creative labor, and gives those who use it - whether they're able-bodied or not - nothing but diminished cognition and atrophied creative skills.

Don't let anyone shame you into being okay with creative theft.

Our Policy

Even so, recent improvements in genAI mean it can now produce readable stories and decent-looking imagery with little human input. Given these developments, we decided we needed a policy on machine-generated content.

Here it is:

What is currently touted as "AI" does not think, feel, or create. Even as the internet becomes saturated with genAI slop (some now estimate that more than half of what you see online is machine-generated), it pales in comparison to human-created material:

Readers can feel when creative work shares an authentic, genuine, human experience - when it's inspired by human imagination and thought. And if, indeed, we're headed toward a "dead internet" where much of what we encounter online is barfed out by plagiarism machines, consider: the data those LLMs trained on came from internet content of widely varying quality, and the more of that content that is itself genAI slop, the faster their output quality falls off a cliff. The same is true of creative work: the more LLMs train on botshit, the worse the botshit gets. So as the average quality of online content disintegrates, creatives - true artists who spend their lives honing their craft - stand out more and more. Even at its best, botshit cannot exceed the average quality of what its generator-algorithm has consumed, so the average work of an average-skilled human creator is better than even the best genAI slop.
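The feedback loop described above can be sketched with invented numbers (a hypothetical toy, not a model of any real system): assume each model generation produces output slightly below the average quality of its training data, and that this output then displaces half of the next generation's training set.

```python
# Toy simulation of the slop feedback loop. All numbers are invented
# for illustration only.
human_quality = 1.0                 # arbitrary score for human-made work
corpus = [human_quality] * 100      # training set starts fully human-made

averages = []
for generation in range(5):
    avg = sum(corpus) / len(corpus)
    model_output = avg * 0.9                     # output trails the training average
    corpus = corpus[:50] + [model_output] * 50   # slop displaces half the corpus
    averages.append(sum(corpus) / len(corpus))

print(averages)  # average training quality falls every generation
```

Under these assumptions the average quality declines monotonically, generation over generation - which is the mechanism, in miniature, behind the "dead internet" worry.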

The image to the right is the final frame of a brilliant comic on this subject by The Oatmeal (by human author and artist Matthew Inman) entitled, "Let's talk about AI art."

(Go read it!)

But at the root of things: If you want to become a better writer, to develop your talents and hone your skills, why would you intentionally limit your own creative development by using a plagiarism machine to barf out work under your name? The only way to improve your art is to study the kind of thing you wish to create (like the LLMs, only without the plagiarism aspect), learn the tips and techniques to improve your skills, and practice, practice, practice.

Finally, on a more ominous note, Microsoft - ironically, one of the prime pushers of genAI - recently released (February 2025) results from a study (pdf) finding that relying on genAI can leave human cognition "atrophied and unprepared."

Using AI literally makes people dumb and uncreative. What further convincing do you need to stay away from genAI in your creative endeavors?

     - Chris McKitterick


As a community of creatives, educators, and scholars, Ad Astra strives to make the future a better place for humans. And our AI friends, once such exist.

Some AI resources

If you're aware of others we should add to this list, let us know!

Some articles that delve deeper into the challenges of using today's "AI" -

We hope this helps!

Connect with Ad Astra

Ad Astra on Facebook Ad Astra on Tumblr Ad Astra on Bluesky Ad Astra on Twitter/X Ad Astra blog Ad Astra YouTube channel AboutSF YouTube channel

We believe strongly in the free sharing of information, so you'll find a lot of content - including course syllabi and many materials from our classes - on this and related sites and social networks as educational outreach. Feel free to use this content for independent study, or to adapt it for your own educational and nonprofit purposes; just please credit us and link back to this website. We'd also love to hear from you if you used our materials!

This site is associated with the Science Fiction and Fantasy Writers of America (SFWA), the Science Fiction Research Association (SFRA), AboutSF, and other organizations, and its contents are copyright 1992-present Christopher McKitterick except where noted, and licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License: Feel free to use and adapt for non-profit purposes, with attribution. For publication or profit purposes, please contact McKitterick or other creators as noted.

This site does not use cookies and is free from tracking. We do not use or condone the use of machine-generated text or images for educational or creative purposes (except as satire), and do not accept student or teacher work manufactured by algorithms.

Creative Commons License
Works on this site are licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

updated 10/13/2025