National Public Radio correspondent Mary Louise Kelly recently interviewed Neil Clarke, senior editor of the speculative fiction journal Clarkesworld, about his magazine’s having to cut off submissions due to an inundation of works composed by artificial intelligence. Irony aside, the notion of bots creating science fiction for a magazine specializing in science fiction, or creating anything formerly the sole purview of humans, sounds a bit scary. A writers’ conference attended by Terminators droning, “Take me to your readers,” scary.
A week ago, Tom Rutledge, a fellow Social Studies teacher and our department chair, showed me an online bot for writing lesson plans. Simply entering keyword search parameters like “George Washington,” “The Whiskey Rebellion,” and “Station Learning” triggered an algorithmic internet search for related text, which the program wove into a coherent lesson plan presented in a narrative style. What I saw was, admittedly, not bad and would certainly serve in a pinch. So, how might this kind of tech impact the world of creative writing?
Really, lesson plans and creative writing are apples and oranges. Formal lesson plans, with their essential understandings, behavioral objectives, formative assessments, indicators for observation, openings, procedures, and closings, are complicated but pretty mechanical. Lesson plans do not score highly in the voice, literary device, nuance, and lyricism categories.
One of my students recently submitted a well-written project entitled “Washington’s Report Card.” I wrote in a margin, “You have an excellent voice that really comes through here.” This comment came right on the heels of my conversation with my colleague, and I paused to consider: can a bot develop voice? If asked, can I really explain to my student what I meant by voice? I know I can’t quantify voice in any meaningful way. If I can’t fully explain or quantify voice, how can I expect a binary entity whose world is a never-ending string of ones and zeros to develop a unique voice?
Publisher Neil Clarke shared in his interview that publishers have ways of determining whether or not artificial intelligence generated a submission, but he declined to reveal trade secrets, for obvious reasons. He did share that the A.I. submissions were really bad, much worse than the vast majority of ordinary submissions, and had no hope of publication. The biggest problem artificially generated writing poses to the publishing world stems from the work-hours needed to cull the organic submissions from the rest before the review process can even begin.
Perhaps there is a niche for artificial intelligence in the writing or academic worlds, but should this make any of us concerned? Am I concerned? Wary? Yes. Panicked? No. At least, not as long as we humans have a voice.