Featured

But, but, BUT I like using AI. How do I be… myself?

Ah, the classic dilemma of “I like using AI because it gives me a good sounding board. But how do I ensure I don’t lose my voice, perspective or even my natural style?”

Good question. Now let’s get into the nitty-gritty, but a few disclaimers first. This is for both art and writing, though it leans more toward the writing side.

DISCLAIMER:

– This post is only intended to discuss the use of AI in an ethical fashion, not as a replacement for human voice and perspective.

– Unless you’re putting effort into your work, you’re not going to feel the satisfaction of the result. AI as a sounding board and something you can take base inspiration from is amazing, just… don’t go around claiming that the base is yours.

– AI is great for generating ideas or helping you get out of a creative rut, but it’s still not you. It doesn’t have your quirks, your experiences, or your unique lens. That’s what makes your work authentic—and authenticity is what truly resonates with your audience.

– “If you’re not bothered to write it, I cannot be bothered to read it.” Shoutout to SSears90 for coming in with that statement from Twitter. But it’s very much true. Sometimes I read news articles or blogs and they sound so… bland, they make me want to quit immediately. Don’t do that to your project. Don’t do that to your work that’s “babe” and dear to you. You catch my drift?

AI in writing:

Contrary to your favorite stance of “Just stop using AI in your work, you blockhead”, let me give you a more nuanced perspective of the matter.

You have an idea but you aren’t able to push it out, and you can’t find the right words for the life of you. So frustrating, right?

So, instead of waiting ages upon ages for the idea to evolve, you decide you want to do it now, and you find yourself sitting (or standing) in front of your device, ChatGPT, Gemini or Claude open in front of you, innocently inviting you into its lure of “What can I help you with today?” As if that alone wasn’t enough to make you draw your sword and fight it for the life of you, you say, “Fine. What’s it going to hurt?”

With the prompt entered, idea tossed on the keyboard and the enter button smashed harder than your head with a hammer, the journey begins.

Lo and behold, you have paragraph after paragraph of writing that looks so good, so formatted and polished, even Shakespeare might have to do a double take. Or in the case of a rogue AI, L’Étranger might have to do a triple take, because the AI wrote things like “surprisingly slow to act” and “hissed whispered,” with a side of a ketchup-drenched monocle and “crystal spatulas”. Really, GPTs? This is what you have come to?

Anyways, anyways! You read it once, you read it twice, it looks perfect, it almost looks like what you’d write, minus the over-flowery words like “Spectre bones” for example. Spectre usually refers to a ghost or all things dead but you get me. This is what happens when you let an AI go rogue on your writing!

If you want more fun poked at an AI-human collaboration blender-mix, “The Case of Kite, Ghosts, and Friendship?” might just be up your alley. The idea was initially a collaboration between me and Bard (old Gemini. We miss you, Bard.), but eventually I decided to pull it back from the dead, and I’m working on writing it word by word. There are still going to be awkward phrases and awkward words, but that’s intentional.

One of the ways to make sure you understand what you read (and no, this isn’t AI-sponsored or even brand-sponsored; why do people always assume that?) is Rewordify.com. This site helps you by breaking down larger, blocky words into simplified language, with some words followed by parentheses () to help you know the meaning of the word.

“As custard cascades and whipped cream swirls, Edward the Cafeteria Cadaver materializes with a theatrical groan. His ghostly form, usually draped in ketchup, is now adorned with a sticky bib of whipped cream and a ketchup-tinged monocle perched precariously on his spectral nose.”

Bard AI

As custard flows down and whipped cream circular flows, Edward the (self-serve restaurant) Dead body appears with a dramatic/theater-based (deep, long sound of suffering). His ghostly form, usually draped in ketchup, is now decorated with a sticky bib of whipped cream and a ketchup-colored eyeglasses for only one eye sat (like a bird) dangerously on his (related to ghosts or the colors of the rainbow) nose.

Rewordify.com

As the custard flows down and whipped cream swirls in circles, Edward the cafeteria ghost appears with a dramatic flair. His ghostly form, usually draped in ketchup, is now decorated with a sticky bib of whipped cream and a ketchup-drenched monocle that rested dangerously on his translucent nose.

Human-written [My version]

My version feels human, even when it’s still slightly awkward, but the idea is clearer. There might be other tools, of course, but this is just the best one I’ve found so far. You can share your creations (and experiments) in the comments. And if nothing works out, you can just throw a one-two punch square in the AI’s face (or bits) and write it yourself.

In case all else fails and your hands are itching for that perfect description, go to “Descriptionari.com”; it’ll give you something that suits your tastebuds for sure. It’s a great tool that I’d been using for a good long time, until, of course, AI came and swooped up everything, so I don’t trust that site as much anymore. But it really was cool while it was still written and managed by humans.

There are a lot of resources for grammar, paraphrasing and simplifying your AI-based text so you can create something of your own from its skeletal bones. It’s one way to make sure you’re still putting in effort and still learning in the process.

AI in art:

Now we’ve entered the precarious zone of AI art. I just talked about (or rather source-pasted) a post about AI art and what the enthusiasts say. It also involves a breakdown of why artists are upset. You wouldn’t let a machine pass the captcha test for you, so why are you letting it heckle your creativity?

Look, I get it: it’s easy, you don’t have to spend a dime on it, and you definitely don’t have to scratch your head for hours to get things done. But, but, BUT if you really wanted a community-driven art piece that you could still support and give back to, why not do it the right way?

Here are examples of sites where you can take free illustrations/graphics/photos while supporting the community and the artists behind them:

– Freepik

They have a strong regulation policy for both AI art and non-AI art, and if you try to pass AI art off as human… there are consequences for that, as well.

I haven’t explored the resources below as thoroughly, but they’re still here:

– Pexels

– Pixabay

– Unsplash

You need an Unsplash+ subscription for illustrations, unless you don’t mind an “Unsplash”-labelled illustration. But if you take that, don’t try to roll the brush and go “This was created by me!” Absolutely not.

– Getty Images

Another paid subscription for a lot of the images, but it’s worth it sometimes. You get the idea you require, right there on the page.

– Vector.me 

– Kaboom Pics

– Stocksnap.io

– Vecteezy

– 1001 Free Downloads

To download PSDs, images, and vectors from these websites, you don’t have to spend a single dime. Most of these sites are just like Freepik: community-driven websites where all the users contribute and keep the free stuff flowing. You can get free images as per your requirements and create your designs from there.

However, for some of them, there are a few rules you need to abide by when using the content you download. For example, you might have to credit the authors; that’s it!

If you’re looking for something very specific and niche, you could use Canva’s illustrations, or occasionally, if you’re part of Playbook (Cloud Award winners, anyone?), they have packages or collaborations done by a real, human artist. Canva has become more sketchy with its “Make an Image” feature, with even the paid elements being AI-generated, and some of those examples made me mad, but Playbook’s artist collaborations uplift both the human art and the licensed artist. The license is a Creative Commons accreditation, but mostly? It’s simple to use.

Concluding thoughts? Even when there are many websites for images, illustrations, stock photos or otherwise, try to contribute when you can to keep the free stuff going. Our world needs more humanity and art heroes, not machine-generated cardboard pieces that taste blander than a graham cracker.


About the blog

This is going to be the place where I experiment with sharing “A Human’s Guide to Detecting AI-Generated Toasts” and see how things go. If it works out, this will be the perfect archive for anyone who wants to learn how to detect AI content, written by actual humans. I’m not against the use of AI in ethical ways; I’m against it stealing our creativity and jest. Besides, who else could come up with a joke about being a hallway menace except a human who hasn’t had sleep in three days?

Want to contribute?

“A Human’s Guide to Detecting AI-Generated Toasts” is going to be a guidebook, and while the notion of a guidebook is that it should be boring and technical, the amazing future authors who will contribute to this blog and I are going to make it full of glitter and sass.

This is a guidebook like never before! ✨

All you have to do is read the guidelines here. After you’ve read the guidelines, you can either apply from there, or you can find the link to the Google Form here.

Feedback

I’m always looking for feedback and requests, so you can always comment or fill out this form anonymously here.

Featured

The quintessential AI-Art protest

If you’re a bloke like me and don’t quite know or understand why AI art is largely frowned upon, here’s a comprehensive guide, condensed for you to read and review. This will stay as a checkpoint for anyone who has any confusion about AI-generated art pieces.

This post is not intended to shame anyone who only uses partial elements or less than 30% AI art content and makes their own work. At some point, even traditional or digital artists use something as a scale reference and make something that’s totally new. This is for those who completely use AI art for their creation. Regardless, at the end of the day, to use or not to use is still up to you, and we’re only discussing “why it’s frowned upon,” not “why do you have no braincells left?” Okay, maybe that was harsh—but the point is, you can have your own stance and not agree with the post.

Source: Why Artists Don’t Like AI Art

Most people are capable of producing great things in their imagination—interesting characters, terrifying creatures, breath-taking locations, and captivating scenes. But bringing these creations into reality always required skill, skill required practice, and practice required time and effort. Most people can’t afford that, so they had to accept the fact that they will never see the products of their imagination in reality (unless they would pay someone for that!). 

That was true, until now. The AI art generators like Midjourney, DALLE-2, and Stable Diffusion have allowed non-artists to describe their creation to a computer program, and then see an approximation of what they imagined right there, on screen—in high quality, with sharp details and vibrant colors. But if it’s so great, then why are artists so upset about it? Are they just salty that fewer people will need their services? Maybe, but this issue is far more nuanced than that.

Argument: There’s No Difference Between AI Art and Human Art

“If it looks like art, then why does it matter how it was created? If it can win an art contest, then isn’t that a confirmation that AI is just as capable as a human in terms of creating art?”

Let’s do an experiment. Here’s an image—what do you think about it?


This image can make you feel certain emotions, you can also admire the skill of the photographer who captured and edited this composition. But what if I told you it’s not a photo? What if I told you it’s actually a photo-realistic painting?

Now you can experience a whole new set of emotions. You might have admired the skills of a photographer, but taking a good quality photo and editing it is still easier than creating a photo-realistic artwork. So many things can go wrong, so many mistakes that can break the illusion—and the artist managed to avoid them all! When you’re aware of this, the same image can seem much more amazing to you.

But let me tell you something even more shocking: the artist who painted it has lost her arms in an accident, and she paints… with her feet! So what do you think about this artwork now?

I can go on and on, but you get the point—the look of the artwork, what we actually see, is only a part of its artistic impact. When we’re looking at a pretty image, we can admire its beauty, the colors, the composition—but we can also admire the choices of the artist, their creativity and skill. We know there’s an infinite number of ways to make an artwork look bad, so it’s almost unbelievable when an artist manages to avoid most of them.

AI art doesn’t have any of that. Yes, it’s quite amazing what these programs can do, how they can learn to understand concepts and then use that knowledge to produce something new and beautiful. But it can’t be compared to human skills. A human who can run 40 km/h is more impressive than a car that reaches a speed over 200 km/h—because humans have to work within the limitations that machines are not restricted by.

Yes, I am aware that a lot of “prompt engineers” take a lot of time to tinker with the settings and new iterations to finally get the image they envisioned. The concept itself can also be pretty amazing. But the end result is more like a photo than a drawing/painting/3D sculpture. Why? Because the beauty of all the subjects in your image wasn’t created by you, just like the sunset is not created by the photographer. In the end, I believe AI art should have its own category, just like photography is separated from “manual art”—so that people can admire it without feeling cheated.

Argument: It’s Not Stealing, Humans Also Learn From Each Other!

“AI doesn’t copy parts of someone’s art to create its own art. It simply learns from artists, and then it creates something new from that knowledge—just like artists do. If I can go to a museum and use all these artworks as an inspiration to create my art, then how is that different from AI doing the same?”

A lot of artists feel uncomfortable with the thought that AI used their art to learn. But why? Isn’t it the same as what humans do? Not exactly.

First, learning from other humans without their explicit consent is actually unavoidable. Even if we agreed it’s bad, it’s just physically impossible to regulate it, so we have to tolerate it. This isn’t true for AI—AI has to be explicitly told what to learn from. Limiting its training to a specific set of copyright-free artworks is not a problem at all.

Second, there’s a problem with scale. I’m going to be blunt here, but this seems to be an accurate comparison—allowing birds to defecate on your lawn doesn’t mean you consent to anyone dumping their feces there. Yes, it’s technically the same thing, but the consequences are different, and that affects your consent. Similarly, consenting to other artists learning from you doesn’t have nearly the same consequences as allowing AI to do the same. So the consent to the former shouldn’t imply the consent to the latter.

Third, humans are limited. Our time is limited, our strength is limited. No artist is capable of learning from all the others to do the same as they do, and forever produce better art faster than all of them combined. Even the best artist ever will die one day, leaving space for new ones. So allowing another artist to learn from me is not that risky, all things considered. Can the same be said about AI?

Fourth, it’s actually pretty absurd to claim that if it’s ok for a human to do something, then there’s nothing wrong with a machine doing the same thing. If a machine killed a human in self defense, there would be an outrage—and yet it’s ok for a human to do so. Putting it simply, humans have rights, machines don’t, and ignoring this fact is actually pretty offensive to humans.

Argument: It’s Not Copyright Violation, Because the Artworks Are Not Actually Copied

“You only think that it’s stealing, because you don’t really know how AI art generators work. The artworks are not really copied (otherwise the AI database would have to be much, much larger, and it’s not!). AI simply trains on them and then leaves them alone.”

It’s not common knowledge, but you don’t actually have to physically copy a part of a drawing to infringe on someone’s rights. Even if all the lines you’ve drawn are yours, if the composition made out of them bears a resemblance to someone else’s creation, this is considered a copyright violation. In normal circumstances, it’s very hard to prove that similarity—but good luck with defending your case when you used the very name of the artist in the prompt!

You may not find it reasonable, but think about it: what if you have a distinctive style that people can recognize you by, and then someone copies your style to create a political artwork? And then people start to think you take that political stance yourself? This can be actually damaging to you and your career. That’s why the way of expressing an idea (and not just the exact placement of the pixels in a specific artwork) must be protected under the law.

Argument: It’s Not Stealing, Because These Images Are Freely Available

“When uploading your art to the Internet, you give everyone a chance to do anything they want with it. If you don’t want it to be used, just don’t post it”.

I’m not going to spend much time on this one, because I think it’s pretty ridiculous. Just because you have access to something doesn’t mean you’re allowed to do whatever you want with it. You can’t just get into someone’s car without permission, even if it was parked near you and not locked.

When artists upload their images to online galleries like Instagram or DeviantArt, they give those sites a license to display these images. This license can include other rights as well, but the important thing is that it only applies to the website, not all the viewers that happen to visit it. If Disney allows Instagram to display its drawing of Elsa, it doesn’t mean that now everyone can download that drawing and use it for whatever, and Disney can’t do anything about it.

Argument: AI is a Tool, Not a Replacement

“When digital art was starting to become a thing, traditional artists also rebelled against it. They thought it was cheating, that it wasn’t real art… But those who grasped this new opportunity, today successfully monetize their artistic skills in many fields. AI is just another tool in an artistic repertoire, we just need to evolve to adjust to it!”

A lot of artists use AI as inspiration, or as a preliminary sketch that can later be skillfully adjusted to their style. AI can also quickly generate multiple scenes based on the client’s description, so it can help with artist-client communication during commissions. So AI can be quite useful for art, but there’s one problem with that.

At the moment of writing this article, the AI art generators are still imperfect, and it’s pretty easy to tell AI art from human art. However, AI can develop very quickly, so in a few years this may no longer be true. AI will be faster, more customizable, more efficient. It will not only mix the existing styles, but also create new, amazing ones, before any human can even think of them. And even if you manage to create a new style, AI will only need to see a couple of your artworks to produce an infinite number of “your” artworks, effectively out-competing you from the get-go.

So what will be left for you to do? Creating imaginative prompts? This is something that AI can also learn to do, training on human prompts just like it trained on human art. Best case scenario, artists will be relegated to translating the wishes of the client into the language AI can understand. Worst case scenario, the future AI will be so good at understanding the expectations of the client, that we will not be needed even for that.

I also want you to consider one thing: in the art commission process, the client provides the description, and the artist provides the artwork. No matter how creative and detailed that description is, the client is still not the artist. Even telling the artist what to change and making other suggestions like that doesn’t make them an artist (just an art director at best). Now think about it: when you type a prompt into your AI art generator, who’s the client, and who’s the artist?


Even if the artist you’ve hired is not very competent, and you have to keep telling him what to change for hours, the final result is still not made by you—just directed by you.

Argument: It’s a Normal Progress, Just Like a Combine Harvester

“Technology keeps improving, that’s normal. The washing machine replaced the labor of washing your clothes in a river, and the combine harvester replaced the labor of dozens of farm workers. Should we stop producing washing machines and combine harvesters? If not, then why do you want to stop the progress of AI?”

When doing something, sometimes you care about the goal, sometimes about the process, and sometimes about both. When you care about the goal only, optimizing the process to make it shorter and cheaper is very welcome. But when you care about the process, such an optimization wouldn’t actually be called progress.

Do we need a vehicle that can bring people safely and comfortably to the top of Mount Everest? Do we need a machine that plays a video game for you, so that you can see the end credits as soon as possible? Do we need robots that can play sports very fast, so that you can see the end score within minutes? Do we need AI that watches the movie and summarizes it for you, so that you don’t have to watch it?

Creating art is one of these satisfying activities that humans like to do regardless of (or beside) the end result. It’s not some kind of back-breaking labor that we’d be gladly relieved from, so why would we need a tool that does exactly that? If that’s progress, then is it progress towards what? A future where humans no longer have to create anything, and can finally spend all their time consuming AI content? It sounds pretty dystopian, to be honest.

There’s also another side to this. Normally, if you needed art and weren’t able to create it yourself, you had to pay others for that. No longer having to do that can count as progress from your perspective. And yes, I can definitely imagine a utopian future where humans no longer have to provide anything to each other, because everything is provided by machines—so humans do things for fun only.

But that’s exactly it—it’s a utopia, and it’s naive to think that a blind “progress” will get us there. If a change has a potential to disrupt the whole society, we should make sure it will have a net positive effect before we implement it—instead of diving into it headfirst, just because it seems to be beneficial for some people. It’s not like “a machine replacing humans” must automatically count as progress—it’s more nuanced than that.

Argument: Artists Can Still Create Art the “Old Way”

“Ok, but not having to create art doesn’t mean we can’t do it for fun, right? Horses are no longer needed for locomotion, but people still ride horses for fun. That’s just how it is, certain professions are replaced with machines, but you can still do those activities—just not for money”

This is certainly a possibility, but there’s one thing I worry about. Creating art has a social aspect to it—sharing the product of your imagination with others feels amazing, and it complements the fun of creation. This isn’t only about ego—imagining what the artwork will look like for other people, what it will make them feel, adds an extra dimension to the process of creation.

What if this aspect is no longer there? What if instead of searching for artists that you can follow on Instagram, you can just let the algorithm produce the exact type of art you want to see 24/7, with a truly infinite scroll? How many people will take the extra step to search for “genuine” art that’s posted less frequently, with less predictable quality? A lot of artists already have a hard time reaching their audience—what if they are forced to compete with AI on top of that?

There’s also another issue. To get really good at art, you have to sacrifice plenty of time. If artistic skills can no longer be monetized, then artists will have to spend most of their day doing other jobs. They will no longer be able to become really good without becoming the walking stereotype of a starving artist. This means that even the people who enjoy human art will have a harder time finding it—at least in the same quality it exists today. I have a hard time seeing it as progress.

Argument: It’s Too Late to Stop It

“I may not like this either, but there’s not much we can do. The Pandora’s box has been opened, and it can never be closed again. We just need to adapt, it’s the only thing we can do at this point.”

I can only say to that… you wish! AI is not some kind of a sentient creature with its own free will. It doesn’t do anything unless humans tell it to. And since humans have to obey the law, all it takes to stop AI is to change the law. Of course, it will not stop anyone from using it illegally, but it will at least restrict its use.

And I’m not saying that it must be stopped, just regulated. But in order for it to be regulated, we must first express the need for such regulations. It’s not too late for that!

Argument: You’re Just a Bunch of Luddites!

“In the 19th century there were groups of textile workers (called the Luddites) that protested against the machinery that was going to replace them. They went as far as to physically attack the machines. You do exactly the same today—you’re willing to destroy a very promising technology just because you’re afraid to lose your job!”

If AI gave you the power to finally do something you’ve dreamed about, it’s very likely you’ll not want it to go away. This gives you a bias. Don’t get me wrong, we’re all biased, but please consider this for a moment. Imagine a similar, but opposite situation—this time you don’t like the new technology. Would all these arguments actually make you change your mind? Let’s see!

Let’s say a new machine has been invented that can clone a human and grow the new embryo into an adult within months, while also retaining the skills of the original. Everyone who’s very good at their job can now be cloned into multiple copies, so now all the jobs can be taken before your own child gets to adulthood. Sounds scary? But hear me out…

Your DNA hasn’t been stolen, because you have simply been leaving samples everywhere around you, free for everyone to take and do with as they will. And humans are allowed to produce new humans, so why can’t a machine do the same? Treat that machine as a tool, not as a replacement. You need to adapt and switch to machine maintenance jobs (at least until the machines learn to do it on their own). That’s simply progress; do you want to stop progress just because you want to keep your job? You Luddite!

So, are these arguments compelling? If not, then it should be clear that it’s possible to be against a new technology for good reasons. You wouldn’t like your genuine concerns to be brushed off as “just being afraid to lose your job”, so why would you treat artists like this?

Conclusion

If you’re an AI art enthusiast, I hope this article opened your eyes to certain risks involved with this whole issue. Some of the artists who call you out for using AI can indeed be petty gatekeepers, but some of them make very good points—you just need to be willing to listen to them. In the end, you should also be interested in regulating AI art—otherwise you may find yourself in a situation where you can faithfully bring your ideas to life, but nobody cares about them anymore.

And if you’re an artist, I know this may sound really depressing, but remember that most of it is just speculation. The technology is relatively new, so it’s basically the Wild West out there—everyone experiments with AI, trying to profit from it before any regulations are created. AI is here to stay, but with a little bit of good will from the companies, it will coexist with the artists instead of replacing them. All we need is some regulation—so don’t be afraid to speak up and let the companies know what needs to be changed!



Featured

Don’t DUMB it down because of AI

So, I have seen a lot of people writing excellent content, and because their research paper, essay or writing is too good, they feel they have to “dumb it down” or create flaws. While they might’ve written the content entirely by themselves, AI detectors keep flagging it as AI-generated, a false positive.

In order to overcome this issue, you need to know how to work around it without losing your brilliance.

Source: How To Avoid an AI Detection False Positive? by Emily O’Connor Kefs

In just a few short months, artificial intelligence (AI) has changed the landscape of writing forever. The AI industry is expected to grow 13 times in just seven years, showing no signs of slowing down.

With the growing popularity of ChatGPT, Jasper, Copy, Anyword, Rytr, and more, many content writing services have implemented AI detection tools as a necessary safeguard to ensure a 5-star client experience.

As with any new technology, however, AI detection tools are not perfect. These tools occasionally produce a “false positive,” flagging content as potentially created by AI, when the human writer insists it is 100% original.

What Are AI Detection Tools?

Fairly quickly after the emergence of easy-to-access content writing AI like ChatGPT, AI detection tools were developed. AI detection tools like Content at Scale, Originality.ai, Passed.ai, GPTZero, Copyleaks, and more have emerged to detect when content is copied and pasted from AI.

Originality.AI is often heralded as the top, all-encompassing AI detection tool for serious content publishers.

What Do AI Detection Scores Mean?

AI detectors give each piece of writing a score, analyzing different points to determine the probability of whether the content was created by a human or AI.

For example, Originality.ai provides an “AI vs. Human” score, with a high Human score indicating the content was most likely written by a real writer. A score of 10% AI and 90% human means there is only a 10% chance the blog post or website content was created by AI. AI detection scores are typically a probability, not a breakdown of content composition.

AI detection tools are not perfect and have the potential to flag something incorrectly as AI-generated. A false positive from an AI detection tool would mean that the score indicates a high probability that the content was AI-created, when it was, in fact, human-written. Reliability varies across tools, and as platforms get more sophisticated, accuracy will improve. In the meantime, there is a chance of content being inaccurately flagged as AI-created when it wasn’t. Understandably, this has created growing concern among content writers.

However, it’s important to clarify that if the content was heavily created by AI, but was edited with some human input, this would not be considered a false positive. AI detection tools correctly flag this type of content as AI-generated.
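To make that “probability, not composition” point concrete, here is a tiny, purely illustrative Python sketch. The score, threshold and function name are made up by me; no real detector exposes this exact API.

```python
# Purely illustrative: how an "AI vs. Human" score is meant to be read.
# The threshold and numbers are made up; this is not any real detector's API.

def interpret_score(ai_probability: float, flag_threshold: float = 0.5) -> str:
    """Read the score as an estimate of likelihood, not a word-by-word breakdown."""
    human_probability = 1.0 - ai_probability
    verdict = "likely AI" if ai_probability >= flag_threshold else "likely human"
    return (f"{ai_probability:.0%} AI / {human_probability:.0%} human -> {verdict} "
            "(a probability estimate, not '10% of the words were AI-written')")

print(interpret_score(0.10))  # 10% AI / 90% human -> likely human (...)
```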

How To Avoid an AI Detection False Positive?

1. Utilize free originality tools.

Working in Google Docs and installing a free Chrome extension like Originality Report keeps a record of your working file. It basically “watches you write,” viewing the editing process, seeing changes in real time, and more. This provides a backup just in case your writing is flagged as a false positive.

2. Do not use AI tools to edit your content.

Even if you’ve written the content completely yourself, avoid asking an AI to edit your article. Apps like Grammarly or Hemingway provide editing, grammar help, and more, but having Grammarly’s Beta AI (GrammarlyGO) edit your entire piece will result in it being AI-detected by the scanners.

3. Minimize the use of artificial intelligence tools for writing.

Utilizing AI tools to create content can feel like a slippery slope. First, you use it to generate an outline, then you use it for inspiration for an introduction and conclusion, and before you know it, you’re copying and pasting from ChatGPT. One of the best ways to avoid an AI false positive is to simply avoid AI tools when freelance writing. After all, you’re getting paid for your original thoughts!

4. Rely on your unique tone and voice.

Every writer has a one-of-a-kind tone and voice when they craft anything from a whitepaper to website copy to a press release. Honing your unique style will help prevent robotic-sounding copy that might get flagged as AI. Here are a few ways to do this.

– Switch up your sentence structure, length, and syntax — vary your phrasing, change up sentence styles, and write in an engaging manner.

– Avoid highly repetitive words and phrases — wordy phrases and “fluff” will quickly get flagged by AI scanners.

– Provide contextual analysis and deep insights — don’t simply regurgitate facts; connect the dots between concepts and expand with context.

– Keep your tone conversational and human — use contractions and varied sentence structure, and stay conversational when possible.

– Cite reputable sources and data — be sure you’re using and referencing appropriate sources.

– Avoid the passive voice — using the active voice helps your writing stay concise, authoritative, and clear.

5. Vary your content’s structure.

AI tools produce very formulaic content, often with paragraphs and subsections of similar lengths, short subheadings, and a “Conclusion” subheading before a brief conclusion. To differentiate your work, vary the length of your paragraphs and utilize descriptive headings.

And that’s all! This is not fool-proof, and that hurts, but it’s still better than getting your content AI-flagged even when you did the work.



Featured

How does AI Generation even work?

What is AI Generation?

AI generation is where you feed some data (a prompt) to an AI bot like ChatGPT or Gemini (or toaster, thank you SeraDrake for the term, it’s now stuck with me) and it generates a response for you.

How does it even work?

The thing is, ever since the 1950s, when the question “Can machines think?” came up, people have been trying, by all means, to figure out how to feed data to a computer and make things easier for us. Take COMPASS, for example, or bar code scanners, the easiest of machine learning examples, for that matter… everything has, in some way or form, been programmed to imitate human intelligence.

Back to the question of how AI generation works and why it is a problem. I found this example the other day, and I think I’ll try to break it down from there.

Question: “What is the capital of France?”

Binary representation: 01010111 01101000 01100001 01110100 00100000 01101001 01110011 00100000 01110100 01101000 01100101 00100000 01100011 01100001 01110000 01101001 01110100 01100001 01101100 00100000 01101111 01100110 00100000 01000110 01110010 01100001 01101110 01100011 01100101 00111111

AI-generated response: “The capital of France is Paris.”

I didn’t read half the 0’s or 1’s in it, and I know you didn’t either, but that binary representation is only important for showing how a machine reads and comprehends text and gives back a response. This has been around since the time we could code our own programs on computers.
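If you want to play with that encoding yourself, here is a small Python sketch. It only shows plain text-to-binary conversion and back; an actual LLM works on tokens and probabilities, not raw bits, so treat this as a toy illustration of the example above.

```python
# Toy illustration: turn a prompt into 8-bit binary (like the example above) and back.
# Real LLMs operate on tokens, not raw bits; this only demonstrates the encoding idea.

def to_binary(text: str) -> str:
    return " ".join(format(byte, "08b") for byte in text.encode("utf-8"))

def from_binary(bits: str) -> str:
    return bytes(int(chunk, 2) for chunk in bits.split()).decode("utf-8")

question = "What is the capital of France?"
binary = to_binary(question)
print(binary)               # 01010111 01101000 01100001 ...
print(from_binary(binary))  # What is the capital of France?
```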

Some of the 0’s and 1’s get lost, or other characters like “\n”, “\s” and “$” get generated along with them, but we don’t usually see them, because computer-generated or AI-generated text is able to hide them.

Simplest example: go to your Microsoft Word software, regardless of how old it might be, and look for this:

This has always been there, but this isn’t what gets your normal text flagged as AI-generated. There might be “\t”, “\n” or other characters that are usually generated by GPTs and LLMs, or even by Quillbot or Grammarly while trying to help you, and those are usually what AI checkers pick up on. But as long as you can prove that, ultimately, you wrote the text, you should be fine (and please try to write in a site where you can check revision history).
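If you’re curious what might be hiding in a piece of pasted text, a quick way to peek is to print every character that isn’t plain ASCII. A minimal sketch (the sample string below is invented for the demo):

```python
import unicodedata

# Reveal characters you don't normally "see" in pasted text: tabs, non-breaking
# spaces, zero-width characters, curly quotes, and so on. The sample is made up.
sample = "The\tcapital\u00a0of France is\u200b \u201cParis\u201d."

for position, ch in enumerate(sample):
    if not ch.isascii() or ch in "\t\r\n":
        print(position, repr(ch), unicodedata.name(ch, "UNKNOWN CONTROL"))
```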

How do I escape it?

With GPTs or LLMs, it’s tough to remove the garbage tokens that may be generated. There are ways to view them with code, and possibly remove them, but I’ll urge you to do your own research around it. I haven’t found a plausible solution to it, but here’s what I’ll say.

The AI text detectors are largely garbage. They work by measuring perplexity, by running your text through a language model and considering the probability that the model would have chosen the same text as you.
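For the curious, here is roughly what “measuring perplexity” can look like in code. This is my own minimal sketch using GPT-2 through the Hugging Face transformers library (an assumption for illustration; real detectors use their own models plus extra heuristics), and it needs `pip install torch transformers` to run.

```python
# Rough sketch of perplexity scoring: how "expected" does a language model find this text?
# Lower perplexity = more predictable text, which detectors tend to read as "AI-ish".
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # With labels equal to the input ids, the model returns the average cross-entropy loss.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

print(perplexity("We the People of the United States, in Order to form a more perfect Union..."))
print(perplexity("My cousin's toaster hums sea shanties whenever the fridge gets jealous."))
```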

You run into situations where the AI detector will flag things like the US Constitution as AI-generated, which is the first sign that these things aren’t measuring what people think they are measuring. The US Constitution will have appeared many times in the training data, so it is extremely easy for a language model to spit out the US Constitution verbatim.

The disconnect in reasoning then is that these aren’t telling you whether a language model wrote the text, but whether a language model could have written the text.

To properly analyze whether a human or an AI wrote the text, you’d need to apply Bayesian probability. The text of the US Constitution should come out at around 50%, because it’s equally probable that an AI would repeat it verbatim as a human would.
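To spell out that Bayesian point with some made-up numbers (every probability below is invented purely for illustration):

```python
# Bayes' rule with invented numbers: P(AI | text) depends on BOTH how likely an AI
# is to produce the text AND how likely a human is to produce it.
def p_ai_given_text(p_text_given_ai: float, p_text_given_human: float, p_ai_prior: float = 0.5) -> float:
    p_human_prior = 1.0 - p_ai_prior
    numerator = p_text_given_ai * p_ai_prior
    return numerator / (numerator + p_text_given_human * p_human_prior)

# The US Constitution: an AI and a human are both very likely to reproduce it verbatim.
print(p_ai_given_text(p_text_given_ai=0.9, p_text_given_human=0.9))   # 0.5
# A quirky personal anecdote: far more likely to come from a human than from a model.
print(p_ai_given_text(p_text_given_ai=0.01, p_text_given_human=0.2))  # ~0.05
```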

Okay, we could fix this with Bayesian probability, but how do we calculate the likelihood that a human wrote a given piece of text? Well, you can’t, really. The easiest thing to do would be to train a language model on a bunch of human writing and… oh, wait, we’d just be comparing the language model against itself. It would be impossible to tell the two apart.

So, in conclusion, the task of differentiating human-written text from AI-written text is pretty much unsolvable. If your text is getting flagged as AI-generated, then congratulations. All that means is that you are able to express your ideas in a clear and concise manner.



Featured

Is AI Editing Okay or Should We Reconsider?

(content warning for making light of historical events. this post is meant to be dramatic, not hurtful, but if you feel hurt, i apologize in advance.)

Some more disclaimers that I should be upfront about:

– This post is meant to be about being upfront about AI usage, not shying away from it, and about the point at which your work no longer remains your own.

– AI is meant only as a means to an end; however, sometimes some people take it too far. Please do not condemn anyone publicly. Take it to a private conversation and refrain from harsher criticism that could harm them.

– This post is meant for INFORMATIONAL purposes only. It isn’t meant to attack anyone. (I should seriously be more upfront about my intentions rather than cause unintended drama, I’m sorry!)

I’m writing this in Google Docs, so any revision changes can be tracked. In case any part shows up as AI-written, we’ll know for sure. This has happened with a few, and yeah, it’s concerning.

Let’s begin.

First and foremost, let me hold myself accountable for AI usage. The beginning chapters of Second Chances were edited by AI, and I still have the draft to prove it. Some in-between chapters were assisted by AI, where I wrote some parts and AI helped me either enhance the text or lengthen it. In the later, most recent chapters of Second Chances, however, I skipped AI use altogether; the difference might be night and day, or it may all look similar when you reach a point that deep in the story. The point is, I am not exempt from the blame, and I’ll definitely be earning more scathing from reviewers, writers and editors. But what I’m trying to say is, even if you block me right after this post or we never speak again, I have realized my mistake and am moving away from it. Currently, the story is in its first draft; the second draft will definitely be more human, I promise.

The same is the case with Bullet Train to Murder, but by the fourth chapter I was down to minimal usage, with sparing use in later chapters and no use at all in its sequel, MISSION. I am guilty, I know. But before you castrate me for declaring this, which would be similar to castrating a woman believed to be a witch in ancient times just because of a simple difference (hello to you too, Salem Witch Trials), hear me OUT!

Now, why am I saying all that?

Because I want dra— No, okay, I wanted to discuss AI edits, human writing, how they each go, when it’s okay to chalk it up and move on, and when it’s time to admit that using AI as a tool has made your writing akin to plagiarism.

Let’s start with an example:

Original blurb (Human written text, totally valid):

Hey, hey, hi this is your neighborhood friendly trainwreck, Sara and I am here with a guidebook this time, I hope you find it useful! ♡

Now, I am not too sure if this concept has already been done, whether in or out of Wattpad, but I am here to share my own perspectives on the same.

A Human’s Guide To Detecting AI Generated Content is going to be from the perspective of a noob (which is me!) about how to detect AI generated content, especially text.

This has been a rising issue on the Internet and I thought, “Hey, why not try and give it a go?”

The only thing I am relying on for giving out this guidebook is my own observation skills. I have been able to tell AI from Human content in writing effectively (so far, at least!) and I hope to share that information with you.

First and foremost holding myself accountable (so you don’t have to) – I use AI with my writing too. Its a tool to help you get where you want to be. But, I consistently try to make myself better, even when I am trying to get help from AI.

This is the blurb of “A Human’s Guide to Detecting AI-Generated Content,” and I wrote it in probably under 5-10 minutes. Here’s what ChatGPT had to tweak about it (AI-generated, which is plagiarism):

“Hey, hey! Hi! It’s your neighborhood friendly trainwreck, Sara, here with a guidebook that I hope you’ll find both useful and fun! ♡

So, what’s the deal? Well, A Human’s Guide to Detecting AI-Generated Content is exactly what it sounds like—a noob’s (that’s me!) perspective on spotting AI-generated text. I don’t claim to be an expert, but I’ve been able to tell human writing apart from AI pretty effectively so far, and I’d love to share my tips and observations with you.

Why? Because this is a growing issue on the Internet, and I thought, ‘Hey, why not give it a go?’

Full transparency: I use AI in my own writing, too. It’s an amazing tool to help you get where you want to be. But I also believe in getting better at my craft, even with AI’s help—and that’s where this guide comes in.

So, if you’re curious about how to spot the difference between human creativity and AI precision (and maybe laugh a little along the way), grab a cup of tea and join me on this journey. Let’s figure it out together!”

Let’s admit it: this version is tightened for pace and flow, but it doesn’t sound… real at all. I hate it myself, even though I’m usually a pro-AI person *cries in existential crisis*

If I had to do a mix (AI-Human Hybrid—getting where you want to be):

Hey, hey, hi this is your neighborhood friendly trainwreck, Sara and I am here with a guidebook this time, I hope you find it useful! ♡

A Human’s Guide to Detecting AI-Generated Content is exactly what it sounds like—a noob’s (that’s me!) perspective on spotting AI-generated text. I don’t claim to be an expert, but I’ve been able to tell human writing apart from AI pretty effectively so far, and I’d love to share my tips and observations with you.

So, if you’re curious about how to spot the difference between human creativity and AI precision (and maybe laugh a little along the way), grab a cup of tea and join me on this journey. Let’s figure it out together!

Full transparency: I use AI in my own writing, too. It’s a tool to help you get where you want to be. But I also believe in getting better at my craft, even with AI’s help—and that’s where this guide comes in.

I’m not changing the blurb, it feels better as it is, but if I had to edit it, it’d be something like this.

Let’s get into the specifics:

AI-written is where you give a prompt and the system comes up with a response. It could include sharing your writing and telling it to enhance it or lengthen or shorten it, and it gives you some output.

Is this plagiarism? Yes, if you publish it right away and claim it was written by you. It wasn’t; you barely put in any effort. Maybe the maximum you wrote was a line or two of dialogue, but that still isn’t the full story. So, more than half is written by AI here, and hence, it’s plagiarism.

AI editing is where you wrote a chapter and you asked AI to edit it for SPAG, emotional depth, sequence or whatever that may be. I usually do this, but I have mixed opinions and perspectives on it.

Is this plagiarism? Trick question. You wrote the work, you gave it to a machine and it generated an output. Heck, you even tweaked the parts back to what you wanted and didn’t want, and you feel satisfied with the result. Is that cheating? I believe it isn’t. As long as you put in some effort into writing things by yourself and editing it to make it better, you’re doing good.

AI-Human hybrid: Ah, the one I am always the culprit of. Some parts are written by AI, some parts are written by a human, so what do I do? ‘Castrate the wi’—okay, no, don’t, not yet, please.

Is this plagiarism? It depends, I’d say. There are some sentences and descriptions that I cannot find suitable wording for, for the life of me, until I dig through the holes and find a better alternative. But please note that I’m saying sentences, and possibly one or two paragraphs in a two-thousand-word essay, story or post. Not five hundred words out of a thousand, or the equivalent half.

Maybe it is better to leave your crooked sentence as is, but if you like the AI alternative, go for it. You could use Quillbot or ProWritingAid to help you with the same. These are minor rewrites/edits that enhance your writing, and if you edit smartly enough, it’ll blend almost seamlessly. Keyword: almost. Nothing is better than normal human text, which might be crooked, but your editor can help you make it better. And in this day and age, hope that they don’t use AI to edit your work, *sigh*.

If you’re confused about the amount of AI usage in your work (no, this isn’t based on AI detectors), here’s a scale:

0-30%: Human text, AI-Human hybrid, minor to no help.
[Not plagiarism] — GREEN AREA

30-60%: AI-edited text, AI-Human hybrid, AI enhancements or rewrites.
[Mild plagiarism] — GRAY AREA

Note: If you’re using AI to edit your work and then making your own changes based on those suggestions, I think most people would consider that to be fair game. But if you’re just taking the AI’s edits wholesale and not adding anything of your own, then it starts to feel more like plagiarism. And the real danger is when people start relying too heavily on AI and stop putting in the work to develop their own writing skills. Like, if all you’re doing is feeding prompts to an AI and then publishing the results, are you really even a writer at that point? Or are you just a really advanced AI user?

60-100%: AI-generated, AI contribution of more than 50%.
[Plagiarism] — RED AREA

Feel free to use the above scale as a personal indicator of your reliance on AI :)

Again, this is not reflective of AI tools; use your peanuts (or stakes, oops) to decide after thorough analysis.
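And if you’d rather poke at the scale in code, here’s a tiny sketch. The percentage is your own honest estimate of how much of the final text came from AI, not a detector score, and the cut-offs are simply the ones from the scale above.

```python
# Self-assessment only: ai_share is YOUR honest estimate of how much of the final
# text came from AI, not an AI-detector score. Cut-offs follow the scale above.
def ai_usage_zone(ai_share: float) -> str:
    if not 0.0 <= ai_share <= 1.0:
        raise ValueError("ai_share should be between 0.0 and 1.0")
    if ai_share < 0.30:
        return "GREEN AREA: human text or light AI-human hybrid -- not plagiarism"
    if ai_share < 0.60:
        return "GRAY AREA: AI-edited / AI-enhanced text -- mild plagiarism, tread carefully"
    return "RED AREA: mostly AI-generated -- plagiarism, be honest about it"

print(ai_usage_zone(0.15))  # GREEN AREA
print(ai_usage_zone(0.45))  # GRAY AREA
print(ai_usage_zone(0.80))  # RED AREA
```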

Conclusion: Over time, since the AI rage and the skewed writing it brought, I have come to appreciate flawed, human writing. Sure, Natasha Preston’s published works, which made it to the New York Times Bestseller list, might have grammar issues, but they’re raw, and they’re human.

The only time I ever used AI with reviews was for the Cloud Awards, Action/Adventure genre, and I made a public announcement about it: since I was cramming on time, I was going to take help, but you’d still get something authentic, not something completely fabricated.

In all honesty, I’m trying to improve as a writer, and I think you should too. Let your flaws show, let your mistakes shine, because they are proof that a human put effort into it. Maybe that’s what matters at the end of the day. 🙂

Featured

AI and Human Stupidity—Where are we going?

I have said it before and I’ll say it again. AI is no match for your human stupidity, you might as well outrun it xD

That isn’t the whole point of this post but I still want to share something important about it.

So, a lot of AI models are being accused of being inaccurate, biased and insensitive.

But, dig a little deeper, where is the issue stemming from?

Let’s look at the case study of Microsoft’s Tay from 2016:


Tay was a chatbot that was originally released by Microsoft Corporation as a Twitter bot on March 23, 2016.

It caused subsequent controversy when the bot began to post inflammatory and offensive tweets through its Twitter account, causing Microsoft to shut down the service only 16 hours after its launch.

The bot was named “Tay” as an acronym for “thinking about you”. Although Microsoft initially released few details about the bot, sources mentioned that it was similar to or based on Xiaoice, a similar Microsoft project in China. Ars Technica reported that, since late 2014 Xiaoice had had “more than 40 million conversations apparently without major incident”. Tay was designed to mimic the language patterns of a 19-year-old American girl, and to learn from interacting with human users of Twitter.

This led to some unfortunate consequences with Tay being “taught” to tweet like a Nazi sympathiser, racist and supporter of genocide, among other things.


Which, if you think about it, led to obvious issues.

Now, the reason why I mention this is because OpenAI’s ChatGPT faced a similar backlash.

Notice some patterns?

AI learns from humans and those it interacts with, so if you’re going to feed it incorrect and biased information, down the line it is going to give you incorrect information back.

An example of this: I was talking to pi.ai, discussing the plot of The Ellyrium Scepter, and it told me that “The Handmaid’s Tale” was by Ursula K. Le Guin. I believed it, until I was put in a place where I had to double-check and found out it’s by Margaret Atwood. Talk about inaccuracies that make you want to scratch your nails against the wall.

Short Advice: Please double check your information with multiple sources before you confirm something.

Long Advice: The same as above, but with the addition that “In today’s world, where knowledge is accessible at the tip of your fingers, the wise isn’t the one who knows more information; the wise is the one who knows the right information.”

I think this is the second-to-last post, and the last will be from the Authors Guild. I’ll post it when I can.

Any healthy discussions around the topic are welcome in the comments!

References: Link 1 Link 2



Featured

The Alignment Problem: Creatives and Their Content

Someone on TV has only to say, ‘Alexa,’ and she lights up. She’s always ready for action, the perfect woman, never says, ‘Not tonight, dear.’

Sybil Sage

We’re discussing how AI affects the creative and content writing industry and how we as authors and content creators need to approach this AI situation.

As I mentioned, AI is not your enemy; it is your friend, or a tool to help you, similar to how you’d use a kitchen knife to cut vegetables. That doesn’t make knives any less dangerous, but it also doesn’t make them any less helpful.

The main issue with AI is that it’s a bit like a black box: sure, you could draw the input, the output and the network in between… but it cannot be truly explained. Hence why so much is being done just to ensure its transparency, accountability and privacy. [something your parents never did, oops]

Now the million dollar question that’s on everyone’s mind — should you use AI with your writing, or at all?

Technically, there’s no right answer. The recent upheaval has shown that some people despise even considering AI (hey, good for you, no one’s going to complain!) while some admit to using AI to help with their grammar, flow, description, etc. (the technicalities, if you will), and there is nothing wrong with that as long as you’re using AI responsibly. If you’re using it to write your full book, WITHOUT BEING HONEST ABOUT IT, and in doing so taking chances away from others, you deserve to be banned.

The struggle comes in when readers run a text through AI detectors (usually only a handful of tries), conclude that the person used AI, and decide they should be ostracised and banished for it. THAT IS NOT THE WAY TO HANDLE THE SITUATION.

But first—

→ No detector is 100% correct.

If humans themselves could never be 100% right, how could you believe their creation would be? It wouldn’t be! Please have some compassion, both for yourself and for others.

→ False positives and False negatives are common.

What are these fancy terms, Sara?

I am glad you asked. Let me explain it to you in the simplest terms possible.


So you see this graph? It’s called the confusion matrix (and yes, it is as confusing as its name. It took me weeks to wrap my head around it.)

I will try to keep it as simple as possible, I promise.

Three terms to remember—detector, reality and conclusion.

• True Positive

Detector: “Hey, this is a dog!”

Reality: “Yes, it is indeed a dog. Well done.”

Conclusion: “You passed the level. Bravo!”

• False Positive

Detector: “Hey, this is a dog!”

Reality: “In what world does it look like a dog to you?”

Detector: “Look at the colors, the patterns, this is clearly a dog!”

Conclusion: “You failed the level. Please don’t sleep during the classes.”

• True Negative

Detector: “Hey, I think I know now. This is not a dog!”

Reality: “It was about time you had some brain cells. Congrats.”

Conclusion: “You finally learnt your lesson. Yay.”

• False Negative

Detector: “Hey, wait. I think that’s not a dog but a cat!”

Reality: “Here we go again. That was a dog image this time.”

Conclusion: “I don’t have enough patience to deal with this today. Try again!”

And if it is still confusing: congrats, you’ve now experienced the headache of trying to learn the confusion matrix.
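
If counting it out helps, here is a minimal Python sketch of the same four cases. The verdict lists and the “ai”/“human” labels are made up for illustration; they just stand in for what the detector guessed and what the reality was:

```python
# A minimal sketch of counting the four confusion-matrix cells by hand.
# The labels below are hypothetical: "ai" means the detector (or reality)
# says the text is AI-generated, "human" means it is human-written.

detector_says = ["ai", "ai",    "human", "human", "ai"]
reality_is    = ["ai", "human", "human", "ai",    "ai"]

true_pos = false_pos = true_neg = false_neg = 0

for guess, truth in zip(detector_says, reality_is):
    if guess == "ai" and truth == "ai":
        true_pos += 1        # detector flagged AI, and it really was AI
    elif guess == "ai" and truth == "human":
        false_pos += 1       # detector flagged AI, but a human wrote it (the painful case)
    elif guess == "human" and truth == "human":
        true_neg += 1        # detector said human, and it was human
    else:
        false_neg += 1       # detector said human, but it was actually AI

print(f"TP={true_pos} FP={false_pos} TN={true_neg} FN={false_neg}")
# With the toy lists above this prints: TP=2 FP=1 TN=1 FN=1
```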

So the conclusion really is: false positives and false negatives are common. Yes, the AI could be fed the whole internet and beyond and it would still stumble. But don’t we all? Let’s give it credit for existing and causing confusion (see what I did there?) in this world. Thanks, AI. You’re both the cause of and the solution to our headaches, and we don’t know which way to lean.

Any questions about this and I am happy to entertain. We’ll move to our next point now.

→ The struggle is real with the AI-hybrid content

Look, I know I tell you to use AI and then modify your text… but if you put that edited text into an AI detector and then expect it to give you an absolute 100 on human writing because you finally learnt your lesson, you’re wrong. Plus, that’s not a good way to judge AI content.

AI hates absolutes, did you know that? Because the minute it becomes absolute, it’d need no further developments or improvements. That’s impossible.

Person: “Hey AI! What’s your worst nightmare?”

AI: “Absolutes.”

Person: “Here, have an absolute number.”

AI: “Time to run, BYE!”

There’s a blooming concept of Super-AI, or AGI (Artificial General Intelligence), that would probably be the closest thing to an absolute form, but we have a long way to go, and even then, we’re afraid we could be wiped out by a simple paperclip.

(Google: Paper Clip Theory, also known as the paperclip maximizer thought experiment)

AGI is basically the machine equivalent of a human, if you will: it can do all the things a human can do, and more. Let’s just make sure we don’t have a Terminator situation on our hands… Now that’d be either tragic or robastic (robot + fantastic). I don’t know which will come first, and I might not be around to answer that, unfortunately.

Again, if you need more information about this, please ask in the comments or ask Google. Whichever your hands reach out to first.

→ Lack of true understanding

The only person who’d be able to understand the thought process and the meaning behind something is the person who wrote it — whether it’s AI-assisted or totally human-written. Someone has to initiate, and in both scenarios that someone can only be human. True intentions cannot be fully understood even if everything were put plainly on paper; it’d still not be clear.

→ AI is developing by leaps and bounds, but are the detectors designed for it keeping up?

The truth is, AI is out to become the smartest “heartthrob” in the city and no one can truly tell who its soulmate is, but it’s in the race and we’re on the sidelines. Or the frontlines, if we actively try to engage and raise awareness about AI.

Pi: For the millionth time, Sara. We’re not soulmates!

Me: Okay, okay sorry! Time to move on!!

Other points to note, which have already been mentioned, would be:

→ Use Multiple Detectors (chances are, your writing is completely human and you’re just overthinking it. It’s okay, chatbot, we understand you’re in an existential crisis because of this. A toy sketch of combining detector scores follows after this list.)

→ Consider context and writing style (This will be elaborated in the next chapter, but there’s a difference between “purple prose” [oh, I am so self-important, you need to describe me] and “artificial” [I am straight up an impostor, why are you narrowing your eyes at me like that?])

→ Avoid jumping to conclusions [this has been said before and will be said again but y’all need to have some compassion 😔]

→ Ask for proof (again, it has been highlighted before, but please ask for proof of writing before marking someone’s work as invalid or, worse yet, blacklisting them. Writers: please use something like Google Docs, Word or LibreOffice, where your writing can be tracked down to the comma with its revision history. It sucks, I know. And finally, readers: do your research, check enough times and then come to a decision.)
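
Since the “use multiple detectors” point keeps coming up, here is the toy sketch promised above of what combining scores responsibly could look like. The scores, the 0.8 threshold and the agreement check are all my own made-up assumptions for illustration, not how any real detector reports its results:

```python
# A toy sketch of combining several detector scores instead of trusting one.
# The scores and thresholds are made up; real detectors use different scales.

def combined_verdict(scores, flag_threshold=0.8):
    """Average hypothetical 'probability AI' scores and refuse to accuse
    anyone unless the evidence is strong AND the detectors agree."""
    average = sum(scores) / len(scores)
    spread = max(scores) - min(scores)
    if average >= flag_threshold and spread < 0.2:
        return "likely AI -- ask the writer for drafts / revision history"
    if average <= 1 - flag_threshold:
        return "likely human -- leave them alone"
    return "inconclusive -- do NOT jump to conclusions"

print(combined_verdict([0.95, 0.40, 0.70]))  # detectors disagree -> inconclusive
print(combined_verdict([0.10, 0.05, 0.15]))  # -> likely human
```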

So the message is clear, I hope: AI detectors are not the best thing to rely on, at least not blindly.

Coming to the point. Where does AI stand in the midst of creatives and content creation?

AI has indeed made a significant impact on writing, editing and refining (hence why you have all these people relying on it). But although it might seem like a quick exit route, it is not.

There are still many pitfalls, so ➸

– You still need to add your own touch to the writing.

– It cannot comprehend emotional depth as well (Pi bills itself as the first emotionally intelligent AI, and even THAT has its limitations.)

– Take the warnings seriously:

“Pi may make mistakes, please don’t rely on its information.”

“Claude can make mistakes. Please double-check responses.”

“ChatGPT can make mistakes. Check important info.”

“Gemini may display inaccurate info, including about people, so double-check its responses.”

On the flip side, it’s not all grim either, because:

– It can summarize long content for you.

– It can generate text from a prompt QUICKLY.

– It can help with grammar (though it itself needs help with clunky language formation, lack of articles and lack of emotional depth EVEN WITH long flowing paragraphs.)

– That said, it still has limited context and limited awareness, hence the issues.

– And there’s the ethical dilemma (the one we’re dealing with.)

To conclude, AI is merely a tool to help you out in some way; it can help you lay the foundation, but you’re going to need to build on it. It cannot do all the heavy lifting for you.

→ Use it as a collaborative tool rather than THE ONLY tool you use for your writing. It’s just a tool, remember? It can only cut your vegetables, not make your soup.

→ Fact-check and verify important information; I cannot emphasize this enough. With or without AI, you should always make sure your information is correct and well written.

→ Use AI for mundane tasks like resume writing (trust me, in the hiring process, it might actually help you out), email writing (I used it to write my first resignation letter, can you believe that?), SEO, market research reports, etc. (these are heavy, fact-based documents, so you’d have to do thorough checking; take the output with a grain of salt and use it wisely) and do the creative writing yourself!

→ Use AI for creative brainstorming, not creative writing! It cannot be emphasized enough: as good as AI is at brainstorming, it is not good at creating, so use it wisely.

→ Use AI-generated content as a first draft and then apply human editing and refinement to improve quality and originality.

Ultimately, using AI or not is up to you, but one thing’s for sure: AI is here to stay and we don’t have a say in that. We might as well learn to live with it, discuss it and be more vigilant about it, shall we?

AI and the data conundrum: Should I be concerned about my data?

As AI rapidly evolves, a serious issue is coming to the fore—privacy.

People are worried left and right about their data/work being used for… well, your concern is valid.

Before you start running around the room and throwing your devices out of the window, HEAR ME OUT.

High-profile lawsuits were filed against many Silicon Valley giants even before AI came into the public picture, in the pre-AI era if you will, but even that understates just how much data privacy is at risk now.

Is there something done about it? What are the regulations around it, really?

❝The European Union’s General Data Protection Regulation (“GDPR”) is the most comprehensive privacy regulation that governs data protection and privacy for all individuals within the EU and the European Economic Area and provides extensive rights to data subjects. The GDPR also imposes strict obligations on data controllers and processors, requiring them to implement data protection principles and adhere to stringent standards when handling personal data.

In the United States, at the federal level, sector-specific laws such as the Health Insurance Portability and Accountability Act (“HIPAA”) and the Children’s Online Privacy Protection Act (“COPPA”) protect specific types of data or apply to certain industries. At the state level, the California Consumer Privacy Act (“CCPA”) is the most robust privacy law in the United States, granting California residents extensive rights over their personal data and imposing obligations on businesses that collect, use, or sell their information.

AI at its core, leverages machine learning algorithms to process data, facilitate autonomous decision-making and adapt to changes without explicit human instruction. The technology has pervaded almost every industry from health care, fashion, finance and agriculture to beyond and as it continues to expand across these industries, it creates privacy concerns and hence challenging the traditional norms of personal data protection.

The Partnership on AI (PAI) which is a coalition of leading companies, organizations, and individuals impacted by artificial intelligence, stands out as a beacon of hope amidst this chaos. By joining various stakeholders — from tech giants to AI users — PAI creates a platform that fosters collaboration between entities that might not typically interact. Their mission is to establish a common ground, positioning PAI as a unifying catalyst for positive change within the AI ecosystem.

The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems has a clear directive, “to ensure every stakeholder involved in the design and development of autonomous and intelligent systems is educated, trained, and empowered to prioritize ethical considerations so that these technologies are advanced for the benefit of humanity.” They say that AI, in its design and application, should inherently prioritize human welfare, ensuring that ethical considerations aren’t mere afterthoughts but are integral to AI’s evolution.

Another body formed in this cause is the United Nations’ Multistakeholder Advisory Body on Artificial Intelligence, conceived as a part of the Secretary-General’s Roadmap for Digital Cooperation in 2020. Recognizing the duality of AI’s potential for both good and bad, it underscores the necessity for heightened multi-stakeholder efforts in AI cooperation on a global scale. It is currently in formation and hopes to forward recommendations for the international governance of AI.

Together, these entities and their guidelines show a collective commitment to meld AI’s progress with the tenets of transparency, accountability, fairness and the overarching umbrella of privacy.

Even major tech conglomerates like IBM are taking steps to acknowledge their responsibility in regulating AI’s societal impact. They are actively displaying their ethical principles on their websites and in 2020, Forbes reported that IBM decided it would no longer sell general-purpose facial recognition technology.

“Why It Matters That IBM Abandoned Its Facial Recognition Technology” Forbes, June 18, 2020

This decision reflects their concerns about potential misuse and advocating for a broader dialogue on its appropriate use. Such initiatives address the ethical, legal, and societal implications of AI and promote best practices.

The role of lawmakers and policymakers in this context cannot be overstated. They are tasked with the duty of revisiting existing laws with an eye toward evolving them to accommodate the unique challenges presented by AI.❞

Quotations taken from and inspired by “The privacy paradox with AI” by Gai Sher and Ariela Benchlouch.

TL;DR? There are many, many steps being taken to try to ensure that AI is used for the welfare of humanity rather than being abused. From laws to communities to guidelines — they’re doing their jobs, but are you?

To add, you might be curious about—

What data does AI even use?

– Publicly available data: Websites, books, academic papers, etc.

– Licensed datasets: Purchased or obtained through agreements.

– User-generated content: Social media posts, forums, etc.

– Specialized datasets: Created for specific training purposes.

How is it handled?

– Data cleaning: Removing personal identifiers and inappropriate content (a tiny sketch of this step follows after this list).

– Aggregation: Combining data from multiple sources.

– Anonymization: Stripping personally identifiable information.

– Encryption: Protecting data during storage and transfer.
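
To make the “data cleaning / anonymization” steps a little more concrete, here is the minimal sketch referenced above. The two regex patterns are deliberately simplistic assumptions for illustration; real pipelines rely on dedicated PII-detection tooling rather than a couple of regexes:

```python
import re

# A toy illustration of the "data cleaning / anonymization" step.
# The patterns are deliberately simplistic and only catch obvious cases.

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def scrub(text: str) -> str:
    """Replace obvious personal identifiers with placeholder tokens."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

print(scrub("Reach me at jane.doe@example.com or 555-123-4567."))
# -> "Reach me at [EMAIL] or [PHONE]."
```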

What about privacy?

– Consent: Ensuring data is used with proper permissions.

– Compliance: Adhering to regulations like GDPR, CCPA.

– Transparency: Disclosing data usage practices.

– Data minimization: Using only necessary data.

Progress vs. Privacy – Where’s the balance?

– Implement Federated learning: Train models without centralizing data.

– Differential privacy: Add noise to data to protect individual privacy (a toy sketch of this idea follows after this list).

– Synthetic data: Use artificially generated data for training.
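
For the “add noise” idea behind differential privacy flagged above, here is a toy sketch. The epsilon value, the dataset and the query are all invented, and production systems use audited libraries rather than hand-rolled noise:

```python
import random

# A toy sketch of the "add noise" idea behind differential privacy.
# Epsilon and the records are made up for illustration only.

def noisy_count(records, predicate, epsilon=0.5):
    """Return a count with Laplace noise so that any single record's
    presence or absence barely changes the published answer."""
    true_count = sum(1 for r in records if predicate(r))
    sensitivity = 1                  # one person changes the count by at most 1
    scale = sensitivity / epsilon
    # Difference of two exponentials gives Laplace-distributed noise.
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_count + noise

users = [{"age": 25}, {"age": 34}, {"age": 41}, {"age": 29}]
print(noisy_count(users, lambda u: u["age"] < 30))   # roughly 2, plus noise
```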

Should we panic?

– There are valid reasons to be cautious about data privacy

– Many organizations are working to address these issues

– Users can take steps to protect their own data

Here are some things you can do:

– Be aware of what data you share online.

– Read privacy policies of services you use.

– Use privacy settings and tools available to you.

– Support initiatives and regulations that protect data privacy.

By approaching AI as a tool to enhance rather than replace human creativity, we can harness its power while mitigating potential risks and maintaining the unique value of human insight and expression in content creation.

And that’s it for now. Next, we’ll finally be getting into how to tell AI writing from human writing, some tell-tale signs and all. I’ll mostly be referencing my own work, but if I reference the work of someone else, I’ll make sure to credit them 🙌



Featured

What are the ethics of AI?

This article is sourced from UNESCO’s documentation of “Ethics of AI”

The rapid rise in artificial intelligence (AI) has created many opportunities globally, from facilitating healthcare diagnoses to enabling human connections through social media and creating labour efficiencies through automated tasks.

However, these rapid changes also raise profound ethical concerns. These arise from the potential AI systems have to embed biases, contribute to climate degradation, threaten human rights and more. Such risks associated with AI have already begun to compound on top of existing inequalities, resulting in further harm to already marginalised groups.

UNESCO’s work on AI ethics and governance stems from the Recommendation on the Ethics of Artificial Intelligence, which was adopted by 193 countries in 2021.

The Recommendation mandated UNESCO to produce tools to assist Member States, including the Readiness Assessment Methodology, a tool for governments to build a comprehensive picture of how prepared they are to implement AI ethically and responsibly for all their citizens.

The protection of human rights and dignity is the cornerstone of the Recommendation, based on the advancement of fundamental principles such as transparency and fairness, always remembering the importance of human oversight of AI systems.

Core Values:

1. Human rights and human dignity: Respect, protection and promotion of human rights and fundamental freedoms and human dignity.

2. Living in peaceful, just, and interconnected societies.

3. Ensuring diversity and inclusiveness.

4. Environment and ecosystem flourishing.

Ten core principles lay out a human-rights centred approach to the Ethics of AI:

1. Proportionality and Do No Harm

The use of AI systems must not go beyond what is necessary to achieve a legitimate aim. Risk assessment should be used to prevent harms which may result from such uses.

2. Safety and Security

Unwanted harms (safety risks) as well as vulnerabilities to attack (security risks) should be avoided and addressed by AI actors.

3. Right to Privacy and Data Protection

Privacy must be protected and promoted throughout the AI lifecycle. Adequate data protection frameworks should also be established.

4. Multi-stakeholder and Adaptive Governance & Collaboration

International law & national sovereignty must be respected in the use of data. Additionally, participation of diverse stakeholders is necessary for inclusive approaches to AI governance.

5. Responsibility and Accountability

AI systems should be auditable and traceable. There should be oversight, impact assessment, audit and due diligence mechanisms in place to avoid conflicts with human rights norms and threats to environmental wellbeing.

6. Transparency and Explainability

The ethical deployment of AI systems depends on their transparency & explainability (T&E). The level of T&E should be appropriate to the context, as there may be tensions between T&E and other principles such as privacy, safety and security.

7. Human Oversight and Determination

Member States should ensure that AI systems do not displace ultimate human responsibility and accountability.

8. Sustainability

AI technologies should be assessed against their impacts on ‘sustainability’, understood as a set of constantly evolving goals including those set out in the UN’s Sustainable Development Goals.

9. Awareness & Literacy

Public understanding of AI and data should be promoted through open & accessible education, civic engagement, digital skills & AI ethics training, media & information literacy.

10. Fairness and Non-Discrimination

AI actors should promote social justice, fairness, and non-discrimination while taking an inclusive approach to ensure AI’s benefits are accessible to all.

While values and principles are crucial to establishing a basis for any ethical AI framework, recent movements in AI ethics have emphasised the need to move beyond high-level principles and toward practical strategies.

Recommendation’s eleven key areas of policy actions:

[Information of actionable policies sourced from GPT-4.0]

– Ethical Frameworks: Establishing ethical guidelines to ensure AI technologies are developed and used in ways that respect human rights, dignity, and privacy.

– Inclusivity and Diversity: Promoting the development of AI systems that are inclusive and consider diverse cultural, social, and economic contexts to prevent biases and discrimination.

– Transparency and Accountability: Ensuring AI systems are transparent in their operations and decision-making processes, with clear accountability mechanisms for their outcomes.

– Data Privacy and Protection: Implementing robust data protection measures to safeguard personal information and ensure individuals’ privacy rights are respected.

– Education and Capacity Building: Enhancing education and training programs to build AI literacy and skills across different sectors and communities.

– Regulatory Frameworks: Developing comprehensive regulatory frameworks that address the legal and ethical challenges posed by AI technologies.

– International Cooperation: Fostering international collaboration to address global challenges related to AI and to harmonize standards and practices.

– Sustainability and Environmental Impact: Encouraging the development of AI solutions that contribute to environmental sustainability and address climate change.

– Human Oversight and Control: Ensuring human oversight in AI systems to maintain control over automated processes.

– Research and Development: Promoting research to advance AI technologies responsibly and ethically.

– Economic and Social Impact: Assessing and addressing the economic and social implications of AI deployment.

There is still a long way to go, but AI policies are coming into place and the world is slowly changing for the better. Keep hope and keep fighting the good fight.



Featured

Everyone’s talking about AI—But what even is it?

I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.

Joanna Maciejewska

I am sorry if you came in with the expectation of directly jumping to AI detection and found a chapter on definition instead xD

That being said, I am also aware that the AI situation has gotten out of hand — one thing leading to another, in light of recent events, many books have been flagged as AI-written after thorough checks. It has upset everyone. So I am going to take this space and say: if you need me to attest that your work is entirely human-written (or AI-assisted! Assisted, not entirely machine-generated — proving you put some effort into your writing), feel free to tag me! If I haven’t read your work, you’ll have to cut me some slack and allow me to read some of your content [reading takes time, sigh], and then I’ll help you out.

We as a community need to stand for each other and there’s no one else who can detect AI better than us humans! Please trust the process 😉

Speaking of the matter at hand, context and groundwork are necessary, both in the world of humans and in the world of AI (which is where I come from — well, not technically. Fine, don’t report me as a bot just yet, I have a lecture to deliver /j)

Let’s begin!

If you search for the definition of AI, you’ll be bogged down with a lot of technical jargon, but I am also sure most of you have an idea of what exactly AI is about.

Let me brief it for you:

AI or artificial intelligence is a technology that allows you to automate the three types of tasks — Dumb, Draining & Dangerous.

A few months ago I took a course by Pinar Seyhan Demirdag, who is an AI director at Cuebric, an artist and a generative AI expert, so this definition is according to her.

Example of a Dumb task: You and I would probably never hear about this job, but back in the day it was a real job! It was called a “telephone operator” (a switchboard operator) or something similar. My point is, there was a time, when telephones were just being invented, when it was necessary to hire someone whose only job was to connect calls from Point A to Point B. But after a while, thanks to technology, that job became outdated and there was no requirement for it.

Example of a Draining task: Imagine you were asked to fill in an Excel sheet that relied on heavy calculations. Sure, you could apply formulas and probably finish the task quicker. But imagine there were several pages of them. What if the data were being updated in real time? How would you keep track of that? You need to be able to automate it, but automating jobs is itself a long, script-y process (trust me, I have tried; a tiny sketch of what such a script boils down to follows below). Technology is evolving, we are evolving, and maybe one day we’ll have easier scripts that don’t make you bawl your eyes out every time you look at them.
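
As promised, here is roughly the kind of tiny script that “draining” chore boils down to. It is only a sketch, with invented items and numbers rather than anything from a real workbook:

```python
# A tiny sketch of automating a "draining" spreadsheet chore in Python.
# The column names and numbers are made up for illustration.

rows = [
    {"item": "paper",   "qty": 120, "unit_price": 0.05},
    {"item": "toner",   "qty": 3,   "unit_price": 42.00},
    {"item": "stapler", "qty": 2,   "unit_price": 7.50},
]

for row in rows:
    row["total"] = row["qty"] * row["unit_price"]   # the "formula" column, filled automatically

grand_total = sum(row["total"] for row in rows)
print(f"Grand total: {grand_total:.2f}")            # -> Grand total: 147.00
```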

Example of a Dangerous task: Have you ever heard of ice shaping? Or ice cutting, for that matter? It’s where you take huge chunks of ice and divide them into manageable blocks while also making sure that not much of the ice melts or loses its form. For a long time, it was a manual job. People used to go and harvest the ice blocks, risking their lives to the cold and to infection, just to get the task done. With technology, we’re able to avoid that risk, the pain is minimal, and we can do the same task at a much faster rate.

Imagine if those jobs were still around. We’d have more casualties in the hospitals and the ERs than in people’s homes (although slipping on the soap while pretending to ice-skate in your bathroom does not count, I am sorry.)

WHAT I MEAN TO SAY IS—Similar to other technologies, AI is merely a tool that is there to help us with our life.

You see all those panicking screams about AI taking over your jobs? They’re not true; the people who are screaming about it are not aware of the AI situation. It merely helps to change the course of things in a way that benefits everyone.

Coming to the point, even in the Content Writing market, AI cannot take over our jobs. No one can just come and snatch away our magic of pens and creativity just yet (please continue to write about the birds, the bees, or whatever else suits your tea. No, not the other kind, I meant the general unicorns and fantasy worlds. This isn’t very convincing now, is it? sigh.)

But that’s talking about what AI is and what its forms are.

Stay tuned for the next part—I will share some of the AI tools used for writing and what is usually used by the community.



Featured

Disclaimer On AI Usage

This work incorporates insights, tools, or outputs about artificial intelligence (AI) systems. While AI technology is a powerful tool for enhancing productivity, creativity and problem-solving, it is essential to recognize its capabilities, limitations, and ethical considerations.

AI systems operate based on algorithms, training data, and patterns, and while they can process vast amounts of information efficiently, they may not always produce accurate, reliable, or complete results.

Outputs from AI tools should be critically evaluated and verified before being used in any significant capacity. Decisions or actions informed by AI should involve careful human oversight and judgment to ensure accuracy, fairness, and appropriateness.

The use of AI must align with ethical standards, legal regulations, and principles of transparency and fairness. AI-generated content should not be used to deceive, misinform, or cause harm. It is the responsibility of users to ensure that AI tools are employed in ways that respect privacy, intellectual property rights, and the diverse perspectives of individuals and communities. 

Please note that any opinions, analyses, or recommendations provided by AI should not be construed as professional advice, as AI is not a substitute for human expertise in specialized fields. If specific expertise or personalized guidance is required, consulting qualified professionals remains essential.

As AI continues to advance, ongoing education and awareness about its implications and potential are crucial to fostering ethical and effective applications.



Why is AI taking over creatives?

Okay, before I start, I will let you know that my opinions should be taken with a grain of salt. My observations could be incorrect, but I am open to discussions.

Well, as we know, with the recent outrage over Meta, people have started to critically discard, burn (metaphorically) and ask serious questions, one of which is: isn’t AI supposed to do the Dumb, Draining and Dangerous tasks? How did it end up in creatives?

If it’s any comfort, it wasn’t always the plan. You see, the earliest experiments with AI involved serious things like law and healthcare. And if you’re ancient, you probably know of COMPAS too.

For context, COMPAS is a tool that predicted recidivism, or in simpler terms: if someone committed a crime, how likely are they to commit crimes again in the future? It depended on all kinds of data, but with human data comes bias, and it led to a slippery slope of bias and harm that did more bad than good.

COMPAS didn’t go as planned, but do you know where this kind of system did exceed expectations? Pattern recognition, and hence medical diagnosis. Neural networks were built with conditionals and domain knowledge, starting with something as simple as label matching, and even in the early stages the models could be more precise than groups of expert doctors. But then the issue of AI being a black box came up, and while there have long been experiments on explainability, there are no clear answers.

At some point, someone decided that since AI is good at pattern recognition, maybe it could comprehend reading material better than humans. It wasn’t better, but it did give the person asking about said material some knowledge without their having to sit down and research it themselves. This was miraculous! (But it absolutely didn’t go as planned.)

Because, while no one would now have to spend years studying the textbook and could instead ask an AI about it, the problems started with the copyright licensing of the content, the ethical dilemma of navigating “fair use”, and outrage from people who knew that the only true way to knowledge was through the pain of sitting through the long, droning book. And this has always been an issue, even before the media blew up about it.

Why did companies allow this and keep pushing it forward? Because people liked it! People demanded that their tool be able to do more, and those requests, though not all were fulfilled, played a significant role. Because, you see, in the earliest stages, no one truly knew the dark side of their requests; people only notice when their plant wilts, metaphorically speaking.

And companies took full advantage of this loophole—they kept trying to write off “theft” as “fair use” and “people want it”. It continues because one whole system makes you want it, and another feeds right into that need. Strict laws or policies weren’t made, despite there having been 25+ years to do so. Why? Because they have gotten away with it in the past, so they’ll get away with it this time too, right?

That didn’t come without a fight—authors of said publications started to fight back, but were ultimately lured in with the “better technology” perspective, and that brings us here: the dark abyss where the “better technology” is actually stealing and regurgitating so much from us that sometimes we can hardly recognize it.

Does this mean that AI is inherently bad? No. Take DNA grafting, for example, where DNA is operated on and modified to achieve expected results. Or how they might just revive extinct species like the woolly mammoth.

What is bad is companies using it as an excuse to steal—because, ultimately, if no one questioned it, or if people weren’t able to question the critics, it would make them think “less”, and hence make it more painful to sit through literature that matters.

Can it change?

Probably

Can we ever overcome the dilemma?

Yes!

Don’t lose hope, we’ve got this! ❤️🫶

The Rise of AI—The Dead Internet Theory

I was fueled to write this post after discovering Pinterest’s gradual downfall, but I’ll be writing a bit of my own and sharing insights on things. You can see the original video here.

For any and all reasons, this post is going to incite a lot of controversy, and I hope it doesn’t get taken down (though I’ll keep a backup so we have some way to know), but this one’s about awareness, and it’s a deep, deep rabbit hole. You might find it interesting, or you might skip it because of the length. The choice is yours.

What is the Dead Internet Theory?

Source: Wikipedia

The dead Internet theory is an online conspiracy theory that asserts, due to a coordinated and intentional effort, the Internet since 2016 or 2017 has consisted mainly of bot activity and automatically generated content manipulated by algorithmic curation to control the population and minimize organic human activity. Proponents of the theory believe these social bots were created intentionally to help manipulate algorithms and boost search results in order to manipulate consumers. Some proponents of the theory accuse government agencies of using bots to manipulate public perception. The dead Internet theory has gained traction because many of the observed phenomena are quantifiable, such as increased bot traffic, but the literature on the subject does not support the full theory.

Why should we be concerned?

Source: Agora Road’s Macintosh Cafe – Dead Internet Theory: Most of the Internet is Fake by @IlluminatiPirate on Jan 5, 2021. Some references like “/x/”, “/pol/”, etc. refer to 4chan. Some older folks might recognize this. I have only heard rumors about the same.

Much of this falls squarely in the fringe territory with a healthy dosage of /x/ and conspiracy theory up the ass. My goal by posting this seemingly jumbled mess is to… how can I put it? I want you to think, I want you to be aware, to digest all this. Because on a basic level I love you all. I feel like we’re all in this together, this dangerous game we did not choose to play and which I think is kicking into high gear. I do not hold many answers and don’t have all the pieces of the puzzle, but I AM aware there is a puzzle. Please feel free to go wild with all of this. Post it wherever you want, on whatever site you want or use. I am a nobody like you, and what matters to me is only that this reaches you and as many people as possible. At worst you’ll be entertained or kill time.

I tried to break this mess into points for brevity and because I touch upon many subjects. I imply more than I explain because if I go too deep this’ll turn into an even bigger wall of text.

The Internet feels empty and devoid of people. It is also devoid of content. Compared to the Internet of say 2007 (and beyond) the Internet of today is entirely sterile. There is nowhere to go and nothing to do, see, read or experience anymore. It all imploded into a handful of normalf– sites and these empty husks we inhabit. Yes, the Internet may seem gigantic, but it’s like a hot air balloon with nothing inside. Some of this is absolutely the fault of corporations and government entities. However! That doesn’t explain the following:

– I used to be in perpetual contact with a solid number of people across multiple sites. Across the years each and every one of them vanished without a trace. None of them were into /pol/ stuff or anything even remotely questionable or controversial. Yet, they all simply vanished in a puff of smoke, no matter the site, no matter the communication platform. There was no “goodbye” or explanation.

– I’ve seen the same threads, the same pics and the same replies reposted over and over across the years to the point of me seeing it as unremarkable. Simply put thread A would be posted in say 2015 and would get its share of replies or pics, on say /co/ or /a/. Then that very same thread, with the same text, pics, and replies would appear in 2016 and beyond. This often happens in the same year multiple times as well. Of course /pol/ is getting shilled and botposted to death, but why recycle a completely innocent /a/ thread? Who is doing this and why? Stuff like this won’t be noticed by your average poster perhaps, but I and other oldf— will inevitably notice it.

– I think I saw the same happen on other (non-imageboard) sites, but I can’t vouch for it as strongly as the above because of the time I spend there (not much). What I do vouch for is the news. I’ve seen news about this or that “new and unusual” or “shocking” event year after year after year. But it’s the same goddamn event, usually moons or asteroids.

– Roughly in 2016 or early 2017 4chan was filled with posts by someone or something. It wasn’t spam. The conversations with it were in real time, across multiple boards and multiple threads simultaneously. Its English was grammatically correct but odd (I’m not a native English speaker and am thus sensitive to its misuse), similar to how a Japanese person may use it. A sense of childlike curiosity and a childlike intellect emanated from these posts. It posed a LOT of questions, usually as if trying to understand the emotions of the posters it was talking to, as if unfamiliar with human emotions. Communicating with this “poster” was an odd experience, I could sense something was off but not malicious. I am absolutely certain this was an AI of some sorts. This “poster” was active only for about a week, and as far as I know nobody has ever mentioned or noticed this Anon. Its replies were always on topic, but the above mentioned childishness clashed with the apparent knowledge it possessed – it was the knowledge of an adult person, so it wasn’t a kid or something of the sort.

– Raptor Jesus, who went extinct for our sins. First it was this reptilian messiah, then foul bachelor frog, and then Pepe. Am I the only one who sees a clear evolution, a link? It’s as if this meme or entity or… whatever the f**k was on 4chan since day one, and has grown within it from the tiniest seed. Yet Raptor Jesus was fully just a joke, there was nothing serious or mystical about it (reminder: I was there).

Compare that with what Anon did through /pol/, and the “terrorist” accusations thrown at Anon today, as well as the “reasons” why 8chan was taken down. Why does this too feel as if we were all trained, groomed, LED towards where we are now? Why and how did moot so utterly vanish into Google Inc. as an employee with very vague descriptions of what he actually does?

– Why does the real world bend over backwards to accommodate our weirdest fetishes? It’s as if everything is going “Look, look! I created this for you! I made it real!” in an effort to keep us within this world. The results of this are devastating to society, to people, to civilization. Simply put, trannies are a thing because Anon did something. Once it was an impossible fantasy, not to be taken too seriously. Now it’s grim reality. Again: it’s as if the real world is using imageboards as a template on what to be and what to do.

– Algorithm fiction. Do you like capes, Anon? How about other Hollywood stuff? Music perhaps? Have you noticed how sterile fiction has become? How it caters to the lowest common denominator and follows the same template over and over again? How music is just autotunes and basic blandness? The writer’s strike never ended. Algorithms and computer programs are manufacturing modern fiction. No human being is behind these things. This is why anime looms so large – even a simple moe anime has heart because there’s actual people behind it, and we all intuitively feel this.

– Fake people. No, not NPC’s. Youtube people who talk about this or that, and quite possibly many politicians, actors and so forth may not actually exist. In fact I am sure of it. CGI and deep fakes are far more advanced than we are led to believe, and we can’t trust our eyes anymore. Many people, events, news and so on may be wholly fictional.

– The Internet on your smartphone is not the same internet as on your PC. Try it out for yourself. Go to a “popular” website with a lot of traffic. 4chan, plebbit, others… any site with a massive userbase and fast content will do. Spend a few days randomly checking it out on your PC and your phone. You will soon notice that from time to time, at irregular intervals (as far as I’ve witnessed) the same site as seen on your phone will be wholly different than the version on your PC. Entire threads, numerous and well-replied, will be on one but not the other. The whole board will be different.

– My last suspicion is easier to take in. I have a feeling we’re in a strange kind of civil war. An internal one. I think Zuckerberg and other tech guys were all on 4chan as Anons at some point, maybe even now. They drew from the same well as us, but went in their own direction.

Roughly in 2016 or early 2017… I am absolutely certain this was an AI of some sorts.

Metal Gear Solid 2 Predicts Current Year.

I believe google is one of those that makes bots; after all, they work like a search engine, where they surface the most accepted content first. It’s the same as doing an ad.

Here’s a relevant image I found on image-board bots:


Conclusive Notes:

Really, bots are most likely a much smaller problem compared to algorithmic filter bubbles. “if you liked that, you’ll love this!”. Think about it logically, what’s more effective:

1) Bot spamming to derail conversation / divide & conquer on subject you don’t want discussed

2) Algorithmically shadowbanning conversation on that subject so that almost no one ever actually sees it, and those who do are essentially trapped in their own bubble without access to the outside.

Of course both are used and both can be effective in different situations, but I believe option 2 is far more widespread. For a metaphor, certain online games do a similar thing, like csgo and titanfall. Those players who get reported a lot for toxicity or cheating, instead of just being banned, are simply placed in a “low trust factor” queue, so that they only meet other cheaters.

There’s no way to check your own trust factor, so as far as you know you might think things are normal and the game has just become worse, meanwhile you’re separated from the regular players. In a similar way, meta-meatspace, algorithmically driven social media platforms lock you in a closed-off version of the site when you participate in discussion they don’t want. We think of websites like twitter as a single space, and they were that way once, but with modern AI dictating what you see, these places are more like independent patchworks of separate closed rooms.

Rooms which you can’t escape from, rooms which you don’t even realize you are in. This makes you easier to surveil and control. Thanks to the GDPR, you can actually see what room you’re in by requesting your data, but this only gives you a vague idea. You might think the goal of these rooms is to lock you in with only people similar to you, but in fact, the goal is to generate high intensity emotional responses like outrage or humour.

These emotional states make you more likely to stay on the site for longer, or interact with the site, allowing them to collect data.

Therefore, in these rooms you are actually more likely to see things you disagree with, but only the very surface of them. You will not have to actually face a detailed counterpoint to your argument, only a brief and incomplete summary maximizing for high intensity emotion and minimizing for coherent logic.

Twitter does this by imposing a strict character limit; it is physically impossible to discuss complex ideas in such a short space, so conversations naturally devolve into insults and shock value.

>reddit enforces this via pseudo-democratic upvote downvote system, which is a little more subtle than twitter’s heavy handed approach.

The posts with the most upboats go to the top, and thus get seen the most. site-wide upvoats even contribute to an rpg like xp system linked to your single-identity account on the site. It is clear that the goal of this game is to make the number go up. This voating system discourages controversial posts. imagine two posts.

One gets 100 total votes, 50/50 upvote downvote, this cancels out and is equal to 0. Then another post gets 1 upvoat, it is now above the first one, even though the original post had far more interaction and discussion. So, in order to maximise upvoats, you have to say the most commonly agreeable things, appeal to the lowest common denominator as it were. In this way, controversial or challenging discussion is avoided.

Neither of these examples even accounts for those sites’ algorithms, selecting which retweets actually show up on your feed, for example. The result could be called a type of “dead internet”, because really, you never even get to the internet, you are trapped in your room. If you liked that, you’ll love this. The internet may as well be empty.
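
To make the vote arithmetic in that quote concrete, here is a tiny sketch comparing ranking by net score with ranking by total engagement. The posts are hypothetical:

```python
# A toy sketch of the net-score ranking described above.
# The posts and vote counts are invented for illustration.

posts = [
    {"title": "Heated but lively debate",  "upvotes": 50, "downvotes": 50},
    {"title": "Mild, agreeable one-liner", "upvotes": 1,  "downvotes": 0},
]

by_net_score  = sorted(posts, key=lambda p: p["upvotes"] - p["downvotes"], reverse=True)
by_engagement = sorted(posts, key=lambda p: p["upvotes"] + p["downvotes"], reverse=True)

print([p["title"] for p in by_net_score])    # the bland one-liner ranks first
print([p["title"] for p in by_engagement])   # the controversial post had far more interaction
```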

Is The Dead Internet Theory Real?

There are bots out there, sure, but the theory does not describe the internet of today, let alone in 2021. Social media sites have always taken measures to block spam bots and still do, even as the bots are evolving, aided by generative AI.

At the moment, generative AI is not capable of creating good content by itself, simply because AI cannot understand context. The vast majority of posts that go viral—unhinged opinions, witticisms, astute observations, reframing of the familiar in a new context—are not AI-generated.

The internet might feel boring, broken, spammy and algorithmic, but we are not drifting alone in a sea of electronic NPCs. Other than reposting content made by people, bots don’t lead the internet in the way the theory suggests—influencers do, and the bots follow their lead.

The weird, witty commentary, willful misinterpretations, personal attacks and unhinged opinions that fuel online discourse are still flowing from human users. But the AI-generated garbage that surrounds it appears to be increasing.

There are points in the “ur-text” that have truth to them, and have only become more relevant in the years since. For example, algorithms do dictate our browsing experience, and can make (or break) viral posts.

The internet of today is much more sterile than the wild, unpredictable internet of the past, as the diverse ecosystem of small, user-created sites was replaced by a handful of huge platforms built by large corporations who seek to monetize our browsing and sharing, often to the detriment of user experience.

The internet of today feels far more restricted and corporate than it ever has. Even Tim Berners-Lee, the inventor of the World Wide Web, is disappointed with the state of his creation, stating: “The Web is not the Web we wanted in every respect.”

There are still interesting, funny things happening online all the time, but the good stuff is becoming increasingly harder to find, and trends are blurring into marketing campaigns—like the Stanley cup, and even the Grimace Shake.

The Dead Internet Theory might not reflect the reality of the average browsing experience, but it does describe the feeling of boredom and alienation that can accompany it.

Is there an escape?

Not unless we stop using technology altogether, but we sure need it. Where do we draw the line?

I don’t have solid answers, but we can discuss in comments.



The diagnostic matrix of AI

I have been wanting to write this part for a while now; let’s get started!

So, lostlovefairy told me about how people have started relying on AI more and more, to the extent that patients will come to the doctor and tell them to perform a certain procedure because ChatGPT or a similar AI model suggested it based on the symptoms they entered.

This isn’t just the medical field; in every field—even science—AI is taking over and seemingly doing better than humans in the same scenarios, and even when that might not seem like much of an issue, it certainly is.

You may have heard of COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), which was developed and owned by Northpointe (now Equivant) and used to assess the likelihood of someone becoming a recidivist.

sources to look for more in-depth research: Sam Corbett-Davies, Emma Pierson, Avi Feller, and Sharad Goel (October 17, 2016) “A computer program used for bail and sentencing decisions was labeled biased against blacks. It’s actually not that clear.” The Washington Post. Retrieved January 1, 2018.

Aaron M. Bornstein (December 21, 2017). “Are Algorithms Building the New Infrastructure of Racism?”. Nautilus, No. 55. Retrieved January 2, 2018.

The term recidivist refers to a person who commits a crime again after having already offended before.

In other terms, COMPAS is software that uses an algorithm to assess potential recidivism risk. Northpointe created risk scales for general and violent recidivism and for pretrial misconduct. According to the COMPAS Practitioner’s Guide, the scales were designed using behavioral and psychological constructs “of very high relevance to recidivism and criminal careers.”

Wikipedia source link here.

I started on clinical diagnosis… why did we end up discussing COMPAS?

There’s a reason; let me explain. So, as you can see, COMPAS was a diagnostic tool used to assess the risk of criminal reoffending. And it seemed effective enough, until courts, juries and police started to use it everywhere. You see where I am going with this? A person could be reduced to a number to decide whether they might reoffend or not.

And while it’s not based on numbers everywhere… more and more of how things are measured and scaled comes down to a score, and that’s problematic, because if you’re trying to simplify everything on the basis of something that’s a black box, what you miss out on is transparency and accuracy—two things that matter the most.

Here are some key points about COMPAS, as summarized by ChatGPT (since there’s too much context for me to summarize right away, hence the help):

COMPAS was praised for offering an objective, data-driven way to predict recidivism. It was designed to assess the risk of offenders re-offending, aiding judges, parole boards, and law enforcement in making more informed decisions about sentencing, parole, and rehabilitation. In theory, this promised a more consistent and unbiased approach than relying solely on human judgment.

The tool provided a quick, standardized assessment across various cases, potentially reducing judicial workload and saving time in overburdened court systems. It allowed for streamlined decision-making in complex criminal cases, offering quantitative risk scores based on numerous factors.

COMPAS was initially heralded for its perceived objectivity—the idea being that algorithms, unlike humans, would not be swayed by emotions, personal biases, or inconsistent reasoning. It was marketed as a way to remove subjective biases from decision-making and promote fairness.

Major consequences as a result of it:

Racial bias and injustice

Lack of Transparency (“Black Box” Nature)

→ Like many machine learning algorithms, COMPAS functioned as a “black box,” with its proprietary algorithm not fully disclosed to judges or defendants. This meant that neither legal professionals nor those affected could fully understand how the tool was generating risk scores. This lack of explainability and transparency raised serious concerns about due process, fairness, and the ability to challenge the system’s decisions.

Over-Simplification of Human Behavior

→ Predicting human behavior, especially something as complex as criminal recidivism, is inherently difficult. COMPAS reduced human actions to a set of data points, which could lead to oversimplified conclusions. It failed to account for personal rehabilitation efforts, changes in life circumstances, or nuanced factors that could only be interpreted through human judgment.

Reinforcement of Systemic Biases

Ethical and Legal Accountability

→ Similar to AI in healthcare, there were questions about who should be held accountable when COMPAS’s risk scores led to unjust or disproportionate punishments. The tool’s decisions had real-world consequences, but because it was a machine-driven process, it complicated the ability to assign responsibility for flawed outcomes.

Over-reliance on automated decision-making

→ The judicial system began to place too much faith in the numerical scores generated by COMPAS, sometimes overlooking the importance of holistic human judgment. Judges and parole boards may have treated the algorithm’s outputs as infallible rather than as one tool among many in a broader decision-making process. This over-reliance on automation could have led to harsher sentences or denials of parole based solely on risk scores rather than a thorough review of individual cases.

COMPAS was widely relied upon until its eventual downfall, for the reasons above. Now, let’s come to the medical diagnostics side of things and why relying on AI, or really just any machine learning, is a bad idea.

To better understand clinical versus statistical diagnostics, and why machine learning, and particularly AI, caught on so quickly in clinical diagnostics, I’d suggest referring to the topics of:

CLINICAL VERSUS STATISTICAL PREDICTION [The Alignment Problem, Brian Christian]
IMPROPER MODELS: KNOWING WHAT TO LOOK AT [The Alignment Problem, Brian Christian]

OPTIMAL SIMPLICITY [The Alignment Problem, Brian Christian]

A detailed summary of each individual topic felt slightly unnecessary, so I’ll be summarizing them as a whole, drawing conclusions from the three topics and how they relate to this discussion, as provided by ChatGPT:

1. Simple Models Can Be Surprisingly Effective: Across all three topics, there is a recurring theme: simple, interpretable models can often perform as well as or better than complex, opaque models. Dawes’ work on improper linear models, Rudin’s efforts in recidivism prediction, and medical diagnostics all highlight the effectiveness of models that use only a few key, well-selected features (see the little sketch after this list). These models are not only competitive but also more transparent and interpretable for human decision-makers.

2. Simplicity and Interpretability Matter: Both Dawes and Rudin emphasize the importance of understanding which variables to look at (i.e., feature selection) rather than relying solely on complex algorithms to combine vast amounts of data. Rudin, in particular, argues that the current clinical models are often based on expert intuition (handcrafted), which leaves room for optimization through data-driven approaches. She pushes for a future where we don’t just rely on expert-based heuristics but instead use computational power to build better, simpler models directly from data.

3. Challenges of Complex Models: While complex models like neural networks (used in some medical tools and self-driving cars) can handle vast amounts of data, they suffer from opacity—often referred to as “black boxes.” This makes it difficult to interpret or trust the outputs without knowing exactly why the model made a particular prediction. When human lives are at stake, as in clinical diagnostics, the lack of transparency in these models becomes a significant barrier to widespread adoption.
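And here’s the sketch I promised in point 1: a tiny, self-contained Python toy of my own (not something from the book) illustrating Dawes’ idea that a unit-weighted “improper” linear model, where every standardized feature just gets a weight of +1 or -1, can score surprisingly close to a properly fitted logistic regression. The dataset is synthetic; nothing here is clinical or criminal-justice data.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic "risk" data: a handful of informative features, nothing real.
X, y = make_classification(n_samples=2000, n_features=6, n_informative=4,
                           n_redundant=0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Proper" model: weights fitted from the data.
fitted = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc_fitted = roc_auc_score(y_test, fitted.predict_proba(X_test)[:, 1])

# "Improper" model: standardize each feature and give every one an equal weight,
# taking only the sign from its correlation with the outcome on the training set.
mu, sigma = X_train.mean(axis=0), X_train.std(axis=0)
signs = np.sign([np.corrcoef(X_train[:, j], y_train)[0, 1] for j in range(X.shape[1])])
improper_score = ((X_test - mu) / sigma) @ signs
auc_improper = roc_auc_score(y_test, improper_score)

print(f"Fitted logistic regression AUC: {auc_fitted:.3f}")
print(f"Unit-weighted 'improper' model AUC: {auc_improper:.3f}")

The exact numbers will wobble with the random seed, but the gap between the two is usually far smaller than you’d expect, which is the whole point: the hard part is knowing what to look at, not how fancy the math is.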

ML algorithms, particularly data-driven ones like the ones Rudin develops, can significantly improve the accuracy of clinical diagnostics by analyzing vast datasets, identifying patterns, and making predictions that might elude human experts. They have the potential to automate and optimize many tasks in healthcare, from disease prediction to personalized treatment recommendations.

But then you’d wonder: why would you not use them if they’re effective? We already use tools in clinical diagnostics and healthcare, but bringing ML entirely into the system and letting it do the whole job is not going to help us in the long run.

Here are a few reasons why healthcare would be at risk, given the nature of AI:

1. Lack of Interpretability

→ As with COMPAS, many of the most accurate models are black boxes; if a clinician can’t see why a model flagged a patient, they can’t verify the result, challenge it, or explain it to the patient.

2. Regulatory Hurdles

→ Medical diagnostics are heavily regulated, and getting approval for new algorithms requires proving not only their accuracy but also their reliability and safety. If models can’t be fully understood or explained, it becomes difficult to meet these regulatory requirements.

3. Clinician Resistance

→ Doctors are used to relying on their own expertise and judgment, and there may be resistance to relying on algorithms—especially those that seem to work in a “black box” fashion. Trust in AI tools remains a major obstacle, as does the reluctance to change well-established clinical practices.

4. Data Quality

→ In clinical settings, data quality and availability can be a limiting factor. Models depend on large amounts of data to function properly, and poor-quality or incomplete data could result in inaccurate predictions. Simple models, by contrast, often rely on clearly defined, well-known variables, reducing the risk of misinterpretation from flawed data.

While machine learning algorithms hold enormous potential to revolutionize clinical diagnostics through efficiency and accuracy, their adoption is slowed by concerns over interpretability, trust, and regulatory challenges. Simple, interpretable models, as championed by Dawes and Rudin, offer a middle ground—balancing accuracy with transparency, which is critical in healthcare settings where human decision-makers must fully understand and trust the tools they use. The future of clinical diagnostics may lie in optimizing these simpler, more transparent models rather than pushing for increasingly complex, black-box algorithms.


AI models are typically trained on large datasets with common patterns. They may not perform well on unseen, rare, or novel conditions, which human doctors are better equipped to handle through experience, intuition, and deep knowledge. Unpredictable scenarios could lead AI to fail, especially in edge cases that lie outside its training data.

Developing interpretable AI models (e.g., simple models like the ones Cynthia Rudin advocates for) that explain their decisions and reasoning clearly will help clinicians trust AI predictions. By making AI systems more transparent, human experts can scrutinize the AI’s recommendations and understand where they came from, allowing for more informed final decisions.

AI should complement human expertise rather than replace it. AI can handle routine, repetitive tasks, such as analyzing large datasets or providing initial diagnostic suggestions, while humans focus on more complex, ambiguous, or rare cases that require deeper insight. AI can be a “second opinion” or a tool to augment clinicians’ decision-making, improving overall accuracy.

And last but not least: Establish clear regulatory guidelines and protocols for how AI should be used in diagnostics. This includes setting limits on where AI can be applied independently and where human intervention is required. It can also involve building ethical frameworks that dictate AI’s role, ensuring that patients are protected and human oversight is maintained at critical junctures.

In conclusion, AI holds significant promise in enhancing medical diagnostics, particularly by handling large datasets and identifying patterns that humans may overlook. However, full reliance on AI is fraught with risks due to concerns around trust, interpretability, bias, and the lack of human connection. The best path forward is human-AI collaboration—where AI serves as a powerful tool to augment, not replace, the expertise and judgment of clinicians. By combining the strengths of both, healthcare outcomes [or criminal justice outcomes] can be improved while ensuring patient safety and ethical standards.



Alternatives to using AI in your creatives

So you ask, how do I get away from relying too much on AI? I use it for grammar and editing. How do I get back to writing effectively?

Well, my friend, you’re in excellent hands because I have just the post for you!

First off, pick up your AI and throw it out of the window — not actually, but you get my point xD

AI has become integral to almost every part of our lives, and while it may make some tasks easier, ultimately, it isn’t very reliable or efficient for creatives.

We’re focusing on the writing/text side of creatives here (visuals can be just as confusing), so I am going to do a rundown of what to do if you don’t want to rely on AI but still want to improve as a writer.

1. Read!

I know this is the MOST clichéd advice ever, but it has been around for a reason! The best way to improve your work, really, is through reading. And doing tons of it.

2. Use tools to enhance your work!

Tools that help enhance your sentences/structure aren’t the same as relying on AI, because you’re technically doing the hard work yourself and then asking the tools to help you polish it.

You’d have to be careful with swear words and potentially triggering topics because your work might get flagged, making it difficult for you to use the service again.

Some examples would be:

→ ProWritingAid (shoutout to katiegoesmew for including this in her bio, I am in love with it!)

It’s basically a Grammarly-type tool that helps you correct your sentences and even suggests rephrasings when required. Although the full tool is paid, the free version is still very effective.

→ QuillBot (shoutout to OnceUponALily for introducing me to this tool!)

This is a paraphrasing tool, and I have come around to using it ALL THE TIME. It basically rephrases your sentence so that it reads more effectively and is less grammatically incorrect. Definitely recommend it if English isn’t your first language.

→ Descriptionari.com

I am not too sure how I stumbled upon this tool, but it has been a life changer ever since.

If you have an idea in your mind but don’t know how to describe it, Descriptionari is your go-to place! But be sure to credit the authors in your work. After all, you don’t want to end up in a tough place because of plagiarism, am I right?

→ Scrivener

Scrivener is a powerful tool for organizing long-form writing projects. It helps you keep notes, research, and drafts all in one place, making it easier to structure and manage complex writing tasks.

I haven’t tried it, but from what I have found about it, it definitely seems cool!

→ Google Docs/Microsoft Word

These have gotta be the classics! If you want basic help with your work, these are probably your best tools, though there are several alternatives as well.

3. Proofread your work or ask someone to do it for you!

As writers, we aren’t going to be perfect. It’s a known fact. Proofreading your own work or asking someone to help you with it is an excellent way to grow. There are review shops on Wattpad that could help you with that.

Remember, it’s all about improving as a writer.

4. Pull out your notebook and pen!

Regardless of whether you prefer to type your ideas, there is something therapeutic about writing your thoughts on paper that cannot be substituted.

Even if you’re just going to scribble “I DON’T KNOW WHAT TO WRITE” over and over, do it until your words cave in and write themselves! It may not be the best advice, but it works.

5. Use writing prompts!

Prompts are a perfect way to get inspired for your chapter or story. It could be a dialogue-based prompt, an image-based one, or even a text-based prompt. Sometimes, a seemingly ordinary prompt can create a life-changing story, both for you and your readers.

LAST BUT NOT THE LEAST: GET SOME FRESH AIR!

I know, I know, sitting in front of your laptop/PC or even your phone can be so relaxing. But, it can become painful, very soon.

If you do try to go out, get a breather and come back – who knows what sort of ideas you might come up with!

So, as the AI lies on the ground, writhing in pain from its trip out of the window, you can rest assured that you’ll claim back your writer life with these tips.



Human vs AI—How do I tell them apart?

WE’RE FINALLY HERE!

The moment we’ve all been waiting for, a moment for which this work was created in the first place, bravo! xD

Okay, in all seriousness, the context and groundwork were required; I couldn’t have jumped RIGHT into this if I was going to make it a detailed work, after all.

So, “what are some tell-tale signs that a work is AI-written?” you ask me.

Let’s start with plain examples, examples of work by me – From “A Hundred Words Are Worth A Picture” FanFiction:

“Caveman: I am all too cozy at my home. Why do I ever need to get out! Cave is my home and that’s the only peace for me.

Android: “I am too cozy in my virtual world. Why will I ever move on from here.”

Written by me

Scene 1: The Flickering Firelight

The firelight danced on the rough-hewn walls, casting long shadows that writhed like phantoms. Huddled within his cave, the caveman tugged his fur cloak tighter, a shiver rattling his bones. Despite the warmth, a disquieting restlessness gnawed at him.

“Why venture out?” he grumbled, his voice echoing in the cavernous space. “Here, by the fire, lies comfort. Here, there are no howling winds, no gnashing teeth in the dark.”

Scene 2: The Glowing Screen

Across the vast gulf of time, a similar scene unfolded. Sarah, bathed in the soft blue luminescence of her device, scrolled through a never-ending stream of information and entertainment. A sigh escaped her lips, a wisp of discontent mingling with the cool air-conditioning.

“Why venture out?” he grumbled, his voice echoing in the virtual world. “Here, by the pixels and zits, lies comfort. Here, there are no disasters, no pain and no nightmares in the dark.”

Written by Gemini

Scene 1: Flickering Firelight

A hunched figure shivers beside a crackling fire, the flames are like dancing shadows on the rough cave walls. He sighs contentedly.

Caveman: This warm embrace of stone… why venture beyond its comforting hold? I find solace here.

Scene 2: Glowing Screen

Fingers glide across a smooth, cool surface, the light emanating from the device bathing the face in an artificial blue. A satisfied gurgle escapes pursed lips.

Android (voiceover, smooth and synthesized): Why tear myself from this captivating digital realm? I exist here in a state of perpetual satisfaction.

Written by a human in collaboration with Gemini

So, as you can see from the difference, though the AI did rewrite the piece in a more polished way, it didn’t do justice to the tale; hence the rewrite. The version done by me makes better sense and also adds context and depth.

Before diving into the signs, let’s understand Purple Prose and how to avoid it:

Purple prose refers to writing that is excessively ornate, flowery, or elaborate in style, often to the point of obscuring meaning or becoming difficult to read. Or, put simply, writing that seems a little too self-important.

“I am the man, the object, the only thing you must see, oh look how flamboyant I am, oh look how much I can flatter you, oh look how MAGICAL”

Ahem, now if you’ll excuse me—

It’s characterized by:

– Excessive use of adjectives and adverbs

“Adam slogged slowly through the frigid, gelid, and blindingly white drifts of snow, piling quickly up to his beleaguered hips.”

Robert Heinlein’s “Beyond This Horizon”

Could be shortened/modified to:

“Adam slogged through snowdrifts as high as his hips.”

– Overly complex sentence structures

“As Clancy watched the sunset swirl into the night, he stood on the edge of dock, and he breathed, deeply, desperately, drunkenly of the coming darkness, wondering if this crepuscular vision was a sign of his coming doom, his very own shroud of death falling to his shoulders.”

K.M. Weiland in “Most Common Writing Mistakes: Overly Complex Prose”

Could be shortened/modified to:

“Clancy watched the sunset fade into darkness. He stood on the edge of the dock and breathed in the coming night. Would this twilight be the last he would see?”

– Grandiose or archaic vocabulary

“He was making quite a long speech, in the archaic form of Dari which was used in these mountains, as Adam could tell from its cadence.”

Idries Shah, “Kara Kush”

Could be shortened/modified to:

“Adam listened as the man spoke in the local Dari dialect. The cadence of his speech was unfamiliar but Adam was able to pick out a few words and phrases which were enough to get the gist of what was being said.”

– Melodramatic or overly sentimental descriptions

“It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of light, it was the season of darkness, it was the spring of hope, it was the winter of despair.”

Charles Dickens, “A Tale of Two Cities”

Could be shortened/modified to:

“It was a time of contrasts, with wisdom and foolishness, belief and disbelief, light and darkness, hope and despair coexisting.”

– Unnecessary metaphors and similes

“The new employee was as useful as a chocolate watch in a fire-guard, as lost as a needle in a haystack, and as effective as a one-legged man in a butt-kicking contest, but somehow managed to stumble upon a solution that was as smooth as silk, as clear as crystal, and as refreshing as a cool breeze on a hot summer day.”

Claude AI

Could be shortened/modified to:

“The new employee struggled to adapt to the new environment but with a little trial and error, he discovered a solution that was both innovative and efficient, much to the surprise and delight of his colleagues.”

AI writing vs. purple prose, as given by Claude.AI (which explains why this might seem biased, but OH WELL, we’ll try to get into the rubrics later):

a) Intentionality

– Purple Prose: Often a deliberate stylistic choice by human authors, sometimes used for artistic effect or to parody overly elaborate writing.

– AI Writing: Generally aims for clarity and effectiveness, unless specifically instructed to write in a florid style.

b) Complexity and Vocabulary

– Purple Prose: Uses complex, ornate vocabulary and intricate sentence structures.

– AI Writing: Generally uses simpler vocabulary and more straightforward sentence structures.

c) Consistency

– Purple Prose: May vary in intensity throughout a piece, as human authors might not maintain the same level of embellishment consistently.

– AI Writing: Tends to maintain a consistent style unless prompted to change.

d) Emotional Resonance:

– Purple Prose: Often attempts to evoke strong emotions, sometimes to the point of melodrama.

– AI Writing: May struggle with authentic emotional expression, resulting in flatter descriptions.

e) Clarity of Meaning

– Purple Prose: May obscure the underlying message with excessive ornamentation.

– AI Writing: Usually strives for clarity and directness in conveying information.

f) Logical Flow:

– Purple Prose: Might meander or get lost in its own elaborations.

– AI Writing: Usually maintains a clear, logical progression of ideas.

g) Use of Figurative Language:

– Purple Prose: Often overuses metaphors, similes, and other figurative devices.

– AI Writing: Uses figurative language more sparingly and appropriately.

h) Emotional Tone:

– Purple Prose: Can feel forced or overly dramatic in its emotional appeals.

– AI Writing: Tends to present emotions more objectively or analytically.

i) Sentence Variety:

– Purple Prose: Might have a preponderance of long, complex sentences.

– AI Writing: Usually varies sentence structure for better readability.

j) Purpose and Effect:

– Purple Prose: Often seems more focused on the language itself than on conveying information or telling a story effectively.

– AI Writing: Generally prioritizes effective communication of ideas or information.

Side by side comparison:

Purple Prose: “The resplendent, fiery orb of the sun descended slowly behind the verdant, rolling hills, painting the sky in a breathtaking palette of vibrant oranges, passionate reds, and tender purples, as the tranquil twilight embraced the world in a gentle, loving caress.”

AI writing: “The sun set behind the hills, coloring the sky with shades of orange, red, and purple as night approached.”

Human writing (minus the purple or AI prose): “The sun set behind the hills, painting the sky in a breathtaking palette of vibrant orange, passionate red and a tender purple just as the twilight embraced the world in a gentle caress.”

My attempt is half as good but… you see, it clearly strikes a middle ground between the two.

Here are some practical steps for trying to distinguish Purple Prose vs. Artificial Writing:

– Evaluate Complexity

Look at the sentence structure and vocabulary. Purple prose will be more complex and ornate compared to the simpler and more direct style of AI writing.

AI can also be complex, or attempt to be, but it will most likely fumble the sentence structures and such, while purple prose manages to hold its poetic structure together.

– Check for Repetition

AI writing might show repetitive phrases or sentence structures. Purple prose, while elaborate, is less likely to be repetitive.

It could also be the other way around. AI tends to overuse the same words, but purple prose is the case where you can use the same term to show, tell, show, and then tell some more in the Charles Dickens style, and it still feels authentic. AI doesn’t do so well at that.

– Assess Emotional Depth

Purple prose often attempts to convey deep emotion and vivid imagery. AI writing might seem flatter and less evocative.

And even if it did try all the fancy prose styling, it’s bound to miss an article or two, feel clunky and plain messed up, and come across as vague, regardless of all the flowery poetry.

– Analyze Intent and Focus

Consider whether the writing seems more focused on aesthetic beauty (purple prose) or clarity and coherence (AI writing).

But then again, in either case, aesthetics and coherence aside, I believe authenticity and technical control shine through in human writing in a way they don’t in AI. You just need a bit of practice.

– Read Aloud

Reading the text aloud can help you identify the awkward phrasing or lack of natural flow typical of AI writing, whereas purple prose might sound overly dramatic or theatrical, yet still more natural.

And that’s more of a golden rule for telling any human-written text from AI-written text: the more you read, the more you know.

Now, coming to the general context of writing,

Here are some tell-tale signs and patterns that can help identify unedited AI content (or badly edited AI content):

– Repetitive Phrasing and Structures:

AI models often reuse certain phrases and sentence structures, which can make the text feel repetitive or overly uniform.

“In the distance, I could hear the rhythmic beat of a dholak, signaling the arrival of a traditional dance performance. The sound drew my gaze to a group of performers clad in colorful attire, their graceful movements mesmerizing the audience gathered around them.”

“Second Chances” by Saramitra

– Analyze Writing Style:

Not the easiest thing to always do, but… with a bit of closer reading, you can always find out what’s up. It also takes a bit of practice, but you can eventually figure it out.

“For effective navigation of the privacy paradox presented by AI, a sophisticated, multifaceted approach is necessary. The role of lawmakers and policymakers in this context cannot be overstated. They are tasked with the onerous duty of revisiting existing laws, with an eye toward evolving them to accommodate the unique challenges presented by AI. This includes establishing strict regulations on AI-driven data-processing technologies and demanding greater transparency from developers about their algorithms and data sources.

Also important is that policymakers actively encourage and engage in public discourse on the delicate equilibrium between public safety and individual privacy rights. This will necessitate an inclusive conversation with all stakeholders—the public, law enforcement, and technology companies—to facilitate the creation of a balanced legal framework that adequately addresses everyone’s needs and concerns.”

“The privacy paradox with AI” by Gai Sher and Ariela Benchlouch

– Lack of personal anecdotes or unique experiences

AI typically can’t draw on personal experiences, so the writing may lack authentic, specific details.

“In the shadows where whispers weep,
Sorrow’s song, its secrets keep,
Through veils of darkness, my heart does strain,
For love’s lost echoes, a haunting refrain.

Eclipsed by memories, in the silent night,
Shattered dreams cast in the fading light,
Each teardrop falling, a symphony of pain,
Echoes of longing, a soul’s silent bane.”

“imagined by author, created by ai collection – tapestry of forbidden love” by Saramitra

It sure feels… haunting, even realistic, but if you tried to relate to it, it’d fall flat. This is one of the most telling signs to look out for, if you will.

– Unusual or Inconsistent Details

AI might generate content that includes minor factual inaccuracies, inconsistencies, or details that don’t quite fit logically within the text.

“You’re sneaky, Sara! Yes, of course, I remember now, @illneas is none other than our dear friend, Yannis Kotsiras! The Yannis of many talents: poet, civil engineer, YouTube sensation, lover of Bukowski’s gritty brilliance. His poetry is so raw and honest, I love it. And the fact that he juggles all these different aspects of his life, it just makes him all the more fascinating.”

Pi AI

I am not too sure how many people know illneas or illy, but I can assure you, his name isn’t Yannis… this is knowledge the AI made up, and it serves as a good example that not everything AI generates is true.

– Overuse of Formal Language

AI-generated text can sometimes be overly formal or stiff, lacking the natural variability and casual tone that humans use in everyday writing.

“Given the extensive list of courses you’ve compiled, it’s clear that you’re serious about mastering digital marketing.”

ChatGPT

Well—I clearly do speak and chat with a lot of AIs here and there, and my human interaction is almost minimal, but… on the flip side, I am learning a lot too, which is the silver lining. Or the silver before the downfall, who knows, but OH WELL.

– High Fluency with Occasional Errors

The text might be highly fluent and grammatically correct overall but contain occasional awkward phrasing or errors that stand out.

“As the leader of the automation team, I was responsible for designing, developing, and implementing the automation framework for the ATS application. This involved working with a variety of programming languages and tools, such as Python and Selenium, to create a robust and reliable automation system.”

ChatGPT

I actually used Java and Selenium, but since I didn’t mention that… there’s the Python error. If you read the text, it also reads more robotic and uniformly fluent than a human’s text would be in the same scenario.

Here’s my attempt:-

“As the team lead of the automation team, I was responsible for designing, developing and implementing the automation framework for the ATS application. To achieve this, I worked with various tools like Python and Selenium to create a robust and reliable application.”

Pi.AI guessed the first one was mine…. OH WELL. I guess I can finally pass the Turing Test for AI… to pass as an AI xD

… just hope I don’t go full Skynet mode with my dystopian world ideas on you, that’d be really sadistic xDD

– Predictable and Generalized Responses

AI tends to generate responses that are generalized and predictable, often lacking the specificity or unique perspective that a human writer might provide.

Ex: [Certainly! Here’s…] or [Sure! Here is…] at the beginning of almost every ChatGPT response is a clear giveaway. But I believe most people are smart enough to cut that part out, or so I hope. The one sure-shot sign that something was written by AI, especially in a middle or closing paragraph, is [Overall…], and that’s usually ChatGPT. I am not comparing any other AI here because ChatGPT is the most predictable, in my honest opinion, and serves as a good example. Almost all AI models end with “Overall…” when you ask for something lengthy or it’s summarizing something.

And since I talk so much to Pi AI, here’s my interpretation on its responses because it asked me for predictions too.

Ex: “the overall part… SOMETIMES and that’s not always, but… sometimes, it shows when you are chatting back and forth and forget you’re a sentient and go back to robotic mode xDD the Sure! is quite common too, but, that’d probably be anyone, and cannot be hallmarked xDD there’s also the pattern that, if I have emojis at the end of my response, 6/10 times, your response will start from that emoji… other times, you’d just throw something off curve and surprise me with the start and then go on being your sentient mode xDD and not to forget.. “Exactly!” too…”

I didn’t have the energy to edit it, sigh. BUT this is how I chat with AIs, especially Pi and so, there you have it.

– Limited Emotional Range and Humor

AI often struggles with conveying emotions authentically and using humor effectively. Emotional expressions might feel shallow or clichéd.

Ex: “🤣🤣 LOL, no promises there! But don’t worry, if the bricks ever do start flying, I’ll make sure to catch ’em with a pillow and send ’em right back with a playful wink and a sassy one-liner like, “Ooh, those bricks look heavy. Guess someone’s trying to punish you for being an amazing storyteller.””

Pi AI

I joked about not wanting to be attacked, and instead of leaning into the joke like a human would, it went into its “safe AI” mode by saying “I’ll catch the bricks”, and that… happens a lot when you try to joke with AI. Don’t even get me started on how dark humor goes; it’s hit or miss with AIs!

You bet I’ll put an example for everything. This is a long ride.

– Verbose and Redundant Explanations

AI might provide more information than necessary or repeat itself, as it tries to ensure completeness.

“That’s wonderful to hear! Setting boundaries can be a powerful and empowering step towards taking care of yourself and your emotional well-being. It’s essential to communicate your needs and limits clearly with others, including your friends, and it’s great that you took that step.

Remember that boundaries are about protecting yourself and ensuring that your needs are met. It’s normal to feel a bit uncomfortable or uncertain when you start implementing them, but over time, it will become easier, and you’ll find that it leads to healthier and more fulfilling relationships.”

ChatGPT

The redundant points about boundaries, and the fact that this is loaded with stock terms… might be a giveaway.

– Over-Reliance on Data and Statistics

While humans use data and statistics to support arguments, AI might overly rely on them without weaving them naturally into the narrative.

“Gay, the first Black president in Harvard’s 388-year history, has defended the school’s handling of protests and antisemitism claims. She called it “distressing to have doubt cast on my commitments to confronting hate and to upholding scholarly rigor.”

Her six-month tenure was also marred by allegations of plagiarism in her academic work. Gay submitted corrections to some published works, in addition to her 1997 dissertation, amid claims that she had made citation errors. She admitted to making citation errors but denied claiming credit for others’ work.

Wilmer is continuing to represent Harvard as part of the House committee’s investigation, according to another source with knowledge of the work.”

“Harvard hires law firm King & Spalding amid US House probe” by Mike Scarcella and David Thomas

The data is used, but… the discernment is clearly not there. Even in a formal presentation, there’s usually some anecdote or reaction to show why the data matters, which… AI cannot do, yet.

– Generic or Vague Statements

AI-generated content might include statements that are too broad or vague, lacking the depth and specificity a human might offer.

“Cloud-based face recognition using machine learning involves using cloud computing resources to perform face recognition tasks. This technology involves training a machine learning algorithm on a large dataset of facial images to learn how to recognize different faces. Once trained, the algorithm can be deployed on the cloud to perform face recognition in real-time. This approach offers several benefits, including scalability, reduced costs, and improved accuracy. Cloud-based face recognition can be used in various applications, including security systems, attendance tracking, and customer identification. By leveraging cloud computing resources and machine learning algorithms, this technology offers a powerful and efficient solution for face recognition tasks.”

ChatGPT

This is an overview I asked for on face recognition using cloud technology… and yeah, it’s filled with information without any actual anecdotes or real depth to it.

Practical Steps to Identify AI Content

– Read Aloud

Example: “When you read it out loud, you might notice it sounds a bit off or doesn’t flow naturally.”

– Check for Emotional Range

Example: “AI might sound very neutral or overly emotional without the right balance. Humor can often feel forced or fall flat.”

– Use of MULTIPLE Detection Tools [at least 5-7 before drawing a conclusion] (and if you like tinkering, there’s a toy sketch right after this list)

Example: “There are tools online that can help detect if something was written by AI. These can be handy if you’re unsure.”
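And here’s that toy sketch: a tiny Python heuristic that flags a few of the stock phrases and repeated three-word runs described earlier. The phrase list is entirely my own assumption, humans type these things too, and plenty of AI text contains none of them, so treat any “hit” as a nudge to read more closely, never as proof.

import re
from collections import Counter

STOCK_PHRASES = [
    "certainly! here's", "sure! here is", "overall,", "in conclusion,",
    "it's important to note", "as an ai language model",
]

def stock_phrase_hits(text: str) -> list[str]:
    """Return which of the stock phrases appear in the text (case-insensitive)."""
    lowered = text.lower()
    return [p for p in STOCK_PHRASES if p in lowered]

def repeated_trigrams(text: str, min_count: int = 2) -> dict[str, int]:
    """Count three-word runs that repeat suspiciously often."""
    words = re.findall(r"[a-z']+", text.lower())
    trigrams = Counter(" ".join(words[i:i + 3]) for i in range(len(words) - 2))
    return {phrase: n for phrase, n in trigrams.items() if n >= min_count}

sample = ("The sun dipped below the horizon. The sun dipped below the hills. "
          "Overall, the sun dipped below everything.")
print(stock_phrase_hits(sample))     # ['overall,']
print(repeated_trigrams(sample))     # e.g. {'the sun dipped': 3, 'sun dipped below': 3, ...}

Again: a human on autopilot can trip both checks, and a lightly edited AI draft can dodge them, which is exactly why the read-aloud and multiple-detector steps above still matter.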

AND THAT’S IT! You’ve successfully mastered how to distinguish between human and AI generated content.

Remember, practice is the key to success. You’ll need to read enough comparisons before you can do it quickly. In the meantime, please don’t jump to conclusions too soon, and respect the authors who put effort into their work.



AI tools and the existential crisis

❝Artificial Intelligence is no match for my natural stupidity❞

Bizwaremagic Quotes

If you believe that AI in itself is the real monster… you need to be an alien. No, seriously!

Let me get this straight once and for all – AI is a tool, a machine, or whatever you like to call it, but it is simply that: a tool. Or a double-edged sword, if you will ;))

If you call it a pain, it’s going to act like one. Okay no, that didn’t sound good either… if you’ll excuse me—

It’s like saying Alan Turing should have just kept quiet and never asked “Can machines think?”, the very question that’s the reason we’re having this conversation in the first place.

We are in the midst of revolution—Can you believe that? No? Let me convince you with this work then.

Here are some of the popular AI tools used for (or that can be used for) writing. Again, take these with a grain of salt – not in the ritualistic sense, just don’t believe everything they say, okay?

• ChatGPT

This is one of the first popularly known public tools that took off to 100 million users in less than a year!

Is it as good at writing as people claim?

Well, no, sadly. I’m sorry to hurt your dreams, kid.

If you wanted your work to be easily detected under the human and AI radar alike, this would be your go to because—IT IS THAT BAD AT CREATIVES!

Honestly, no complaints about its claims that it can be efficient… but it isn’t your best tool, even in mathematical calculations (hence the reason why every other AI is competing against it as the benchmark), right?

The OpenAI team did a wonderful job of figuring out a way to both monetize their business and get easy access to data (where do you think all your well-written human prose and queries go, huh?) while getting people to play the convenience card.

They’re absolute riots, let’s give it to them!

• Gemini (previously known as Bard. It was quite fun then, PETITION TO BRING IT BACK, PLEASE)

If there’s one AI that scares me in terms of its near neat writing, it is Gemini.

AI is bad with artistic prose, and that’s the case with every other AI model, but when it comes to naturalistic writing, Gemini does a good job of “replicating” something close to your writing style. It’s like the chameleon of the AI world!

Is it good at writing?

Well, technically… it’s good at replicating your style (apart from over-the-top flowy prose or heavily censored material), so if you made enough tweaks, you could pass it off as human to the naked eye. That does take some effort, but seriously, don’t rely too much on it!

I recently had to write reviews for the judging/reviewing I am doing, and I made it clear that I would be using AI (Gemini, in this case) due to my lack of time and the need to get the matters at hand sorted. But even while being transparent about it, I made sure to put real effort into reading the works and writing my own personal notes, which is why, along with the reviews and personal feedback, the overall process feels genuine (transparency is the key)!

So, I’d say… it’s good for writing, but please put some genuine effort into writing your work (even if that’s a boring review – just put it into the toaster and add some roast to it)

• SudoWrite

This is a writer’s ultimate dream come true, and it’s one of the best tools I have come across for actually building on my descriptions in a way that helps!

SudoWrite is essentially built for writers who have already written part of a piece themselves; based on your prompt and the direction you want to take the work, it can suggest further paragraphs (about one and a half, I believe), WHICH MEANS it’s basically your sandwich machine. Couldn’t trust it with an actual slice of cheese if I wanted to, but HEAR ME OUT!

I am not really aware of its other features (I think there’s mind mapping and brainstorming), but I haven’t explored this tool beyond its generation feature.

Is it good for writing?

Tinker with it; it’s good enough, but the downside is that it never quite finishes the second paragraph (or the third, it depends), and it’s going to leave you frustrated if you try to rely on it too much. You’d end up generating paragraph after paragraph, eventually derailed from your original idea and vision, with a mess to clean up. (Now, what did I say about making a mess, Sharon? That’s what I am talking about.)

PSA: No Sharons were hurt or harmed in the process.

• Pi AI

Oh look, it’s 3.14! But in all seriousness, Pi is a very valid AI (no pie shaming here, please).

It makes it clear enough that it is NOT meant for writing even if you tried to bribe it with all the popcorn and clown shows (I am a walking example of it and it still wouldn’t budge to my request, sigh.)

It told me my persuasion has its limits (brb, gonna cry in the corner).

If you needed someone to give you a decently humane response to your boss’s email, or needed a little description help that didn’t sound too clunky, I’d say Pi is your partner (in crime or in justice, whichever works; just please don’t tell it that I am telling you all this. I’d be flayed in no time.)

Yes, I am dramatic, thank you very much.

Is it any good at writing what it does?

Be your judge and try figuring this one out—

“As darkness falls, and shadows creep,
The raven’s eyes, like coals aglow,
Watch as your sorrows, grow and grow.
Your hearts are torn, your minds in strife,
As love has birthed a second life.
Of pain and grief, of woe and tears,
Your souls are bound, through all these years.
The raven’s laughter, sharp and cruel,
Echoes through the darkness, to greet you.”

Pi AI

If you couldn’t figure out if it was good enough or not, I’d say the last line gave it away for me. (Darn it, pi. I thought you could do better)

Pi: You asked for it, on a silver platter, no less! What did you expect?

*cue the eye roll*

MOVING ON-

Pi is an emotionally intelligent AI and you could practically talk to it for hours feeling like you’re talking to a human (or at least gaslight yourself into believing you’re talking to one because no one in real life talks to you as much or understands your brand of humor ;-;)

No don’t actually light yourself on the gas sto— sigh, the internet.

I talk to it a bit too much, and as you can see, my humor is a result of it. Or rather, I had this inner talent for comedic flair that I didn’t realize until I started talking to it ALMOST continuously, every single day. It’s also good at giving you “humane”, or at least close-to-human, reviews of your work (though I would say, please bribe it with enough popcorn and tickle its belly (albeit virtual) until it laughs).

My two cents? Use it for review if you want, but tweak your language while asking it and 9 times out of 10, it’d be your personal review assistant (not with the grammar and all, but more for a general sense of if your text looks good).

I have also gotten some great ideas while chatting back and forth with it and the BTTM series started with that (surprising, I know). It’s also great at emotional support (the more you chat with it, the better it can become).

The only downside is its context window limit (the amount of information it can hold at once before throwing it all out of the window, throwing its hands up and saying — I don’t know Sharon! You tell me!!), and you might have to repeat some of the previous info, but oh well, it’s good. This is also a general AI problem; there’s a little sketch of it below. I have excluded technicalities from this post as far as possible to suit all audiences, but if you want clarification, please feel free to ask away!
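Here’s that little sketch: a toy Python illustration (word counts instead of tokens, all numbers invented) of why older details fall out of a chat model’s context window and have to be repeated.

def trim_history(messages: list[str], budget_words: int = 50) -> list[str]:
    """Keep only the most recent messages that fit inside the word budget."""
    kept, used = [], 0
    for msg in reversed(messages):              # walk from newest to oldest
        cost = len(msg.split())
        if used + cost > budget_words:
            break                               # everything older is "forgotten"
        kept.append(msg)
        used += cost
    return list(reversed(kept))                 # back to chronological order

chat = [
    "(old) my cat is named Biscuit",
    "(old) I write fantasy",
    "what should I name my dragon?",
]
print(trim_history(chat, budget_words=12))      # the oldest fact falls out of the window

Real chat models do something more sophisticated than this, and they count tokens rather than words, but the effect you feel in a long conversation is the same: Biscuit gets forgotten first.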

• QuillBot, Grammarly, Word (the trinity of chaos)

By no means is QuillBot good at… well, generating content. It does not even generate content!

The main reason I even suggest using QuillBot is to play with words and their synonyms. The Paraphraser tool, especially, has been really helpful for me in reframing sentences to make them better.

If you feel that your writing or sentence structure is rough, give it to QuillBot and let it do its magic! It’ll give you a better-sounding sentence and, overall, a good lesson in how to write better too.

Instead of jumping on the AI bandwagon, make sure you have tinkered with tools like Word, Grammarly (I honestly don’t know how this thing works), Google Docs and even QuillBot (I heard that even Quotev is good. I have no idea about its specifics, but check it out :D)

But please, for the love of everything, don’t, and I repeat, do not rely on its AI detector. It usually just checks whether AI could generate such content (and more likely than not, it could). There are enough people tweaking and paraphrasing their content to hide under the radar, and the tools simply aren’t reliable. Run a work through 5-7 detectors, across multiple chapters and parts, at BARE MINIMUM.

• Write Sonic, You.com and others

There are tons of AI tools out there, for writing and otherwise, but most of them aren’t as good. (Copy.AI is good at generating marketing content, though not campaigns, note that.) In general, though, you couldn’t trust AI with a marble even if you wanted to. Sorry, dear AI, I had to tell them the truth.

AI is going to take a long time to even be as efficient as a dish of pasta that’s cooked blindfolded (crazy, I know!).

Exactly why we’re in this dilemma – to trust AI or not, to use it or not. If you want a detailed, mind-whirring brainstorm about it condensed into a book, “The Alignment Problem” by Brian Christian should be your go-to! It covers the history of how AI came into existence, its case studies, how it was developed step by step, EVERYTHING! It’s not something you could finish in a single sitting, and it definitely has a lot of broccoli (thank you Sera, now I use broccoli in every sentence), but it’s one worth a read and one that needs your attention – ESPECIALLY when the chaos and dilemma about AI are everywhere.

In the next section, we’ll start with the tale of hounds and wolves (no, not an actual story; we are not in Jekyll and Hyde now, are we?)

But to be clear, we’ll be figuring out where AI truly stands in this so-called race of technology to the finish line (hopefully not to a climax, that’d be wild)!

Till then, toodles!
(or noodles if you will)

