Why is AI taking over creatives?

Okay, before I start: take my opinions with a grain of salt. My observations could be incorrect, but I am open to discussion.

Well, as we know, with the recent outrage over Meta, people have started to critique, (metaphorically) burn, and ask serious questions, one of them being: Isn’t AI supposed to do the Dumb, Draining and Dangerous tasks? How did it end up in the creative fields?

If it’s any comfort, this wasn’t always the plan. You see, the earliest experiments with AI involved serious things like law and healthcare. And if you’re ancient, you probably know of COMPAS too.

For context, COMPAS is a tool that predicted recidivism, or in simpler terms: if someone committed a crime, how likely are they to commit another in the future? It depended on all kinds of data, but with human data comes bias, and that led to a slippery slope of bias and harm that did more bad than good.

COMPAS didn’t go as planned, but do you know where this kind of system exceeded expectations? Pattern recognition, and hence, medical diagnosis. Neural networks were built up from conditionals and encoded knowledge, starting with something as simple as label matching. Even in the early stages, on some tasks AI was more precise than groups of expert doctors combined. But then the issue of AI being a black box came up, and while there have always been experiments on interpretability, there are no clear answers.

At some point, someone decided that since AI is good at pattern recognition, maybe it could comprehend reading material better than humans. It wasn’t actually better, but it gave the person asking about said material some knowledge without them having to sit down and research it themselves. This was miraculous! (But it absolutely didn’t go as planned.)

Because, while no one would now have to spend years studying a textbook and could instead ask an AI about it, the problems started with the copyright licensing of the content, the ethical dilemma of navigating “fair use”, and outrage from people who knew that the only true way to knowledge was through the pain of sitting through the long, droning book. And this has always been an issue, even before the media blew up about it.

Why did companies allow this and keep pushing it forward? Because people liked it! People demanded that their tools be able to do more, and those requests, though not all fulfilled, played a significant role. Because, you see, in the earliest stages, no one truly knew the dark side of their requests, and you only notice when your plant wilts, metaphorically speaking.

And companies took full advantage of this loophole: they kept trying to write off “theft” as “fair use” and “people want it”. It continues because a whole system makes you want it, and another feeds right into that need. No strict laws or policies were made, despite 25+ years of time. Why? Because they have gotten away with it in the past, so they’ll get away this time too, right?

That didn’t come without a fight: authors of said publications started to fight back, but were ultimately lured in with the “better technology” perspective, and that brings us here. The dark abyss where the “better technology” is actually stealing and regurgitating so much from us that we almost cannot recognize it sometimes.

Does that mean AI is inherently bad? No. Take DNA editing, for example, where DNA is operated on and modified to achieve expected results. Or how researchers might revive extinct species like the woolly mammoth.

What is bad is companies using it as an excuse to steal, because ultimately, if no one questioned them, or if people weren’t able to question the critics, it would make people think “less”, and hence feel more pain sitting through the literature that matters.

Can it change?

Probably.

Can we ever overcome the dilemma?

Yes!

Don’t lose hope, we’ve got this! ❤️🫶
