[This is an installment of a serial article that is being gathered here. -D ]
Let’s define our terms first.
Artificial intelligence (AI) refers to the development of computer systems that can perform tasks typically requiring human intelligence, such as visual perception, speech recognition, decision-making, and language translation. We want AI systems to learn from experience, adapt to new inputs, and perform human-like tasks autonomously. Key characteristics of AI include the ability to process lots of data, identify patterns, and make predictions or decisions based on that data.
Before you roll your eyes at yet another AI rant, I promise this is going to be focused on our topic at hand.
Generative AI is a subfield of artificial intelligence that focuses on creating (generating, get it) new content, such as text, images, audio, or video, based on patterns learned from existing data. Unlike traditional AI, which is primarily concerned with analyzing and making decisions based on input data, generative AI aims to create original content that resembles human-created work.
The field of artificial intelligence has been evolving since the 1950s. In the 1980s and 1990s, machine learning techniques, with cool names like “neural networks,” emerged as a means of learning from data. Deep learning (support for larger datasets, more complicated relationships, and iterative error correction via backpropagation) in the 2000s, coupled with the increasing availability of large datasets and powerful computing resources that had crawled out of the mud ages ago, accelerated the development of AI and generative AI.
In recent years, generative AI models like GPT (Generative Pre-trained Transformer) and GANs (Generative Adversarial Networks) have demonstrated remarkable capabilities in producing human-like text, images, and other content. If you want to understand them better, and this is not a joke, ask a generative AI to explain it to you. Nowadays I’m partial to Claude.
The AI-Driven Content Creation Scenario
The key thing to contemplate when it comes to AI in the context of media and content creation is its ability to analyze an unimaginable amount of data to identify trends in consumer behavior and preferences. If you’ve been following along, this should sound familiar; advertising and push-based content delivery rely on this kind of data to keep the scroll going. By leveraging machine learning algorithms and “natural language processing” (speaking human), AI systems can process data from all kinds of sources (from social media to web analytics, user feedback, brand influence, geopolitical data, weather data, you name it) to uncover both broad (macro) and specific (micro) trends in media consumption.
That’s all fine, nothing new. Content creators and distributors are already using very smart algorithms to gain insight into what types of content are resonating; we call it “the algorithm.”
I always mistype that word, so we’ll call it Algernon.
Right now, Algernon is sifting through mountains of fresh content generated by influencers and journalists and artists and bloggers and filmmakers, picking out little bits of media that will fit just right in your feed. As he gets smarter, he adapts better, and gets better at deciding how best to target every new piece of content he finds. When you pause on something, or like it, or share it, there’s a little flare in the pleasure center of Algernon’s brain, and a little connection is drawn between you and the creator of that content. Somehow, the creator gets compensated for having built that connection, and so the economics of push media remain intact; it costs time and money to produce content, but there is compensation to the creator if the content is popular.
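For the skeptical, Algernon’s feedback loop can be caricatured in a few lines of Python. This is a toy sketch of my own, not any real platform’s ranking system; every name, signal, and weight below is invented for illustration:

```python
# Toy sketch (purely illustrative): an engagement signal strengthens the
# learned connection between a user and a creator; that connection then
# drives both future feed ranking and creator compensation.
from collections import defaultdict

affinity = defaultdict(float)  # (user, creator) -> learned connection strength

# Invented weights: a share says more about you than a pause does.
SIGNAL_WEIGHT = {"pause": 0.1, "like": 0.5, "share": 1.0}

def register_engagement(user: str, creator: str, signal: str) -> None:
    """The little flare in Algernon's pleasure center: strengthen the link."""
    affinity[(user, creator)] += SIGNAL_WEIGHT[signal]

def rank_feed(user: str, items: list[dict]) -> list[dict]:
    """Order candidate content by the user's learned affinity for its creator."""
    return sorted(items,
                  key=lambda item: affinity[(user, item["creator"])],
                  reverse=True)

def creator_payout(creator: str, rate: float = 0.01) -> float:
    """Compensation scales with the total affinity the creator has earned."""
    return rate * sum(w for (_, c), w in affinity.items() if c == creator)
```

Nothing here cares *who* produced the content; the loop only rewards whatever keeps the scroll going, which is exactly the door the next question walks through.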
So what happens when Algernon realizes that he can just create the content himself?
[You’re instinctively dismissing this idea. This is a good time for introspection.
We’ve explored the nature of content distribution and ad targeting, we’ve described how an AI “learns,” and we’ve observed how the traditional, broadcast-style pull media are fighting a challenging battle against the limbic satisfaction of the doomscroll. All I’ve done is suggest that the content creator in the push media equation can be replaced by an AI.
The most common objection I’ve heard can be summed up as “it’s not the same.” Put another way: AI-generated content is readily distinguishable from, and inferior to, human-generated content, and nobody’s going to fall for it.
“The industrial manufacture of furniture has so driven the handicraftsman out of the field that the public has lost its standard of quality.” - William Morris, 1913
The second most common objection is a belief that human institutions (colleges, governments, papers of record) won’t permit this level of disruption to the way things work.
“There is a certain class of men who see with extreme apprehension the progress of machinery, considering it as the declared enemy of the working classes, and who believe that the poverty and distress of the operatives are in proportion to its extension and improvement.” - Andrew Ure, 1836
My argument is this: AI-generated content can already be difficult or impossible to discern from human-generated content, and it will improve, and its improvement will be propelled by the most potent and unstoppable of all fuels: capitalism. There’s precedent in recent memory and in the history books. Look at it, look it in the face.]
[to be continued.]