Originality is the art of concealing your source

Generative AI in speech, text, and images is a way of ingesting large amounts of information specific to a domain and then regurgitating synthesized answers to questions posed about that information. This is basically the next evolutionary step of the search engine. The main difference is that the answer is provided by an in-house synthesis of the external data, rather than a simple redirect to the external data.
This is being implemented right now on the Google search page, for example. Calling it a search page is no longer accurate. Google vacuums up information from millions of websites, then regurgitates an answer to your query directly. You never perform a search. You never visit any of the websites the information was derived from. You are never aware of them, except when Google is paid to advertise one to you.
If all those other pages didn't exist, Google's generative AI answer would be useless trash. But those pages exist, and Google has absorbed them. In return, Google gives them ... absolutely nothing, but still manages to stand between you and them, deciding whether to send you somewhere else or, ideally, keep you on Google permanently. It's convenient for you, profitable for Google, and slow starvation for every provider of content or information on the internet. Since its beginning as a search engine, Google has gone from middleman, to broker, to consultant. Instead of skimming some profit from a transaction between you and someone else, Google now does the entire transaction and pockets the whole amount.
Reproducing another's work without permission is already illegal, and has been for a long time. The only way this new process stays legal is if the ingested work is sufficiently large or diluted that the regurgitated output looks different enough (to a human) that it no longer resembles a mere copy, but an interpretation or reconstruction. There is a threshold below which any reasonable author or editor would declare plagiarism, a threshold that human editors and authors have collectively calibrated over centuries. Cross that threshold, and your generative output is no longer plagiarism. It's legally untouchable.
An entity could ingest every performance ever given by Mavis Staples, then churn out a thousand albums "in the style" of Mavis Staples, and would owe Mavis Staples nothing, while at the same time reducing the value of her discography to almost nothing. An entity could do the same for television shows, for novels - even non-fiction books - even academic papers and scientific research - and owe the creators of these works nothing, even if it leveraged infinite regurgitated variations of the source material for its own purposes internally. Ingestion and regurgitation by generative AI is, at its core, doing for information what the mafia does with money to hide it from the law: it is information laundering.
Imitation is the sincerest form of flattery, and there are often ways to leverage imitators of one's work to gain recognition or value for oneself. These all rely on the original author being able to participate in the same marketplace that the imitators are helping to grow. But what if the original author is shut out? What if the imitators have an incentive to pretend that the original author doesn't exist?
Obscuring the original source of any potential output is the essential new trait that generative AI brings to the table. Wait, that needs better emphasis: The WHOLE POINT of generative AI, as far as for-profit industry is concerned, is that it obscures original sources while still leveraging their content. It is, at long last, a legal shortcut through the ethical problems of copyright infringement, licensing, plagiarism, and piracy -- for those already powerful enough to wield it. It is the Holy Grail for media giants. Any entity that can buy enough computing power can now engage in an entirely legal version of exactly what private citizens, authors, musicians, professors, lawyers, etc. are discouraged or even prohibited from doing ... a prohibition that all those individuals collectively rely on to make a living from their work.
The motivation to obscure is subtle, but real. Any time an entity provides a clear reference to an individual external source, it exposes itself to the need for some kind of legal, commercial, or at the very least ethical negotiation with that source. That is never in its financial interest. Whether it's entertainment media, engineering plans, historical records, observational data, or even just a billion chat room conversations, there are licensing and privacy strings attached. But launder all of that through a generative training set, and suddenly it's ... "Source material? What source material? There's no source material detectable in all these numbers. We dare you to prove otherwise." Perhaps you could hire a forensic investigator and a lawyer and subpoena their access logs, if they were dumb enough to keep any.
An obvious consequence of this is that, to stay powerful or become more powerful in the information space, these entities must deliberately work toward the appearance of "originality" while absorbing external data, which means increasing the obscurity of their source material. In other words, they must endorse and expand a realm of information where the provenance of any one fact, any measured number, any chain of reasoning that leads outside their doors, cannot be established. The only allowable exceptions are those that do not threaten their profit stream, e.g. references to publicly available data. For everything else, it's better if they are the authority, and if you see them as such. If you want to push beyond the veil and examine their reasoning or references, you will get lost in a generative hall of mirrors. Ask an AI to explain how it reached some conclusion, and it will construct a plausible-looking response to your request, fresh from its data stores. The result isn't what you asked for. It's akin to asking a child to explain why she didn't do her homework, and getting back an outrageous story constructed in the moment. That may seem unfair, since generative AI does not actually try to deceive unless it's been trained to. But the point is ... if it doesn't know, how could you?
This economic model has already proven ridiculously profitable for companies such as OpenAI, Google, and Adobe. They devour information at near zero cost, create a massive bowl of generative AI stew, and rent you a spoon. Where would your search for knowledge have taken you, if not to them? Where would the money in your subscription fee have gone, if not to them? It's in the interest of those companies that you be prevented from knowing. Your dependency on them grows. The health of the information marketplace and the cultural landscape declines. Welcome to the information mafia.
Where do you suppose this leads?