In a recent article (https://attestedmedia.com/artificial-intelligence-in-content-creation-friend-or-foe/), I advocated for a collaborative approach between Artificial Intelligence (AI) and human intelligence in content creation.

As a rapidly developing field, AI presents both exciting discoveries and ethical challenges, particularly with generative AI. A major concern is the potential development of Artificial General Intelligence (AGI), where AI could perform any intellectual task a human can, such as driving or writing a PhD thesis.

Recently, generative AI has empowered people globally to create high-quality essays, images, and audio quickly. These tasks previously demanded significant human effort and time.

AI does indeed have the potential to replace human jobs. This is a valid fear, considering how AI is already reshaping fields like copywriting, editing, and publishing. However, history shows that innovation brings both job losses and job creation. The invention of mobile phones eliminated switchboard operator jobs, but the industry created far more new opportunities.

I think AI may simply amplify existing human problems. Issues such as legitimacy, incompetent authority, fairness, and anxiety about the future could all be exacerbated. Since generative AI builds on Large Language Models (LLMs) trained on internet data, AI-generated content reflects the biases expressed by online users.

My biggest concern lies with the amplification of deepfakes and plagiarism. I fear a generation may arise that undervalues due diligence and research. It’s becoming easier to plagiarize ideas and twist them without crediting the originator.

Last month, a colleague discovered an article from their blog republished elsewhere with AI-generated phrases woven in. This raises the question of how to approach this unfair situation, where someone harvests what they haven’t planted.

The ability to clone a person's likeness and voice and to produce realistic fake videos, audio, and seemingly authentic documentaries is deeply troubling. We risk reaching a point where truth becomes indistinguishable from lies. This potential for deepfakes aligns with a statement a friend shared with me from Neil Postman’s review of “Brave New World”: “Human beings will sink to a new low with the invention of convenience, then the lower standards become the normal.” Postman emphasizes the importance of critical thinking and questioning what we see, rather than blindly accepting media content.

This applies equally to AI. As AI becomes more convenient, we must use it responsibly and question any potential ethical issues arising from generated content.

I therefore propose that professionals address these anxieties arising from AI usage by learning to use AI responsibly within their fields.

You can read the complete article at https://attestedmedia.com/artificial-intelligence-in-content-creation-friend-or-foe/.
