Once Fearful of AI, I Tried ChatGPT—It Transformed My Life

It’s easy to feel uneasy about AI; its rapid advances suggest a digital future that is as thrilling as it is ominous. Used for everything from content creation to disinformation, AI is a double-edged sword, helpful or harmful depending on how it’s applied.

For example, there’s a phenomenon called “slop”: AI-generated content flooding social media. These surreal, sometimes disturbing images push bizarre visuals purely to grab attention and drive engagement. Think of Jesus morphed with prawns, or other mashups rendered in near-photographic detail. My own social media feed is now filled with ads for retro 80s sci-fi, shiny shapewear, and anti-wrinkle cream for smokers. I suspect that clicking on strange AI-generated “stills” of Star Wars set in a Ken Loach universe is part of the reason my algorithm is so chaotic.

But it goes deeper than just strange, eye-catching visuals. Some of these AI-enhanced images are being used in more insidious ways. Historians and researchers have flagged a troubling trend of AI-generated content being used to promote certain ideologies, especially through the glorification of Western architecture and art. These beautifully rendered Roman columns or classical libraries aren’t just harmless art—they’re subtle tools in a propaganda pipeline. What starts as admiration for classical beauty can lure users toward alt-right ideals, pushing coded messages of white European superiority.

As someone who researches disinformation, I find these developments disturbing, but it’s not just the images that worry me. AI-generated deepfakes, especially in the political sphere, are becoming a powerful tool for misinformation. We’ve already seen examples, like AI-generated videos trashing political figures such as Kamala Harris, or a fake Trump calendar that I nearly shared before realizing it wasn’t real. These AI tools can churn out fake news articles, brigade comment sections with “authentic”-sounding voices, or even create entirely fictional accounts to manipulate public opinion. It’s terrifying how quickly these tools can sow confusion during elections.

But my concerns about AI aren’t limited to its political misuse. As a writer, I rely on royalties, and I, along with many others, found out that our work was quietly absorbed to train AI models for tech giants like Meta—without compensation. Learning that my work was used to train AI systems for a trillion-dollar company without any payment was a slap in the face. Adding insult to injury, the rise of AI-generated books has flooded an already crowded market, making it even harder for human writers to compete.

Ironically, AI has also helped me in a very personal way. My journey with AI started out of professional curiosity, exploring its darker applications, but around the same time I was diagnosed with ADHD. It didn’t take long to realize AI could also assist me in managing my symptoms. Time management, decision paralysis, and task prioritization are daily battles for someone with ADHD, but I found that AI could help organize my life in ways I struggled to manage on my own. It became my go-to assistant for planning my day, creating itineraries, tracking travel time, and even calculating the nutritional content of meals. It even suggests what to wear based on the weather!

One of the most challenging aspects of ADHD is the shame that comes with asking for help on simple tasks that feel overwhelming. But AI doesn’t judge. It reminds me when my schedule is too packed, summarizes complex documents like letters from the bank, and even nudges me to take a break and play video games when I’m overworking. The relief is enormous—it’s like having a personal assistant who doesn’t grow impatient or tired.

Of course, this newfound reliance on AI doesn’t make me blind to its dangers. The Australian government is currently seeking public input to help establish regulations that could create much-needed guardrails for AI use. These efforts are critical, especially when you consider the malicious ways AI is already being used—from copyright theft to extremist recruitment and disinformation. It’s clear that we need to tackle these issues before they get further out of hand.

For me, AI is both a blessing and a curse. It has improved my day-to-day life significantly, yet I am fully aware of its potential to cause real harm if left unregulated. The key is finding a balance. As the tech historian Melvin Kranzberg famously said, “Technology is neither good nor bad; nor is it neutral.” How AI is used, and how we choose to regulate it, will determine whether it becomes a force for good or a tool for chaos. It’s up to us to shape that future.
