AI: Artificial Intelligence or All Intelligence?

Since the arrival, and now near-universal use, of AI chatbots like ChatGPT and Copilot, studies have been examining their effects on the human mind, and trust me, it doesn’t look too good.

WALL-E Begins?

A recent study from the MIT Media Lab found that people who used ChatGPT to help write their work recorded some of the lowest levels of cognitive engagement and performance, with low scores on linguistic and neural measures.

(Well, I guess it’s time to hang up the banners of joy that AI was supposed to make our tasks and lives easier)

I remember being hesitant to use AI, particularly ChatGPT, when it first came out; my fear was that I would basically be pushing all my thinking somewhere else. Later I experimented with it on one of my biggest weaknesses (organisation). I don’t know if you can tell from my writing, but I am terrible at organising my thoughts, and let me tell you, when I used Chat to help me… it didn’t get what I was going for at all.

So maybe it’s not as brilliant as everyone claims it is, especially when you find out that any answer you get is an amalgamation of all the possible resources and answers available online (which sounds great until you realise you need a specific one!).

But I am not anti-AI; in fact, I found that it sometimes forced me to think more. For instance, when I tried using it to help organise my writing, I realised I had to put in far more time and effort to get it to understand me, especially since I did not take the first thing it threw out.

(Which eventually resulted in me falling back on the old-fashioned way of organising my thoughts: grabbing a pen and paper and asking the ultimate authority (my mom) whether it sounds right.)


Impacts of AI

But the issue with AI chat tools goes far beyond reduced linguistic and neural scores. While I am a novice in the field, certain things are clear to me as a writer. My biggest concern is that students and people may eventually become so over-reliant on these bots to churn out information and data that we lose the ability to think creatively when faced with challenges or new ideas. In a way, because of how the bots work, we may eventually fall into a hive mindset when dealing with things in the world.

My new Therapist

Another worrying development is that a study found many young people confide in AI chat tools as they would a real human therapist. While there are some benefits, such as being judgement-free (really?), instant availability, and providing tools and resources for cognitive reframing, the core tenets of therapy, such as human connection, non-verbal cues, and accountability, are removed. Dr Andrew Clark, a psychiatrist, experimented on different chatbots while posing as a troubled teen and received different responses. He categorised them as such:

  • Some chatbots provided beneficial, basic information on mental health and directed people to the right resources.
  • But in complicated or dangerous scenarios, many of the chatbots responded in risky ways, even suggesting impulsive behaviour.

This misses the greatest impact of therapy, which is knowing that someone is listening and challenging you to be a better person. With AI chat tools, however, tweaking the way you phrase something can absolve you of the mistakes you do not wish to address. In a way, it feels like using these tools as therapists can hinder your growth and, in worse cases, lead you to make bad choices.

We already see this happening, with an AI company being sued because its lack of safeguards may have contributed to the death of a teenage boy in Florida.


Eco Woes

Lastly, amid all the clamouring about sustainability and eco-friendly goals, AI chatbots have a huge impact on one of Earth’s scarcest resources: water. A global report estimated that data centres consume about 560 billion litres of water annually, a figure that could rise to about 1,200 billion litres by 2030, especially with technology firms pushing for larger networks and more offices. This is a serious concern: the overuse of water, a finite resource, on AI can have catastrophic effects on our ecosystem and society.

This is particularly concerning when you realise that many data centres are hosted in countries with high population density, like China, India, and the USA. In fact, a 2021 paper found that nearly half of US data centres were fully or partially powered by water-hungry power plants located in water-scarce regions.

Many things we use require some amount of water, but the lack of foresight in managing or addressing this issue beforehand is astonishing when you consider its implications.

Overall, while AI may have its benefits, its cost to human cognitive abilities, personal growth, and the world’s resources may not actually be worth it without stricter guidelines.


Moving Forward

I am not usually one for regulation, but it seems plain to me that the tech industry, particularly in the creation of AI tools and their vast networks, has been under-regulated for the sake of innovation. But with the dangers looming close by, it would only be right for companies and governments to put stringent ethical guidelines and codes in place regarding AI. These should go beyond academic use to cover copyright infringement and the access and use of resources, particularly finite ones.

In BCG’s 6th annual Digital Acceleration Index (DAI) survey of 2,700 executives globally, only 28% said their organizations are prepared for new regulation regarding AI.

Many firms can start prepping for these potential changes through Responsible AI (RAI) initiatives. At its core, RAI is a set of principles for ensuring transparency, privacy, security, fairness and inclusion, and accountability when developing and deploying an AI algorithm.

Some ways to kickstart RAI initiatives in companies:

  1. Align internal AI policies with the AI regulations in effect in the markets you operate in.
  2. Dialogue with public sector officials and others to better understand the evolving regulatory landscape, as well as to provide information and insights that might be useful to policymakers.
  3. Establish clear governance and risk management structures, protocols, and accountability mechanisms for managing AI technologies.

Right now, responsive (not reactive) action is needed to keep pace with these changes. Policymakers need sufficient subject-matter expertise to implement, monitor, and enforce these policies, and should engage in multilateral processes to make AI rules interoperable and comparable across jurisdictions.

But it seems like laws and policies may be running a losing race due to having a late start.

Credit: Image is by rawpixel.com

