Artificial Intelligence: Friend or Foe?

January 31, 2024

A lot has happened recently in the world of AI. Depending on how you define it, AI has been around for decades. The term “artificial intelligence” was actually first coined in 1956 at a conference at Dartmouth College. There was a lot of talk back then about solving the challenges of building AI within a generation. We’re talking about walking, talking robots that would be able to think for themselves.

As ambitious as the scientists were at that time, it never really panned out the way they had envisioned. In 1983, a movie called WarGames came out that was a bit ahead of its time. It was about a computer designed to simulate a nuclear war against Russia. It went rogue and almost started WW3, but it introduced the concept of machine learning to us novices. In the 1990s, Carnegie Mellon and IBM developed a supercomputer called Deep Blue that was able, after a few upgrades, to beat world chess champion Garry Kasparov. It showed that computers could outthink humans if they had enough data.

Fast forward a decade or so, and technology took a giant step forward. Phones became computers, the Internet connected everything to everyone, virtual reality became a thing, and the amount of data stored digitally is now counted in zettabytes. Which you know is a lot of bytes because it starts with a “z”.

For the last decade or so, we have seen AI integrated into our everyday lives. Siri and Alexa can turn off your lights and order groceries. I know the only way I can get anywhere is by using AI-generated GPS directions. Google already knows what you want to search for before you reach for the keyboard. Autocorrect has made texting a denture. Dang it, I meant an adventure! And then there are deepfakes. Enough said.

The Rise of AI for All

In late 2022, a new tool came out that made me start to think. ChatGPT made a big splash on the scene and got quite a bit of attention. There was nothing extraordinary about what it did: it was trained on a huge amount of information scraped from a wide range of sources on the Internet and given a human language interface, so you can basically talk to it as you would another person. Fun!

I have to be honest at this point. I have been in the information security field for about 25 years now, and my instincts have been honed to be skeptical of new technology. When ChatGPT came out, that instinct hit hard. I remember friends and family asking what I thought about it, and I recall saying that I was terrified of it.

Just thinking logically, and based on my experiences with newer technologies, there are a number of things that concerned me:

  1. Where did the data come from, and could it be trusted? We all know the old saying, garbage in, garbage out. So, if the AI engine is crunching all of this data to come to its conclusions, the data needs to be accurate and reliable.

  2. How can we be sure of the integrity of the data? It may have gone in clean, but can we trust that it has not been manipulated at some point after it was loaded? Could it be hacked?

  3. How does it deal with the bias and prejudice that are inherently part of the data? (Because let’s face it, humans created the data, so it will have human bias in it.) Can the AI engine be trained to recognize bias, or can it be trained to be compassionate? A lack of compassion and a natural bias could potentially lead to some really bad advice.

Having these concerns, I did what any curious person would do. I asked ChatGPT about all of this. I got some pretty interesting responses that I won’t go into, but I do suggest that if you are reading this, you take a shot at asking for yourself.

I have now been using ChatGPT for a while, and I see that Google and Bing have built AI into their search engines. There has been some talk of taking legislative action to try to curb the potential risks (good luck with that; it seems like the cat is out of the bag at this point). In the most recent writers’ strike, the use of AI in the entertainment industry was a critical negotiating point. Teachers are faced with students using AI to write research papers. Who knows, maybe this whole article was written using AI. So, it seems that we are now facing one of the most common issues we face with new technologies: How do we control it now that everyone is using it? I wish I had the answer.

Just to wrap this up, I really have only one recommendation for everyone: stay skeptical, at least for now. AI has the potential to be a wonderful tool that can automate complicated tasks and process information faster than we could have ever imagined. It will save lives, and it will change the world. But just remember that anything that can be used for good can also be used for bad. Cybercriminals are already using it to further their attacks.

Oh, and make sure you say please and thank you when using AI. You don’t want to be on its bad side when it takes over.
