AI - It's all good, right?

(this is Human / Original content) 

By Chris Ogle on February 7th 2024

If you have read the previous blog post introducing AI, you will have seen the reasons for getting on board and exploring the many options available. The AI industry is at the point where, if you are not already experimenting, you risk being left behind.

Before we rush in, though, and get totally absorbed (and that isn't difficult), let us take a step back and view the whole thing with a critical eye. You know what they say: if something looks too good to be true, it probably is. But this is different, I hear you say…

The AI sales team (there isn’t one) have some big guns. This technology saves you time and money, and could be the most intelligent and knowledgeable assistant you have ever hired. 

OK, so what's the problem? 

First, some history and what is now before us… The Internet age is now commonly called the Information Age. The amount of information available to us and the speed with which organisations put up websites were phenomenal. This sparked the need for search engines to enable us to "find" that data, which in turn required it to be organised, categorised and indexed. All of this was fuelled by advertising and monetisation, so that organisations could pay to be found. Search engines such as Google and, later, Bing proliferated.

Then… we had the advent of social media platforms, where people could engage with others, share information and then, inevitably, advertise what they were doing. Stage 2.

Then, and we're not through it yet (it is ongoing), came the "Internet of Things". This is where everything has a digital tag emitting details about what it is. Sensors can then connect with these tags and interact with them, reading their data. Electronic payments by smartphone are one such example: your phone emits your payment details when you want to pay for something.

So we are up to date; what has all this got to do with AI? Well, let us look at another recent innovation. Apple phones just got an operating system upgrade with an AI-powered journaling app. To assist with your journaling, the app digs into what the phone knows about what you did and suggests entries like: you went to the shops at 2pm, visited a restaurant, got some fuel, sent an email, booked a flight… you get the picture.

You probably know all this, but AI changes everything. In order to be as useful as it can be, it needs access to everything, much like a PA (personal assistant) of old, except that you have no idea where this information will end up: perhaps with the highest bidder, or worse.

By providing a ton of utility, AI is very attractive to humans. For example, what effort does it take to produce a blog post like this? With the help of AI, we could take a lot of the arduous work out of the process and, with the time saved, publish more frequently, leading to more visibility, increased awareness and perhaps more sales. Who wouldn't want that? It's not that we're lazy; we just like to be efficient with our time.

Where does an AI like ChatGPT derive its information? 

What happens is that, for quite a while, the AI service must learn (training). That is where the data it will draw on is harvested from all the known feeds; as you can imagine, the information available on the internet is vast… So there are three components to the process: harvesting and data population (which differs depending on what the AI service is for); the conversation component, which takes your request via the UI (user interface) and narrows it down; and finally the output. 

The data in the database is only as good as the mining. The conversation is only as good as the AI's ability to understand and respond. The output stage is programmed to conform with the laws of the land, so, for example, it recognises racism or abuse. However, do we really know what has been taken out? What has been censored, for example, because that piece of information has been hidden? We never saw it, so we do not know what we don't know. Something to ponder. We are definitely accessing data that has been tampered with, maybe to "keep us safe, and protect us, for our own good" (allegedly).
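To make that point concrete, here is a rough sketch in Python (with made-up names and a made-up blocked-topics list, not any real vendor's moderation system) of how an output filter can quietly drop material before we ever see it:

# Rough sketch of a silent output filter (hypothetical names, for illustration only).
# The reader only ever sees what survives the filter, with no hint of what was removed.

BLOCKED_TOPICS = {"topic_a", "topic_b"}  # hidden list, never shown to the user

def filter_output(draft_answer: str) -> str:
    """Drop any sentence that mentions a blocked topic, without leaving a trace."""
    kept = []
    for sentence in draft_answer.split(". "):
        words = {w.strip(".,").lower() for w in sentence.split()}
        if words & BLOCKED_TOPICS:
            continue  # silently removed: no placeholder, no notice
        kept.append(sentence)
    return ". ".join(kept)

print(filter_output("Here is some context. topic_a is part of the answer. And a conclusion."))
# -> "Here is some context. And a conclusion."

The detail of the code doesn't matter; the point is that nothing in the final answer tells us a sentence was ever there.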

Let us look at AI from another angle. We know that the people creating these platforms are not really doing it for our benefit. There are, of course, other motives, such as money and even control. But what else? 

As we use these AI tools we are feeding the beast. As with the Facebook platform, it only has value because we use it. We are basically providing data about how the human race reacts when presented with certain information: our preferences, our networks, and which groups we engage with. Of course, the AI algorithms can analyse this data and profile each of us accordingly. We might be surprised at what information is stored about us, although allegedly we can download it all if we want. I suspect that would not include the analysis being derived from our interactions.

It is not beyond the realm of possibility, if the news about programmable money and CBDCs (Central Bank Digital Currencies) is even 30% accurate, that harvesting all this data, freely given by us, may be intended for some other purpose. Consider: if the friends you associate with and the nature of the conversations you enter into are not considered "permissible" by the powers that be… could limitations be put on you by the authorities? I am not trying to be alarmist, but this is as viable a future outcome as any other suggestion.

Echo chambers

Groups on services like Meta, WhatsApp, Reddit and others allow us to gather with like-minded souls. One of the drawbacks of these groups is that conversations tend to get one-sided. Because everyone there is sharing similar thoughts, we never get to hear the other side, or the opposing view and the reasoning behind it.

Roll time forward to when AI is capable of (and responsible for) posting to these platforms, either on an account set up for the bot or as an assistant for a regular human. Who will be able to determine the source of a post? Was it written by a human or by an AI engine?

We could have a situation where AI bots are setting the tone, mode and nature of the conversation, and other AI bots are responding to them. This might lead to a very manipulative, censored and controlled environment, one that is also very persuasive.

What if we use one of the AI services that help us create marketing material by training the AI to look like us and sound like us? All we have to do is pose in front of the camera, providing video from different angles, then read a few lines so that the AI engine can pick up our speech patterns and how we look when we talk… the more footage, the better. All good fun and very useful: effectively, we can be anywhere, anytime, promoting anything… wow, how powerful is that? 

But what if we have the wrong kind of friends (according to an analysis of our social connections), or we are engaging in the wrong kind of conversations in our community… is it conceivable that a video could be engineered from our profile (and others') showing a conversation that never actually happened? Hey presto: a piece of evidence for something that never took place. I think we are a way off this (at the moment), but with the speed of development and innovation, I wouldn't be surprised if it were only just around the corner.

Chris is passionate about community and has been involved with Link4Growth and community building since its start in 2012. Chris now devotes most of his time to facilitating connection, collaboration and community in the district of South West Herts, as well as supporting the Link4Growth Association.

Blog Author

Chris Ogle