How concerned should we be about AI?
In short, I'm not very concerned about AI, and I have history to back up my reasoning.
I can't embed the video because it's not made for kids, but you can still check it out on YouTube.
If I had lived during World War 1, I would have said "This is it. This is the end of the world as we know it."
When the atomic bomb was first created and then used, I would have said "This is it. This is the end of the world. We will nuke ourselves out of existence."
When the Cold War started, I would have said "This is it. This is the end of the world. These two countries hate each other so much, they will certainly end the world."
When Russia built the second nuclear warhead in 1949, I would have built a nuclear shelter. I would have hidden in that shelter until the day I died, because I would have been so convinced the world was coming to an end.
There are more nuclear-capable countries today than there were in the 1940s, but here's the thing: it hasn't happened. The world hasn't even seen another nuke used since World War 2. The world has moved on.
I believe that's because the world, and therefore the leaders who control the nukes, understand the risk. They know the fallout of launching one. So, they don't.
And that's where the similarities lie. Those creating AI also know the risk involved. They understand and limit the AI's power and capabilities so it isn't a threat.
Over the next couple of days, I'm going to put the top AIs in the world to the test, and we'll find out two things:
- How much of a risk they are to the world.
- Which AI is the evilest.