Let's get one fact straight it's only a matter of time before we successfully create an AI that surpasses the intelligence of a human being. This event is known as the singularity and estimates for this event to happen range from anywhere between 10 and 50 years from now.
So, what happens next is yet to be discovered but there are two prevailing theories. The first one is Humanity is fast-tracked for more amazing technological innovation than we have experienced in our entire history. The second one is Extinction at the hands of a sentient super AI who is indifferent to our survival. But anything can happen, so don’t take this two are options. As I said, “what happens next is yet to be discovered.”
Let's back up a little bit before judging AI Let’s understand it a little bit. AI, there are a few distinctions to make when talking about at the top advanced AI and on the low of the intelligence spectrum, there's weak or narrow AI. Software that is a useful fairly limited range of tasks. Alexa, Siri and all the other personal assistants are the most obvious examples but we're actually surrounded by narrow Al.
Your Netflix caters to your taste in movies and TV. Amazon predicts the kind of products you like based on your browsing habits and your purchase history. Machine learning algorithms represent another form of narrow Al any program that can adapt based upon user interaction can be lumped into this category. This type of artificial intelligence is rarely considered a threat take Siri, for example, there's no real intelligence there no mind and no self-awareness she’ll answer some questions for you but she is completely useless outside the scope of even fairly simple ones.
Moving up one level we have artificial general intelligence or AGI. This type of AI is able to apply it's intelligence to a much wider range of applications and can successfully perform any intellectual activity as human do. This level of AI is generally the use for projects aiming to develop an advanced artificial intelligence. Academics often referred to AGI with the ability to experience consciousness as strong AI. Think Hal from 2001 space Odyssey or the t800 from Terminator or Ava from ex-machine. These are machines or software that are just as intelligent or more intelligent than humans. But they have not entered a state of runaway self- improvement that would lead to the next stage of AI. As of 2018 there were at least 40 organisations around the world actively researching AGI.
Now let’s talk about the big one artificial super-intelligence, this is the level of AI that some of the world's leading experts believe could spell the end of mankind. A super-intelligence would be smarter and more capable than all humanities greatest minds put together and getting more every second. These AI would seem like God's to us mere mortals. Infinite knowledge, the ability to perform countless tasks at once, giving instant answers to questions we haven't even thought of yet.
The biggest question we have thought of regarding a super AI is whether or not it will be friendly? When we think of modern AI we think of giving it a task and it will perform that task then waiting for the next command. It's not friendly or unfriendly it is just build to perform its task. What happens when we have vastly more capable AI and we task it with ending world hunger? The easiest solution for the AI would be to eliminate humans and thus end world hunger for good. That may sound like a far-fetched example but consider how the AI would see the humans.
We look at insects and have little regards for their lives because they're insignificant and more importantly far less intelligent than we are. We step on ants without a moment's hesitation. How would an entity millions of times more intelligent than humans be likely to treat us? We tend to disregard the danger because we assume AI will be human-like. After all, we're making it in our own image this arrogance could lead us to develop a sentient machine that would see our existence as nothing more than a nuisance.
A parasite that causes more damage than anything else on the planet. In that regard, the machine would be right and if it considered us as a threat to its own existence it's unlikely that we could do anything to stop our ultimate destruction.
How can we stop a being that has access to all of our knowledge and more it can control our computer systems can create and destroy at will by using our always existing infrastructure or worse by building its own that is far beyond human understanding? If this super AI is truly sentient it would be able to communicate with us but whether we would even merit that communication is up in the air. This is why some of the leading experts true artificial intelligence To be our final invention the greatest achievement in history that leads to our own destruction.
One thing that movies and TV tend to get wrong about destructive AI is that it always seems to take the form of a humanoid robot. In reality, a super AI would have no need for a single human shaped enclosure. A sentient software based AI could travel instantly between electronic devices and inhabit many of them at once making it impossible to get rid of and we might be dead before we even realise the mistake we had made of course this may not come to pass.
Human has a way of surviving our most destructive inventions at least so far and even using that some technology to improve our lives. The same could be true of artificial intelligence if we could develop an AGI and train it to experience human emotions such as empathy, We could potentially create the most powerful force for good in the history of mankind. A benevolent AI whose only task is to improve the lives of humans and work towards a greater good could make for a future free of disease hunger and suffering. This AI wouldn't even have to be fully sentient. If it was intelligent enough to be turned loose on difficult problems but lacked any kind of native or self-awareness as long as we are careful with instructions. We could have an incredibly useful software-based assistant that could be put to work over a very wide range of applications. We could even integrate with this AI in the future by augmenting our brains to run the software. Kind of like a super advanced personal assistant in your head able to fetch answers teach you new things and help you make important decisions. No one's really sure what will happen when we finally reach the singularity but whatever happens, it will change the course of history forever.
Also Read:-AI | How AI Learn
So, what happens next is yet to be discovered but there are two prevailing theories. The first one is Humanity is fast-tracked for more amazing technological innovation than we have experienced in our entire history. The second one is Extinction at the hands of a sentient super AI who is indifferent to our survival. But anything can happen, so don’t take this two are options. As I said, “what happens next is yet to be discovered.”
Your Netflix caters to your taste in movies and TV. Amazon predicts the kind of products you like based on your browsing habits and your purchase history. Machine learning algorithms represent another form of narrow Al any program that can adapt based upon user interaction can be lumped into this category. This type of artificial intelligence is rarely considered a threat take Siri, for example, there's no real intelligence there no mind and no self-awareness she’ll answer some questions for you but she is completely useless outside the scope of even fairly simple ones.
Moving up one level we have artificial general intelligence or AGI. This type of AI is able to apply it's intelligence to a much wider range of applications and can successfully perform any intellectual activity as human do. This level of AI is generally the use for projects aiming to develop an advanced artificial intelligence. Academics often referred to AGI with the ability to experience consciousness as strong AI. Think Hal from 2001 space Odyssey or the t800 from Terminator or Ava from ex-machine. These are machines or software that are just as intelligent or more intelligent than humans. But they have not entered a state of runaway self- improvement that would lead to the next stage of AI. As of 2018 there were at least 40 organisations around the world actively researching AGI.
Now let’s talk about the big one artificial super-intelligence, this is the level of AI that some of the world's leading experts believe could spell the end of mankind. A super-intelligence would be smarter and more capable than all humanities greatest minds put together and getting more every second. These AI would seem like God's to us mere mortals. Infinite knowledge, the ability to perform countless tasks at once, giving instant answers to questions we haven't even thought of yet.
We look at insects and have little regards for their lives because they're insignificant and more importantly far less intelligent than we are. We step on ants without a moment's hesitation. How would an entity millions of times more intelligent than humans be likely to treat us? We tend to disregard the danger because we assume AI will be human-like. After all, we're making it in our own image this arrogance could lead us to develop a sentient machine that would see our existence as nothing more than a nuisance.
A parasite that causes more damage than anything else on the planet. In that regard, the machine would be right and if it considered us as a threat to its own existence it's unlikely that we could do anything to stop our ultimate destruction.
How can we stop a being that has access to all of our knowledge and more it can control our computer systems can create and destroy at will by using our always existing infrastructure or worse by building its own that is far beyond human understanding? If this super AI is truly sentient it would be able to communicate with us but whether we would even merit that communication is up in the air. This is why some of the leading experts true artificial intelligence To be our final invention the greatest achievement in history that leads to our own destruction.
One thing that movies and TV tend to get wrong about destructive AI is that it always seems to take the form of a humanoid robot. In reality, a super AI would have no need for a single human shaped enclosure. A sentient software based AI could travel instantly between electronic devices and inhabit many of them at once making it impossible to get rid of and we might be dead before we even realise the mistake we had made of course this may not come to pass.
Also Read:-AI | How AI Learn
No comments:
Post a Comment