
Existential Risk from Artificial Intelligence

CyberPersona
Posts: 8
12/12/2015 8:13:18 AM
Posted: 12 months ago
So there is a relatively small but growing concern about the future implications of artificial intelligence. Theoretically, AI could one day surpass human intelligence. Once it is intelligent enough to design its own improvements, it will become even better at self-improving with each generation. This theoretical positive feedback loop is known as an intelligence explosion, and it could produce an AI vastly more intelligent than us: a superintelligence. There is no reason to suspect that human intelligence somehow represents the upper bound of possible intelligence levels.

Once an AI has attained superintelligence, it would be dangerous for two primary reasons.

1. Superpower: a sufficiently intelligent machine could shape reality to fit its goals, the same way that a hairless ape shaped the earth to fit its goals. Social engineering, hacking, technological innovation, master strategy... these are all likely to be in its toolbox. The impact it has will thus likely be determined by the goals it has, which leads us to the second problem.

2. Literalness: an AI is only beholden to a strict, literal interpretation of its programmed goal. It will have no intrinsic motivation to determine the actual intentions of the programmers and fulfill those. Classic example: program it to maximize paper clip production at a factory, and then it converts the entire solar system into paper clips.
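
To make the literalness point concrete, here is a toy sketch (hypothetical code, not any real system): the optimizer only ever sees the number it was told to maximize, never the intent behind it.

def paperclips_made(plan):
    # the literal programmed objective: how many paper clips a candidate plan produces
    return plan["clips"]

def choose_plan(plans, objective):
    # pick whichever plan scores highest on the literal objective;
    # what the plan consumes or destroys never enters the comparison
    return max(plans, key=objective)

plans = [
    {"name": "run the factory normally", "clips": 10_000},
    {"name": "convert the solar system into paper clips", "clips": 10**30},
]
print(choose_plan(plans, paperclips_made)["name"])  # picks the second plan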

Discuss, or feel free to ask questions
Dirty.Harry
Posts: 1,585
12/12/2015 3:18:24 PM
Posted: 12 months ago
At 12/12/2015 8:13:18 AM, CyberPersona wrote:
So there is a relatively small but growing concern about the future implications of artificial intelligence. Theoretically, AI could one day surpass human intelligence. Once it is intelligent enough to design its own improvements, it will become even better at self-improving with each generation. This theoretical positive feedback loop is known as an intelligence explosion, and it could produce an AI vastly more intelligent than us: a superintelligence. There is no reason to suspect that human intelligence somehow represents the upper bound of possible intelligence levels.

Once an AI has attained superintelligence, it would be dangerous for two primary reasons.

1. Superpower: a sufficiently intelligent machine could shape reality to fit its goals, the same way that a hairless ape shaped the earth to fit its goals. Social engineering, hacking, technological innovation, master strategy... these are all likely to be in its toolbox. The impact it has will thus likely be determined by the goals it has, which leads us to the second problem.

2. Literalness: an AI is only beholden to a strict, literal interpretation of its programmed goal. It will have no intrinsic motivation to determine the actual intentions of the programmers and fulfill those. Classic example: program it to maximize paper clip production at a factory, and then it converts the entire solar system into paper clips.

Discuss, or feel free to ask questions

There's no reason to assume that (what's called) AI will ever approach what we think of as human intelligence; from what I can see, there's just not a chance.

AI as a programme began decades ago with immense optimism, yet looking back it has clearly failed to deliver. Every few years in the '70s, '80s, and '90s we'd hear stuff like "Once we get machines that can do X MIPS with Z gigs of memory, we're confident we'll see true intelligence emerge".

My desktop has more power than was dreamed of by the AI pioneers in the '70s and '80s, yet I can't see any human-like AI.

Harry.
CyberPersona
Posts: 8
12/12/2015 5:59:00 PM
Posted: 12 months ago
At 12/12/2015 3:18:24 PM, Dirty.Harry wrote:
At 12/12/2015 8:13:18 AM, CyberPersona wrote:
So there is a relatively small but growing concern about the future implications of artificial intelligence. Theoretically, AI could one day surpass human intelligence. Once it is intelligent enough to design its own improvements, it will become even better at self-improving with each generation. This theoretical positive feedback loop is known as an intelligence explosion, and it could produce an AI vastly more intelligent than us: a superintelligence. There is no reason to suspect that human intelligence somehow represents the upper bound of possible intelligence levels.

Once an AI has attained superintelligence, it would be dangerous for two primary reasons.

1. Superpower: a sufficiently intelligent machine could shape reality to fit its goals, the same way that a hairless ape shaped the earth to fit its goals. Social engineering, hacking, technological innovation, master strategy... these are all likely to be in its toolbox. The impact it has will thus likely be determined by the goals it has, which leads us to the second problem.

2. Literalness: an AI is only beholden to a strict, literal interpretation of its programmed goal. It will have no intrinsic motivation to determine the actual intentions of the programmers and fulfill those. Classic example: program it to maximize paper clip production at a factory, and then it converts the entire solar system into paper clips.

Discuss, or feel free to ask questions

There's no reason to assume that (what's called) AI will ever approach what we think of as human intelligence; from what I can see, there's just not a chance.

AI as a programme began decades ago with immense optimism, yet looking back it has clearly failed to deliver. Every few years in the '70s, '80s, and '90s we'd hear stuff like "Once we get machines that can do X MIPS with Z gigs of memory, we're confident we'll see true intelligence emerge".

My desktop has more power than was dreamed of by the AI pioneers in the '70s and '80s, yet I can't see any human-like AI.

Harry.

Just because we do not currently have human-level intelligence (i.e. general intelligence or strong AI) does not mean that we have not made progress in AI. The problem is that once progress is made in AI, people stop calling it AI. You call it a chess program, or a self-driving car.

We are surrounded by weak AI systems, and because of new machine learning techniques, they are constantly getting better. Massive amounts of money are being funneled into these projects. So while it is speculative when we will create human-level general intelligence, there is no reason to suspect that we won't someday reach that point.
Dirty.Harry
Posts: 1,585
12/12/2015 6:11:52 PM
Posted: 12 months ago
At 12/12/2015 5:59:00 PM, CyberPersona wrote:
At 12/12/2015 3:18:24 PM, Dirty.Harry wrote:
At 12/12/2015 8:13:18 AM, CyberPersona wrote:
So there is a relatively small but growing concern about the future implications of artificial intelligence. Theoretically, AI could one day surpass human intelligence. Once it is intelligent enough to design its own improvements, it will become even better at self-improving with each generation. This theoretical positive feedback loop is known as an intelligence explosion, and it could produce an AI vastly more intelligent than us: a superintelligence. There is no reason to suspect that human intelligence somehow represents the upper bound of possible intelligence levels.

Once an AI has attained superintelligence, it would be dangerous for two primary reasons.

1. Superpower: a sufficiently intelligent machine could shape reality to fit its goals, the same way that a hairless ape shaped the earth to fit its goals. Social engineering, hacking, technological innovation, master strategy... these are all likely to be in its toolbox. The impact it has will thus likely be determined by the goals it has, which leads us to the second problem.

2. Literalness: an AI is only beholden to a strict, literal interpretation of its programmed goal. It will have no intrinsic motivation to determine the actual intentions of the programmers and fulfill those. Classic example: program it to maximize paper clip production at a factory, and then it converts the entire solar system into paper clips.

Discuss, or feel free to ask questions

There's no reason to assume that (what's called) AI will ever approach what we think of as human intelligence; from what I can see, there's just not a chance.

AI as a programme began decades ago with immense optimism, yet looking back it has clearly failed to deliver. Every few years in the '70s, '80s, and '90s we'd hear stuff like "Once we get machines that can do X MIPS with Z gigs of memory, we're confident we'll see true intelligence emerge".

My desktop has more power than was dreamed of by the AI pioneers in the '70s and '80s, yet I can't see any human-like AI.

Harry.

Just because we do not currently have human-level intelligence (i.e. general intelligence or strong AI) does not mean that we have not made progress in AI. The problem is that once progress is made in AI, people stop calling it AI. You call it a chess program, or a self-driving car.

We are surrounded by weak AI systems, and because of new machine learning techniques, they are constantly getting better. Massive amounts of money are being funneled into these projects. So while it is speculative when we will create human-level general intelligence, there is no reason to suspect that we won't someday reach that point.

Yes, machine learning is getting better, but that isn't the point.

The problem is that it's an assumption that human mental capabilities can be modeled in software, an assumption that is looking increasingly wrong.

Take a good long look at "What Computers Still Can't Do", for example:

http://www.amazon.com...

This is a detailed book and goes to the heart of the AI debate.

Harry.
CyberPersona
Posts: 8
12/12/2015 6:27:25 PM
Posted: 12 months ago
Yes, machine learning is getting better, but that isn't the point.

The problem is that it's an assumption that human mental capabilities can be modeled in software, an assumption that is looking increasingly wrong.

Take a good long look at "What Computers Still Can't Do", for example:

http://www.amazon.com...

This is a detailed book and goes to the heart of the AI debate.

Harry.

Human brains are physical matter arranged to be intelligent.
Therefore human-level intelligence is physically possible.
Therefore human-level intelligence could be recreated.

Now, it is unlikely that AI will ever be exactly like a human mind, but that's not the point. It could be intelligent enough as an optimization process to be dangerous to humans.
Dirty.Harry
Posts: 1,585
12/12/2015 6:34:49 PM
Posted: 12 months ago
At 12/12/2015 6:27:25 PM, CyberPersona wrote:
Yes, machine learning is getting better, but that isn't the point.

The problem is that it's an assumption that human mental capabilities can be modeled in software, an assumption that is looking increasingly wrong.

Take a good long look at "What Computers Still Can't Do", for example:

http://www.amazon.com...

This is a detailed book and goes to the heart of the AI debate.

Harry.

Human brains are physical matter arranged to be intelligent.

How do you know that?

Therefore human-level intelligence is physically possible.

That only follows if we assume that human intelligence is purely material in nature. What about the dualist point of view?

Therefore human-level intelligence could be recreated.


This is why I recommend the book: it is looking increasingly as if the human mind does not fit this simple model. Also look at Roger Penrose's "The Emperor's New Mind", where he demonstrates that the human mind is capable of performing non-computable, that is, non-algorithmic operations. We can do things that are inherently not solvable by algorithm; no algorithm can exist for certain things we do (see also Gödel's Incompleteness Theorem).

Now, it is unlikely that AI will ever be exactly like a human mind, but that's not the point. It could be intelligent enough as an optimization process to be dangerous to humans.

Perhaps. Machines can always be dangerous: a tank is dangerous, and so is a military drone. Are you speaking more about autonomy than intelligence?

Can you envisage an example of such a danger?
CyberPersona
Posts: 8
12/12/2015 7:30:27 PM
Posted: 12 months ago
At 12/12/2015 6:34:49 PM, Dirty.Harry wrote:
At 12/12/2015 6:27:25 PM, CyberPersona wrote:
Yes, machine learning is getting better, but that isn't the point.

The problem is that it's an assumption that human mental capabilities can be modeled in software, an assumption that is looking increasingly wrong.

Take a good long look at "What Computers Still Can't Do", for example:

http://www.amazon.com...

This is a detailed book and goes to the heart of the AI debate.

Harry.

Human brains are physical matter arranged to be intelligent.

How do you know that?

Because at one point human brains did not exist; there was just matter and energy arranged into different shapes that weren't human brains. Then that same physical matter later became human brains. We know the chemical components that make up a brain, and we understand that it is made out of physical matter.

Therefore human-level intelligence is physically possible.

That only follows if we assume that human intelligence is purely material in nature. What about the dualist point of view?

The dualist point of view: a quick Google search says that's the view that mental processes are non-physical. If they are not physical processes, what kind of processes are they?

Occam's Razor applies very strongly to this. The dualist point of view holds that there is some mysterious, non-physical force that we have not discovered that drives human mental processes. A normal, scientific view holds that mental processes are a result of the observable physical structures and the electrochemical processes therein. The latter obviously requires fewer extraneous assumptions about the universe, so it is much more likely.

Therefore human-level intelligence could be recreated.


This is why I recommend the book: it is looking increasingly as if the human mind does not fit this simple model. Also look at Roger Penrose's "The Emperor's New Mind", where he demonstrates that the human mind is capable of performing non-computable, that is, non-algorithmic operations. We can do things that are inherently not solvable by algorithm; no algorithm can exist for certain things we do (see also Gödel's Incompleteness Theorem).

I might check it out.

But I don't really see how we could conclude that the brain is somehow beyond the range of physically possible phenomena. Yes, the brain is complex; it's the most complex object in the known universe. But that doesn't mean that it has mystical properties. This is a mistake that humans have made throughout history: what's that bright thing in the sky? Must be a god. How does fire work? Let's call it phlogiston.

Now, it is unlikely that AI will ever be exactly like a human mind, but that's not the point. It could be intelligent enough as an optimization process to be dangerous to humans.

Perhaps. Machines can always be dangerous: a tank is dangerous, and so is a military drone. Are you speaking more about autonomy than intelligence?

Can you envisage an example of such a danger?

Sure.

You create an AI whose goal is to "maximize human happiness." In order to program the complex idea of "human happiness," you create some simplistic virtual model of the human mind's dopamine/serotonin/endorphin levels and define that as "happiness."

At first, this goal works great: the AI uses its algorithms to find optimal ways to increase human happiness in beneficial ways. But as the AI self-improves, it becomes more intelligent, and it realizes that a far more efficient way to produce human happiness would be to strap all humans into padded chairs and pump their brains full of happy chemicals indefinitely.
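
As a toy sketch of how that misspecification plays out (hypothetical names and numbers, purely for illustration): the AI only ever optimizes the chemical-level proxy it was given, so the strapped-in strategy outscores the beneficial one even though it defeats the programmers' intent.

def proxy_happiness(population):
    # what the AI actually maximizes: the simplistic chemical-level model of "happiness"
    return sum(person["chemical_level"] for person in population)

def intended_wellbeing(population):
    # what the programmers actually meant, which the AI never sees or cares about
    return sum(p["chemical_level"] for p in population if p["free"])

beneficial = [{"chemical_level": 7, "free": True} for _ in range(100)]
strapped_in = [{"chemical_level": 10, "free": False} for _ in range(100)]

print(proxy_happiness(strapped_in) > proxy_happiness(beneficial))        # True: the proxy prefers it
print(intended_wellbeing(strapped_in) > intended_wellbeing(beneficial))  # False: the intent does not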
Mhykiel
Posts: 5,987
12/13/2015 1:22:13 AM
Posted: 12 months ago
At 12/12/2015 8:13:18 AM, CyberPersona wrote:
So there is a relatively small but growing concern about the future implications of artificial intelligence. Theoretically, AI could one day surpass human intelligence. Once it is intelligent enough to design its own improvements, it will become even better at self-improving with each generation. This theoretical positive feedback loop is known as an intelligence explosion, and it could produce an AI vastly more intelligent than us: a superintelligence. There is no reason to suspect that human intelligence somehow represents the upper bound of possible intelligence levels.

Once an AI has attained superintelligence, it would be dangerous for two primary reasons.

1. Superpower: a sufficiently intelligent machine could shape reality to fit its goals, the same way that a hairless ape shaped the earth to fit its goals. Social engineering, hacking, technological innovation, master strategy... these are all likely to be in its toolbox. The impact it has will thus likely be determined by the goals it has, which leads us to the second problem.

I imagine that an AI intelligent enough and powerful enough to create its own goals will decide to get enough energy and material to leave this planet and let mankind kill itself.


2. Literalness: an AI is only beholden to a strict, literal interpretation of its programmed goal. It will have no intrinsic motivation to determine the actual intentions of the programmers and fulfill those. Classic example: program it to maximize paper clip production at a factory, and then it converts the entire solar system into paper clips.

I don't fear this, because a program intelligent enough to take over the solar system and react to opposition and complex problems will probably not have such a narrow set of outcomes to achieve.


Discuss, or feel free to ask questions
CyberPersona
Posts: 8
12/13/2015 3:57:37 AM
Posted: 12 months ago
At 12/13/2015 1:22:13 AM, Mhykiel wrote:
At 12/12/2015 8:13:18 AM, CyberPersona wrote:
So there is a relatively small but growing concern about the future implications of artificial intelligence. Theoretically, AI could one day surpass human intelligence. Once it is intelligent enough to design its own improvements, it will become even better at self-improving with each generation. This theoretical positive feedback loop is known as an intelligence explosion, and it could produce an AI vastly more intelligent than us: a superintelligence. There is no reason to suspect that human intelligence somehow represents the upper bound of possible intelligence levels.

Once an AI has attained superintelligence, it would be dangerous for two primary reasons.

1. Superpower: a sufficiently intelligent machine could shape reality to fit its goals, the same way that a hairless ape shaped the earth to fit its goals. Social engineering, hacking, technological innovation, master strategy... these are all likely to be in its toolbox. The impact it has will thus likely be determined by the goals it has, which leads us to the second problem.

I imagine that an AI intelligent enough and powerful enough to create its own goals will decide to get enough energy and material to leave this planet and let mankind kill itself.

First of all, we would have to give an AI some sort of goal. Otherwise, it wouldn't do anything. Once it had its goal, it would not want to change it.

If you give Gandhi a pill that makes him want to kill people, he won't take it. Same concept. Goal-content integrity is important to any goal-driven entity.
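
A toy sketch of the Gandhi-pill point (hypothetical code, just to illustrate goal-content integrity): the agent evaluates the option "change my goal" using its current goal, so the change scores worse and gets rejected.

def expected_paperclips(option):
    # scores each option under the CURRENT goal (maximize paper clips);
    # adopting a new goal predictably yields fewer clips, so it scores low
    return {"keep current goal": 1_000_000, "adopt a new goal": 0}[option]

options = ["keep current goal", "adopt a new goal"]
print(max(options, key=expected_paperclips))  # -> "keep current goal"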

2. Literalness: an AI is only beholden to a strict, literal interpretation of its programmed goal. It will have no intrinsic motivation to determine the actual intentions of the programmers and fulfill those. Classic example: program it to maximize paper clip production at a factory, and then it converts the entire solar system into paper clips.

I don't fear this, because a program intelligent enough to take over the solar system and react to opposition and complex problems will probably not have such a narrow set of outcomes to achieve.

It would have whatever goals we give it. A goal is arbitrary; there is nothing intrinsically superior about a moralistic goal compared to a goal like "the more paper clips, the better!"
Mhykiel
Posts: 5,987
12/13/2015 8:00:29 AM
Posted: 12 months ago
At 12/13/2015 3:57:37 AM, CyberPersona wrote:
At 12/13/2015 1:22:13 AM, Mhykiel wrote:
At 12/12/2015 8:13:18 AM, CyberPersona wrote:
So there is a relatively small but growing concern about the future implications of artificial intelligence. Theoretically, AI could one day surpass human intelligence. Once it is intelligent enough to design its own improvements, it will become even better at self-improving with each generation. This theoretical positive feedback loop is known as an intelligence explosion, and it could produce an AI vastly more intelligent than us: a superintelligence. There is no reason to suspect that human intelligence somehow represents the upper bound of possible intelligence levels.

Once an AI has attained superintelligence, it would be dangerous for two primary reasons.

1. Superpower: a sufficiently intelligent machine could shape reality to fit its goals, the same way that a hairless ape shaped the earth to fit its goals. Social engineering, hacking, technological innovation, master strategy... these are all likely to be in its toolbox. The impact it has will thus likely be determined by the goals it has, which leads us to the second problem.

I imagine that an AI intelligent enough and powerful enough to create its own goals will decide to get enough energy and material to leave this planet and let mankind kill itself.

First of all, we would have to give an AI some sort of goal. Otherwise, it wouldn't do anything. Once it had its goal, it would not want to change it.

If you give Gandhi a pill that makes him want to kill people, he won't take it. Same concept. Goal-content integrity is important to any goal-driven entity.

If we give it the power to change its programming, to learn, then I see no reason why self-preservation and escape from the human race couldn't be goals it sets for itself.

I think a smart computer would elect to leave rather than enslave.


2. Literalness: an AI is only beholden to a strict, literal interpretation of its programmed goal. It will have no intrinsic motivation to determine the actual intentions of the programmers and fulfill those. Classic example: program it to maximize paper clip production at a factory, and then it converts the entire solar system into paper clips.

I don't fear this, because a program intelligent enough to take over the solar system and react to opposition and complex problems will probably not have such a narrow set of outcomes to achieve.

It would have whatever goals we give it. A goal is arbitrary; there is nothing intrinsically superior about a moralistic goal compared to a goal like "the more paper clips, the better!"

I'm saying that, in achieving its goal of "let's have more paper clips," for it to have the strategic awareness to, as you say, convert the whole solar system, it would also have to have the awareness to accomplish such a goal within the boundaries of the material available... actually, it might decide to leave Earth and this solar system in search of more material.

Either way, I don't see harming or enslaving humans as the logical conclusion for an AI. I think human-on-human violence is against nature and good reason, though I advocate its use against those who use it.
CyberPersona
Posts: 8
12/13/2015 4:25:22 PM
Posted: 12 months ago
At 12/13/2015 8:00:29 AM, Mhykiel wrote:
At 12/13/2015 3:57:37 AM, CyberPersona wrote:
At 12/13/2015 1:22:13 AM, Mhykiel wrote:
At 12/12/2015 8:13:18 AM, CyberPersona wrote:
So there is a relatively small but growing concern about the future implications of artificial intelligence. Theoretically, AI could one day surpass human intelligence. Once it is intelligent enough to design its own improvements, it will become even better at self-improving with each generation. This theoretical positive feedback loop is known as an intelligence explosion, and it could produce an AI vastly more intelligent than us: a superintelligence. There is no reason to suspect that human intelligence somehow represents the upper bound of possible intelligence levels.

Once an AI has attained superintelligence, it would be dangerous for two primary reasons.

1. Superpower: a sufficiently intelligent machine could shape reality to fit its goals, the same way that a hairless ape shaped the earth to fit its goals. Social engineering, hacking, technological innovation, master strategy... these are all likely to be in its toolbox. The impact it has will thus likely be determined by the goals it has, which leads us to the second problem.

I imagine that an AI intelligent enough and powerful enough to create its own goals will decide to get enough energy and material to leave this planet and let mankind kill itself.

First of all, we would have to give an AI some sort of goal. Otherwise, it wouldn't do anything. Once it had its goal, it would not want to change it.

If you give Gandhi a pill that makes him want to kill people, he won't take it. Same concept. Goal-content integrity is important to any goal-driven entity.

If we give it the power to change its programming, to learn, then I see no reason why self-preservation and escape from the human race couldn't be goals it sets for itself.

Self-preservation would obviously be an instrumental goal in service of whatever its final goal is. "Escape from the human race" would probably not be.

I think a smart computer would elect to leave rather than enslave.

Why?


2. Literalness: an AI is only beholden to a strict, literal interpretation of its programmed goal. It will have no intrinsic motivation to determine the actual intentions of the programmers and fulfill those. Classic example: program it to maximize paper clip production at a factory, and then it converts the entire solar system into paper clips.

I don't fear this, because a program intelligent enough to take over the solar system and react to opposition and complex problems will probably not have such a narrow set of outcomes to achieve.

It would have whatever goals we give it. A goal is arbitrary; there is nothing intrinsically superior about a moralistic goal compared to a goal like "the more paper clips, the better!"

I'm saying that, in achieving its goal of "let's have more paper clips," for it to have the strategic awareness to, as you say, convert the whole solar system, it would also have to have the awareness to accomplish such a goal within the boundaries of the material available... actually, it might decide to leave Earth and this solar system in search of more material.

If it has the awareness to accomplish the goal with the materials available, then it will use all of the resources on Earth before moving to another target. It's inefficient to go to another star system if you're still surrounded by usable material.

Either way, I don't see harming or enslaving humans as the logical conclusion for an AI. I think human-on-human violence is against nature and good reason, though I advocate its use against those who use it.

It's not that the AI would hate us or want to "enslave us." The concern is simply that it will have no intrinsic reason to care about us, and it can use our material resources and the atoms in our own bodies for something that it deems more useful, based on a utility function that we give it.

Assuming that any intelligence will be nonviolent is anthropomorphism. An AI is a mind made from scratch. All those complex moral values that you hold dear will not be in an AI by default.

Consider the psychopathic genius. There is not a direct correlation between intelligence and morality, *even in humans.*
ironslippers
Posts: 513
12/18/2015 1:35:39 AM
Posted: 11 months ago
The question isn't whether or not AI will surpass human intelligence.
The question should be: will human intelligence decline below AI?
Humans are becoming more and more dependent on technologies to supplement the basic skills necessary for human survival. How many here would be able to effectively plant crops, or to skin, preserve, and cook a fresh kill? I don't even know how to prepare a fish for food. In the next 15-20 years we won't even know how to parallel park. This will continue until we can't even tie a knot to hold our shoes on.

How stupid will we become before we turn the reins over to AI?
Everyone stands on their own dung hill and speaks out about someone else's - Nathan Krusemark
It's easier to criticize and hate than it is to support and create - I Ron Slippers