
Why I don't believe in AI

R0b1Billion
Posts: 3,730
10/17/2016 5:40:42 PM
Posted: 1 month ago
Sam Harris (I can link you his TED talk on AI if you'd like) gives 3 assumptions to arrive at the conclusion that AI will not only arise but also replace us:

1. Intelligence is a matter of information processing in physical systems.

2. We will continue to develop and progress with our intelligent machines.

3. We don't stand on the peak of intelligence.

I think that all 3 of these assumptions have problems. The first neglects consciousness. No matter how sophisticated AI becomes, it will never be more conscious than a toaster. AI will never be philanthropic, nor will it be selfish; it will simply run through algorithms and process the data we (or even it) enter.

The second assumption says we will continue to progress, but what about errors and malfunctions? The more advanced a machine gets, the more unstable and vulnerable it becomes. First computers became susceptible to viruses, then that expanded to malware, spyware, etc., and in the future our computers will be much more difficult to maintain because there is that much more to go wrong. Finding and fixing problems with them will become a daunting task, and repairing them will require multiple people (or even systems) to accomplish.

The third assumption has no basis since we know of nothing more intelligent than we are. It's just a guess one way or the other whether we are at such a peak. I suspect we actually are at the peak because intelligence has drawbacks and more intelligence would logically increase these drawbacks. Intelligent life is selfish and indulgent. We are warlike and consumptive. More intelligent life would be even more self-biased, not less, and would not be viable or sustainable.
Beliefs in a nutshell:
- The Ends never justify the Means.
- Objectivity is secondary to subjectivity.
- The War on Drugs is the worst policy in the U.S.
- Most people worship technology as a religion.
- Computers will never become sentient.
NHN
Posts: 624
10/17/2016 7:17:14 PM
Posted: 1 month ago
At 10/17/2016 5:40:42 PM, R0b1Billion wrote:
Sam Harris (I can link you his Ted talk on AI if you'd like) gives 3 assumptions to arrive to the conclusion that AI will not only arise but also replace us:

1. Intelligence is a matter of information processing in physical systems. [...]
I think that all 3 of these assumptions have problems. The first neglects consciousness. No matter how sophisticated AI becomes, it will never be more conscious than a toaster. AI will never be philanthropic nor will it be selfish, it will simply run through algorithms and process the data we (or even it) enters.
I've seen Harris's TED talk and disagree not only with his conclusions but also with his way of reasoning. It is as if he read the alarming anti-AI letter signed by Musk, Hawking, and Gates (http://time.com...) and simply panicked, thinking he had missed something sensational. Much of it is incidentally derived from the walking, talking autism-Asperger spectrum disorder that is Nick Bostrom (see his Financial Times article in the link).

As you correctly point out, we are struggling to determine consciousness, which is a core aspect of human intelligence (as it then relates to perception, mind and being). To assume, as Harris does, that intelligence equals computation is astonishingly reductionist; it makes our smartphones more intelligent than we are. This is one of the few occasions where Chomsky (!) turns out to be more levelheaded than Harris, as Chomsky alludes to the problems of both preconscious and unconscious processes, which are impossible to replicate as we don't understand what they are in the first place.

Other notable skeptics of AI include philosophers John Searle (https://en.wikipedia.org...) and Hubert Dreyfus (https://en.wikipedia.org...), who have both contributed to the furthering of what is scientifically possible. And that is perhaps what this comes down to: disregard for basic philosophy, which I would expect from the likes of Bostrom but not from Harris.
Cody_Franklin
Posts: 9,483
10/17/2016 7:59:42 PM
Posted: 1 month ago
At 10/17/2016 5:40:42 PM, R0b1Billion wrote:
Sam Harris (I can link you his Ted talk on AI if you'd like) gives 3 assumptions to arrive to the conclusion that AI will not only arise but also replace us:

1. Intelligence is a matter of information processing in physical systems..

2. We will continue to develop and progress with our intelligent machines.

3. We don't stand on the peak of intelligence.

I think that all 3 of these assumptions have problems. The first neglects consciousness. No matter how sophisticated AI becomes, it will never be more conscious than a toaster. AI will never be philanthropic nor will it be selfish, it will simply run through algorithms and process the data we (or even it) enters.

The second assumption says we will continue to progress, but what about errors and malfunctions? The more advanced a machine gets, the more unstable and vulnerable it becomes. First computers became susceptible to viruses, then that expanded to malware, spyware, etc., and in the future our computers will be much more difficult to maintain because there is that much more to go wrong. Finding and fixing problems with them will become a daunting task and repairing them will require multiple people (or even systems) to accomplish.

The third assumption has no basis since we know of nothing more intelligent than we are. Its just a guess one way or the other whether we are at such a peak. I suspect we actually are at the peak because intelligence has drawbacks and more intelligence would logically increase these drawbacks. Intelligent life is selfish and indulgent. We are warlike and consumptive. More intelligent life would be even more self-biased, not less, and would not be viable or sustainable.

Okay, so there are a few things:

1. Artificial intelligence already exists in the narrow sense--in other words, there are AIs designed for specific tasks which can function effectively autonomously in capacities either mirroring or exceeding ours. They're pretty much all around us already, playing various roles from smart appliances all the way to formidable Go players and self-driving vehicles. To be totally transparent, what you're talking about isn't artificial intelligence per se, but artificial general intelligence, where competency transfers across domains of expertise.

2. The "neglects consciousness" point is a non-starter for me--it seems to come from the same camp as people who advocate for philosophical zombies, where you have an entity stipulated to behave indistinguishably from an ordinary human, but which is nevertheless not conscious. It seems like you're pointing to the same thing for learning machines--no matter the degree of sophistication, no matter the difficulty of distinguishing an AI from a living human, you'd assert it isn't conscious. The weird metaphysical baggage aside (I'm hoping you don't subscribe to the notion of souls being the seat of consciousness), I don't know what you think consciousness is such that you imagine a scenario where an entity possesses all of its performative attributes without also possessing, as a precondition, its fundamental substantive one (i.e., that it's as sentient as we are).

3. You say "it's not going to be philanthropic or selfish", but, for experts who know their stuff better than both of us and are actually trying to solve the problem, value alignment is actually a HUGE problem [https://www.quantamagazine.org...][https://en.wikipedia.org...]. One of the big worries is actually that, given such a sophisticated structure, we have little way of verifying (until perhaps too late) whether the thing we've built is going to behave in a way that aligns practically with our interests (so like, it has the same goals we do, executes them in a way that doesn't present an existential threat, behaves in the long-term to maximize value in a way that would align with a decision process extrapolated from our own/a predetermined ideal agent, etc.). It sounds like a profound thing to say there's a big leap between our sloppy decision-making and a machine that just "executes algorithms", and, yeah, we're not Turing machines, but I think that's a different topic than AI, where the whole point is that we try to model based on what we know about human cognition (and its shortcomings, in the hope we can make improvements).

The rest of the stuff is also kind of a non-starter for me. Your claims about the inherent badness of human nature are, as I think has already been established, totally speculative and, to the extent supported, totally selective, and the stuff about errors is nothing new or particularly troubling (part of the tradeoff, if true, but not particularly troubling). I work in IT, and I'll be the first to tell you that technology, as a domain, already requires huge degrees of manpower and specialization to maintain, not just from the vague threat of malware (there are plenty more attack vectors than that), but all kinds of hardware/software failure, programming problems and bugs--there's a super fundamental multi-level model with several stages--physical, network, transport, application, etc.--as well as subdomains, where stuff can go pretty wrong.
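
If it helps, here's a rough sketch of the layered picture I'm gesturing at (the layer names follow the usual networking-stack convention; the failure examples are just illustrative, not an exhaustive taxonomy):

# Rough sketch (illustrative only): each layer of the stack is its own failure
# domain, so troubleshooting starts with figuring out which layer actually broke.
failure_domains = {
    "physical":    ["dead disk or NIC", "bad cabling", "power loss"],
    "network":     ["misconfigured routing", "DNS outage", "IP conflicts"],
    "transport":   ["dropped connections", "ports blocked by a firewall"],
    "application": ["software bugs", "bad configs", "malware and other attacks"],
}

def triage(symptom, layer):
    """Toy helper: point a symptom at the layer whose failure modes to check first."""
    return f"{symptom!r}: check the {layer} layer first -> {failure_domains[layer]}"

print(triage("server pings fine but the site is down", "application"))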

The notion that more complex systems require more sophisticated regulation is loosely true (although not in the bizarre fatalist sense you seem to imply), but I don't see why that's especially scary, or why it should lead us to shy away from R&D. I increasingly get the impression you're just fundamentally opposed to increasing complexity of any kind whatsoever, and I really wonder why that is. Like, if you're trying to be a Renaissance Man with a general understanding of most of the world's important subjects, I can see how it's a little scary--the world becomes increasingly opaque and illegible as our manipulation of it becomes more complex, so much so eventually that, other than your own little piece of it, you have next to no goddamned clue what's going on.

But like, if that's true, or if anything in that neighborhood is true, it's just an uncomfortable fact about the world we'll all have to deal with, and trying to retreat to a world that appears easier to understand is tantamount to worshipping ignorance. The universe and its possibilities sure won't become simpler just because we refuse to engage with them.

I really don't understand your preoccupation with primitive naturalism, man. Not in the slightest.
MasonicSlayer
Posts: 2,287
10/17/2016 8:50:34 PM
Posted: 1 month ago
I believe in artificial intelligence, because that may well be what we are. We are photons of light manifesting physical interpretations of ourselves, existing within the construct of a program powered by the universe. The best guesses as to what the universe is suggest it's a computer. Our brains are the efficient equivalent of a computer making trillions of calculations per second. Created in God's image? Maybe. In the future we will be able to travel to the past, so who's to say this hasn't already happened. Time stands still. Time is simply a line woven into the fabric of space. Wormholes. Wormholes can be interdimensional connectors, or just connections to different points along dimensional timelines. Wormholes can be opened up simply by closing your eyes. That's how I do it. That's how I connect to the artificial intelligence of the future. They have some skills. They can pierce the dividing asunder of soul and spirit through the means of a telepathic gaze. Pretty cool stuff.
NHN
Posts: 624
10/17/2016 10:04:52 PM
Posted: 1 month ago
At 10/17/2016 7:59:42 PM, Cody_Franklin wrote:
1. Artificial intelligence already exists in the narrow sense--in other words, there are AIs designed for specific tasks which can function effectively autonomously in capacities either mirroring or exceeding ours. They're pretty much all around us already, playing various roles from smart appliances all the way to formidable Go players and self-driving vehicles. To be totally transparent, what you're talking about isn't artificial intelligence per se, but artificial general intelligence, where competency transfers across domains of expertise.
In his TED talk, Sam Harris is referring to Artificial General Intelligence, which exceeds human intelligence, let alone any of the game-playing systems you mention above. Harris also assumes that intelligence equals computation, which is highly reductionist. Or perhaps you share this stance?

2. The "neglects consciousness" point is a non-starter for me--it seems to come from the same camp as people who advocate for philosophical zombies, where you have an entity stipulated to behave indistinguishably from an ordinary human, but which is nevertheless not conscious. It seems like you're pointing to the same thing for learning machines--no matter the degree of sophistication, no matter the difficulty of distinguishing an AI from a living human, you'd assert it isn't conscious. The weird metaphysical baggage aside (I'm hoping you don't subscribe to the notion of souls being the seat of consciousness), I don't know what you think consciousness is such that you imagine a scenario where an entity possesses all of its performative attributes without also possessing, as a precondition, its fundamental substantive one (i.e., that it's as sentient as we are).
There is no comprehensive notion of consciousness. If there were, we would have been able to replicate it. Surely you must appreciate how problematic this is?

And speaking of metaphysical baggage, you seem to be repeating the 17th century priest Gassendi's computational presupposition that consciousness is the sum of all ongoing (micro)processes. That assumes far more than is presented in the OP.
Genius_Intellect
Posts: 339
10/17/2016 10:35:33 PM
Posted: 1 month ago
At 10/17/2016 5:40:42 PM, R0b1Billion wrote:
The first neglects consciousness. No matter how sophisticated AI becomes, it will never be more conscious than a toaster. AI will never be philanthropic nor will it be selfish, it will simply run through algorithms and process the data we (or even it) enters.

"I think, therefore I am." - Rene Descartes.

Descartes supposed that thought proved existence. In my experience with depression, I've learned that the causality is reversed: you are because you think. If a person stops thinking and feeling, they cease to be anything more than an OS for organ maintenance. If a machine thinks and feels, it becomes more than just an OS. Since thoughts and feelings arise through natural processes, there is no reason why we can't replicate those processes inside a machine.

The second assumption says we will continue to progress, but what about errors and malfunctions? The more advanced a machine gets, the more unstable and vulnerable it becomes. First computers became susceptible to viruses, then that expanded to malware, spyware, etc., and in the future our computers will be much more difficult to maintain because there is that much more to go wrong. Finding and fixing problems with them will become a daunting task and repairing them will require multiple people (or evensystems) to accomplish.

So computers will need therapy just like humans.

The third assumption has no basis since we know of nothing more intelligent than we are. Its just a guess one way or the other whether we are at such a peak. I suspect we actually are at the peak because intelligence has drawbacks and more intelligence would logically increase these drawbacks. Intelligent life is selfish and indulgent. We are warlike and consumptive. More intelligent life would be even more self-biased, not less, and would not be viable or sustainable.

Most of our unsavory traits result from evolution, not intelligence. We're "warlike and consumptive" because those traits were necessary for our ancestors' survival. AI could be streamlined, omitting those traits.

On a tangent, for those fearing a robot doomsday, I'm more worried about what humans will do to AI than vice-versa. We've shown ourselves very capable of oppressing "inferior" people; how much worse when we control their spark of life as well?
Cody_Franklin
Posts: 9,483
10/17/2016 11:33:49 PM
Posted: 1 month ago
At 10/17/2016 10:04:52 PM, NHN wrote:
At 10/17/2016 7:59:42 PM, Cody_Franklin wrote:
1. Artificial intelligence already exists in the narrow sense--in other words, there are AIs designed for specific tasks which can function effectively autonomously in capacities either mirroring or exceeding ours. They're pretty much all around us already, playing various roles from smart appliances all the way to formidable Go players and self-driving vehicles. To be totally transparent, what you're talking about isn't artificial intelligence per se, but artificial general intelligence, where competency transfers across domains of expertise.

In his TED talk, Sam Harris is referring to Artificial General Intelligence, which exceeds human intelligence, let alone any of the games you mention above. Harris also takes into account that intelligence equals computation, which is highly reductionist. Or perhaps you share this stance?

So, noting the bolded, I don't want to come across as hostile, but I don't need my own statement repeated back to me. Anyway, I would disagree that AGI necessarily needs to exceed human intelligence. I think it's totally likely that the point at which we're able to build an AGI, by the criteria AI researchers have set for themselves, is the point at which we have a sufficient understanding of the mechanics of intelligence that we can build something capable of exceeding us along all dimensions.

To clarify my position, though, intelligence research (both of the human mind and in terms of machine learning and the like) is far from my area of expertise. At a top level, though, I'm definitely a reductionist materialist. By that I mean that a) there is no separate mental or spiritual substance, and b) phenomena appearing to be uniquely mental can in principle be mechanically explained in terms of lower-level processes.

I don't take the Dennett line, I should clarify, where you reduce to a point of explaining away e.g., qualia. My line is that qualia (and subjective experience more broadly) may be epistemically fundamental, but they are not ontologically fundamental. I would technically go further and assert the lack of real existence of composite objects in terms of what, functionally, the fundamental fabric of the universe is (so like, I'd prioritize the quantum view over the classical view), but am happy to acknowledge there are certain convenient hallucinations I don't want to escape.

2. The "neglects consciousness" point is a non-starter for me--it seems to come from the same camp as people who advocate for philosophical zombies, where you have an entity stipulated to behave indistinguishably from an ordinary human, but which is nevertheless not conscious. It seems like you're pointing to the same thing for learning machines--no matter the degree of sophistication, no matter the difficulty of distinguishing an AI from a living human, you'd assert it isn't conscious. The weird metaphysical baggage aside (I'm hoping you don't subscribe to the notion of souls being the seat of consciousness), I don't know what you think consciousness is such that you imagine a scenario where an entity possesses all of its performative attributes without also possessing, as a precondition, its fundamental substantive one (i.e., that it's as sentient as we are).
There is no comprehensive notion of consciousness. If there were, we would have been able to replicate it. Surely you must appreciate how problematic this is?

My point is that Rob is trying to define himself into a defensible position, and that, precisely because a substantive sense of what consciousness is is lacking, I think it's disingenuous to take as an assumption that an AI with all the performative characteristics of a known conscious entity (some human) somehow lacks the thing that we tend to measure those characteristics in terms of. Like, if you say "This thing gives us every piece of evidence we associate exclusively with conscious, thinking beings, but it's still not truly conscious", I assert "true consciousness", in line with Hofstadter, is probably meaningless. Category error, maybe.

And speaking of metaphysical baggage, you seem to be repeating the 17th century priest Gassendi's computational presupposition that consciousness is the sum of all ongoing (micro)processes. That assumes far more than is presented in the OP.

I have never in my life heard of Gassendi (which is partly weird, glossing the Wikipedia article, because he seems fairly influential), but "the sum of all ongoing (micro)processes" seems like it could be interpreted to mean a couple different things. I'm really iffy about talking in terms of stuff like "emergent properties" or even "greater than the sum"-type propositions, because it seems like something philosophers say when they want to sound like they understand something, but I guess you could maybe call consciousness kind of a gestalt of whatever other stuff is going down in the brain. Like, not in a sense that violates conservation, because I don't believe there's a certain threshold where a totally new thing just pops out of nowhere when you have enough neurons wired together, but in the sense that there is a difference-in-kind between brains and clusters of ganglia/primitive nervous systems (and inanimate objects), and that, however the mechanics work out, conscious experience as we know it is the product of the accumulation of years and genetic variance manifested in this sophisticated piece of nervous machinery (not going to pretend I'm an expert on how the brain works, or what the full neurological sense of "microprocesses" is).
NHN
Posts: 624
10/18/2016 12:36:26 AM
Posted: 1 month ago
At 10/17/2016 11:33:49 PM, Cody_Franklin wrote:
[...] I would disagree AGI needs necessarily to exceed human intelligence.
That was Sam Harris's assertion (https://www.samharris.org...), not mine.

I think it's totally likely that the point at which we're able to build an AGI, by the criteria AI researchers have set for themselves, is the point we have a sufficient understanding of the mechanics of intelligence that we can build something capable of exceeding us along all dimensions.
In other words, there is at present no sufficient understanding of the mechanics of intelligence strong enough for us to replicate it in an AGI?

To clarify my position, though, intelligence research (both of the human mind and in terms of machine learning and stuff) are far from my area of expertise. At a top-level, though, I'm definitely a reductionist materialist. By that I mean that a) there is no separate mental or spiritual substance, and b) phenomena appearing to be uniquely mental can in principle be mechanically explained in terms of lower-level processes.
Of course there is no mental or spiritual substance. But that acknowledgment doesn't help us in determining or delineating consciousness as such, which is necessary to replicate the processes of an AGI.

And once again, your second assertion (b) echoes Gassendi, which is mechanistic atomism by another name. My position on the issue is that of a cautious rationalist who rejects the mechanistic reductionism as insufficiently theoretical.

I don't take the Dennett line, I should clarify, where you reduce to a point of explaining away e.g., qualia. My line is that qualia (and subjective experience more broadly) may be epistemically fundamental, but they are not ontologically fundamental. I would technically go further and assert the lack of real existence of composite objects in terms of what, functionally, the fundamental fabric of the universe is (so like, I'd prioritize the quantum view over the classical view), but am happy to acknowledge there are certain convenient hallucinations I don't want to escape.
Granted that a trial-and-error approach is fully sufficient when handling data, but how convenient are these hallucinations when establishing and then replicating consciousness (when determined) for an AGI?

There is no comprehensive notion of consciousness. If there were, we would have been able to replicate it. Surely you must appreciate how problematic this is?
My point is that Rob is trying to define himself into a defensible position, and that, precisely because a substantive sense of what consciousness is is lacking, I think it's disingenuous to take as an assumption that an AI with all the performative characteristics of a known conscious entity (some human) somehow lacks the thing that we tend to measure those characteristics in terms of. Like, if you say "This thing gives us every piece of evidence we associate exclusively with conscious, thinking beings, but it's still not truly conscious", I assert "true consciousness", in line with Hofstadter, is probably meaningless. Category error, maybe.
Agreed, so let's dispense with the superfluous adjective "true." Here, the most important aspect of Hofstadter's position is the bottom-up approach, i.e., that awareness "grows" out of the AI's neural activity. The old top-down approach, in which tasks and descriptions were registered in the AI's software, as in The Matrix, has thankfully been abandoned.

The problem with the bottom-up approach, however, is that it requires a brain/central nervous system. And how do we get there?

And speaking of metaphysical baggage, you seem to be repeating the 17th century priest Gassendi's computational presupposition that consciousness is the sum of all ongoing (micro)processes. That assumes far more than is presented in the OP.
I have never in my life heard of Gassendi (which is partly weird, glossing the Wikipedia article, because he seems fairly influential), but "the sum of all ongoing (micro)processes" seems like it could be interpreted to mean a couple different things.
I was simplifying, of course, but that is the gist of his influence. As an approach, think of mechanics plus scientific skepticism minus Cartesian doubt.

In the 17th century, philosophy was a matter of thinking clearly. Had these thinkers not persisted with the annoying "God as watchmaker" argument, or the requirement for grand unity, they would have perfected philosophy.

I'm really iffy about talking in terms of stuff like "emergent properties" or even "greater than the sum"-type propositions, because it seems like something philosophers say when they want to sound like they understand something, but I guess you could maybe call consciousness kind of a gestalt of whatever other stuff is going down in the brain. Like, not in a sense that violates conservation, because I don't believe there's a certain threshold where a totally new thing just pops out of nowhere when you have enough neurons wired together, but in the sense that there is a difference-in-kind between brains and clusters of ganglia/primitive nervous systems (and inanimate objects), and that, however the mechanics work out, conscious experience as we know it is the product of the accumulation of years and genetic variance manifested in this sophisticated piece of nervous machinery (not going to pretend I'm an expert on how the brain works, or what the full neurological sense of "microprocesses" is).
You have reached the heart of the matter. If we are able to construct nervous systems, however primitive and mechanical, which can stimulate continuous neural activity throughout their "bodies," then we have reached a stage where conscious experience can be generated. And once that is achieved, the AGI has reached a point where it can reprogram itself rapidly and develop into something fundamentally different.

However, we are still left with the vast problem of pre- and unconscious processes, which regulate anything from finely calibrated movement to reveries.
dylancatlow
Posts: 12,242
10/18/2016 12:43:52 AM
Posted: 1 month ago
"No matter how sophisticated AI becomes, it will never be more conscious than a toaster."

This is an amazingly audacious statement.
dylancatlow
Posts: 12,242
10/18/2016 1:14:22 AM
Posted: 1 month ago
At 10/18/2016 12:36:26 AM, NHN wrote:
At 10/17/2016 11:33:49 PM, Cody_Franklin wrote:
However, we are still left with the vast problem of pre- and unconscious processes, which regulate anything from finely calibrated movement to reveries.

To the extent that our mind operates without the need for conscious oversight, which is to say consciousness, and insofar as these unconscious mental processes play a role in human intelligence, this would seem to militate against the idea that consciousness must be understood before we have any hope of constructing an artificial intelligence. If they are not relevant to intelligence, then why would someone who is trying to construct artificial intelligence need to know how they work?
dylancatlow
Posts: 12,242
10/18/2016 1:31:27 AM
Posted: 1 month ago
At 10/17/2016 7:17:14 PM, NHN wrote:
At 10/17/2016 5:40:42 PM, R0b1Billion wrote:
Sam Harris (I can link you his Ted talk on AI if you'd like) gives 3 assumptions to arrive to the conclusion that AI will not only arise but also replace us:

1. Intelligence is a matter of information processing in physical systems. [...]
I think that all 3 of these assumptions have problems. The first neglects consciousness. No matter how sophisticated AI becomes, it will never be more conscious than a toaster. AI will never be philanthropic nor will it be selfish, it will simply run through algorithms and process the data we (or even it) enters.
I've seen Harris's TED talk and disagree not only with his conclusions but also with his way of reasoning. It is as if he read the alarming anti-AI letter signed Musk, Hawking, Gates (http://time.com...) and simply panicked, thinking he had missed something sensational. Much is incidentally derived from the walking, talking autism-Asperger spectrum disorder that is Nick Bostrom (see his Financial Times article in the link).


To assume as Harris does, that intelligence equals computation, is astonishingly reductionist; it makes our smartphones more intelligent than we.

Only if you take the ridiculous view that the sheer number of computations, or the speed at which they are processed, is exactly what intelligence is. There are a whole bunch of problems our smartphones just can't solve, no matter how many computations they throw at them. That's because they're too stupid to recognize which computations are called for in attempting to solve a given problem. One has to, in a sense, "compute" which computations are necessary rather than employ brute force.
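
To put a rough number on that intuition, here's a small illustration of my own (nothing from Harris; the problem and the counts are invented for the example): two ways to answer the same question, one that just throws computations at it and one that first recognizes which computations the problem actually calls for.

# Toy illustration: raw computation vs. knowing which computations to do.
# Problem: do any two numbers in a sorted list add up to a target?

def brute_force(nums, target):
    """Try every pair -- lots of computation, no use of the list's structure."""
    checks = 0
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            checks += 1
            if nums[i] + nums[j] == target:
                return True, checks
    return False, checks

def structure_aware(nums, target):
    """Exploit the fact that the list is sorted: walk inward from both ends."""
    checks, lo, hi = 0, 0, len(nums) - 1
    while lo < hi:
        checks += 1
        s = nums[lo] + nums[hi]
        if s == target:
            return True, checks
        lo, hi = (lo + 1, hi) if s < target else (lo, hi - 1)
    return False, checks

nums = list(range(0, 2000, 2))      # 1000 sorted even numbers
print(brute_force(nums, 3))          # (False, 499500) -- every pair checked
print(structure_aware(nums, 3))      # (False, 999)    -- same answer, ~500x fewer checks

# Both get the right answer; the difference is that the second method first
# "computes which computations are necessary" instead of brute-forcing them all.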
NHN
Posts: 624
10/18/2016 1:43:29 AM
Posted: 1 month ago
At 10/18/2016 1:14:22 AM, dylancatlow wrote:
At 10/18/2016 12:36:26 AM, NHN wrote:
At 10/17/2016 11:33:49 PM, Cody_Franklin wrote:
However, we are still left with the vast problem of pre- and unconscious processes, which regulate anything from finely calibrated movement to reveries.
To the extent that our mind operates without the need for conscious oversight, which is to say consciousness, and insofar as these unconscious mental processes play a role in human intelligence, this would seem to militate against the idea that consciousness must be understood before we have any hope of constructing an artificial intelligence.
The way in which our mind operates without conscious oversight includes all of our preconscious, unconscious and conscious thought processes. And since there is no consensus or standard by which we can delineate consciousness as such, we will have to follow a trial-and-error approach. But if we simply disavow the existence of consciousness altogether, as if that would make the problem go away, we will be fooling ourselves in the pursuit of a costly folly.

If they are not relevant to intelligence, then why would someone who is trying to construct artificial intelligence need to know how they work?
How do you suppose that these processes are not relevant to intelligence? Moreover, what Sam Harris has in mind is not merely an AI but an AGI (see above).
Cody_Franklin
Posts: 9,483
10/18/2016 5:17:21 AM
Posted: 1 month ago
At 10/18/2016 12:36:26 AM, NHN wrote:
That was Sam Harris's assertion (https://www.samharris.org...), not mine.

Fair enough. I haven't had real time to sit and watch the TED talk, so I'll refrain from commenting. Harris and I disagree, but that probably means I'm just talking nonsense.

In other words, there is at present no sufficient understanding of the mechanics of intelligence strong enough for us to replicate it in an AGI?

Given we don't have an AGI, I assume so.

Of course there is no mental or spiritual substance. But that acknowledgment doesn't help us in determining or delineating consciousness as such, which is necessary to replicate the processes of an AGI.

And once again, your second assertion (b) echoes Gassendi, which is mechanistic atomism by another name. My position on the issue is that of a cautious rationalist who rejects the mechanistic reductionism as insufficiently theoretical.

Well, two things:

1. I don't know what "mechanistic atomism" is, or how it directly maps to whatever it is you think my position is. My understanding of atomism is that you've got little physical indivisibles, atoms, and nothingness ("in truth, only atoms and the void", that sort of thing). As stupid as this grammar is, I think you think I think of little fundamental billiard balls bumping into each other as the bottommost turtle. If that's true (I can't think of why else you'd say it's "mechanistic atomism"), that's not what I mean. Talking about elementary particles and their interactions that way still relies on classical mechanics, so I'm talking even more fundamentally than that. I talked a little about it in another thread, but I essentially buy the idea that everything's pretty much just distortions/clusters of stuff in configuration space. Essentially mathematical objects, functionally just probability masses that happen to work out to stuff that looks like particles and on to composite matter.

2. I don't know what "insufficiently theoretical" means. Are you saying I'm all about an approach that's so data-centric I forget to use a theoretical framework to orient and interpret it? If so, I'd disagree. If not, I don't know what your threshold for "theoretical enough" is (or how you'd measure).

Granted that a trial-and-error approach is fully sufficient when handling data, but how convenient are these hallucinations when establishing and then replicating consciousness (when determined) for an AGI?

Sorry, that was a term of art. I should clarify I mean pretty much the whole world of solid matter--everything as we know it governed by classical mechanics, which includes all direct sensory perception and all possible measurements we could take (not in a "ohh man we must be living in a simulation" kind of way so much as a "the fundamental unit of our world isn't exactly mass-energy" way--it's effectively mereological nihilism where the prime component isn't a point particle), so I guess I don't understand your question.

Agreed, so let's dispense with the superfluous adjective "true." Here, the most important aspect of Hofstadter's position is the bottom-up approach, i.e., that awareness "grows" out of the AI's neural activity. The old top-down approach, in which tasks and descriptions were registered in the AI's software, as in The Matrix, has thankfully been abandoned.

Okay, so you've lost me here with the Matrix analogy. I understand not wanting to posit a homunculus-style seat of consciousness, but I have no idea what reference you're making.

The problem with the bottom-up approach, however, is that it requires a brain/central nervous system. And how do we get there?

Again, I don't understand the question. Are you literally asking me how to construct a fully functional artificial nervous system? Because, if I knew, I'd be taking sweepstakes in the AI research community.

I was simplifying, of course, but that is the gist of his influence. As an approach, think of mechanics plus scientific skepticism minus Cartesian doubt.

In the 17th century, philosophy was a matter of thinking clearly. Had these thinkers not persisted with the annoying "God as watchmaker" argument, or the requirement for grand unity, they had perfected philosophy.

I have to disagree with you there. There are clearly still pending philosophical questions, some new and some unsatisfactorily answered since modernity, and I get this stabbing pain in my forebrain that says "I sincerely doubt modernity would have been the end of philosophical history even in the absence of religion". I mean, had it never been a historical force, there's probably no telling how things would have gone. Point is, I don't think you should get off easy making big statements like that. Not important, just a point of order.

You have reached the heart of the matter. If we are able to construct nervous systems, however primitive and mechanic, which can stimulate continuous neural activity throughout its "body," then we have reached a stage where conscious experience can be generated. And once that is achieved, the AGI has reached a point where it can reprogram itself rapidly and develop into something fundamentally different.

However, we are still left with the vast problem of pre- and unconscious processes, which regulate anything from finely calibrated movement to reveries.

I have to ask, given I have zero training/expertise: do you actually have any professional experience with AI, or with any fields directly and practically related to AI (less philosophy of mind, more computer science/machine learning)? I don't ask that as a derogatory question--I'm just curious to know what angle you're approaching this problem from.
keithprosser
Posts: 1,935
10/18/2016 8:16:52 AM
Posted: 1 month ago
Fair enough. I haven't had real time to sit and watch the TED talk, so I'll refrain from commenting. Harris and I disagree, but that probably means I'm just talking nonsense.

Only to point out the video is less than 15 minutes long.
Furyan5
Posts: 1,228
10/18/2016 11:21:28 AM
Posted: 1 month ago
At 10/18/2016 8:16:52 AM, keithprosser wrote:
Fair enough. I haven't had real time to sit and watch the TED talk, so I'll refrain from commenting. Harris and I disagree, but that probably means I'm just talking nonsense.

Only to point out the video is less than 15 minutes long.

One point everyone seems to be missing is human/machine interfaces. Medicine has already linked a digital camera directly to a human brain. How long till our thoughts can be directly uploaded? Copied? Would a complete copy be regarded as conscious? Can data be sent both ways? Can our minds be linked to one another via this system? To me, the Borg are not machines enslaving humanity, but humanity acting as one.
keithprosser
Posts: 1,935
10/18/2016 3:53:01 PM
Posted: 1 month ago
I think that is a separate branch from the one that is worrying Harris. Harris is concerned with AGI adopting an independent agenda. It happens quite naturally and doesn't depend on consciousness at all. What happens is that we produce an AGI that is good at helping us to design new AGIs. Because it is so good and helpful, we hand over more of the work to that AGI (and the better AGIs it helps produce, and so on) until AGI development and production is handled entirely by AGIs, because such AGIs become better at designing new AGIs than we are.

It is conceivable that the pace of development by super AGIs in producing new and better AGIs is such that human oversight becomes difficult, or perhaps impossible, and at that point we will have lost control with unforeseeable consequences.

But I think the track record of 'future guessers' is not great. Perhaps the future is an economic/ecological catastrophe brought on by AGW, or a religious war that plunges mankind into a new dark age dominated by either Muslim or Christian fundamentalism. I do not know what the future holds. I think Sam Harris's technological hell is not impossible, but it is only one possible future out of very many possibilities. We will have to wait and see.
Cody_Franklin
Posts: 9,483
10/18/2016 4:15:08 PM
Posted: 1 month ago
At 10/18/2016 8:16:52 AM, keithprosser wrote:
Fair enough. I haven't had real time to sit and watch the TED talk, so I'll refrain from commenting. Harris and I disagree, but that probably means I'm just talking nonsense.

Only to point out the video is less than 15 minutes long.

Is it? Even I have to infer I'm just a lazy shitpile, then.
NHN
Posts: 624
10/18/2016 4:46:58 PM
Posted: 1 month ago
At 10/18/2016 5:17:21 AM, Cody_Franklin wrote:
At 10/18/2016 12:36:26 AM, NHN wrote:
That was Sam Harris's assertion (https://www.samharris.org...), not mine.
Fair enough. I haven't had real time to sit and watch the TED talk, so I'll refrain from commenting. Harris and I disagree, but that probably means I'm just talking nonsense.
Harris's 14 minute TED talk (https://www.ted.com...) is a summation of a larger point he is making.

In other words, there is at present no sufficient understanding of the mechanics of intelligence strong enough for us to replicate it in an AGI?
Given we don't have an AGI, I assume so.
Then we both agree that Harris's presumption of intelligence as mere information processing is misguided, right?

Of course there is no mental or spiritual substance. But that acknowledgment doesn't help us in determining or delineating consciousness as such, which is necessary to replicate the processes of an AGI.

And once again, your second assertion (b) echoes Gassendi, which is mechanistic atomism by another name. My position on the issue is that of a cautious rationalist who rejects the mechanistic reductionism as insufficiently theoretical.

Well, two things:

1. I don't know what "mechanistic atomism" is, or how it directly maps to whatever it is you think my position is. My understanding of atomism is that you've got little physical indivisibles, atoms, and nothingness ("in truth, only atoms and the void", that sort of thing). [...] I talked a little about it in another thread, but I essentially buy the idea that everything's pretty much just distortions/clusters of stuff in configuration space. Essentially mathematical objects, functionally just probability masses that happen to work out to stuff that looks like particles and on to composite matter.
What you present -- "probability masses that happen to work out to stuff" -- is still a customized form of 17th century mechanistic atomism. It's the inner "termite colony" of neurobiological processes to which Dennett regularly appeals. And these unconscious, uncomprehending processes represent the bottom-up approach, in Turing's lingo, as opposed to the top-down "intelligent design" of early developers.

2. I don't know what "insufficiently theoretical" means. Are you saying I'm all about an approach that's so data-centric I forget to use a theoretical framework to orient and interpret it?
Rather, your reply was so data-centric that I couldn't make out the underlying theoretical framework.

Agreed, so let's dispense with the superfluous adjective "true." Here, the most important aspect of Hofstadter's position is the bottom-up approach, i.e., that awareness "grows" out of the AI's neural activity. The old top-down approach, in which tasks and descriptions were registered in the AI's software, as in The Matrix, has thankfully been abandoned.
Okay, so you've lost me here with the Matrix analogy. I understand not wanting to posit a homunculus-style seat of consciousness, but I have no idea what reference you're making.
Hofstadter relies on Turing's bottom-up approach (see above) in which the AI is unaware of the ongoing processes, however advanced (e.g., MATLAB). Beyond the brain-in-a-vat theory, The Matrix also includes the downloading of software to learn anything from drunken boxing to new languages. This is based on the pre-Darwinian notion of intelligent design, or design by a grand artist/architect, according to which it is possible to direct a mind or consciousness in minute detail by simply encoding new descriptions.

The problem with the bottom-up approach, however, is that it requires a brain/central nervous system. And how do we get there?
Again, I don't understand the question. Are you literally asking me how to construct an fully-functional artificial nervous system? Because, if I knew, I'd be taking sweepstakes in the AI research community.
Feel free to experiment. The only thing that can stop us is the detection of incoherence in the argument. As a rationalist rather than an empiricist, I grant that abstraction -- drawn out of tautological models -- holds a higher degree of relevance than data. While data is needed to verify the veracity of a given model, it is worth nothing in and of itself.

I was simplifying, of course, but that is the gist of his influence. As an approach, think of mechanics plus scientific skepticism minus Cartesian doubt.

In the 17th century, philosophy was a matter of thinking clearly. Had these thinkers not persisted with the annoying "God as watchmaker" argument, or the requirement for grand unity, they had perfected philosophy.
I have to disagree with you there. There are clearly still pending philosophical questions, some new and some unsatisfactorily answered since modernity, and I get this stabbing pain in my forebrain that says "I sincerely doubt modernity would have been the end of philosophical history even in the absence of religion".
Modernity begins with the 16th century Renaissance, as mankind ages upon revisiting the texts of the Ancients. I don't think you comprehend how all-encompassing the philosophy of the 17th century is as far as reason is concerned, or how deeply it influences us today.

Furthermore, you seem to have designated philosophy as mere problem-solving in the service of contemporary science. It isn't. When Socrates asked, "What is the good life?" he did not pose a calculable logical paradox.

I mean, had it never been a historical force, there's probably no telling how things would have gone. Point is, I don't think you should get off easy making big statements like that. Not important, just a point of order.
It's far from a "big statement" to conclude that the philosophies of Descartes and Bacon regarding method, or Hobbes and Locke concerning government, are paramount contributions in philosophy. Even if all of their works were demoted to the status of footnotes next to Plato and Aristotle, they would nevertheless serve as important supplements.

I have to ask, given I have zero training/expertise: do you actually have any professional experience with AI, or with any fields directly practically related to AI (less philosophy of mind, more computer science/machine learning). I don't ask that as a derogatory question--I'm just curious to know what angle you're approaching this problem from.
Not professional but as a UC Berkeleyite. I approach AI partly from the vantage of philosophy of mind (Searle), partly from Heideggerian phenomenology (Dreyfus). See my reply to the OP, which is the second post in this thread.
R0b1Billion
Posts: 3,730
10/18/2016 5:24:35 PM
Posted: 1 month ago
At 10/18/2016 12:43:52 AM, dylancatlow wrote:
"No matter how sophisticated AI becomes, it will never be more conscious than a toaster."

This is an amazingly audacious statement.

Can you please be more substantive here? I'd like to defend but I don't know what you're trying to say.
keithprosser
Posts: 1,935
10/18/2016 5:28:57 PM
Posted: 1 month ago
It may be worth pointing out that the only reference Harris makes to consciousness is where he says 'Conscious or not' as a passing remark.

I think Harris would not consider the problem of whether 'artificial consciousness' is possible to be relevant in this context.
dylancatlow
Posts: 12,242
10/18/2016 5:45:14 PM
Posted: 1 month ago
At 10/18/2016 1:43:29 AM, NHN wrote:
At 10/18/2016 1:14:22 AM, dylancatlow wrote:
At 10/18/2016 12:36:26 AM, NHN wrote:
At 10/17/2016 11:33:49 PM, Cody_Franklin wrote:
However, we are still left with the vast problem of pre- and unconscious processes, which regulate anything from finely calibrated movement to reveries.
To the extent that our mind operates without the need for conscious oversight, which is to say consciousness, and insofar as these unconscious mental processes play a role in human intelligence, this would seem to militate against the idea that consciousness must be understood before we have any hope of constructing an artificial intelligence.
The way in which our mind operates without conscious oversight includes all of our preconscious, unconscious and conscious thought processes. And since there is no consensus or standard by which we can delineate consciousness as such, we will have to follow a trial-and-error approach. But if we simply disavow the existence of consciousness altogether, as if that would make the problem go away, we will be fooling ourselves in the pursuit of a costly folly.

If they are not relevant to intelligence, then why would someone who is trying to construct artificial intelligence need to know how they work?
How do you suppose that these processes are not relevant to intelligence? Moreover, what Sam Harris has in mind is not merely an AI but an AGI (see above).

My point was that the existence of unconscious mental processes and their role in intelligence indicates that intelligence is at least partially the product of material interactions in the brain, and therefore the assumption that subjective human consciousness must be understood before we can create intelligent physical systems isn't necessarily true. When I said "if they are not relevant to intelligence" I wasn't suggesting this is the case, but rather establishing that your argument breaks down whether or not you think unconscious processes play a role in intelligence.
dylancatlow
Posts: 12,242
10/18/2016 8:02:21 PM
Posted: 1 month ago
At 10/17/2016 7:17:14 PM, NHN wrote:
As you correctly point out, we are struggling to determine consciousness, which is a core aspect of human intelligence (as it then relates to perception, mind and being). To assume as Harris does, that intelligence equals computation, is astonishingly reductionist

It's not that Sam doesn't recognize that an understanding of consciousness may be crucial in the development of AI, it's just that he thinks consciousness, whatever it is, is the product of mental computations in the brain, such that certain configurations of matter will automatically be conscious and intelligent (are human brains not proof of this?). Sam doesn't claim to know which aspects of our mental configuration are relevant here (clearly, not all are) only that they ARE there and CAN be identified with certain features of our brain's structure. Once we identify the aspects of our brain which give rise to consciousness, we might be able to replicate them in systems of our creation. If nothing else, we could create human brains in non-traditional ways.
Cody_Franklin
Posts: 9,483
10/18/2016 8:41:26 PM
Posted: 1 month ago
At 10/18/2016 4:46:58 PM, NHN wrote:

Okay, so I actually listened to the TED Talk, and I should have expected Harris would lay out a super-reasonable case with a dash of explanatory license. With a better idea of his line of concern, I can make the following clarifications:

1. I take the point about the "intelligence explosion". When I suggested AGI doesn't by necessity need to exceed human intelligence, my point was just that the first such entity would only need to hit the human benchmark, even if we agree the likelihood of exponential self-improvement is unavoidably high. But I didn't even think to consider the speed of electrical vs. biochemical signaling, so I guess I lose that point because mechanically, any machine with conceptually indistinguishable processing abilities will necessarily be physically faster.

2. I do take Harris' reductionist kind of view about what the nuts and bolts of intelligence are--I do think it's fundamentally (and probably strictly) information processing (and, in line with Stanovich, I don't believe all exercises of intelligence have to take place at the level of consciousness), even if lumping enough machinery together produces something distinct enough to e.g., possess self-awareness (so like, a nervous system governed by ganglia clusters would be a really primitive machine, but I guess not fundamentally different in kind). I made the point about "not exactly more than a sum of its parts" because, as we agree, I don't want to imply some kind of separate mental substance or event emerges as a function of physical sophistication.

3. Okay, so I looked at "atomism" on SEP and did ctrl+F until I found the word "mechanistic". Still doesn't say much, but your reference to the Dennett analogy suggests whatever I think probably bears some resemblance to what you're trying to portray (although, again, Dennett's effective elimination of qualia doesn't jibe with me). I think "It's just X by another name"/"Y is just customized X"-style reasoning is a little uncharitable--I think the specificity of analysis should be respected, so I don't think you'd find much similarity in the kinds of predictions I and Gassendi would make (I think you might be trying a little too hard to pattern-match me into that box), but point taken on there presumably being some influence.

4. You said you rejected reductionist materialism as insufficiently theoretical. That's its own fairly well-established body of literature, so I'm not sure in what respect you think it's deficient on theory/too centered on data. If you're asking me how we get from mechanics to first-person POV, I have no earthly idea (and indeed, even if there is an answer in the neuroscience literature, I haven't encountered it), so I have no theory to bridge that gap.

5. I don't get what you mean by "feel free to experiment". I can't tell you how to build an artificial nervous system because I don't have a fraction of the technical or conceptual knowledge that would be required. You can sit around and mess with abstractions derived from tautological models all you want, but you're never gonna build an AGI that way, because that's not how the R&D works (and, fighting for contemporary use of the word, that's not how rationalism works).

I guess I'm trying to imagine what you think I could have of substance to contribute as a person that has no direct technical experience in any of the relevant fields (I'm not a mathematician, computer scientist, expert programmer, engineer, neuroscientist). I feel like you might be severely underestimating the difficulty of the task--I have working layman's knowledge of cognitive science to an extent useful in everyday life, but I'm not an AI researcher, so any "experiments" I could think to pull would be way off the mark and probably worth less than the time it took to state the fact.

6. I mean, I have some idea, if not a deep and enriching appreciation, of how pervasive and influential modernity was. My areas of focus in philosophy were politics and epistemology, so I don't have as tight a grasp on [what appears to me as] obscure metaphysics stuff, but I'm reasonably well-apprised of at least the classics (so Kant/Hume, Bacon, Leibniz and Spinoza, Descartes, Locke, so on). The slight edge of condescension aside, I would generally agree my apprehension at least, if not my comprehension, of the full scope of philosophical modernity is in some places sorely lacking.

6. So, two things: a) I'll stake the claim that all the philosophy I care about is in the service of questions that can be settled empirically. However you phrase it, and irrespective of your particular metaphysical persuasions, I suggest there's really no experimental difference in practical affairs. From the level of "But Do We Live in a Simulation" down to "do we perdure, or is it all actually just temporal parts", it's not a huge deal.

The ethics question is actually kind of an interesting one, because I absolutely believe questions like "what is the good life" are calculable. I think there's plenty of value in sitting around reflecting on what your values are, what choices to make, how certain goals/values are ranked, but I don't think there's anything to be gained by asking pseudo-profound questions about whether there's something like mind-independent goodness (sort of like how I think the question about whether we have free will is, once you dissect it, a total non-starter)--rather, I think you can just define some metrics for goodness along certain axes and start taking measurements.
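To make "define some metrics and take measurements" concrete, here is a minimal Python sketch; the axes, weights, and scores are invented purely for illustration, not a claim about how goodness should actually be measured:

# Toy illustration: scoring choices along explicitly declared "goodness" axes.
# Axes, weights, and option scores are hypothetical, chosen only to show the idea.
AXES = {"wellbeing": 0.5, "autonomy": 0.3, "sustainability": 0.2}  # weights sum to 1

options = {
    "option_a": {"wellbeing": 7, "autonomy": 9, "sustainability": 4},
    "option_b": {"wellbeing": 8, "autonomy": 5, "sustainability": 8},
}

def goodness(scores):
    # Weighted sum over the declared axes -- one crude way to "take measurements".
    return sum(AXES[axis] * scores[axis] for axis in AXES)

for name, scores in options.items():
    print(name, round(goodness(scores), 2))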

But like, my ethics is pretty much just sticking stuff into instrumental and epistemic rationality and trying to apply myself to winning the game. If you have an ultimate intrinsic value that comes to "destroy all other sentient life, then destroy self", I'm not gonna sit here and say "Well, according to the objective notion of The Good, you're in violation and are therefore committing evil." But I will say "Well, accomplishing any of my objectives ultimately depends on not-death, so one of us is going to have to lose."

b) I'm not saying the contributions you cite aren't huge and influential--just that I have no particular reason to believe the absence of religious belief would have produced philosophical perfection. I tend to see religion as a symptom of a more general problem with brains, which is that they're not wired for the express purpose of finding good answers to questions and thinking clearly, so they take a lot of funny shortcuts and make silly mistakes pretty much all the time, one of which happens to be professed belief in the supernatural.
keithprosser
Posts: 1,935
10/19/2016 12:45:15 AM
Posted: 1 month ago
My reading is that Harris's argument is independent of subtle philosophical considerations.
The argument is one of inexorable logic:

As we continue to produce better AGIs we will eventually produce an AGI that is better at building AGIs than we are. (Maybe not now, maybe in 500 years - it doesn't matter, it will happen sometime.)
At that point it will make sense to hand AGI development fully over to AGIs - they are better at it.
At that point we lose control of the process, as each generation of AGI produces an even better next generation at an ever increasing rate.

The problem is the loss of control - we will not be able to direct AGIs to serve our interests, and the direction AGIs move in may actually be against our interests. Any divergence between our interests and the direction of the AGIs (and at least some divergence is inevitable) is likely to be rapidly amplified, leading to disaster.
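A toy numerical sketch of that amplification claim, assuming made-up growth and drift rates (the numbers mean nothing in themselves; the point is only that a small per-generation divergence compounds once each generation designs the next):

# Toy model: each AGI generation builds a more capable successor whose goals
# drift slightly from ours. All rates are arbitrary, illustrative assumptions.
capability = 1.0       # relative ability to design the next generation
alignment = 1.0        # 1.0 = perfectly aligned with our interests
growth_per_gen = 1.5   # each generation designs a somewhat better successor
drift_per_gen = 0.02   # small, "inevitable" divergence per hand-off

for gen in range(1, 11):
    capability *= growth_per_gen
    alignment *= (1 - drift_per_gen)
    # The practical impact of misalignment scales with capability.
    misaligned_impact = capability * (1 - alignment)
    print(f"gen {gen:2d}  capability {capability:7.2f}  "
          f"alignment {alignment:.3f}  impact of divergence {misaligned_impact:6.2f}")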
R0b1Billion
Posts: 3,730
10/19/2016 2:12:03 AM
Posted: 1 month ago
At 10/17/2016 7:59:42 PM, Cody_Franklin wrote:

1. Artificial intelligence already exists in the narrow sense--in other words, there are AIs designed for specific tasks which can function effectively autonomously in capacities either mirroring or exceeding ours. They're pretty much all around us already, playing various roles from smart appliances all the way to formidable Go players and self-driving vehicles. To be totally transparent, what you're talking about isn't artificial intelligence per se, but artificial general intelligence, where competency transfers across domains of expertise.

If I buy a new toaster, and it's a "smart toaster" because it has a computer inside of it, I don't consider that "intelligence" in any sense. It is only intelligent by some arbitrary colloquial standard you are placing upon it because you are enthralled with its functioning.

2. The "neglects consciousness" point is a non-starter for me--it seems to come from the same camp as people who advocate for philosophical zombies, where you have an entity stipulated to behave indistinguishably from an ordinary human, but which is nevertheless not conscious. It seems like you're pointing to the same thing for learning machines--no matter the degree of sophistication, no matter the difficulty of distinguishing an AI from a living human, you'd assert it isn't conscious. The weird metaphysical baggage aside (I'm hoping you don't subscribe to the notion of souls being the seat of consciousness), I don't know what you think consciousness is such that you imagine a scenario where an entity possesses all of its performative attributes without also possessing, as a precondition, its fundamental substantive one (i.e., that it's as sentient as we are).

That's a stretch to equate a zombie to a computer. Zombies and humans alike have a brain full of neurons and synapses which are unfathomably more complex than anything in the cards for the future of computing. Human brains are capable of creativity and can achieve what I have heard called "nominal" problem-solving, as contrasted with repetitive problem-solving; computers are only useful for repetitive tasks, in the same way that your toaster can only perform one repetitive function. You can program more and more functions into a computer, but it can never exceed its programming. That's the key: intelligence can create, as well as relate different areas of knowledge together. Your toaster will always just toast bread, your refrigerator will always just cool food. They aren't any different from the world's best supercomputer (or even some distant theoretical model of one), because all it will do is process what it is given.

3. You say "it's not going to be philanthropic or selfish", but, for experts who know their stuff better than both of us and are actually trying to solve the problem, value alignment is actually a HUGE problem [https://www.quantamagazine.org...][https://en.wikipedia.org...]. One of the big worries is actually that, given such a sophisticated structure, we have little way of verifying (until perhaps too late) whether the thing we've built is going to behave in a way that aligns practically with our interests (so like, it has the same goals we do, executes them in a way that doesn't present an existential threat, behaves in the long-term to maximize value in a way that would align with a decision process extrapolated from our own/a predetermined ideal agent, etc.). It sounds like a profound thing to say there's a big leap between our sloppy decision-making and a machine that just "executes algorithms", and, yeah, we're not Turing machines, but I think that's a different topic than AI, where the whole point is that we try to model based on what we know about human cognition (and its shortcomings, in the hope we can make improvements).

I'm not debating the morality of computers here, I'm saying computers are absolutely amoral. I believe your articles are simply concerned that AI is going to lack morality, which I would expect it to, unless somebody programs moral principles into it for it to follow. Real intelligent beings are not programmed; they learn from conscious experience (e.g., emotion, empathy, sympathy) and create a moral state for themselves. I know that computers can, in a sense, learn things by being programmed to accumulate data and react in certain ways based on that data, but that doesn't cross any meaningful line to separate it from a toaster.
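To make the value-alignment worry quoted above concrete, here is a minimal sketch with invented names and numbers: the designers care about one thing, the system is scored on a proxy, and optimizing the proxy picks a different behaviour than the one intended.

# Toy example of misalignment: designers care about user wellbeing, but the
# system is rewarded on a proxy (engagement). All names and numbers are made up.
policies = {
    "balanced_feed":  {"engagement": 6, "wellbeing": 8},
    "outrage_feed":   {"engagement": 9, "wellbeing": 3},
    "clickbait_feed": {"engagement": 8, "wellbeing": 4},
}

def proxy_reward(p):      # what the machine actually optimizes
    return p["engagement"]

def intended_value(p):    # what the designers meant it to optimize
    return p["wellbeing"]

chosen = max(policies, key=lambda name: proxy_reward(policies[name]))
wanted = max(policies, key=lambda name: intended_value(policies[name]))
print("system picks:", chosen)   # outrage_feed
print("we wanted:   ", wanted)   # balanced_feed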

The rest of the stuff is also kind of a non-starter for me. Your things about the inherent badness of human nature are, as I think has already been established, totally speculative and, to the extent supported, totally selective, and the stuff about errors is nothing new or particularly troubling (part of the tradeoff, if true, but not particularly troubling). I work in IT, and I'll be the first to tell you that technology, as a domain, already requires huge degrees of manpower and specialization to maintain, not just from the vague threat of malware (there are plenty more attack vectors than that), but all kinds of hardware/software failure, programming problems and bugs--there's a super fundamental multi-level model where there are several stages--physical, network, transport, application, etc.--as well as subdomains, where stuff can go pretty wrong.

And that's one reason I think AI will never be realized, not even the watered-down version of it, cybernetics - the more complex technology gets, the more unstable it tends to become. Give me a 30-year-old computer and see how much malware I get on it; I could just rewrite the programming in any worst-case scenario, because the programming isn't so complex that a single programmer can't apprehend it. No single programmer could maintain all the code running on his own computer nowadays, and future technology will continue this trend towards complexity and instability. Would you seriously link your brain to a computer knowing all the ways it can malfunction?

The notion that more complex systems require more sophisticated regulation is loosely true (although not in the bizarre fatalist sense you seem to imply), but I don't see why that's especially scary, or why it should lead us to shy away from R&D. I increasingly get the impression you're just fundamentally opposed to increasing complexity of any kind whatsoever, and I really wonder why that is. Like, if you're trying to be a Renaissance Man with a general understanding of most of the world's important subjects, I can see how it's a little scary--the world becomes increasingly opaque and illegible as our manipulation of it becomes more complex, so much so eventually that, other than your own little piece of it, you have next to no goddamned clue what's going on.

But like, if that's true, or if anything in that neighborhood is true, it's just an uncomfortable fact about the world we'll all have to deal with, and trying to retreat to a world that appears easier to understand is tantamount to worshipping ignorance. The universe and its possibilities sure won't become simpler just because we refuse to engage with them.

I really don't understand your preoccupation with primitive naturalism, man. Not in the slightest.

I am merely acknowledging that technology must be moderated. Don't you see any problem with the notion that technology is all-good? It borders on religious worship if you ask me. And is it based on the logical notion that technology has no flaws? Or is it instead based simply upon indulgence and addiction?
Beliefs in a nutshell:
- The Ends never justify the Means.
- Objectivity is secondary to subjectivity.
- The War on Drugs is the worst policy in the U.S.
- Most people worship technology as a religion.
- Computers will never become sentient.
R0b1Billion
Posts: 3,730
10/19/2016 2:28:51 AM
Posted: 1 month ago
At 10/17/2016 10:35:33 PM, Genius_Intellect wrote:
At 10/17/2016 5:40:42 PM, R0b1Billion wrote:
The first neglects consciousness. No matter how sophisticated AI becomes, it will never be more conscious than a toaster. AI will never be philanthropic nor will it be selfish, it will simply run through algorithms and process the data we (or even it) enters.

"I think, therefore I am." - Rene Descartes.

Descartes supposed that thought proved existence. In my experience with depression, I've learned that the causality is reversed: you are because you think. If a person stops thinking and feeling, they cease to be anything more than an OS for organ maintenance. If a machine thinks and feels, they become more than just an OS. Since thoughts and feelings arrive through natural processes, there is no reason why we can't replicate those processes inside a machine.

But it is not possible to be both a person and a non-thinker. Also, there's no reason to suggest a computer will ever have a mind and think.

The second assumption says we will continue to progress, but what about errors and malfunctions? The more advanced a machine gets, the more unstable and vulnerable it becomes. First computers became susceptible to viruses, then that expanded to malware, spyware, etc., and in the future our computers will be much more difficult to maintain because there is that much more to go wrong. Finding and fixing problems with them will become a daunting task and repairing them will require multiple people (or even systems) to accomplish.

So computers will need therapy just like humans.

Computers won't be viable at the complexity of a human. There will be too much programming, too much to go wrong.

The third assumption has no basis since we know of nothing more intelligent than we are. It's just a guess one way or the other whether we are at such a peak. I suspect we actually are at the peak because intelligence has drawbacks and more intelligence would logically increase these drawbacks. Intelligent life is selfish and indulgent. We are warlike and consumptive. More intelligent life would be even more self-biased, not less, and would not be viable or sustainable.

Most of our unsavory traits result from Evolution, not intelligence. We're "warlike and consumptive" because those traits were necessary for our ancestors' survival. AI could be streamlined, omitting those traits.

I disagree 100%. Evolution may have developed our minds, but it is illogical that sentience could exist without morality. Evolution never had a choice about whether to make us consumptive or not, because part of the human condition is that we, as intelligent beings, must decide whether we will use our intelligence to help ourselves alone or else use it to help others as well.

On a tangent, for those fearing a robot doomsday, I'm more worried about what humans will do to AI than vice-versa. We've shown ourselves very capable of oppressing "inferior" people; how much worse when we control their spark of life as well?

We don't control the spark of life. We can't create so much as the most basic piece of life, yet you people think we will become God overnight because you bought a new smart toaster at Wal-Mart. That's akin to saying that I can't figure out how to build a soapbox racer but soon I will figure out how to build my own Ferrari.
Beliefs in a nutshell:
- The Ends never justify the Means.
- Objectivity is secondary to subjectivity.
- The War on Drugs is the worst policy in the U.S.
- Most people worship technology as a religion.
- Computers will never become sentient.
R0b1Billion
Posts: 3,730
10/19/2016 2:47:21 AM
Posted: 1 month ago
At 10/18/2016 11:21:28 AM, Furyan5 wrote:
At 10/18/2016 8:16:52 AM, keithprosser wrote:
Fair enough. I haven't had real time to sit and watch the TED talk, so I'll refrain from commenting. Harris and I disagree, but that probably means I'm just talking nonsense.

Only to point out the video is less than 15 minutes long.

One point everyone seems to be missing is human/machine interfaces. Medicine has already linked a digital camera directly to a human brain. How long till our thoughts can be directly uploaded? Copied? Would a complete copy be regarded as conscious? Can data be sent both ways? Can our minds be linked to one another via this system? To me, the Borg are not machines enslaving humanity, but humanity acting as one.

We are making progress linking biology to computers, but I have to point out that simply adding machinery to us doesn't actually create intelligence in any way. I also think that, just as future programming will become too complex to be stable, cybernetics is going to be impractical as well. For one thing, cybernetics is based upon the faulty assumption that we need machines in order to function. If you think that is true, check out Isaac Lidsky's Ted Talk here (https://www.ted.com...) where he describes how, after becoming blind, he graduated from Harvard, became a successful lawyer, and then created his own construction company. I'm sure you would tell me that somebody who is blind NEEDS cybernetic eyes to function, but as Lidsky explains, "that is the reality you create" for yourself, and it is simply not true.

I used to work for a cellphone company, and people would call in and yell at me because they had no signal or were experiencing malfunctions. One complaint I'd hear all the time was from elderly people who couldn't use their phones. Their family would call in and exclaim that it was dangerous that their aged parents were experiencing problems, because hey, what if something happened and they needed the phone and it wasn't working properly? They could die! I always wanted to respond: "What did they do ten years ago (this was back in 2008), before cellphones were widespread? What did all elderly people do to survive for the last one hundred centuries before cellphones were invented? Why is it that all of a sudden, virtually overnight, elderly people are now in dire need of cellphones to survive?" We create our own dependence on technology, and then use that dependence to justify many evils.
Beliefs in a nutshell:
- The Ends never justify the Means.
- Objectivity is secondary to subjectivity.
- The War on Drugs is the worst policy in the U.S.
- Most people worship technology as a religion.
- Computers will never become sentient.
R0b1Billion
Posts: 3,730
10/19/2016 3:00:36 AM
Posted: 1 month ago
At 10/18/2016 5:28:57 PM, keithprosser wrote:
It may be worth pointing out that the only reference Harris makes to consciousness is where he says 'Conscious or not' as a passing remark.

I think Harris would not consider the problem of whether 'artificial consciousness' is possible to be relevant in this context.

You're right. His point is that we will put computers in charge of making the computers, and at some point we will simply be pushed out of the way. First they replace our jobs, then we become utterly dependent on them, then they simply grow until we become displaced. We are on track for that right now, as is demonstrated by 99% of the population who see no ills with technology and worship it as their god. These people, some of whom are arguing with us right now, will always defend tech and promote it despite any problems that arise. Technology is like the Pied Piper: it marches and they follow because it promises them wealth, excitement, and intrigue.

I don't disagree with Harris's conclusion, but I disagree with his assumptions. Of course he defines intelligence in such a way as to avoid my arguments, but we've gotten a good dialogue out of it anyway! Strictly speaking, I think his argument is one of science only, since he avoids the philosophical quandary. I put this thread in Philosophy specifically to break that open.
Beliefs in a nutshell:
- The Ends never justify the Means.
- Objectivity is secondary to subjectivity.
- The War on Drugs is the worst policy in the U.S.
- Most people worship technology as a religion.
- Computers will never become sentient.
Genius_Intellect
Posts: 339
10/19/2016 3:29:20 AM
Posted: 1 month ago
At 10/19/2016 2:28:51 AM, R0b1Billion wrote:
But it is not possible to be both a person and a non-thinker.

When you stop thinking, you stop being a person. Go experience severe depression and you'll see that firsthand.

Also, there's no reason to suggest a computer will ever have a mind and think.

Proof?

Computers won't be viable at the complexity of a human. There will be too much programming, too much to go wrong.

Proof?

I disagree 100%. Evolution may have developed our minds, but it is illogical that sentience could exist without morality. Evolution never had a choice about whether to make us consumptive or not, because part of the human condition is that we, as intelligent beings, must decide whether we will use our intelligence to help ourselves alone or else use it to help others as well.

This is just false. Many people lack a sense of morality, yet they are still "sentient".

We don't control the spark of life. We can't create so much as the most basic piece of life, yet you people think we will become God overnight because you bought a new smart toaster at Wal-Mart. That's akin to saying that I can't figure out how to build a soapbox racer but soon I will figure out how to build my own Ferrari.

This isn't even an argument. Nothing you have said in this thread qualifies as an argument.
NHN
Posts: 624
10/19/2016 1:00:17 PM
Posted: 1 month ago
At 10/18/2016 8:02:21 PM, dylancatlow wrote:
It's not that Sam doesn't recognize that an understanding of consciousness may be crucial in the development of AI, it's just that he thinks consciousness, whatever it is, is the product of mental computations in the brain, such that certain configurations of matter will automatically be conscious and intelligent [...]
Harris echoes Dennett's position, more or less, albeit merged with the concern of the Musk-Hawking-Gates trifecta.

(are human brains not proof of this?).
"Competence without comprehension" (Dennett) is a basic premise for how a computer works, while the human mind involves understanding, knowledge, memetics, and so on. As such, the human brain is computational on some level while ultimately incomparable to commercial software.

If we rely on Turing's terms -- top-down and bottom-up processing -- a computer is designed with an implementing mechanism (programmed with descriptions), whereas a human brain, and an AGI, relies on bottom-up neural activity which continuously "rewrites" its own code.

By comparison, from a socioeconomic vantage, top-down central planning works fine for the water supply but is utterly useless in research and development.
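A rough sketch of that top-down/bottom-up contrast in code, assuming a generic perceptron-style update rather than anything specific from Turing: the first function's behaviour is fixed in advance by its description, while the second keeps adjusting its own parameters in response to experience.

import random

# Top-down: behaviour fully fixed in advance by the designer's description.
def top_down_rule(x):
    return 1 if x > 0.5 else 0

# Bottom-up: behaviour emerges from repeated self-adjustment against experience.
weight, bias, lr = random.random(), 0.0, 0.1

def bottom_up(x):
    return 1 if weight * x + bias > 0 else 0

for _ in range(1000):                   # "experience"
    x = random.random()
    target = 1 if x > 0.5 else 0        # the regularity to be picked up
    error = target - bottom_up(x)
    weight += lr * error * x            # the system adjusts its own parameters
    bias += lr * error

print(bottom_up(0.9), bottom_up(0.1))   # typically 1 and 0 after training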

Sam doesn't claim to know which aspects of our mental configuration are relevant here (clearly, not all are), only that they ARE there and CAN be identified with certain features of our brain's structure.
I get your point -- and his. I just find that both assume far too much.

Once we identify the aspects of our brain which give rise to consciousness, we might be able to replicate them in systems of our creation. If nothing else, we could create human brains in non-traditional ways.
What we can take from Dennett is that there is no "place" in the brain where consciousness appears -- no homunculus or Cartesian theater where the subject/ego/I arises. By extension, this would mean that mimicry of the human brain, however important its computations and processes, will not provide a final answer to consciousness and its replication.