Consciousness Can Be Transferred From The Brain Into A Different Substrate
Debate Rounds (5)
(1) So from that biological fact, I argue that if you could create a machine as complex as the human brain, consciousness would also arise in the machine, and the machine would be able to talk back to you from the experience of sensation if the proper stimuli were fed into it. The machine consciousness could be given a human name, and you could ask him/her what they thought about various things, the same as you talk with any human.
(2) I also argue that the consciousness that makes up you could be transferred to the machine, as long as the informational pattern were preserved.
(3) Lastly, I argue that if you wanted to learn German, for example, you wouldn't need to learn it the old-fashioned way. We could just "tweak some wires" and reproduce the state your brain would be in had you learned German, and it would be the same thing. No one would be able to tell the difference between a person who learned German the "old-fashioned way" and a person whose brain was modified in some futuristic neural surgery.
First round is for acceptance, and laying out any sort of definitions or facts you want to present.
Here is your flaw: there is no way you could make something as complex as the human brain. It contains 100 billion neurons (1). You can't even count that high: at one number per second, it would take over 3,000 years to count to 100 billion, and that's not including time for sleeping, eating, and so on.
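Con's figure is easy to verify. A minimal sanity check of the arithmetic, assuming one number counted per second with no breaks:

```python
# Sanity check of Con's claim: counting one number per second,
# nonstop, how many years to reach 100 billion?
NEURON_COUNT = 100_000_000_000          # ~100 billion neurons
SECONDS_PER_YEAR = 60 * 60 * 24 * 365   # 31,536,000 seconds

years = NEURON_COUNT / SECONDS_PER_YEAR
print(round(years))  # about 3171 years, so "over 3,000 years" holds
```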
Also, consciousness isn't physical. If you can't give me a theory of how to make consciousness, or point to someone else's theory of how to make it, it can't be proven to be something that is just "lying around".
The definition you're looking for is this:
: the condition of being conscious : the normal state of being awake and able to understand what is happening around you (2)
It's a state. Can you create "being on fire"? No, because it's not an object. Can you create "living"? No, because it's a condition or a state. You can genetically alter things like choosing your kids' hair color (they do this), but you cannot change consciousness. Nothing on Earth can create information or consciousness in a brain out of thin air.
On a side note, we don't care how many neurons are in the brain. If we want to consider the practical aspects, Ray Kurzweil estimated the computational capacity of the brain to be about 10^16 calculations per second (page 125, The Singularity is Near). He estimates the memory of the brain, at 10^4 bits per neural connection, to be 10^18 bits (page 127, The Singularity is Near). John Von Neumann estimated the memory requirement of the brain to be 2.8*10^20 bits per human lifetime without the act of forgetting (page 64, The Computer & the Brain).
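These figures fit together on the back of an envelope. A sketch of how they relate; note that the connections-per-neuron value is an assumption chosen to match Kurzweil's totals, not a number stated in the debate:

```python
# How the cited memory figures relate. connections_per_neuron is
# an assumed average (not from the debate) that makes the totals line up.
neurons = 10**11                 # ~100 billion neurons (cited above)
connections_per_neuron = 10**3   # assumed average
bits_per_connection = 10**4      # Kurzweil, page 127

memory_bits = neurons * connections_per_neuron * bits_per_connection
print(memory_bits == 10**18)     # matches Kurzweil's 10^18-bit estimate

von_neumann_bits = 2.8 * 10**20  # lifetime estimate, no forgetting
print(von_neumann_bits / memory_bits)  # Von Neumann's figure is ~280x larger
```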
But these are just numbers; the question is whether we can build a machine to handle them. The brain performs logical and computational operations. Some of its processes are analog, and some are digital. We've built machines to perform these processes; the brain is not necessarily more complicated, there's just a lot of it. We know for a fact that humans are made of nothing but atoms, and atoms obey the laws of nature. It is ignorant to say that we will never figure out how the brain is built, because we know it's just built of atoms and molecules and signals and logical organs. The holy grail will be figuring out this mysterious blueprint. We have the parts; we just don't have the schematic. We can probe the brain as much as we need, and we can try to replicate it. It does not seem impossible to do so.
I can't create "being on fire"... which is an incoherent sentence anyway. I can light something on fire, because the laws of nature are always present to enable hydrocarbon combustion. Similarly, the laws of nature that enable consciousness are always present; we just need something that can be "lit on fire", which would be a replica of the human brain. Admittedly, we don't even know how to "reboot" a dead brain, even one from a person who died just a second ago. But if we could fuel it and set up all its inputs and outputs, I see no reason why it wouldn't work, as long as it wasn't busted.
The Singularity is Near, Ray Kurzweil
The Computer & the Brain, John Von Neumann
Something that is conscious is able to pass the Turing test. That's how we identify the presence of consciousness. Anything that has this quality is said to be "conscious". I think that you are convinced I am a human with this quality, are you not? You haven't met me in person, but you know that I am a person who lives somewhere on the planet and has thoughts. In fact if I was a robot, I would have won the debate already because I would have proved to you that robots could be conscious. If I am a human, then that proves we can agree on what consciousness is (for without it this debate would never have happened). We don't understand consciousness fully, but we know what it is on a high, abstract level.
Be I robot or human, I have consciousness.
You have brought up discernment, and you are trying to say that non-human entities do not have this ability. That's just preposterous; animals and machines can tell the difference between different objects, different sounds, or different quantities of things. If you are going to claim that animals and machines can't discern things like humans can, you are going to need to provide me with an example of something that a machine can't be designed to discern. You can try, but you won't be able to think of anything, and when you finally give up and admit you can't find an example (or I explain a way for a machine to discern your example), that will be a point in my favour.
So here is your chance to identify at least one thing man can discern that animals and machines can't.
An example of a human desire is a desire for salt, fat and sugar. This is an example of a desire that you have, but didn't choose. If you could choose not to have this desire, obesity wouldn't be as much of a problem as it is now. People can't just forcibly change their desire for greasy, salty, tasty food. If they could, they'd reduce desire for unhealthy foods and increase their desire for the (currently) less tasty but more healthy foods. Just from this simple proof by contradiction, it's obvious that not all of the desires humans have are under their control.
An example of something an animal (a bird building a nest) desires would be twigs or leaves or other bits of foliage. This is an example of a desire an animal has, where the animal also has the ability to choose how to satiate it. The bird desires to obtain materials to build a nest, but the bird has full discretion over what specific twigs he chooses. The bird does not uncontrollably/unwillingly retrieve the first twig he sees, he has the ability to search the ground for the perfect twigs.
I'm therefore justified to say: "Humans have desires, and there are cases where they cannot choose what item they need to fill their desire. Animals have desires, and there are cases where the animal can choose which item shall fill that desire."
Since human morality is an evolutionary trait, animals must have it too. Animals have morality. There was a study in which rhesus monkeys were aware that pulling a chain to get a food pellet also caused a companion monkey to receive an electric shock. Some of the monkeys opted to starve rather than obtain immoral food pellets. That shows that you were wrong about animals being unable to discern right from wrong.
I may as well bury you while I'm at it. It would be painstakingly easy to replace the monkeys with "moral robots" in that experiment. Make a robot that's programmed to pull the food chain when the shocker is inactive, and program it not to pull the food chain when the shocker is active. We've just built moral robots that are even more ethically consistent than the monkeys, since not all the monkeys refrained from delivering shocks to their companions for food. The robots discern right and wrong with 100% accuracy.
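The rule Pro describes amounts to a one-line conditional. A hypothetical sketch (the function name is invented for illustration, not taken from any real system):

```python
def should_pull_chain(shocker_active: bool) -> bool:
    """Pro's proposed 'moral robot' rule: take food only when
    no companion would be harmed."""
    return not shocker_active

# The robot refrains whenever pulling would deliver a shock,
# matching the behavior of the monkeys that chose to starve.
assert should_pull_chain(shocker_active=False) is True
assert should_pull_chain(shocker_active=True) is False
```

Whether a hard-coded rule like this counts as moral discernment, rather than a mere reflex, is of course the crux of the debate.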
So there you have it, monkeys and robots can discern good from bad. Ergo a human brain is not required to discern right from wrong. The ability can be programmed into a machine if you want - we can offload this piece of our consciousness into a different substrate, to word it in a way that explicitly refers to the resolution.
 http://www.madisonmonkeys.com... retrieved from http://ajp.psychiatryonline.org...
Also, you haven't once told me how this consciousness is made. Your whole argument rests on the assumption that because animal minds work like human minds, robot minds do the same. That is quite bogus, as you haven't once proved that animal and robot minds work the same. You have attempted to state that A=B, but haven't proved that B=C, and you still assume that A=C even though you have no proof for it.
Earlier you claimed animals can't discern morality. Then I showed an example of monkeys choosing for themselves to suffer, so that their companions don't have to. By the way if you read the experiment you'd see that there were 2 food chains, one which shocked and one which didn't. The monkeys that starved themselves refused to even experiment and figure out that one of the chains was harmless. That's a pretty clear example of an animal discerning right from wrong; what makes it more interesting is that the monkeys who starved themselves were the ones who knew what the shock felt like.
You then turned around and said the monkey is doing this for personal pleasure, not for ethical reasons (which makes no sense, because the monkey starves as a result of not pulling the chain. On the hierarchy of needs, food trumps social interaction. The experimenters made the two-chain system the monkey's only source of food, and some of the monkeys chose to starve for days. That's not pleasure). I think you've committed the "No True Scotsman" fallacy by trying to say "but the monkeys aren't making a true moral decision, they did it for pleasure".
If I haven't told you how consciousness is made, it's because it's one of the contemporary mysteries of science and no one knows. But from a strong premise and deductive arguments, we can get some clues as to whether or not it's possible.
Also, I want to note that humans are animals. So animal minds aren't merely akin to human brains; they are the same kind of thing. Whatever you are claiming with "A=B, B=C, thus C=A" is wrong; go check out what biology has to say about humans literally being animals (not being separate from them, as some religions happen to claim).
If you start with the premise "The brain is made of only particles, atoms, and molecules operating under the laws of nature" you can start to see what consciousness is. If you can refute that premise you'll destroy what I've said before and am reiterating here. But since the brain is only made of these basic elements, if we can figure out the blueprint for the brain, or in other words fully decode the human genome (which we sequenced over a decade ago), then given the proper tools we can build a brain. I'm simply arguing "it can be done".
And it has to be possible if the brain is only made of basic elements. We just don't have the tools to rearrange the basic elements into a brain at the current level of modern science. We're pretty advanced so far, though: we've emulated chunks of the brain with computers, such as the ability to perform logic and arithmetic, and computers actually surpass the brain in this area. We've also figured out how some specific parts of the brain work, on some level (you may have seen footage of researchers reconstructing visual stimuli from brain activity).
My argument is that it is possible to build a brain, and offload the informational pattern that makes up a person's consciousness into this brain, in the same way a piece of software can be installed on any computer.
Thanks for reading! This was a fun debate. Pro had some good points, and he made it more challenging than I anticipated. Please don't judge this debate on who you agree with, and I hope you enjoyed it!
1 vote has been placed for this debate.
Vote Placed by philochristos 2 years ago
Agreed with before the debate: -- (0 points)
Agreed with after the debate: -- (0 points)
Who had better conduct: -- (1 point)
Had better spelling and grammar: -- (1 point)
Made more convincing arguments: -- (3 points)
Used the most reliable sources: -- (2 points)
Total points awarded: 6 to 0
Reasons for voting decision: Con challenged Pro to explain how consciousness arises in a brain. I think that was a good strategy, because unless we know how consciousness arises, we can't know whether it can be duplicated in a machine or whether it's the sort of thing that could only arise in a brain because of the peculiar properties of a brain. Pro appeared to think complexity (whatever that means) was the only thing necessary. Con came very close to winning the debate on that point alone, because the burden of proof was on Pro to demonstrate possibility, not on Con to show impossibility. But I think the overall quality of arguments from Pro far outweighed the quality of arguments coming from Con. Pro dealt with Con's points in a thorough manner, whereas Con's responses were short and ignored much of what Pro said. Con also made frequent mistakes (e.g. "cause" instead of "because"), so I gave S&G to Pro. Sources also go to Pro for backing up his figures and facts with good references.