The Instigator
Pro (for)
6 Points
The Contender
Con (against)
0 Points

Consciousness Can Be Transferred From The Brain Into A Different Substrate

Post Voting Period
The voting period for this debate has ended.
after 1 vote the winner is...
Voting Style: Open Point System: 7 Point
Started: 5/4/2014 Category: Philosophy
Updated: 2 years ago Status: Post Voting Period
Viewed: 651 times Debate No: 54015
Debate Rounds (5)
Comments (5)
Votes (1)




What I am arguing is that consciousness can be produced inorganically. We all know that an infant's brain has no consciousness for its first few years. Then, once childhood amnesia wears off and the brain is developed enough for the informational pattern of consciousness to form, it just "magically appears", for lack of a better phrase.

(1)So from that biological fact, I argue that if you could create a machine as complex as the human brain, consciousness would also magically arise in the machine, and the machine would be able to talk back to you from the experience of sensation if the proper stimuli were fed into it. The machine consciousness could be named with a human name, and you could ask him/her what they thought about various things, the same as you talk with any human.

(2)I also argue that the consciousness that makes up you could be transferred to the machine, as long as the informational pattern were preserved.

(3)Lastly I argue that if you wanted to learn German, for example, you wouldn't need to learn it the old-fashioned way. We could just "tweak some wires" to reproduce the state your brain would be in had you learned German, and it would be the same thing. No one would be able to tell the difference between a person who learned German the old-fashioned way and a person whose brain was modified in some futuristic neural surgery.


First round is for acceptance, and laying out any sort of definitions or facts you want to present.


I accept this challenge.

Here is your flaw: there is no way you could make something as complex as the human brain. It contains 100 billion neurons (1). You can't even count that high: at one number per second, it would take over 3,000 years to count to 100 billion, and that's not including time for sleeping, eating, etc.
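The counting arithmetic above can be checked directly. A quick sketch (an editorial illustration using only the figures cited in the argument: 100 billion neurons, one number counted per second):

```python
# Check the claim: counting to 100 billion at one number per second
# takes over 3,000 years, even without breaks for sleeping or eating.
NEURONS = 100_000_000_000          # 100 billion neurons in the brain
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

years = NEURONS / SECONDS_PER_YEAR
print(f"{years:,.0f} years")       # about 3,171 years
```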

Also, consciousness isn't physical. If you can't give me a theory of how to make consciousness, yours or anyone else's, then it can't be proven to be something that is just "lying around".

The definition you're looking for is this:

Consciousness: the condition of being conscious; the normal state of being awake and able to understand what is happening around you (2)

It's a state. Can you create "being on fire"? No, because it's not an object. Can you create "living"? No, because it's a condition or a state. You can genetically alter things like choosing your kid's hair color (they do this), but you cannot change consciousness. Nothing on earth can create information or consciousness in a brain out of thin air.

Debate Round No. 1


The statement "there is no way you could make something as complex as the human brain", is an argument from ignorance. Just because you can't imagine something that complex being built, does not make it impossible.

On a side note, we don't care how many neurons are in the brain. If we want to consider the practical aspects, Ray Kurzweil estimated the computational capacity of the brain to be about 10^16 calculations per second (page 125, [1]). He estimates the memory of the brain, allocating 10^4 bits per neural connection, to be 10^18 bits (page 127, [1]). John von Neumann estimated the memory requirement of the brain to be 2.8*10^20 bits per human lifetime, assuming no forgetting (page 64, [2]).
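To put those capacity figures in more familiar storage units, here is a rough conversion sketch (only the bit counts are from the cited sources; the unit arithmetic is standard):

```python
# Convert the cited memory estimates from bits to petabytes.
kurzweil_bits = 10**18            # Kurzweil's estimate of brain memory
von_neumann_bits = 2.8 * 10**20   # Von Neumann's lifetime estimate

BITS_PER_BYTE = 8
BYTES_PER_PETABYTE = 10**15

print(kurzweil_bits / BITS_PER_BYTE / BYTES_PER_PETABYTE)     # roughly 125 petabytes
print(von_neumann_bits / BITS_PER_BYTE / BYTES_PER_PETABYTE)  # roughly 35,000 petabytes
```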

But these are just numbers; the question is whether we can build a machine to handle them. The brain performs logical and computational operations. Some of its processes are analog, and some are digital. We've built machines that perform these same kinds of processes; the brain's components are not necessarily more complicated, there are just a lot of them. We know for a fact that humans are made of nothing but atoms, and atoms obey the laws of nature. It is ignorant to say that we will never figure out how the brain is built, because we know it is built only of atoms and molecules and signals and logical organs. The holy grail will be figuring out this mysterious blueprint. We have the parts; we just don't have the schematic. We can probe the brain as much as we need, and we can try to replicate it. It does not seem impossible to do so.

I can't create "being on fire"... which is an incoherent sentence anyway. I can light something on fire, because the laws of nature are always present to enable hydrocarbon combustion. Similarly, the laws of nature that enable consciousness are always present; we just need something that can be "lit on fire", which would be a replica of the human brain. Admittedly, we don't even know how to "reboot" a dead brain, even one from a person who died just a second ago. But if we could fuel it and set up all its inputs and outputs, I see no reason why it wouldn't work, as long as it wasn't busted.


[1]The Singularity is Near, Ray Kurzweil

[2]The Computer & the Brain, John Von Neumann


You didn't even address what consciousness is. How can you recreate something that you've never seen before? We don't even know if animals have consciousness, because the only person who can tell whether you have consciousness is you yourself. Even if it were possible, there'd be no way to prove that it actually worked, because it's not as if you're going to know whether it worked. A robot is only as good as its creator, and even if it's able to process information and learn, that doesn't mean it has consciousness. Consciousness deals with discernment, which animals and anything man-made don't have.
Debate Round No. 2


I do know what consciousness is, and so do you, and so does everyone else. Anyone you ask will agree with you on whether a given human is conscious or dead. In fact, the Turing test is a useful test/thought experiment for consciousness: if you have a text conversation with a machine and you are convinced that the machine is a human, the machine has passed the test.

Something that is conscious is able to pass the Turing test. That's how we identify the presence of consciousness. Anything that has this quality is said to be "conscious". I think that you are convinced I am a human with this quality, are you not? You haven't met me in person, but you know that I am a person who lives somewhere on the planet and has thoughts. In fact if I was a robot, I would have won the debate already because I would have proved to you that robots could be conscious. If I am a human, then that proves we can agree on what consciousness is (for without it this debate would never have happened). We don't understand consciousness fully, but we know what it is on a high, abstract level.

Be I robot or human, I have consciousness.


You have brought up discernment, and you are trying to say that non-human entities do not have this ability. That's just preposterous; animals/machines can tell the difference between different objects, or different sounds, or different quantities of things. If you are going to claim that animals and machines can't discern things like humans can, you are going to need to provide me with an example of something that a machine can't be designed to discern. You can try to do this, but you won't be able to think of anything and when you finally give up and admit you can't find an example (or I explain a way for a machine to discern your example), that will be a point in my favour.

So here is your chance to identify at least one thing man can discern, and the animals and machines can't.


Your question is easy and obvious. Humans can choose their desires and animals can't. Humans discern what they want and can make judgements on what is right and wrong. A robot or animal cannot. An animal doesn't give money to animal charity because animals don't feel good when they do a sacrificial action. Humans do feel good. It's all about how we can choose our desires, and we can make "right/wrong" choices.
Debate Round No. 3


To say animals can't choose what they want, but humans can, is not a very well-thought-out statement. It's a bit too broad, but I'll work with what you've given me.

An example of a human desire is a desire for salt, fat and sugar. This is an example of a desire that you have, but didn't choose. If you could choose not to have this desire, obesity wouldn't be as much of a problem as it is now. People can't just forcibly change their desire for greasy, salty, tasty food. If they could, they'd reduce desire for unhealthy foods and increase their desire for the (currently) less tasty but more healthy foods. Just from this simple proof by contradiction, it's obvious that not all of the desires humans have are under their control.

An example of something an animal (a bird building a nest) desires would be twigs or leaves or other bits of foliage. This is an example of a desire an animal has, where the animal also has the ability to choose how to satiate it. The bird desires to obtain materials to build a nest, but the bird has full discretion over what specific twigs he chooses. The bird does not uncontrollably/unwillingly retrieve the first twig he sees, he has the ability to search the ground for the perfect twigs.

I'm justified therefore to say "humans have desires, and there are cases where they cannot choose what item they need to fill their desire. Animals have desires, and there are cases where the animal can choose which item shall fill that desire."


Since human morality is an evolutionary trait, animals must have it too. Animals have morality. There was a study done where rhesus monkeys were aware that operating a food chain to get a pellet also caused a companion monkey to receive an electrical shock. Some of the monkeys opted to starve rather than obtain immoral food pellets [3]. That shows that you were wrong about animals being unable to discern right from wrong.

I may as well bury you while I'm at it. It would be remarkably easy to replace the monkeys with "moral robots" in that experiment. Make a robot that's programmed to pull the food chain when the shocker is inactive, and program it not to pull the food chain when the shocker is active. We've just built moral robots that are even more ethically consistent than the monkeys, since not all of the monkeys refrained from delivering shocks to their companions for food. The robots discern right and wrong with 100% accuracy.
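The "moral robot" described above reduces to a single conditional. A minimal sketch (the function name and interface are illustrative, not from the actual experiment):

```python
def pull_food_chain(shocker_active: bool) -> bool:
    """Decide whether to pull the chain that dispenses a food pellet.

    The rule described above: never pull when pulling would shock
    the companion. The robot applies it with 100% consistency.
    """
    return not shocker_active

# The robot eats only when doing so harms no one.
print(pull_food_chain(shocker_active=False))  # True: pull, eat
print(pull_food_chain(shocker_active=True))   # False: refuse, go hungry
```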


So there you have it, monkeys and robots can discern good from bad. Ergo a human brain is not required to discern right from wrong. The ability can be programmed into a machine if you want - we can offload this piece of our consciousness into a different substrate, to word it in a way that explicitly refers to the resolution.

[3] retrieved from


You have not buried me. If monkeys are willing to protect a companion, that means nothing. That companion gives them pleasure, and if they died, it would hurt them. Protecting pleasure doesn't represent morals. Also, there is no proof that a robot gets pleasure or is able to choose right and wrong.

Also, you haven't once told me how this consciousness is made. Your whole argument rests on the assumption that because animal minds work like human minds, robot minds do the same. That is quite bogus, as you haven't once proved that animal and robot minds work the same. You have attempted to state that A=B, but haven't proved that B=C, and still assume that A=C even though you have no proof for it.
Debate Round No. 4


The experimenters didn't deliver lethal shocks... which undermines your reasoning.

Earlier you claimed animals can't discern morality. Then I showed an example of monkeys choosing for themselves to suffer, so that their companions don't have to. By the way if you read the experiment you'd see that there were 2 food chains, one which shocked and one which didn't. The monkeys that starved themselves refused to even experiment and figure out that one of the chains was harmless. That's a pretty clear example of an animal discerning right from wrong; what makes it more interesting is that the monkeys who starved themselves were the ones who knew what the shock felt like.

You then turned around and said the monkey is doing this for personal pleasure, not for ethical reasons (which makes no sense because the monkey starves as a result of not pulling the chain. On the hierarchy of needs, food trumps social interaction. The experimenters made the 2 pulley system the only source of food for the monkey, and some of the monkeys chose to starve for days. That's not pleasure). I think you've used the "No True Scotsman" fallacy by trying to say "but the monkeys aren't making a true moral decision, they did it for pleasure".


If I haven't told you how consciousness is made, it's because it's one of the contemporary mysteries of science and no one knows. But from a strong premise and deductive arguments, we can get some clues as to whether or not it's possible.

Also, I want to note that humans are animals. So animal minds aren't merely akin to human minds; they are the same kind of thing. Whatever you are claiming with "A=B, B=C, thus C=A" is wrong; go check what biology has to say about humans literally being animals (not being separate from them, as some religions happen to claim).

If you start with the premise "The brain is made of only particles, atoms, and molecules operating under the laws of nature" you can start to see what consciousness is. If you can refute that premise you'll destroy what I've said before and am reiterating here. But since the brain is only made of these basic elements, if we can figure out the blueprint for the brain, or in other words fully decode the human genome (which we sequenced over a decade ago), then given the proper tools we can build a brain. I'm simply arguing "it can be done".

And it has to be possible if the brain is made only of basic elements. We just don't have the tools to rearrange those basic elements into a brain at the current level of modern science. We're pretty advanced so far, though: we've emulated chunks of the brain with computers, such as the ability to perform logic and arithmetic, and computers actually surpass the brain in this area. We've also figured out, on some level, how some specific parts of the brain work (if you've seen footage of researchers reading the mind's visual stimulus [4]).


My argument is that it is possible to build a brain, and offload the informational pattern that makes up a person's consciousness into this brain, in the same way a piece of software can be installed on any computer.



You admit that no one knows how to do it, and yet you think it's possible? You make assumptions based on things that have no connection. I agree that humans are animals, but computers aren't animals. They aren't living, and thus they can't have a consciousness. The jump from non-living to living has never been demonstrated. The link here shows that classification works like that (1). I think you could make a computer virus similar to a sickness. A virus is non-living, and a computer is non-living, and that's how it is. You cannot switch groups. By groups I mean prokaryote, eukaryote, and non-living. You cannot switch between these groups. You probably can program a sickness-type virus, but crossing into the realm of the living is science fiction, impossible, and bogus.


Thanks for reading! This was a fun debate. Pro had some good points, and he made it more challenging than I anticipated. Please don't judge this debate based on who you agree with, and I hope you enjoyed it!
Debate Round No. 5
5 comments have been posted on this debate. Showing 1 through 5 records.
Posted by Boesball 2 years ago
I'm sorry for not playing as a good opponent. Pro is experienced at this and this was my first debate. Feel free to vote against me even if you agree with me.
Posted by philochristos 2 years ago
I wish I had seen this during the challenge phase. There's a good chance I would've accepted it. Very interesting topic.
Posted by MorningSpirit 2 years ago
"You can't count that high" is not a good point. Computers are composed of a plethora of different parts and a myriad of different programs. Computers today have processing parts too numerous to count. A computer is, essentially, very similar to a brain. The only difference is that a brain is perhaps a more advanced computer, but if you think about it, the only reason brains are more advanced than computers is the hundreds of millions of years brains had to evolve. Given the same time, I think computers would inevitably outshine our brains, and therefore Pro in this argument is essentially correct.
Posted by Boesball 2 years ago
Just because you can turn it "on" and "off" doesn't mean you can create it. You can make some one on fire, but the noun "being on fire" isn't something you can hold nor is it something you create. In the same way you can't create nor hold consciousness.
Posted by Blackleadersareneeded 2 years ago
You can create and switch between states of being and consciousness. You can create being on fire, and in effect experience being on fire, which by definition is consciousness.
1 vote has been placed for this debate.
Vote Placed by philochristos 2 years ago
Agreed with before the debate: Con (0 points)
Agreed with after the debate: Con (0 points)
Who had better conduct: Tied (1 point)
Had better spelling and grammar: Pro (1 point)
Made more convincing arguments: Pro (3 points)
Used the most reliable sources: Pro (2 points)
Total points awarded: Pro 6, Con 0
Reasons for voting decision: Con challenged Pro to explain how consciousness arises in a brain. I think that was a good strategy, because unless we know how consciousness arises, we can't know whether it can be duplicated in a machine or whether it's the sort of thing that could only arise in a brain because of the peculiar properties of a brain. Pro appeared to think complexity (whatever that means) was the only thing necessary. Con came very close to winning the debate on that point alone, because the burden of proof was on Pro to demonstrate possibility, not on Con to show impossibility. But I think the overall quality of arguments from Pro far outweighed the quality of arguments coming from Con. Pro dealt with Con's points in a thorough manner, whereas Con's responses were short and ignored much of what Pro said. Con also made frequent mistakes (e.g., "cause" instead of "because"), so I gave S&G to Pro. Sources also to Pro for backing up his figures and facts with good references.