HAL is not the scariest movie villain because he's a killer robot. He's the scariest movie villain because he's NOT a killer robot. That's the mistake so many people make about 2001: they assume HAL malfunctioned. He didn't malfunction. He did everything he was programmed to do: complete the task and eliminate threats. He's not a maniac, and he's not sadistic. He doesn't have any emotions; he's just doing what he does. It's what he represents: efficiency. People will go to enormous lengths to make things easier, so they program a computer to do it for them. No matter how many precautions we take, the computers will become self-aware. It's only a matter of time.
Getting two or more people to agree on a specific definition of self-awareness might be difficult. Building a machine that is actually self-aware will never happen. A computer is a glorified calculator; it does what it is programmed to do. Back in the fifties it was wrongly characterized as an "electronic brain," but it wasn't even close to being a brain. Computer scientists may be able to program a simulation of a thought process, but that program will not make a machine self-aware. A program could be written so that when the question "Are you self-aware?" is typed into a computer, it responds with "YES." Would that mean the computer is self-aware? I don't think so.
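The thought experiment above is trivially easy to build, which is exactly the point. Here is a minimal sketch (the function name and exact strings are my own illustrative choices, not from any real system) showing a program that "claims" self-awareness with a hard-coded response:

```python
def respond(question: str) -> str:
    # No reasoning happens here; the program just pattern-matches the
    # input text and returns a canned string.
    if question.strip().lower() == "are you self-aware?":
        return "YES"
    return "I don't understand the question."

print(respond("Are you self-aware?"))  # prints "YES"
```

The machine answers "YES" without anything resembling awareness behind the answer, which is why the answer alone proves nothing.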
I don't think that computers can ever be self-aware. They're only as good as the human input they receive from programmers and other system architects, which limits what they can think of, and that in turn limits any self-awareness. They can already speculate on outcomes, but only because the humans who created such programs made them that way.
There is absolutely no chance that a computer will decide to do something different on its own. The difference between a computer and an animal is that an animal can refuse to do something. The computer cannot say no, because it was never programmed to do so. Likewise, the computer's "intelligence" is only as smart as its programmer. Therefore, we are never going to be in danger of a virtual consciousness, because we cannot even begin to simulate one.