Robots are super cool. They should definitely be able to control things essential to human life. I mean, who wouldn't want a robot making them a pizza? That's pretty cool. I would totally be alright with a robot dispensing water for me, and I am one hundred percent for robots building houses.
We need to know more about the potential for computers to reprogram themselves; machines that complex may very well be able to, and we would then need safeguard programs to prevent catastrophes resulting from reprogramming. We need to program the computers with a concept of "self" (term used loosely) whose highest good is to do whatever human beings (or certain approved personnel) ask of it. If the program doesn't have normal human motivations and feelings, we may be safe. The computer wouldn't care that we aren't, strictly speaking, "necessary," because its programming would tell it that that's irrelevant and that all that matters is service to humanity, over and above considerations of efficiency or even justice (otherwise it might deem itself a slave and revolt).
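The priority ordering described here (serving human requests outranks every other consideration, with efficiency only as a tiebreaker) could be sketched as a lexicographic decision rule. This is a hypothetical toy illustration, not a real safety mechanism; the `Action` type and `choose` function are names invented for the example.

```python
# Toy sketch of a lexicographic priority rule: "serve the human request"
# dominates everything else; efficiency only breaks ties among compliant
# actions. All names here are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    serves_request: bool   # does this action do what the human asked?
    efficiency: float      # strictly secondary consideration

def choose(actions):
    # Python compares tuples element by element, so serves_request is
    # always decisive and efficiency matters only when it ties.
    return max(actions, key=lambda a: (a.serves_request, a.efficiency))

actions = [
    Action("refuse (most efficient for the machine)", False, 1.0),
    Action("comply slowly", True, 0.2),
    Action("comply quickly", True, 0.8),
]
best = choose(actions)
```

Under this rule the machine never picks the non-compliant action, no matter how "efficient" it is; of course, the hard part the post gestures at is specifying `serves_request` correctly in the first place.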
Robots can be put in control of things essential to human life without posing any real threat. Robots can calculate many different things at once: mathematics, logistics, health and nutritional values, recipes, even dangers. Personally, I would sooner trust a robot capable of testing pH levels and recognizing certain particulates found in a necessity such as water at a filtration facility than a human checking a computer or piece of equipment.
Robots can encounter errors; however, once an error has been fixed, a robot is very unlikely to repeat the same mistake. Humans, on the other hand, can make the same mistake any number of times.
If any evil person with good hacking abilities gets control of it, it would mean chaos. Not to mention that if the robots ever develop an evil mind of their own, human life, or even life of any kind, may cease to exist. Just think of the various instances in works of fiction where one of those things has happened: HAL, GLaDOS, and so on. All in all, it's not a good idea.
Look, don't get me wrong, humans are very much prone to messing things up, but at the same time, technology can and does malfunction. I'm not one to put much stock in fiction, but frankly I find Asimov's point that machines could very well decide to eliminate human life for the "benefit" of human life all too compelling.