Robots don't have souls? How would anyone ever know that? What do you think makes them work?
Anyway, since social decision making and artificial systems are my professional specialty, I'd like to weigh in on this one.
First, "rights" are a kind of social fiction based entirely in rhetoric -- they are compact decision-making tools, but they are not built for solving these kinds of problems. I don't actually have a "right" not to be a slave. What I have is a society with laws telling people not to make me a slave, because we, as a social system, have agreed that this is the best decision. In other words, we don't have rights, we have laws. We have laws because we want to make good decisions.
(I could go on for days about what makes a decision "good" or not, but that is a deeper analysis than what is warranted here.)
Secondly, emotions are not all that important to the discussion, per se. You could always make a Pascal's Wager-esque assertion that everything has emotions, but acting on that would be inefficient and tedious. Thus, being efficient and not tedious takes priority over respecting the emotional state of a thing. The reason human emotions are important is that they greatly influence our behavior and our ability to make good decisions. There is no need to (legally) respect emotions that don't lead to good behavior. (E.g., there is no law to protect someone who hurts someone else "because they were angry.") Your emotions aren't a matter of consideration unless they are socially constructive.
Thirdly, there are some very complex types of large-scale decision-making that interface with emotions -- compassion and social fears and whatnot. The benefit of these things is that they are a powerful social learning mechanism, which is important because (in the deeper analysis I mentioned above) good decisions are a result of recursive tool-building, which necessitates a form of social learning. That's just an elaborate way of saying that human beings can learn how to make good decisions by both having and reacting to emotions. It may even be possible that a fully recursive decision-making system (with some scale constraints) can't exist without emotions, but that is highly debatable.
Lastly, to answer the original question: creating machines with emotions would be an astounding and remarkable breakthrough in technology. We could put these machines to use in a huge number of novel ways. But what would the laws be protecting? Laws don't protect you because you have emotions; they protect you because you have individuality, and individuality means you have the potential to solve novel social problems. So long as a computer can do the same, it should be protected just the same. However, if the computer can be recreated on a whim, why protect it differently from property?
Edited by RetraRoyale, 01 July 2013 - 01:03 PM.