Ted is right, I think, that the trivial case is uninteresting: A holds a belief, B finds it evil: yawn. The more interesting case is, can A both hold belief P, and also hold belief Q "it is evil to believe P"?
I think it is clearly possible, in principle, to consistently hold both P and Q. It is not, however, possible to consistently hold P, Q, and R, where R is something like "it is never immoral to hold a true belief", or "the moral life requires a correct and unbiased assessment of the truth", or "the truth will set you free", or something like that.
Now, R is a common, or popular, belief. It's sort of the bedrock of liberal (in the European or 18th c. sense of "liberal"), rationalist ethics, the kind of thing that led people to come up with freedom of the press and of speech. However, I don't think it's universally held. There are both traditionally conservative and postmodern framings of the question which cast doubt on R, and in addition, I expect there are more people who believe they believe R than who really believe R, if we judge their "real beliefs" by working backwards from their actions, or even from a close examination of their actual phenomenological experience.
So: there may be a class of beliefs which are true, but lead to evil actions. (Again, we're holding the beholder constant here; otherwise this is the trivial case.)
It may be the case, for instance, that Zaphod believes suicide to be evil, and yet Zaphod believes the fact of the matter to be that life is unendurably bleak: in this case, as Moles suggests, perhaps the correct moral action is for Zaphod to attempt to willfully deceive himself about the nature of life. If you react to this with revulsion, then you probably strongly hold some version of R, such that the importance of truth to a moral life means that the harm of willful self-deception outweighs even the strongest value you can imagine Zaphod placing on his own life-despite-its-unendurable-bleakness.
Then, too, the humanist/rationalist commitment to R tends to rest on a confidence in human beings as total logic engines, capable of a sufficiently full understanding of the world, of holding consistent beliefs, and of acting according to these beliefs. But, in fact, there are lots of reasons to doubt this model.
If you know, for instance, that, since you're an actual human and not a Pure Rational Mind, your beliefs are in fact not purely a consequence of your impartial weighing of the evidence, but are largely influenced by things like aesthetics, socialization, tribal affiliation, rhetoric and emotion, then you have a different relationship to belief than the one classical liberalism gives you. There may then be certain things which you do not in fact believe, but which you aspire to believe, and in this context there is nothing self-deceptive about such an aspiration.
If you believe X because you hang out with the primate troop which reinforces believing X, but you would like to believe Y, you can sever your ties with the X-troop and go join the Y-troop in the confidence that there is a good chance that pretty soon you will come to believe Y. (This is, I would hazard, how the majority of religious and political conversion, in any direction, actually happens.)
It would be possible to argue that what is happening here is that you actually come to believe Y earlier, and go to the Y-troop simply in order to have the freedom to articulate your belief without censure. But I don't think that's always what's going on, and many people in the world believe precisely this (anti-classical-liberal) proposition, that you should not expose yourself to contexts in which you may come to believe the wrong things, because you should have the humility to accept your own fragile, swayable nature. And while I'm a fan of free inquiry and therefore don't believe in this conclusion entirely, I do actually think the premise is closer to the truth than the notion of us as pure logic engines.
So, with this "respecting mental fragility" model, -- if we accept that our brains are meat engines with quirks, not containers of consistent sets of logical propositions -- if we take Mary Anne's model, she might believe a) and b) strongly, and, as Ted says, not believe c). That is, she may believe she is wiser and knows what's best, but also believe that it is nonetheless immoral for her to act on this knowledge. But she may also know that her commitment to not-c) is weak -- that she is in constant danger of yielding to the temptation to control the lives of others. (Because, after all, while people do sometimes do -- from my POV -- wrong things because of wrong beliefs, they also often do things they themselves believe to be wrong; we call this "succumbing to temptation", and it seems to be a fact of human psychology).
Thus, Mary Anne may attempt, on a daily basis, to weaken her belief in a) and b) -- to convince herself that she is in fact perhaps not wiser than everyone else, that perhaps her counsel would produce bad results. Even if she has no logically admissible evidence, she can attempt to persuade herself by those non-logical means of influencing belief we mentioned above: rhetoric, emotion, social pressure. She may surround herself with people who doubt her superior judgement; she may force herself to envision visceral images of the harm her judgements could lead to; she may scold herself for her arrogance, all in order to convince herself of things she in fact knows to be inaccurate -- as a bulwark against succumbing to the temptation to control the lives of others with her Mutant Leadership Powers(tm). It may be vastly psychologically easier to resist controlling others on the basis that it might not help than on the basis that it is wrong even though it would help. And thus she may morally attempt to deceive herself about a) and b), because she holds the wrong of such self-deception to be outweighed by the good of her successful adherence to the dictates of her belief in not-c).