Statements of moral/ethical evaluations are often confronted in turn by the varyingly self-righteous demand, “Who are we to judge?”
Anyone who has taught classes in ethics (and I've done so both in traditional “brick and mortar” settings and online) will have encountered that phrase repeatedly. It is an only slightly more specific version of the basic challenge “Says who?” leveled at any claim of ethical evaluation.
The overt nature of the logical fallacies involved in such questions should scarcely require notice, which makes it all the more irritating that the fallacies do, indeed, need to be constantly and repetitiously addressed. Among other things, the “who” questions are bald-faced Red Herring fallacies as well as Complex Questions: the issue of “who” is irrelevant, and the question presupposes as fact a matter that is in reality patently false. Specifically, it takes for granted the implicit assumption that the only way an ethical judgment can be established is by a kerygmatic and bald-facedly authoritarian announcement.
There is an even deeper issue to be examined in the above, but before getting to it one must first set aside a common misunderstanding rooted in an ambiguity in the verb “to judge” and its various cognates. On the one hand, there is the sense of judgment in which one “passes sentence,” even to the point that one actualizes the punitive conditions of one's condemnation. This is the sense of “to judge” that Scripture itself condemned. (The New Testament, at least; the Old joyfully wallowed in sadistically dishing it out.)
But there is a second sense of “to judge,” and that is to give a reasoned evaluation. Persons eager to eschew the Old Testament version often enough abandon, with equal élan, any claim or pretense to even allow, much less employ, reason in ethical matters, because reason involves making judgments. The breathtaking absurdity of such a move is called “relativism,” and is the deeper issue hinted at above. But to close out this sub-topic, the proper response to the accusatory claim, “Who are you to judge?” is quite simply, “Who do you imagine I have to be? It is altogether sufficient that I am a rational agent for me to pass a reasoned evaluation on this subject.”
The antithesis of reasoned evaluation is relativism. Relativism is in essence the denial that any rational standard exists, can be found, or can be made that would suffice to license any judgment beyond the utterly whimsical. Any judgment – oh, excuse me, “evaluation” – is always and only “relative” to some person, or group, or “perspective” for its validity. There is no objectivity to such standards according to the relativist, only what happens to be asserted within some “local” bracket or frame of reference whose validity is exhausted by the person or persons making the claim.
Occasionally the attempt is made to expand the perimeter of that bracket, to make it bigger and therefore “more important.” For example, rather than collapsing into pure subjective (that is, individual) relativism, an appeal might be made to the society or culture group. But such appeals do not escape the basic logical problems of relativism (all of which orbit around the proclaimed absence of objective standards). After all, what standard of objective evidence could the relativist possibly appeal to that would put the culture group (“cultural relativism”) upon a “better,” “higher,” “more logically valid” footing than the most perfectly arbitrary and capricious claims of any particular individual? Indeed, what are the claims of a culture beyond the collective claims of the individuals who compose it, and who just happen to more or less agree with one another in their various relativistic “perspectives”? Since the claims of the culture have no better logical footing than those of the individual, “cultural relativism” has no reasoned basis with which to differentiate itself from the most aggressively subjective relativism imaginable.
The above line of reasoning might be criticized on the grounds that it represents a “Slippery Slope” argument, which many credible sources would call a fallacy. But one would have to believe in the objective standards of sound reasoning and effective inquiry – i.e. “logic” – to advance such an argument in the first place. And the only way one can believe in such a standard is to also be convinced of its value. Which is to say, logic as a cognitive standard can only be rationally judged to be of any value if one is rationally satisfied that one ought to reason in such a way. But any claim of “ought” is ultimately an ethical claim. Any claim that one “ought” to reason logically is a claim about the Right thing to do, the Good form of reasoning, the Valuable method of inquiry.
Lest there be any lingering doubt, the above argument does indeed place various ethical propositions at the very foundation of even the possibility of reasoned inquiry. Before there can be science, before there can be logic, before there can be any concern for truth at all, there must first be an accepted axiom that such things are good. So any attempt to quarantine relativism to “merely” an ethical claim is doomed to have that quarantine shattered by the rational necessity of an ethical commitment to the truth.
If we accept the reality of facts and the meaningfulness of rational standards of inquiry into those facts, then we have already committed ourselves to the objective reality of at least some standards of value and evaluation. But how far does such a commitment take us? Even if we accept the moral burden of truth, what guarantee does this give us that there is anything like a general moral reality of “The Good,” that such a reality is open to anything like rational inquiry, or that our ideas of such things amount to anything more than our parochial tastes and preferences? Another way of phrasing this would be: “What guarantee do we have that moral inquiry will actually succeed, that is, provide us with genuinely rational, objective standards of moral evaluation?”
The only response to this question is that there is no proof that any inquiry will succeed except the actual success of that inquiry, at which point the question is moot. The fact that a subject of inquiry proves difficult is no excuse for assuming that the inquiry itself is pointless. Thousands of years after the West's first written inquiries into the subject, we still do not have an adequate or complete theory of gravity, and yet gravity is one of the most manifestly obvious relational structures in our lives. But anyone who came forward and suggested that inquiry into gravity was pointless, or that gravity had no objective reality, would surely be dismissed outright as an utter fool. By the same token, the fact that moral questions do not yield themselves up to casual investigation or reduce themselves to trivial simplicities is no evidence that said questions are meaningless, or that their answers are no more than capricious declarations. Answers may be hard to find, but that (by itself!) does not mean they do not exist.
One might respond to the above by asking how it differs from an overt (and ultimately relativistic) leap of blind faith. After all, doesn't the above basically say we should just “hope” that things will work out? But such a question presupposes – either implicitly or explicitly – a false dichotomy. Just because we can offer no ironclad proof of a proposition (the evidently implicit part) does not mean that accepting that proposition is an act of faith. In the case here, there are obvious practical reasons for taking seriously the claim that there are real standards in the world and that inquiry is a rationally meaningful activity: namely, without such an axiom, inquiry itself cannot even get started! Inquiry might fail, but never even trying is the only thing that can guarantee such failure a priori. To reject the axiom that inquiry is meaningful and its results can be objective, in the absence of genuinely compelling reasons to do so, is the ultimate form of irrationalism, since it undercuts the very possibility of success while offering nothing in compensation. This is true whether the inquiry is scientific or moral. But this is precisely what relativism does: it rejects the objectivity of standards of inquiry and of the standards revealed by inquiry. It thus rejects rationality itself.
Given the logical vacuity of relativism, one might wonder why so many otherwise well-meaning persons would endorse it, particularly around issues of ethics. As a general rule it is unwise to arrogate to one's self the privilege of saying what another person's thoughts and beliefs “really” are, yet I would like to cautiously suggest a speculative response to the above question. I suspect that there is another false dichotomy at play here: persons who have been variously hurt or offended by some viciously absolutist system of ethical standards leap to the unjustified conclusion that the only protection on offer is the relativistic denial of all standards. In other words, the retreat into relativism is an attempt to find tolerance in an otherwise intolerant world.
But this retreat is “absolutely” self-defeating. After all, in the absence of objectively compelling standards, how can one possibly make a credible claim that, say, herding millions of people into gas chambers is “wrong”? To say that genocide is “evil” one has to have a standard of evaluation by which such claims can themselves be justified. And why should anyone care about tolerance, unless there was something “valuable” and “good” in it? Absent such a standard, why not be intolerant, even viciously so? Without a standard to appeal to, all the relativists can say is that they don't like intolerance. But without a standard of evaluation, why should anyone care what relativists or anyone else feels about things?
Indeed, in the absence of objectively valid standards, the only people who really “get” the world are the sociopaths, the Bernie Madoffs and the Ted Bundys. This is because in the absence of objectively valid standards of ethics, the only thing left is the power to get away with things. And while it is surely the case that this is exactly the rule many people operate by, relativism reduces us to the enthusiastic endorsement of this rule as the only one there is. Any discussion of “ought” becomes meaningless.
 Some people use the terms “moral” and “ethical” (and their respective variations) to mean different things: “moral” for the value-oriented practices of a particular culture, and “ethical” for the abstract theory of what ought to be done. I, however, make no such distinction. It is not a matter of what the terms “really” mean: both terms have the same root meaning, the first coming from Latin and the second from Greek, so any claim about how they “really” ought to be used is largely just pretentiousness. Since the above-mentioned distinction is not one that I'll have occasion to use, I choose to treat the terms as synonyms and will alternate between them as stylistic balance and personal whimsy happen to move me at the time.
 http://www.fallacyfiles.org/redherrf.html , http://www.fallacyfiles.org/loadques.html . Any person with even a passing interest in logic and its numerous fallacious misuses should have the Fallacy Files ( http://www.fallacyfiles.org ) on their favorites list.
 There is, in point of fact, a fairly well formed answer that can be given to this frankly rhetorical question. But such an answer takes for granted the objective and rational validity of certain standards of evaluation for the establishment of its claim. The only way a relativist could endorse such a response is by first explicitly rejecting relativism. The overtly self-contradictory nature of such a move will not, in general, trouble the genuine relativist, but for the rest of us it will place their claims squarely where they belong: on the same level as barnyard noises. I'll offer some comments on the role(s) of the community and the individual in moral inquiry in a later essay, the third of this series.
 For example, http://www.fallacyfiles.org/slipslop.html .
 One might compare the discussion in Edgar Sheffield Brightman's Moral Laws, The Abingdon Press (1933), especially the first third of the book (through page 125).
 A particularly tricky issue this: when do we finally admit that a question is badly framed &/or that an answer is not to be found? Formal logic, mathematics, and abstract computer science are the areas where genuine proofs of impossibility are likeliest to come about. I suspect that in more empirical matters the principal set of metrics will be pragmatic ones of interest and use. Even in formal arenas, the driving factor is often a pragmatic one: if people did not find the conjectural aspect of Fermat's “Last Theorem” to be of inherent interest, then they would not have fretted over it for more than 350 years until the Wiles–Taylor proof. Wikipedia provides a reasonably accurate and quite accessible discussion of this latter: http://en.wikipedia.org/wiki/Fermat%27s_Last_Theorem . In the case of ethical matters per se, the interest is there already, so the practical reasons for engaging the issue are real and trump any nihilistic laziness.
 Which is yet another informal logical fallacy: http://www.fallacyfiles.org/eitheror.html