On The Bases For Morality: An Exchange
    By Massimo Pigliucci | January 23rd 2010 12:43 PM
    Massimo Pigliucci is Professor of Philosophy at the City University of New York.

    [this is a post in two sections, the first by my friend Julia Galef, a journalist, the second being my response, below]

    I hope Massimo won't start regretting his generous invitation for me to co-blog with him (hi readers! great to be here!) if I kick things off by immediately and publicly disagreeing with him. He and I have been having a debate on moral philosophy for the last few weeks, and after the twentieth iteration of the same arguments we decided it makes sense to invite you all to weigh in, at the very least because we're tired of the sound of our own voices by now. Massimo asked me to lay out the debate, and then he'll follow up with his own post next week.

    So, I agree with Massimo that moral reasoning is possible, given a set of initial axioms. (Axioms are the starting assumptions on which all of your moral judgments are based, like the concept of certain fundamental rights, or tit-for-tat justice, or protecting individual liberty, or maximizing total happiness). Where I disagree with him is over his belief that it is possible to use scientific facts to justify selecting one particular set of initial axioms over another.

    Roughly speaking, Massimo starts with biological and neuroscientific facts such as "Human welfare requires things like health, freedom, etc." and "Humans are wired to care about each other's welfare," and from these he derives the conclusion, "Therefore, it is moral to act in a way that increases those things which are necessary for human welfare." In my opinion, this is an example of what is sometimes called the naturalistic fallacy: telling me scientific facts doesn't tell me how to act on those facts, and the alleged point of moral principles is to tell me how to act. Science can tell me that if I want to make other people happier, then treating them in certain ways -- giving them health, freedom, and so on -- will accomplish that goal. But science can't tell me whether making other people happier should be my goal.

    Alternately, you could use evolutionary biology and neuroscience to argue that being kind to others is the best way to maximize one's own happiness, thanks to the way our brains have become wired over the course of our evolution as social animals. I agree that there's some truth to this claim, but I deny that we can derive any moral principles from it -- it implies only an appeal to self-interest that happens, through lucky circumstances, to have positive consequences for others. (Furthermore, if your moral imperative takes this form, the implication is that if for some reason I were wired differently, then being unkind would not be immoral.) 

    The difficulty of deriving facts about how people ought to behave from facts about how the world is was most famously articulated by David Hume in his Treatise of Human Nature (1739):

    "In every system of morality, which I have hitherto met with, I have always remark'd, that the author proceeds for some time in the ordinary ways of reasoning, and establishes the being of a God, or makes observations concerning human affairs; when all of a sudden I am surpriz'd to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not. This change is imperceptible; but is however, of the last consequence. For as this ought, or ought not, expresses some new relation or affirmation, 'tis necessary that it shou'd be observ'd and explain'd; and at the same time that a reason should be given; for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it."

    This is called the "is-ought problem", or sometimes "Hume's Guillotine" (because it severs any connection between "is"- and "ought"-statements). My understanding is that Hume is generally believed to have meant not just that people jump from "is" to "ought" without sufficient justification, but that such a jump is in fact logically impossible. There have been a number of attempts to make that jump (here's a famous one by John Searle), though I've found them pretty weak, as have other people with much more philosophical expertise than me.

    With that in mind, I can't see any way in which a claim of the kind Massimo is making -- "doing X increases human welfare, therefore X is the moral thing to do" -- could logically hold, unless you're simply defining the word "moral" to mean "that which increases human welfare," in which case the statement is tautologically true. But I'm not sure what we gain by simply inventing a new word for a concept that already exists. 

    Fortunately, even though I think the blade of Hume's guillotine is inescapably sharp in the philosophical world, I don't think it has the power to sever much in the real world. Because, thanks to some combination of evolutionary biology and social conditioning, I do enjoy being kind, and I do want to reduce other people's suffering -- and I would want to do those things even without a rational justification for why that's "moral." And I believe most people would feel the same way.

    But if someone didn't care about other people's welfare, I couldn't accuse him of irrationality. He would be committing no fallacy in his reasoning, nor would he be acting against any of his own preferences. (If he wanted to increase human welfare and yet he knowingly acted in a way that reduced human welfare, then I could legitimately call him irrational.)

    Massimo, I believe I've represented our disagreement accurately, but please correct me if I haven't! *thwack* Ball's in your court!

    >> Massimo's response <<

    I want to thank Julia, our new regular contributor to Rationally Speaking, for an honest and clear presentation of her doubts about the possibility of moral philosophy. Judging from the comments to her post, a good number of our readers seem to agree with her position, which is essentially one of moral skepticism, inevitably leading to a morally relativistic position (although she says that she gets her own moral sense from the way she is wired as a social primate, she also admits that she could not honestly blame someone who acted differently and had no inclination to be kind to others or help human welfare).

    First off, then, let me suggest that I don’t think anyone is really a moral relativist, not even Julia. Moral relativism, or moral skepticism, is akin to skepticism about the existence of the world: it may be ultimately impossible to conclusively refute in an air-tight logical manner, but no one actually lives in this way, and no one really believes it. (Bertrand Russell once famously said that he wished that all those people who deny the existence of a wall would get into a car and drive straight into the wall at a speed proportional to their lack of belief in the existence of said wall. I am not aware of the actual experiment ever having been carried out, but of course, as any good skeptic knows, even if the people in the car all died this would not prove the existence of the wall — though as Russell remarked rather drily, we would get rid of a number of bad philosophers... But I digress.)

    Second, although this discussion is fascinating and I think useful for our readers, neither Julia nor I can possibly hope to settle in this context a complex issue that defines a whole field, that of metaethics, or the rational justification of ethical thinking. Despite the fact that both Julia and several of our readers are dismissive of philosophy as a type of inquiry (a rather curiously anti-intellectual position, in my opinion), I urge the rest of you to read this excellent introductory essay in the Stanford Encyclopedia of Philosophy to begin to dig deeper.

    All of the above said, let me finally get to the meat of Julia’s essay. Let’s start with this business of “axioms.” During one of our discussions over dinner I brought up the idea of axioms in ethics to refute a point that moral skeptics never fail to bring up, despite its obvious weakness: ethical reasoning is fluff because there are no moral empirical facts. But the skeptics curiously seem to miss an obvious case study which reveals the hollowness of their position. There are in fact well established and unquestionably serious areas of human endeavor for which “facts” are irrelevant. Consider the entire field of mathematics, for instance. I hope no one here will suggest that mathematical reasoning is arbitrary or without foundations. And yet mathematical theorems are valid or invalid regardless of any empirical fact about the world.

    This example should not be taken lightly, because it is a devastating objection to the moral skeptic, although we need to understand exactly what I am saying here. I am not suggesting that ethics and math are on the same footing, far from it. Rather, I am demonstrating beyond doubt that lack of empirical facts per se in no way precludes the ability of the human mind to reason rigorously about certain entities. It is an interesting philosophical (imagine that!) discussion whether mathematicians discover mathematical truths or they invent them, but in either case such inventions or discoveries are both rigorous and non-arbitrary.

    It is of course true that the early 20th century quest for an ultimate, self-contained logical foundation for mathematics failed (see Russell and Whitehead’s Principia Mathematica) and was ultimately shown to be a mirage by Gödel with his incompleteness theorems. Still, no one would argue that because of that mathematics is an arbitrary castle built on clouds. (Indeed, if we take that sort of skeptical position, then even Julia’s much touted empirical science gets into deep trouble, as rather ironically shown by Hume himself with his problem of induction.)

    Indeed, I think that ethics is in some sense on a firmer foundation than math, because we can use empirical data from evolutionary biology and cognitive science to provide us with relevant empirical facts in which to ground our enterprise. As I will argue in a minute, this is not at all an instance of Hume’s naturalistic fallacy.

    To begin with, I define ethics as that branch of philosophy that deals with the maximization of human welfare and flourishing. I’m sure this will disappoint Julia and others, but I simply don’t understand what else they might possibly wish to include in a talk about ethics. Neither Julia nor I believe in morality as imposed by a god, for a variety of reasons, including the fact that there is not a shred of evidence in favor of the existence of any gods, but more importantly because of the decisive (again, philosophical!) argument known as Euthyphro’s dilemma, in which Plato showed that gods are simply irrelevant to the question of morality.

    So yes, for me morality is neither arbitrary (the relativist position) nor absolute (the typical religionist position, though Kant also famously attempted to arrive at a logically necessary ethics via an entirely secular route — and failed). Rather, I think of morality as something that makes sense only for human beings and other relevantly similar species. By relevantly similar, I mean social animals with brains complex enough to be able to reflect on what they are doing and why they are doing it (that is, being able to philosophize!). As far as I know, Homo sapiens is currently the only such species on planet Earth, though of course there may be others elsewhere in the cosmos.

    By definition, then, something is moral in my book if it increases human welfare and flourishing (I am leaving aside for the moment the issue of animal rights, which would be an unnecessary distraction at this point. Interestingly, consequentialists like Peter Singer have tackled that problem, and Julia presented herself to me once as a consequentialist — apparently without realizing that a moral skeptic cannot also coherently endorse a particular school of ethics. For the record, I incline toward virtue ethics.)

    It is at this point that Julia accuses me of committing the naturalistic fallacy, that is of deriving an “ought” from an “is.” There are several issues to be considered here. First, contrary to what Julia maintains, it is not at all clear that Hume argued that the is/ought connection is impossible; he may simply have been saying that if one wishes to make that connection, the project has to be pursued by explicitly unpacking how said connection works or can be justified. Second, of course, as much as I myself love Hume, I don’t think the guy was infallible, and generally speaking invoking authority truly is a logical fallacy.

    To be as clear as possible, then, I define as moral an action that increases human welfare and/or flourishing (and yes, I’m aware that the latter two also need to be discussed and unpacked, but this is a blog post, not a treatise), and then ask biologists and cognitive scientists to provide me with some empirical points of reference so that my concept of human flourishing is based as much as possible on the so highly valued empirical data.

    Here is where Julia makes a subtle, but revealing, shift: she writes that “science can tell me that if I want to make other people happier, then treating them in certain ways — giving them health, freedom, and so on — will accomplish that goal. But science can't tell me whether making other people happier should be my goal.” But ethics is not about what an individual may or may not want, it is about the species as a whole (and possibly beyond, see my comment on Singer above). Julia of course may reject the idea of behaving herself so as to increase human flourishing, but then she is by definition acting immorally (or at least amorally). She may shrug her shoulders and keep going with her life, of course, but most of us are going to think of her as immoral (she isn’t, by the way, she is one of the nicest people I’ve met).

    What I’ve got so far, then, is a working definition of morality and some empirical evidence (from science) of what helps human beings flourish. Why do I need philosophy? Because biology provides us only with a very limited sense of morality, an instinct that there are right and wrong things. But that instinct was shaped — slowly and inefficiently — by a blind natural process that simply maximized survival and reproduction. Once human beings became able to reflect on what they were doing they immediately developed an enlarged sense of flourishing that is not limited to personal safety, food and sex. We also want to enjoy life, be free to explore opportunities, to speak our mind, to admire art, to pursue knowledge, and so on.

    Our instincts become a less and less reliable guide when the circle of flourishing is thus enlarged. For instance, it is a universal moral intuition among human cultures that randomly killing members of your group is bad (psychopaths, or to put it as Julia does, people with a different wiring, are not exceptions, they prove the rule: we put them away whenever we encounter them). But natural selection probably also bred into us an instinctive distrust of outsiders. It has taken thousands of years of moral progress (not an oxymoron!) to slowly realize that there is no rationally defensible distinction between in-group and out-group, which means that we need philosophical reflection to build on our natural biological instinct and come up with the humanity-wide rule that it is wrong to randomly kill anyone, regardless of which group s/he happens to belong to as a matter of accident of birth.

    To summarize, then, I think that:

    1. The objection that moral reasoning is not based on empirical facts is irrelevant, since there are other non-arbitrary human endeavors that are also so characterized and yet we do not reject them on those grounds (mathematics, logic itself).

    2. I define ethics/morality as concerned with exploring the sort of behaviors that augment human (and possibly beyond human) welfare and flourishing. Since this is a definition, it cannot be argued for, only either accepted or rejected. And yes, definitions are tautologies, but they are nonetheless very useful (all of math can be thought of as a tautology, and so is every single entry in a dictionary).

    3. Some empirical facts from evolutionary biology and cognitive science inform us as to where and why we have a moral instinct to begin with, and also about what sort of behaviors do in fact increase human flourishing. It is because of this that I can confidently say, for instance, that genital mutilation of small girls is wrong regardless of which culture practices it and why.

    4. To move beyond the narrow sense of flourishing that generated our moral instincts we need to be able to reflect about these issues in a rational and empirically-informed manner. That is, we need to do science-informed philosophy (or what I call sci-phi).

    One more thing: I really don’t think Hume would be upset with any of the above, and I believe he would invite me over for a meal (he enjoyed dinner parties) to amicably explore our differences of opinion. As he famously put it: “Truth springs from argument amongst friends.”


    Gerhard Adam
    There's certainly a lot to think about, but I see one immediate problem in this discussion.  The reference to "humans" is too general and doesn't really apply in any context.  No matter how social the animal, there is little to infer that its social (and even altruistic) behavior arbitrarily extends to every member of the species.  Most social interactions are quite group specific and that creates a bit of a problem.

    If we don't consider this, then we have to consider the situation of one group exploiting another (i.e. Nazi Germany and the Holocaust) and being forced to argue that entering a war to end such a conflict is immoral and irrational, since it is obvious that war cannot be said to make people happy nor allow them to flourish.  Certainly "some" people may achieve this end, but clearly others won't.  To a lesser extent we run into a similar problem when it comes to the law and trying to establish the moral foundations on which it may ride.

    In a nutshell, since biology is incapable of rendering a species-wide argument about behavior, we can certainly argue that there is sufficient commonality to provide a basis whereby groups segregated by culture can develop a moral sense, but it would be hard to demonstrate this as a pure biological trait.
    Mundus vult decipi
    Gerhard Adam
    It is because of this that I can confidently say, for instance, that genital mutilation of small girls is wrong regardless of which culture practices it and why.
    What about genital mutilation of small boys (i.e. circumcision).  It is clear that the physical differences are more profound, but is that the basis for regarding one as wrong while the other is acceptable?  To what extent does culture justify such actions in determining right or wrong?

    More problematic is the term "flourishing" because it doesn't really describe any particular type of outcome.  If genital mutilation is wrong, then is lack of schooling wrong?  How about the lack of economic resources?  How do we avoid degenerating the argument into a "everything must be absolutely equal" situation if we're going to determine what constitutes moral/immoral behavior on the part of others. 
    Mundus vult decipi
    It is different because boys are circumcised as infants, usually in a sterile hospital setting - and this action has some health benefits in reducing the spread of AIDS and improving hygiene.

    Whereas girls are "circumcised" as 12-year-olds, not in hospitals, not in sterile conditions, what exactly is removed varies from group to group - and this greatly impedes sexual pleasure and functioning - in addition to having a far higher mortality rate - usually from infection.

    Comparing the two to my mind seems like a person complaining about losing their finger to a person who's lost a leg.

    Gerhard Adam
    It is different because boys are circumcised as infants, usually in a sterile hospital setting - and this action has some health benefits in reducing the spread of AIDS and improving hygiene.
    Not necessarily so.  There are plenty of instances of circumcision at a much later age, and in some of those same societies, it certainly isn't done in sterile hospital settings.  As for the purported benefits, those are justifications and not universal views.

    In the end, the point is that there is a mutilation occurring without the explicit consent of the patient.  The justification is always cultural, so if we justify one, then what allows us to claim that the other is unjustified.

    Bear in mind that your view on female circumcision is correct in my view, but it is equally obvious that that is not the view shared by men and women of cultures that perform it.  So, if it were just men that would be one thing, but even the women of such cultures often support the ritual, so it comes back to how we define what is morally acceptable behavior and how we can explain it without the cultural dimension.
    Whereas girls are "circumcised" as 12-year-olds, not in hospitals, not in sterile conditions, what exactly is removed varies from group to group - and this greatly impedes sexual pleasure and functioning - in addition to having a far higher mortality rate - usually from infection.
    I understand and agree with your position, but that's also my own cultural bias showing.  Would your opinion be different if it were done in sterile hospital conditions?  In other words, what are you opposed to, the circumcision or the risk of infection?  This raises the second question, which is that if everything is medically perfect and the only effect is to impede sexual pleasure and function, then who determines the morality of such a decision?  Why is our culture better prepared to address this than theirs?  What is the justification?
    Mundus vult decipi
    In the field of moral philosophy, two fundamental schools try to indicate the right path to follow: the deontological school and the teleological school. The former teaches that the individual must follow the moral rule all the way; the latter says that what matters is the aim (telos) of the action. Personally, in my life I follow the teleological criterion and I act looking at the goal of my action, not at the rule. If I foresee a good outcome for my action, I act without caring about the moral precept.
    The book I have recently written explores many moral issues in depth. I want to draw it to your attention, as you may be interested in it. The title is “Travels of the Mind” and it is available at
    If you have any questions, I am most willing to offer my views on this topic.
    Ettore Grillo

    Interesting discussion.
    Regarding Hume, he wrote a somewhat overlooked (I think) piece, An Enquiry Concerning the Principles of Morals. An appendix in the essay is particularly fascinating, as he breaks apart the roles of reason and sentiment. Hume stands on the point that when it comes to moral actions, we often act first and may justify later. Moral behavior in nature is hardly ever carried out with explicit reasoning in mind. 

    Ethics is a necessary field, but it remains somewhat odd because it essentially turns those moral behaviors around, finds a logic behind them, and applies them to other situations. Regardless of the validity of ethics, the process is artificial. The vast majority of moral acts in the world do not require such reasoning. 

    It's a little fast and loose, I think, to claim that their end is human welfare, which is slippery to define. Even if you can operationalize human welfare, it would be difficult to pin it down to a time-scale. What is beneficial in the short-run is often not beneficial in the long-run. And what is beneficial to some people is not beneficial to others. And what is beneficial to some people in one way is beneficial to other people in another way. Like most concepts in empirical science, you can estimate human welfare from various angles, but not directly. Furthermore, when you consider the role of sentiment in moral behavior, I would argue that scientifically we understand moral behavior a whole lot better than we understand human welfare. 

    Alternatively, if human welfare is to be the defining attribute of moral behavior, then you're faced with the potential contradiction that other things - be they behaviors or circumstances - contribute to human welfare while falling outside of the moral sphere. Is an effective immune response moral?

    A better method might be to switch things around and claim that morality is one of the many tools that we are endowed with to help us achieve certain forms of human welfare. 
    Gerhard Adam
    I still maintain that the primary difficulty is in defining human welfare.  Instead we should consider moral behavior as aimed towards maintaining group welfare.  While there may be various attempts to extend the definition of the group, invariably there must be some sense of an individual being a part of a larger collection before morality makes any sense.
    Mundus vult decipi
    Steve Davis
    I do enjoy being kind, and I do want to reduce other people's suffering -- and I would want to do those things even without a rational justification for why that's "moral." And I believe most people would feel the same way.
    But there is a rational justification for feeling that way and for why that's "moral." The word has the same root as "mores", it is a social construct, it refers to the greater good. And to make it even more significant, this matter has, as Julia suspects, had an influence in evolution. 
    Gerhard Adam
    I think that in its simplest form, morality can be reduced to the "correct" behavior that is expected by the group.  In the case of social animals, there are protocols and "rules" that are usually followed, with those that violate often chased out by the group.  In that way, we can argue that this displays a rudimentary morality (since there is ultimately no other way to classify what is "right" or "wrong" behavior).

    When we get to humans, we have the cultural dimension which may be rooted in geographic considerations, the size of the group involved, etc.  In other words, the dynamics of the group would determine what is "correct" or "incorrect" within the group.  This isn't merely polite protocol (which is typically required of strangers), but rather the internal expected behaviors of those that want to be members of a particular group.  In some cases, we don't necessarily call such behaviors purely moral, but invariably they usually turn out to be so.
    Mundus vult decipi
    since what is moral varies so widely from culture to culture - and often within a culture - morals must be a cultural product to a large degree.

    As individuals, I think that the degree to which we wrap ourselves in morals is dependent on the degree to which our creature comforts are being met, and on our sense of the fairness of that degree.

    I just keep thinking about that morals and ethics class I took years ago in college - we looked at a variety of schools of morality - and I kept thinking that none of them were moral because none took into account motivation.

    the morality was instead focussed on the action or the result.

    Gerhard Adam
    the morality was instead focussed on the action or the result.
    As it should be.  This is similar to the arguments surrounding altruism, as if the individual's motivation must be included before an act can be considered altruistic. 

    Suppose that I want to cheat on my spouse, but I don't for fear that it would lead to divorce, etc.  Why should I not be considered as behaving morally regardless of my motivation?  What have I done that is immoral? 

    The problem with motivations is that they are forever unknowable, so they can never serve as the basis for any kind of discussion.
    Mundus vult decipi
    Fred Pauser
    In regard to the "naturalistic fallacy" or the "is/ought fallacy:"

    Second, of course, as much as I myself love Hume, I don’t think the guy was infallible, and generally speaking invoking authority truly is a logical fallacy.

    Yes. Sometimes an "ought" is derived fallaciously from what "is," but sometimes it may very well be valid to do so. I think the is/ought guillotine is often called upon much too readily.

    Science certainly provides our most reliable knowledge. Although scientific methods may not be suited to directly address questions of morality, the science of biology and the evolution of life now provides an enormous body of information by which we can infer and develop more comprehensive and quite likely more valid moral principles than was possible in Hume's time, or even a few decades ago.

    To move beyond the narrow sense of flourishing that generated our moral instincts we need to be able to reflect about these issues in a rational and empirically-informed manner. That is, we need to do science-informed philosophy

    Indeed!! If we do not use the vast body of scientific knowledge as a starting point or a basis for building a broad morality, then what do we use?
    Gerhard Adam
    With group selection debunked, any basis of genetic selection for these traits other than the individual's reproductive fitness are mythic as well.
    That's simply not true.  You're placing far too much faith in genes, and failing to consider that there are many other elements in the traits.  Group selection is most certainly NOT debunked and you're also presuming that everything that is represented as a trait is necessarily an adaptation (which is also not true).
    Claims of human exceptionalism are emotionally appealing but seem just more pop culture beliefs.
    You can say this as often as you like, but until you can find another primate that is exceptional in the same way humans are, it's empty rhetoric.  The mere fact that you can type that opinion is exceptional in all of biology, and comments to the contrary make no sense.
    Mundus vult decipi
    Gerhard already pointed out the difficulty in assessing what is moral across cultures, and circumcision is a great example. Massimo gives a definition of moral that involves allowing humans to increase their “welfare and flourishing”, yet I find these terms equally, if not more, difficult to define. They cannot mean simply to increase in number, as many areas of the world that are overpopulated are rife with epidemic and food and water shortages. So a literal definition of “flourish” can act in direct opposition to “welfare”. If it means to increase in number sustainably, ensuring ample food, water, and comfortable living conditions, then you’ve opened the can of worms that is population control and communism/socialism, which is chock-full of its own moral questions and ambiguities.

    This also does not address the question of inactivity in the face of suffering, or a lack of “flourishing and welfare”. By this definition, not acting when you have the means while someone else is suffering can be considered immoral. There are black and white situations, such as being the only person to witness a bad car accident and ignoring it by not calling an ambulance. But there are also innumerable grey areas. Is it immoral to spend money on a lobster dinner, when one can easily live off much more economical fare and send the extra money to the needy? If one has any disposable income at all, it could go to people in need, thereby increasing their welfare and helping them to flourish, and the withholding of that income could be considered immoral. Is it immoral to own a television, computer, cell phone, iPod, or any other number of items non-essential to life (I know many will disagree on whether or not those things are essential!), which the vast majority of those in the western world own, when the money used to buy them could have gone to disease prevention in Africa or supplying food and water to those affected by the disaster in Haiti? In these instances, one did not go out and directly kill someone, but there is no doubt that somebody in the world will die today of hunger, so in the end the result is the same, no?

    Gerhard Adam
    By definition, then, something is moral in my book if it increases human welfare and flourishing...
    Adam, I agree with your assessment, because the quote from Massimo's article illustrates the difficulty.  There is no species in existence that extends such a degree of cooperation or help to all members of its species.  The definition would therefore be more accurate if we replaced "human" with "human social group".  Specifically, we cannot assess morality across social groups or beliefs.  Doing so would create preposterous situations that we clearly don't intend.  For example, do we have a moral obligation to increase the "welfare and flourishing" of terrorists, criminals, or enemies? 

    More specifically, how does imposing our beliefs or morality increase someone else's "welfare or flourishing", especially if it undercuts a fundamental aspect of their existence as a group? 

    We also have to consider whether our view of morality would require us to force others into behavior that they may disagree with.  Are we behaving morally if we force an abused spouse to divorce their abuser?   Does punishment, in any form, increase human welfare and flourishing?  Can we force people to diet or quit smoking simply because it may be healthier?

    This was precisely the rationale employed by the Inquisition, which considered it moral to kill or torture someone with the intent of saving their soul.  However irrational such a position may seem today, it is valid provided that everyone within the participating group agrees that this is a legitimate standard. 

    So the final question becomes whether there is any possibility of attaining an objective definition of morality.  If so, how does the "welfare and flourishing" definition change if we should find intelligent life elsewhere in the universe?

    Mundus vult decipi
    Mark Sloan
    Massimo and Julia, this is an old post, but I am motivated to comment anyway because I thought you both described your 2010 views well, and those are two sensible views I am trying to figure out how to argue against (in part). 
    From reading your posts, I think you both might agree with me on the following (I list these ideas as a check that you actually do agree and to provide a basis for further discussion, if there is any): 

     1) Science can reveal descriptive facts about the biology underlying moral emotions such as altruism, empathy, loyalty, guilt, shame, and righteous indignation, and even descriptive facts about past and present enforced cultural norms (moral standards). 

     2) This science of morality is no more logically able to tell us what we morally ought to do, in an imperative sense, than agricultural science can tell a farmer that he must grow lots of beans. Agricultural science can only tell the farmer how to grow lots of beans. 

     3) Science can also supply descriptive facts about the origins and nature of our experience of durable well-being (durable happiness) and what we might do to increase those experiences. 

     4) Fundamental axioms regarding moral behavior, or, as I prefer to describe them, ultimate goals for moral behavior and enforcing moral standards, cannot be defined by science and must be defined by people (who are fortunately natural born goal and purpose generators). Those ultimate goals are likely to be increased personal durable well-being or something similar. 

     5) The science of morality can be useful in defining the “How” of moral behavior and enforced moral standards that will be most effective in aid of achieving whatever ultimate goals people or groups choose. 

     The science of morality has been progressing rapidly. My reading of that progress is that there is a growing consensus that social morality (morality dealing only with interactions with other people) is made up of biological and cultural evolutionary adaptations defined by a common underlying selection force; they were all selected for by the benefits of altruistic cooperation in groups. 

     Where we three most obviously part company is over the inevitability and justification of “The Bases of Morality”. 

     Massimo: “To move beyond the narrow sense of flourishing that generated our moral instincts … we need to do science-informed philosophy” and “there is no rationally defensible distinction between in-group and out-group” 

    Julia: “it is (im)possible to use scientific facts to justify selecting one particular set of initial (moral) axioms over another” and “if someone didn't care about other people's welfare, I couldn't accuse him of irrationality.” 

     Mark: The science of social morality, morality understood as biological and cultural evolutionary adaptations selected for by the increased benefits of altruistic cooperation in groups, is useful for achieving whatever ultimate goals groups choose. Behaviors and enforced cultural norms (moral standards) aimed at maximizing the benefits of altruistic cooperation in groups are effective at increasing the human experience of durable well-being, because that experience was itself largely selected for because it increased the benefits of altruistic cooperation in groups. 

     Rebuttal to Massimo: Our social moral instincts, with both biological and cultural components, were selected for by the increased benefits of altruistic cooperation in groups, not by “our sense of flourishing”, which was selected for by many forces, not all of them moral. Regarding in-groups and out-groups: if we gave as much moral concern to all the children in the world as we give our own, our families would disintegrate from the loss of the benefits of cooperation, which would be bad; so we ought to distinguish between the in-group family and others (out-groups). Similar arguments can be made regarding all the many groups we belong to: families, communities, nations, and all people on earth (leaving other species and ecosystems as out-groups). 

     Actually, I am sincerely looking for good philosophical arguments justifying enlarging the circle of moral concern. Any suggestions? 

    Rebuttal to Julia: Science can provide powerful moral strategies (altruistic cooperation strategies such as “Do unto others as you would have them do unto you”, also called indirect reciprocity) for increasing the benefits of cooperation in groups in aid of reaching whatever ultimate goals the group chooses. It may be true that “if someone didn't care about other people's welfare, I couldn't accuse him of irrationality”, but you might, based on science, accuse him of being immoral in the universal, evolutionary sense: failing to act to increase the benefits of altruistic cooperation in groups, and therefore guilty of decreasing his own durable well-being (assuming he is mentally normal) as well as other people’s.