“You find out more about God from the Moral Law than from the universe in general just as you find out more about a man by listening to his conversation than by looking at a house he has built. Now, from this second bit of evidence we conclude that the Being behind the universe is intensely interested in right conduct – in fair play, unselfishness, courage, good faith, honesty and truthfulness. In that sense we should agree with the account given by Christianity and some other religions, that God is ‘good.’ But do not let us go too fast here. The Moral Law does not give us any grounds for thinking that God is good in the sense of being indulgent, or soft, or sympathetic. There is nothing indulgent about the Moral Law. It is as hard as nails. It tells you to do the straight thing and it does not seem to care how painful, or dangerous, or difficult it is to do. If God is like the Moral Law, then He is not soft” (Lewis, 1952, p. 34).
For well over a hundred years, Western societies have been trending away from traditional religious values and towards more secular, rational values. Within this trend is a movement away from supernatural explanations of observations in life to natural explanations.
In recent times, debates among philosophers, psychologists, and evolutionary biologists have focused on whether our morality is merely culturally contingent or universal and objective. Those who argue for the latter often ask whether objective morality exists, a question addressed through moral ontology within a meta-ethical framework. Studies on this matter to date are descriptive. Others interested in explaining our moral prescriptions turn to normative ethical frameworks, which focus on how people should act using deontological or consequentialist explanations.
Scholars further argue for either supernatural or natural explanations of grounding. The present study examines and expands on these arguments. To understand whether we have objective moral values, we apply meta-ethics. To understand how we should act if we have objective moral duties, we apply normative ethical frameworks. Better ethical decisions can be made when both are considered together, informed by Christian synderesis, intuition, reasoning, and conscience.
The ethical dilemma
For well over two thousand years, scholars have formulated varying ways to explain ethical decision-making. In Western societies, Christian ethical frameworks dominated people’s perspectives, yet since the first part of the last century, modernism has increasingly supplanted Christian views with secular perspectives (cf. Kim, McCalman & Fisher, 2012). Kim and colleagues (2012, p. 205) underscored this view: “As modernism gradually replaced Christianity as the dominant worldview in the western world, it essentially eliminated God from the public arena. Modernists believed the growth of newly discovered facts based on human reasoning and the scientific method would yield a unified answer for all knowledge and life.” Yet as they point out, modernism fails to comprehensively explain our reality and meaning, particularly with respect to ethics.
According to Fukuyama (1997, p. 381), “the modern liberal project envisioned replacing community based on tradition, religion, race, or culture with one based on a formal social contract among rational individuals who come together to preserve their natural rights as human beings. Rather than seeking the moral improvement of their members, modern societies have sought to create institutions like constitutional government and market-based exchange to regulate individual behavior. From the earliest days of the Enlightenment, conservative thinkers like Edmund Burke and Joseph de Maistre argued that such a community could not work. Without the transcendental sanctions posed by religion, without the irrational attachments, loyalties, and duties arising out of culture and historical tradition, modern societies would come apart at the seams.”
Postmodernism is a relativist view that came about after World War II and shifted attention away from grand narratives, such as objective truth and human nature. Postmodernists question the presence of any objective, philosophical, scientific or religious truth. This view is similar to moral relativism, which some believe better explains our reality than any objective or universal morality. Moral relativists believe that morality is culturally contingent. What is morally right or wrong depends on the prevailing norms or values of the cultures in which the moral actor is situated. Some moral relativists argue that moral decisions can only be judged within these boundaries as they believe we have no universal moral values and duties (e.g., Dawkins, 2006).
Others argue we have universal and objective moral values and duties. Moral values refer to whether something is good or bad, while moral duties refer to whether something is right or wrong (Craig, 2008). “To say a person ‘has a value’ is to say that he has an enduring belief that a specific mode of conduct or end-state of existence is personally and socially preferable to alternative modes of conduct or end-states of existence” (Rokeach, 1972, pp. 159-160). To say moral values or duties are objective is to say they do not vary as a function of a person’s opinion. To say moral values or duties are universal is to say they are shared by all people globally, regardless of race, culture, sex, religion, color, and other demographic characteristics. They are the standards against which we judge practices, behaviors and actions. For example, truth is an objective moral value whose meaning does not vary as a function of any one person or group of people. We assess whether someone is lying by matching his words or behaviors up against what we know as the truth. The Golden Rule is an objective moral duty whose meaning does not vary as a function of any person or group of people. We assess whether people follow the Golden Rule by whether and how their words or behaviors match up to a standard of benevolence.
While universal moral values and duties often are also objective and righteous, there could be instances in which that is not the case. For example, if the Nazis had won World War II and taken over the world, forcing the world to discriminate against “lesser” races, discrimination would be considered a universal value, yet it would not be objectively right. It violates truths we hold to be self-evident, such as the intrinsic value of life, equity, equality, liberty, justice and freedom.
Some have argued that because objective moral values and duties transcend the cognitive limitations of particular individuals and are not restricted to the ethos of various cultures and eras, they must have been derived from a source that transcends the same boundaries (cf. Craig, 2008). They argue that this source is an objective, universal, and intentional moral lawgiver (cf. Craig, 2008; Miller, 2013).
“Because in Judaism, Christianity, and Islam God is associated with ultimate ‘goodness,’ orders from God include specific duties and prohibitions that are associated with goodness and humanitarian behaviors. Some of the Laws of God require humane-oriented behaviors and doing good to others, like alms giving. Other laws require individuals to refrain themselves from directly causing harm to others, such as homicide or theft. Still other laws require individuals to refrain from sensual enjoyment and hedonistic activities, such as adultery, consumption of alcohol, eating (fasting), or being lustful” (House, Hanges, Javidan, Dorfman & Gupta, 2004, p. 565).
Along similar lines, the seminal work of Immanuel Kant (1785/2002) on deontological ethics made a case for a metaphysical foundation of our moral duties. “Everyone must admit that a law, if it is to be valid morally, i.e., as the ground of an obligation, has to carry absolute necessity with it; that the command ‘You ought not to lie’ is valid not merely for human beings, as though other rational beings did not have to heed it; and likewise all the other genuinely moral laws; hence that the ground of obligation here is to be sought not in the nature of the human being or the circumstances of the world in which he is placed, but a priori solely in concepts of pure reason, and that every other precept grounded on principles of mere experience, and even a precept that is universal in a certain aspect, insofar as it is supported in the smallest part on empirical grounds, perhaps only as to its motive, can be called a practical rule, but never a moral law…Thus a metaphysics of morals is indispensably necessary” (p. 24).
Others have made arguments suggesting objective moral values and duties exist, yet they are not grounded in a transcendent moral lawgiver. In a recent debate at the New York Academy of Sciences, Michael Shermer argued for a naturalistic position in explaining objective morality, while Christopher Miller argued for a supernatural position (Miller, 2013; Shermer, 2013). More specifically, Miller argued in support of an omni-benevolent, omnipotent, and omniscient Creator of the universe as the source of our moral grounding. Shermer argued in support of (unguided) evolutionary processes and the desire to survive and flourish.
Scholars who support Moral Foundations Theory (Graham et al., 2013) have posited that various objective moral foundations evolved to solve a variety of adaptive problems and selective pressures (McKay & Whitehouse, 2015). Some have posited that the conjunction of these moral foundations and evolutionary adaptations may have “increased the premium on mechanisms that inhibit moral transgressions” (McKay & Whitehouse, 2015, p. 458). In other words, they have argued that people had a need to create an external, overseeing and omniscient agent. According to Johnson (2009, p. 178), “What better way than to equip the human mind with a sense that their every move—even thought—is being observed, judged, and potentially punished?” This is the view taken by people who believe that our objective morality resulted from evolutionary adaptations and is grounded in consequentialism (e.g., the desire to survive and flourish) (cf. Mill, 1863; Rawls, 1971; Harris, 2010).
Which position is most valid? Are people from all parts of the globe bound by the same universal and objective moral values and duties, or are these merely culturally contingent? If universal and objective moral values and duties exist, are they grounded in a transcendent moral lawgiver, or are evolutionary adaptations sufficient to explain them? I take the position that if moral universals exist, it follows that we have objective moral standards against which to judge human behaviors across cultures.
“Metaethics is a branch of analytic philosophy that explores the status, foundations, and scope of moral values, properties, and words. Whereas the fields of applied ethics and normative theory focus on what is moral, metaethics focuses on what morality itself is. Just as two people may disagree about the ethics of, for example, physician-assisted suicide, while nonetheless agreeing at the more abstract level of a general normative theory such as Utilitarianism, so too many people who disagree at the level of a general normative theory nonetheless agree about the fundamental existence and status of morality itself, or vice versa. In this way, metaethics may be thought of as a highly abstract way of thinking philosophically about morality. For this reason, metaethics is also occasionally referred to as “second-order” moral theorizing, to distinguish it from the “first-order” level of normative theory.” (IEP, 2019)
Moral ontology vs. moral epistemology
Ontology is a branch of metaphysics concerned with the nature of being. Moral ontology refers to whether moral values and duties exist objectively, independently of people, and are there to be discovered by them. It focuses on whether a standard of good and bad values or right and wrong duties exists.
Moral epistemology refers to the knowledge of morality and how one comes to know moral values and duties and what is right and wrong. If one states that different cultures have different moral values and duties, one is applying moral epistemology. These are epistemological questions of how one comes to know moral values and duties; they are not to be conflated with the question of whether moral values and duties exist. Some cultures may be closer to being “right” in their understanding of morality than others, yet this does not mean objective rights or wrongs do not exist. It is a non sequitur to conclude that because different cultures have different perspectives on right and wrong, objective standards of right and wrong do not exist.
Two types of moral ontology
There are two types of moral ontology: moral realism and moral relativism. Moral realists believe that one unchanging truth exists and is discoverable via objective measurements. This is a deductive approach (general to specific), which suggests that one can start from a generalization to make specific hypotheses about reality. Using the moral realism perspective, Hopster (2017, p. 764) helped to better define objectivity in morality: “Consider the claim that the Earth revolves around the sun: this claim is true, it states a fact, and this truth or fact is fully independent of what any agent thinks or feels about it. Similarly, moral realists maintain that moral truths or facts are fully independent of the attitudes of any agent.” Along similar lines, Street (2006) characterized moral realism as the view that there are objective, “stance-independent” moral truths. Moral relativists believe that truth does not exist without meaning, so truth is dependent upon the meaning one attaches to it. Reality is created, and it evolves and changes depending on one’s context and situations. This is an inductive approach (specific to general), which suggests that one analyzes the specifics of various situations and transfers those specifics to other similar contexts.
Moral ontology usually employs an “etic” (more quantitative and objective) methodology, whereas moral epistemology usually uses an “emic” (more qualitative and contextual) approach. Compare this with a fishbowl analogy. If one is applying an etic view, one would analyze the fishbowl (of a culture) from an objective position outside of it. If one is applying an emic view, one would analyze the fishbowl from within the fishbowl. Moral ontology shapes one’s epistemological beliefs. In other words, what one believes about reality influences the relationship one thinks one should have to it.
I will next expand on these concepts with a discussion of moral relativism and moral universalism. I follow with a discussion of several possibilities that scholars have used to ground morality, including heritable biological traits, social development processes, moral psychology, consequentialism/utilitarianism, and deontology. I will show how some of these means of grounding result in category errors.
I will then summarize and synthesize the results using Christian synderesis, offering theories that support the position that universal and objective moral values and duties are real, evidenced, and grounded in a benevolent God. Finally, I summarize my basic arguments in a table presented just prior to my conclusions.
Demuijnck (2015) observed “the repeated realization, during teaching but also during occasional consultancy work, of how astonishingly popular moral relativism is, not only among students, but also among managers and executives. If you start a training session or a class by saying that you think ethics is a matter of right and wrong, universally, that some principles seem to be absolutely valid, or, more modestly, that some management and business practices are, from an ethical perspective, definitely better than others, people raise their eyebrows and quickly come up with all kinds of examples to show you that ethics depends on people’s culture or religion. From this observation, the relativist conclusion is quickly reached, and moral objectivism is rejected as an outdated notion.” Demuijnck (2015) attributed a portion of this trend to the prevalence of cross-cultural research in universities and businesses by researchers who focused on differences between societies in values, behaviors, and norms rather than similarities.
Moral relativism comes in multiple forms (Frankena, 1973). Descriptive relativism is the view that basic ethical beliefs of different people in different societies are different and even conflicting. For example, views concerning the morality of practices such as bribery and child labor vary across cultures, with acceptance in some and rejection in others. Meta-ethical relativism is the view that we have no objectively valid, rational way of justifying one ethical position against another; this view rejects any universal or objective moral standard. Normative moral relativism is the view that the circumstances and norms within a society play a role in determining the appropriate ethical decisions. If one takes these positions seriously and posits we have no rational way of imposing the human rights rules of one country on another, the consequences could be devastating. What occurred in Nazi Germany with the condemnation, persecution, and reprehensible slaughter of millions of Jews in the last century exemplifies an extreme that the world has rightfully condemned.
Demuijnck (2015) cited two examples in which external observers failed to curb actions within isolated cultures out of a desire to permit cultures to set their own rules, regardless of the ethicality of those rules. He identified the white “mutineers of the Bounty,” who have occupied the British Pitcairn Islands near New Zealand for the past two hundred years and routinely “break in” their young girls (as young as ten years old) by having sex with them. He also identified infanticides in the Amazon Rainforest by tribes whose customs compel parents to kill young babies and toddlers born with birth defects, on the belief that those humans lack souls. Outsiders have failed to step in and stop the practices under the belief that the tribes have the right to establish their own sets of moral laws and rules.
A recent example occurred on North Sentinel Island in the Bay of Bengal off the coast of India. A Christian missionary with a waterproof Bible, determined to convert the long isolated and very primitive natives, was killed by them when he reached their shore (Gettleman, Kumar & Schultz, 2018). Rather than arresting the parties involved, Indian officials arrested the men who provided the Christian missionary a boat ride to the island, since travel to the island was prohibited (Gettleman et al., 2018; McKirdy, 2018). Many justified the action, stating that the missionary was likely to bring modern diseases to the island, so the natives were acting in self-defense (cf. Barash, 2018).
If we applied universal ethical guidelines to such situations, condemnation would more likely have occurred. Yet would such condemnation be fair? In other words, is humanity universally bound to the same sets of moral values and duties? Should we ensure that the International Bill of Human Rights by the United Nations (2018) is equally applied to all people on the globe? An equal application presents a norm that can be used to appeal against gross violations in human rights (Hofstede, 2001) and can therefore be beneficial.
Cross-cultural scholars in the last century made strong cases for moral relativism, which acknowledges value differences between societies, for moral universalism, which suggests certain moral values and duties are shared universally by all humans, or for both. I concur with them and argue that we should consider both frameworks together, rather than only one or the other. Many scholars with similar ideas have built theories from the seminal work of Kluckhohn and Strodtbeck (1961), Rokeach (1979), Hofstede (1980, 2001), and Schwartz (1992).
In multiple studies of hundreds of samples across eighty-two countries, representing culturally diverse people of varying ages, genders, occupations, and geographies, Schwartz (2012, p. 17) drew a conclusion that he considered “astonishing.” After respondents ranked ten values in order of importance, the results indicated universals in values. The vast majority ranked benevolence as the most important value, followed by universalism and self-direction. Schwartz defined the constructs as follows (pp. 6-7):
Benevolence refers to “preserving and enhancing the welfare of those with whom one is in frequent personal contact (the ‘in-group’). Benevolence values derive from the basic requirement for smooth group functioning (cf. Kluckhohn, 1951) and from the organismic need for affiliation (cf. Maslow, 1965). Most critical are relations within the family and other primary groups. Benevolence values emphasize voluntary concern for others’ welfare. (helpful, honest, forgiving, responsible, loyal, true friendship, mature love). Benevolence and conformity values both promote cooperative and supportive social relations. However, benevolence values provide an internalized motivational base for such behavior. In contrast, conformity values promote cooperation in order to avoid negative outcomes for self. Both values may motivate the same helpful act, separately or together.”
Universalism (note this is not to be confused with moral universalism) refers to “understanding, appreciation, tolerance, and protection for the welfare of all people and for nature. This contrasts with the in-group focus of benevolence values. Universalism values derive from survival needs of individuals and groups. But people do not recognize these needs until they encounter others beyond the extended primary group and until they become aware of the scarcity of natural resources. People may then realize that failure to accept others who are different and treat them justly will lead to life-threatening strife. They may also realize that failure to protect the natural environment will lead to the destruction of the resources on which life depends. Universalism combines two subtypes of concern—for the welfare of those in the larger society and world and for nature (broadminded, social justice, equality, world at peace, world of beauty, unity with nature, wisdom, protecting the environment).”
Self-direction refers to “independent thought and action—choosing, creating, exploring. Self-direction derives from organismic needs for control and mastery (e.g., Bandura, 1977; Deci, 1975) and interactional requirements of autonomy and independence (e.g., Kluckhohn, 1951; Kohn & Schooler, 1983). (creativity, freedom, choosing own goals, curious, independent).”
Schwartz (2012) defined values as guiding principles in people’s lives. They are standards or goals that guide actions; they transcend specific situations or actions and are ordered by importance. Schwartz (2012) posited that the pan-cultural values he discovered may be attributable to adaptations by humans to maintain order in societies over time.
Along similar lines, Kinnier, Kernes, and Dautheribes (2000) consulted the religious texts and sacred writings of seven major world religions (Christianity, Hinduism, Buddhism, Taoism, Confucianism, Judaism, and Islam) to identify whether any universals could be found. They further consulted atheist and humanist organizations, along with the United Nations. They found these commonalities:
- Commitment to something greater than oneself
  - To recognize the existence of and be committed to a Supreme Being, higher principle, transcendent purpose or meaning to one’s existence
  - To seek the truth (or truths)
  - To seek justice
- Self-respect, but with humility, self-discipline, and acceptance of personal responsibility
  - To respect and care for oneself
  - To not exalt oneself or overindulge – to show humility and avoid gluttony, greed, or other forms of selfishness or self-centeredness
  - To act in accordance with one’s conscience and to accept responsibility for one’s behavior
- Respect and caring for others (i.e., the Golden Rule)
  - To recognize the connectedness between all people
  - To serve humankind and to be helpful to individuals
  - To be caring, respectful, compassionate, tolerant, and forgiving of others
  - To not hurt others (e.g., do not murder, abuse, steal from, cheat, or lie to others)
- Caring for other living things and the environment
In a similar study, Dahlsgaard, Peterson and Seligman (2005) examined the ancient texts of eight religious and philosophical traditions (Christianity, Judaism, Athenian philosophy, Buddhism, Taoism, Confucianism, Islam, and Hinduism). The authors found six recurrent values: courage, temperance, justice, transcendence, humanity, and wisdom. They defined courage as emotional strengths that involve the exercise of will to accomplish goals in the face of opposition, external or internal; examples include bravery, perseverance, and authenticity (honesty). They defined justice as civic strengths that underlie healthy community life; examples include fairness, leadership, and citizenship or teamwork. They defined humanity as interpersonal strengths that involve “tending and befriending” others, with the examples of love and kindness. They defined temperance as strengths that protect against excess, with the examples of forgiveness, humility, prudence and self-control. They defined wisdom as cognitive strengths that entail the acquisition and use of knowledge; examples included creativity, curiosity, judgment and providing counsel to others. They defined transcendence as strengths that forge connections to the larger universe and thereby provide meaning; examples included gratitude, hope, and spirituality.
Other global studies have reached similar conclusions. In a survey drawing on psychological, historical, juridical, theological, and ethnographical research, Westermarck (1906) identified universals in the approval of honesty, charity, mutual aid, and generosity, along with the prohibition of theft and homicide. Henrich and colleagues (2005) examined fifteen societies, finding that fairness and trust were exhibited in all.
Taken together, these studies indicate similarity in the universal, objective, self-transcendent foundations upon which to judge human behaviors and actions. Yet the question of grounding remains outstanding.
What is the source of our universal and objective moral values and duties?
Does our social development explain our universal and objective moral values and duties? Charles Darwin (1871, pp. 110-111) stated: “So in regard to mental qualities, their transmission is manifest in our dogs, horses, and other domestic animals. Besides special tastes and habits, general intelligence, courage, bad and good temper, etc., are certainly transmitted. With man we now know through the admirable labors of Mr. Galton that genius, which implies a wonderfully complex combination of high faculties, tends to be inherited; and on the other hand, it is too certain that insanity and deteriorated mental powers likewise run in the same families.”
Since Darwin’s time, studies of phenotypes, which are observable characteristics influenced by our environments, and genotypes, which constitute our genetic code, have allowed us to narrow down some of the causes of the variability in human populations (Ilies, Arvey, and Bouchard, 2006). Evolutionary biologists and psychologists often make the case that our moral behaviors can be explained as a function of humanity’s adaptations to its environments over time. In other words, they make a descriptive case for morality, which is an attempt to describe “as is” conditions in practices and behaviors. The descriptive arguments in favor of moral evolution tend to focus on the idea that cultures adapted to their environments over time, both competing for resources and cooperating within in-groups to enhance their chances of survival.
McKay and Whitehouse (2015, p. 17) delineate the adaptationist perspective on religion, which is the view that some of the religious “byproducts” we have observed, such as prosocial behaviors (cooperative and competitive, or coopetition), may have become useful for the survival of individuals and groups as their cultures evolved. This approach helps to explain the growth of large-scale societies from smaller bands of hunter-gatherers, farmers, and local communities. They argue that small-group psychology, in which families and small groups would hold cheaters and free-riders accountable, would not work when societies grew into larger empires. In those cases, an external force was needed to monitor behaviors, which they propose could be a God or gods. In other words, religion provided a needed system of external accountability.
Some have also argued that as societies grew, rituals developed as signals of good character, such as signals of trustworthiness and the willingness to cooperate (Bulbulia et al., 2013). The theory of credibility enhancing displays (CREDs; Henrich, 2009) posits that some members of society secured the trust and commitment of followers by becoming role models and walking the walk, not merely talking the talk. By serving as moral exemplars, these people helped to spread moral ideals through their societies. CREDs theory helps to offer one potential explanation for the prevalence of gurus, role models, and formal or informal leaders within all religious groups who set the example for their followers.
These are a few of numerous theories and propositions developed in attempts to explain our social development. Yet these arguments fall within moral epistemology – not moral ontology. The way we developed our morality (moral epistemology) does not address the question of whether objective morals and duties exist (moral ontology). To use our social development to posit non-existence of objective morality is a category error.
Heritability of social traits
Ilies, Arvey, and Bouchard (2006) noted that traits, work attitudes, values and interests, and behaviors are heritable. More specifically, the authors reported that the heritability of intelligence is between .60 and .80 (Bouchard, 1998). The heritability of the five-factor model personality traits is .41 for emotional stability, .49 for extraversion, .45 for openness to experience, .35 for agreeableness, and .38 for conscientiousness (Loehlin, 1992); empathy is a facet of agreeableness. The heritability of positive and negative emotionality is .43 and .47, respectively (Bouchard & McGue, 2003). The heritability of a person’s attitude toward being the leader of a group is .41 (Olson, Vernon, Harris & Jang, 2001). For all of these traits except intelligence, the genetic component is less than half; non-genetic components, such as the environment in which a person is raised, are in each case more important in determining how people behave. These findings offer strong evidence against an evolutionary hypothesis for morality (Garte, 2019, personal communication).
As noted above, while biologists and psychologists have described our “as is” genetic predispositions and environmental influences on certain behaviors, they cannot explain our “should be” moral oughts and duties. Moral ontology can be used to claim that moral values and duties exist, yet normative ethical frameworks better explain how we should act, using teleological or deontological approaches.
Haidt and Joseph (2004) surveyed evolutionary theories about human and primate sociality, along with lists of virtues and taxonomies of morality from psychology and anthropology, to identify moral concerns or virtues that were shared widely across cultures. They established five “foundations” of morality: care/harm, fairness/reciprocity, ingroup/loyalty, authority/respect, and purity/sanctity. They later added a sixth: liberty/oppression.
These foundations are stance-independent and objective. In other words, they do not vary as a function of anyone’s opinions. Two hundred years from today, when no one currently present exists, these moral foundations will exist. They are unchanging standards to which human behavior aspires.
Graham, Haidt and Nosek (2009) identified political variations in the attention to the five foundations. People considered more liberal on the political spectrum were primarily concerned with care/harm and fairness/reciprocity, while more conservative individuals drew more evenly across all five foundations.
Jonathan Haidt uses Social Intuitionist Theory to claim that our moral intuitions about these values guide our reasoning; moral intuitions and reasoning are the foundations of morality. I would argue that our intuitions and reasoning are themselves guided by our conscience. Our conscience is the foundation upon which our intuitions and reasoning rest. Our intuitions inform our reasoning, which in turn informs our intuitions, in a continuing cycle. These mechanisms are part of the learning process, yet the process is driven by our conscience. When we align our intuitions and reasoning with our conscience, our very nature comes closer into alignment with our Maker. Our Maker is the source of our conscience and the reason why we have a conscience. We are made in His image, which is why one of the unique features of our species is that we have a moral code and a conscience.
Some might argue that our intuitions are our conscience, but the two are distinct. Imagine yourself on the edge of a pond. You witness an alligator grab a little boy and drag him into the pond. Your intuitions are to stay alive for the benefit of your family, to run and save yourself from the alligator. Fight or flight: you have been hard-wired to want to run away, and your intuitions are in “survival mode.” Yet your conscience calls on you to rescue the little boy. Your conscience operates at a higher level than your intuitions, similar to Maslow’s level of self-actualization. Your conscience calls on you to risk your life to save the life of another.
Osman and Wiegmann (2017, p. 17) described a heated debate between a philosopher and two psychologists. “Shaw, the philosopher, argued that the field of moral psychology lacks a moral compass, and should acknowledge that as a field its dependency on psychological and biological facts makes it morally irrelevant, and reveals nothing about moral propositions of a normative nature. The two psychologists, Haidt and Pinker, replied that their research was never supposed to be understood as a normative guide – and that this should not only be obvious but is also explicitly stated in the works.”
One of the psychologists, Jonathan Haidt, directed his challenger to a passage in his book: “Philosophers typically distinguish between descriptive definitions of morality (which simply describe what people happen to think is moral) and normative definitions (which specify what is really and truly right, regardless of what anyone thinks). So far in this book I have been entirely descriptive.” (Haidt, 2012, p. 271). In other words, Haidt’s moral foundations theory can be used to demonstrate that the six foundations he identified exist, but it should not be applied within normative ethical frameworks as a guide to human behavior. Normative ethical frameworks can instead be used to explain our moral prescriptions.
In summary, in the quest to understand the source of our objective morality, skeptics often use moral epistemology to explain how we came to be moral via our social development. They may also point to the heritability of our social traits, yet as Sy Garte noted, the evidence that nurture is a stronger factor than nature weakens this argument. They may further invoke moral foundations theory, yet this theory is only descriptive (as noted by Haidt, 2012).
Normative ethical frameworks
Normative ethics is a branch of philosophical ethics that studies action in order to investigate how one ought to act morally. Within the normative ethical framework are teleological ethics (consequentialism and utilitarianism) and deontological ethics.
Normative ethical teleological framework
Consequentialism and Utilitarianism
Consequentialism falls within the normative ethical teleological framework. People who lean heavily on consequentialism to make ethical decisions consider the morally correct action to be the one with the best consequences, taking into account both the action itself and everything the action causes.
English philosopher and economist John Stuart Mill (1863, p. 14) popularized consequentialism with what he labeled the Greatest Happiness Principle. “According to the Greatest Happiness Principle…the ultimate end, with reference to and for the sake of which all other things are desirable (whether we are considering our own good or that of other people), is an existence exempt as far as possible from pain, and as rich as possible in enjoyments, both in point of quantity and quality; the test of quality, and the rule for measuring it against quantity, being the preference felt by those who in their opportunities of experience, to which must be added their habits of self-consciousness and self-observation, are best furnished with the means of comparison.” Mill further acknowledged the higher pleasures of the intellect and lower pleasures of the senses. In a similar vein, Harris (2010) stated that which is morally right is that which maximizes well-being, which he defined as the maximization of pleasure and happiness.
Happiness (also known as “well-being” or “pleasure”) is often considered to be the “ideal” consequence, so some adherents believe that this consequence grounds moral decisions. Yet I would argue that happiness is only a minor goal within a much more complex set of goals and ideals. Notably, in the aforementioned studies of all world religions, atheism, and humanism, “happiness” is never mentioned. Instead, seeking purpose and meaning, benevolence, universalism, and self-direction are mentioned. These values, along with others identified in global studies (e.g., courage, humility, and temperance), do not necessarily correspond to happiness. Accordingly, consequentialism using subjective appeals to happiness does not sufficiently ground humanity’s objective moral values and duties.
A special form of consequentialism is utilitarianism, which accentuates the idea that we should do the greatest good for the greatest number of people. The greatest good is often that which maximizes happiness or well-being or pleasure over pain for the greatest number of people.
In A Theory of Justice, Rawls (1971) stated that each society sets its own moral standards. To set the standards, he argued that each society requires a certain set of initial conditions, including rational actors and mutually disinterested parties. Yet human nature is often in opposition to these rather strict, somewhat extreme conditions (cf. Zollo, Pellegrini & Ciappei, 2017). Behavioral economists have found that people are often predictably irrational (Ariely, 2010), while agency theory (Fama, 1980) has often been drawn upon to demonstrate that people act in their own self-interests. Cross-cultural scholars who study values have found that humans do not always act in rational ways, that rationality is subjectively defined (Hofstede, 2001, p. 6), and that it becomes problematic in cases of uncertainty (March & Simon, 1958). Rational (or optimal) decisions require that the problem-solver perceives all alternatives to a problem, and in organizations that is rarely the case (Tosi, 2009). Furthermore, the means people use to generate personal happiness may come at the expense of the happiness of others: Nazi prison guards may have derived happiness and pleasure from torturing, starving, and murdering people, which is obviously egregious and morally wrong.
Harris (2010, p. 225) seemed to acknowledge this shortfall when he asked these questions. “What if the laws of nature allow for different and seemingly antithetical peaks on the moral landscape? What if there is a possible world in which the Golden Rule has become an unshakable instinct, while there is another world of equivalent happiness where the inhabitants reflexively violate it? Perhaps this is a world of perfectly matched sadists and masochists. Let’s assume that in this world every person can be paired, one-for-one, with the saints in the first world, and while they are different in every other way, these pairs are identical in every way relevant to their well-being. Stipulating all these things, the consequentialist would be forced to say that these worlds are morally equivalent. Is this a problem? I don’t think so. The problem lies in how many details we have been forced to ignore in the process of getting to this point.”
Building on that point, it is noteworthy that utilitarianism corresponds to certain social classes and certain negative personality traits. People with higher scores on psychopathy, life meaninglessness, and Machiavellianism endorse utilitarian solutions more strongly (Bartels & Pizarro, 2011), and Reynolds and Conway (2018) identified a negative correlation between psychopathy and deontological judgments. Other scholars have found that when making moral judgments, members of the upper class attempt to maximize gains for a group by expressing a willingness to take actions that harm some but help many, and their tendency to show less empathy increases utilitarian judgment (Côté, Piff & Willer, 2013).
These arguments suggest utilitarian decisions are agent-relative, not agent-neutral as others have argued. Holyoak and Powell (2016) argued for agent neutrality, claiming that what is right for one would be right for the group. Yet that is not necessarily the case. Consider an ethical dilemma in which a father in a burning building has the option to save his three children on one side or ten coworkers on another. He will likely reject the greatest good for the greatest number in favor of following his duty to value his children’s lives more, so what is right for one would not be right for the group. This dilemma is somewhat similar to the famous trolley dilemma, which is often used to force decision-makers to choose between utilitarian reasoning and following their moral duty to act virtuously.
Another shortcoming relates to the formulation of the “greater good.” “If ‘greater good’ is to be meaningful in the formulation of a criterion of morality, three conditions must be fulfilled: 1) ‘good’ must have a single meaning; 2) what is good in this unique sense must be measurable; and 3) the result of measurement must settle moral issues either directly or indirectly. Clearly, the necessary meaning of ‘good’ cannot be specified in moral terms. What Rawls says of utilitarianism is true of all consequentialism: Its point is to define ‘good’ independently of ‘right’ and to define ‘right’ in terms of ‘good.’ And, in general, consequentialists see this requirement and try to meet it. If consequentialists said that ethical considerations determine what a good consequence is, they would either be going in a circle or setting off on an infinite regress” (Grisez, 1978, p. 31).
Utilitarianism as an explanation of our moral prescriptions is faulty for several reasons. First, assuming humanity shares a common goal of “happiness,” “well-being,” or “pleasure” is superficial, given our much deeper and more complex universal moral values that have been identified in multiple global studies. A soldier who shows great courage and takes a bullet for another soldier and dies did not do it to make himself “happy.” He did it because in that moment of time, he displayed his tremendous love for his fellow man. In the hierarchy of moral values and duties, love rests on top. Happiness may or may not be an outcome of love in this life.
Second, defining happiness or well-being or pleasure-seeking as a “good” consequence while saying it is “right” to seek happiness, well-being, or pleasure is a mere play on words, leading to circular thinking. Even if we changed the goal to love or focused on our mutual duty to love, the thinking is problematic. Saying that it is good to love and it is right to love so we should love is true. We have a universal and objective moral duty to love. But saying that we can agree on the goal of love or our duty to love so therefore our shared goal/mutual agreement are both the objective foundation and the shared goal/duty we should achieve is circular. It equates with saying that humans have an underlying principle to be moral and a goal of being moral. The goal is also the grounding of the goal.
Using the popular “being” versus “doing” mode, consider that our conscience and intuitions are steeped within our nature, or being. Our reasoning (or goal-setting) is steeped in actions, or doing. Accordingly, grounding our morality in our reasoning (c.f., consequentialism, utilitarianism, identifying the moral value itself) is flawed. It equates with establishing the goal or the value – and then saying the goal or value is also the source, or foundation of the goal. To put it another way, setting a goal is an act, or “doing,” while the grounding is “being.” One cannot conflate the two. Doing focuses on the impersonal external world and the act of setting a goal or getting something done. Being focuses on what’s internal and personal to us, intrinsic to our very nature.
Third, the assumptions of rational actors and mutual disinterest are not met when establishing social contracts or shared goals. People often act irrationally and in their own self-interests. In other words, we cannot ground utilitarianism in the minds of humanity so we cannot explain our universal and objective morality using this philosophy. This philosophy does not offer a sound foundation upon which humanity’s universal and objective moral values stand. Instead, we must grant a universal and objective moral lawgiver who transcends humanity: God. God gave us a conscience that serves as a guide for our reasoning, intuitions, values, duties, purpose and meaning. Now we will turn to a different moral framework.
Normative ethical deontological framework
The influential German philosopher Immanuel Kant (1785/2002) popularized the deontological view, which states that people should follow universal rules and norms regarding what ought to be done. In his view, reason serves as a guide toward an action that fulfills what he labeled the categorical imperative, a universal law or maxim. An action performed out of duty is one of moral character.
Kant’s “categorical imperative” states that people should act according only to that maxim by which one can – at the same time – will that it should become a universal law. People are the ends, not the means to the ends, and valued equally. “An action from duty has its moral worth not in the aim that is supposed to be attained by it, but rather in the maxim in accordance with which it is resolved upon; thus that worth depends not on the actuality of the object of the action, but merely on the principle of the volition, in accordance with which the action is done, without regard to any object of the faculty of desire” (Kant 1785/2002, p. 34). In a meta-analysis of 151 studies over 71 years, Villegas de Posada and Vargas-Trujillo (2015) found support for Kant’s conceptualization of deontology using reason as a guide for moral action.
Deontology necessarily presupposes a rule-setting authority. “Morality is a matter of finding the right rule; it is not affected by desire, weakness of will, or laziness. A moral rule is followed as a duty, although it needs to be backed up by authority” (Van Steveren, 2007, p. 25).
Some have argued that the authority is an implicit social contract, a democratic government, or God (Holyoak & Powell, 2016). But if we have universal moral duties that transcend all people, eras, and types of government, it follows that the moral lawgiver must also transcend people and eras. Claiming we have implicit social contracts does not eliminate the need for a moral lawgiver who established this social contract with all of humanity.
Craig (2010) makes the argument that if objective moral values and duties exist, God exists. He states (p. 128) “There must be an infinite, eternal Mind who is the architect of nature and whose moral purpose man and the universe are gradually fulfilling.” Given our universal moral values of benevolence and universalism (Schwartz, 2012) and our universal moral duties to follow the Golden Rule (Kinnear et al., 2000), one could make the argument that the infinite, eternal Mind is the standard of goodness against which moral actions are measured.
Some have argued that because humanity has failed to live up to the standards God has set over the centuries, God’s very nature is compromised. They argue that the presence of evil or bad actors discounts a standard of love and goodness, and that God’s nature must be changing since societies have changed. But this argument is faulty: one cannot use moral epistemology to draw conclusions about the existence of objective morality. Furthermore, moral standards of love, truth, courage, mercy, equality, and justice do not change as a function of our opinions or actions.
Some psychologists have conceptualized deontology differently, claiming that morality is based on moral intuitions and emotions concerning principles and duties rather than consequences and utility-providing reasoning. As an example, Haidt (2001) uses a Social Intuitionist Model to characterize moral judgments as quick, automatic, and effortless. He has argued that most moral judgments are made based on instant feelings of approval or disapproval. He uses the example of people rationalizing the rejection of two siblings having intercourse due to their gut reaction of disgust. Some have argued that Haidt’s theories are exaggeratedly bleak (e.g., Fine, 2006), but Haidt has responded by indicating that moral reasoning is less trustworthy than some posit, yet the alternative is not chaos, but intuition. He argues that intuition is more of an orderly, automatic process (Haidt & Bjorklund, 2008).
Haidt’s work contrasts with the work of others (e.g., Greene, Sommerville, Nystrom, Darley, & Cohen, 2001; Greene, 2007). Greene and his colleagues used fMRI (functional magnetic resonance imaging) to analyze individuals’ brain activity while they considered various types of moral dilemmas. Results indicated that the cognitive regions of the brain were more active when participants considered characteristically consequentialist arguments, while the emotional regions were more active when they considered deontological arguments. Greene concluded that deontological intuitions are based on heuristics (cognitive shortcuts) and that consequentialist judgments are therefore more reliable.
Others have opposed his conclusions (Berker, 2009; Liao, 2017). Liao (2017) argued that deontological judgments do not need to be the product of affect-laden responses and intuitions are not heuristics. Heuristics can be based on reasoning and the Kantian formulation of deontology was steeped in reason. Further, Liao (2017) argued that Greene’s results instead call into question the reliability of the consequentialist judgments, since impaired social cognition (rather than deliberate reasoning) corresponded to the utilitarian responses in the moral dilemmas presented.
Variations in our moral judgments may be based on differences in the following: values, grounds, schemas based on rights and duties, concern for types of moral patients, utility functions for consequences (including harms), emotional responses such as a lack of empathy, cognitive capacity and cognitive control (Holyoak & Powell, 2016). Variations may also be based on whether one is focused on and has distinguished descriptive (fact-based) aspects of moral judgments from prescriptive (values-based) aspects (Kalis, 2010).
Deontological decisions contrast with utilitarian decisions in that the latter are slower and based on more difficult, reflective rational reasoning (Holyoak & Powell, 2016). Cohen and Ahn (2016) note that people draw from both rational reasoning and intuitive emotions when making ethical decisions, so these conflicting processes sometimes compete. The authors propose Subjective Utilitarian Theory, which suggests that people weigh their personal values for the items in a moral dilemma and that those values correspond to the difficulty of the decision, across a variety of scenarios presented in different visual formats, some under time pressure.
Conway and Gawronski (2013) conducted three studies, finding variations between deontological and utilitarian inclinations. Deontological inclinations were related to empathic concern, religiosity, and perspective-taking while utilitarian inclinations were related to the need for cognition. They further found that cognitive load reduced utilitarian inclinations, while manipulations that enhanced empathy increased deontological inclinations.
Researchers have also distinguished utilitarian judgments from deontological judgments by defining utilitarian decisions as those in which one must cause harm to maximize outcomes, while deontological decisions reject harm (Greene, 2007; Reynolds & Conway, 2018). This is often explained using the trolley dilemma, which gives the decision-maker a choice based on the hypothetical story of an observer watching a train pass by. The observer sees that if the train continues down the track, it will hit five innocent people. The observer can save the five people if he stops the train, but to stop the train, the observer will need to push an innocent man in front of it. The deontologist would choose not to push the man, while the utilitarian would choose to push one person to save five.
One criticism of deontology arises when moral duties conflict. For example, our basic human rights include the rights to life, liberty, truth, and justice. Kant considered each as independent, yet they sometimes conflict when seeking the moral high ground. In Nazi Germany in the early 1940s, a person hiding a family of Jewish people in his house likely desired justice and the protection of their lives and liberty. If Nazi guards showed up and asked whether he was hiding any Jews, he would need to lie to them so the family remained protected. The Kantian perspective would suggest the man has the moral duty to tell the truth. A different interpretation of deontology would suggest the duties should be considered collectively and weighted hierarchically, depending on the context of the situation.
Given varying contexts, values, and situations, forcing a dichotomy between deontological and utilitarian decisions using unrealistic trolley dilemmas and the like seems restrictive. One might instead consider Christian synderesis. Zollo, Pellegrini and Ciappei (2017) identified the importance of bridging the gap between intuitive and emotional processes (moral intuitions) with conscious and rational processes (moral reasoning) to explain ethical decision-making. They related moral intuitions to Kantian universalism and moral reasoning to utilitarian ethics.
“Despite the different sophistications of the utilitarian approach, a common element of all these theories is that the agent is rational and able to evaluate the situation and its outcome. Definitely, such requisites are not met by talking about intuition, especially the ability to forecast and choose the preferable outcomes. Conversely, universalism imposes that every act is performed according to general and transcendental moral principles (Kant, Foundations of the metaphysics of morals, ed. orig. 1785; 1959)” (Zollo et al., 2017, p. 695).
The authors used an influential Christian moral social doctrine, synderesis, to bridge the gap between the two viewpoints. Synderesis is “an innate human habit that fosters moral judgment and triggers the virtue of practical reason” (Zollo et al., 2017, p. 682). It is the “correct habit that regulates intuition due to its innate nature and it is present in every individual” (Zollo et al., 2017, p. 690). It is our conscience.
In Summa Theologiae, Thomas Aquinas referred to synderesis as the law of our mind, which is an awareness or understanding of the principles of human actions. Practical reasoning moves one from awareness of the principles to conclusions on actions or decisions. Conscience then forms a judgment on whether the actions or decisions are in alignment with one’s moral nature, whether they are right or wrong.
The table below provides a framework for understanding these concepts.
| Input | Descriptive | Prescriptive | Outcomes | Subject to change | Source |
|---|---|---|---|---|---|
| Heritability of social traits | Inherited traits | N/A | Probability of behaviors, actions, practices | Yes | Genetic influences and the environment (genotypes and phenotypes) |
| Social development | Individual “as is” values | N/A | Probability of behaviors, actions, norms, practices | Yes | Adaptation to change |
| Moral ontology: Moral relativism | Cultural “as is” values | N/A | Probability of positive behaviors, actions, norms, practices, beliefs | Yes | Adaptation to change |
| Moral realism and moral psychology: Objective morality | Global “as is” values and moral standards | N/A | Probability of positive behaviors, actions, norms, practices, beliefs | No | Standards that transcend cultures and eras from intuitions, reasoning, and/or conscience |
| Normative ethics: Deontology | Intuitions, reasoning, conscience, emotions | Stance-independent moral duties of how we should act | Alignment with conscience | No | Transcendent moral lawgiver |
| Normative ethics: Utilitarianism | Intuitions, reasoning, conscious rationality | Individual or group preferences for particular consequences | Greatest good, greatest number of people | Yes | Preferred individual or shared goals |
In a longitudinal study of world values, Ron Inglehart (2000) identified trends away from traditional, religious views towards secularization and rationality. These trends have manifested in a shift in views favoring naturalism and moral relativism, which is evidenced in university classrooms and academic scholarship.
The intention of this article was to question the merit of cultural shifts away from supernatural foundations and explanations by examining and synthesizing the extant literature on metaethics, moral ontology, moral relativism and moral universalism. I sought to answer whether people from all parts of the globe are bound by the same universal and objective moral values and duties or whether our morality is only culturally contingent. The evidence presented suggests our morality is a blend of both relative and universal, objective values. While scholars such as Geert Hofstede (2001) have provided strong evidence of variations between cultures in “as is” values, scholars such as those in the GLOBE (2004) study have provided evidence of both “as is” practices and higher level “should be” values. Combined with results from other global studies led by Shalom Schwartz (2012), Westermarck (1906), Henrich and colleagues (2005), Dahlsgaard and colleagues (2005) and Kinnear and colleagues (2000), we have striking evidence suggesting humanity has certain righteous moral imperatives. All humans, no matter the country, ethnicity, age, or gender, share the universal moral duty to do what’s right and to strive for benevolence and universalism.
Accordingly, I offer a moral argument for the existence of God:
- If humanity has universal, objective moral values and obligations to do what’s right, there must be a universal source of righteousness that transcends generations.
- Humanity has universal, objective moral values and obligations to do what’s right.
- Therefore, there is a universal source of righteousness: God.
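The argument above has the deductively valid form of modus ponens. Letting M stand for “humanity has universal, objective moral values and obligations to do what’s right” and S for “a universal source of righteousness that transcends generations exists,” its structure can be sketched as:

```latex
\[
\begin{array}{ll}
P_1\colon & M \rightarrow S \\
P_2\colon & M \\
\hline
\therefore & S
\end{array}
\]
```

Because the form is valid, the argument’s soundness rests entirely on the truth of the two premises; the global values studies cited above are offered in support of the second.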
These findings support the assertion that we have a conscience and a moral compass, which direct us to goodness. It follows that the source of our conscience is a benevolent, transcendent moral lawgiver. In the New Testament, the apostle Paul stated the following in Romans (2:15): “They show that the work of the law is written on their hearts, while their conscience also bears witness, and their conflicting thoughts accuse or even excuse them.”
Philosophers, evolutionary biologists, and psychologists have offered other explanations of our objective moral values and duties, which are captured in either evolutionary adaptations, social development, genetic influences, our environment, or objective moral theories. These explanations vary in moral epistemological or ontological approaches and in offering descriptive or prescriptive information.
Rather than thinking these explanations are at odds with one another, falling into the modern debate trap that pits science against religion, one might posit a process-based view using synderesis, reasoning, and conscience. Social development and evolutionary adaptations may have been guided by a divine moral lawgiver who stacked the deck in favor of selflessness and humility over selfishness, ego and coopetition, which is the combination of competition and cooperation. The heritability of social traits and the environment offers an explanation of the “as is” predispositions in humans that make more or less probable various behaviors. Moral foundations theory provides a descriptive explanation of objective morality, yet (like our social development and inherited traits), this theory fails to explain our moral prescriptions.
Philosophers initiated explanations of our moral duties using the normative moral deontological and utilitarian/consequentialist frameworks, which have been studied in recent years in a variety of disciplines, using various lenses and interpretations. While both frameworks are useful in understanding the ways we make ethical decisions, the deontological framework best helps us to understand the transcendent source of our moral grounding.
Ariely, D. (2010). Predictably irrational: The hidden forces that shape our decisions. New York, NY: Harper Perennial.
Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84, 191-215. http://dx.doi.org/10.1037/0033-295X.84.2.191.
Barash, D.P. (2018). The sad saga of John Chau and the North Sentinelese. Psychology Today. November 28. https://www.psychologytoday.com/us/blog/peace-and-war/201811/the-sad-saga-john-chau-and-the-north-sentinelese
Bartels, D. M., & Pizarro, D. A. (2011). The mis-measure of morals: Antisocial personality traits predict utilitarian responses to moral dilemmas. Cognition, 121, 154–161.
Bengtsson, M., & Kock, S. (1999). Cooperation and competition in relationships between competitors in business networks. Journal of Business and Industrial Marketing, 14(3), 178-190.
Berker, S. (2009). The normative insignificance of neuroscience. Philosophy and Public Affairs, 4, 293–329.
Bouchard, T.J., Jr. (1998). Genetic and environmental influences on adult intelligence and special mental abilities. Human Biology, 70, 257-279.
Bouchard, T.J., Jr. & McGue, M. (2003). Genetic and environmental influences on human psychological differences. Journal of Neurobiology, 54, 4-45.
Bulbulia, J., Geertz, A. W., Atkinson, Q. D., Cohen, E. A., Evans, N., François, P., Gintis, H., Gray, R. D., Henrich, J., Jordan, F. M., Norenzayan, A., Richerson, P. J., Slingerland, E., Turchin, P., Whitehouse, H., Widlock, T., & Wilson, D. S. (2013). The cultural evolution of religion. In P. J. Richerson & M. H. Christiansen (Eds.), Cultural evolution: Society, technology, language, and religion (pp. 381-404). Cambridge, MA: MIT Press.
Cohen, D.J. & Ahn, M. (2016). A subjective utilitarian theory of moral judgment. Journal of Experimental Psychology, 145(10), 1359-1381.
Conway, P., & Gawronski, B. (2013). Deontological and utilitarian inclinations in moral decision making: A process dissociation approach. Journal of Personality and Social Psychology, 104(2), 216-235.
Côté, S., Piff, P. K., & Willer, R. (2013). For whom do the ends justify the means? Social class and utilitarian moral judgment. Journal of Personality and Social Psychology, 104, 490–503.
Craig, W.L. (2008). Reasonable Faith: Christian truth and apologetics. Wheaton, IL: Crossway.
Craig, W.L. (2010). On Guard: Defending your faith with reason and precision. Colorado Springs, CO: David C. Cook.
Dahlsgaard, K., Peterson, C. & Seligman, M.E.P. (2005). Shared virtue: The convergence of valued human strengths across culture and history. Review of General Psychology, 9(3): 203-213.
Dawkins, R. (2006). The God delusion. New York: Houghton Mifflin.
Deci, E. L. (1975). Intrinsic motivation. New York: Plenum.
Demuijnck, G. (2015). Universal values and virtues in management versus cross-cultural moral relativism: An educational strategy to clear the ground for business ethics. Journal of Business Ethics, 128, 817–835.
Fama, E.F. (1980). Agency problems and the theory of the firm. The Journal of Political Economy, 88(2), 288–307.
Fine, C. (2006). Is the emotional dog wagging its rational tail, or chasing it? Philosophical Explorations, 9, 83–98.
Frankena, W. (1973). Ethics. Englewood Cliffs, NJ: Prentice Hall.
Fukuyama, F. (1997). Social capital. The Tanner lectures on human values. Retrieved at: https://tannerlectures.utah.edu/_documents/a-to-z/f/Fukuyama98.pdf
Garte, S. (2019, August 2). Personal communication with a biochemist via email.
Gettleman, J., Kumar, H., and Schultz, K. (2018). A man’s last letter before being killed on a forbidden island. New York Times. November 23. https://www.nytimes.com/2018/11/23/world/asia/andaman-missionary-john-chau.html
Graham, J., Haidt, J., & Nosek, B.A. (2009). Liberals and conservatives rely on different sets of moral foundations. Journal of Personality and Social Psychology, 96 (5), 1029–1046.
Graham, J., Haidt, J., Koleva, S., Motyl, M., Iyer, R., Wojcik, S., & Ditto, P. H. (2013). Moral foundations theory: The pragmatic validity of moral pluralism. Advances in Experimental Social Psychology, 47, 55–130. http://dx.doi.org/10.1016/B978-0-12-407236-7.00002-4
Greene, J. D. (2007). The secret joke of Kant's soul. In W. Sinnott-Armstrong (Ed.), Moral psychology: The neuroscience of morality, Vol. 3 (pp. 35–79). Cambridge, MA: MIT Press.
Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293, 2105–2108. doi: 10.1126/science.1062872
Grisez, G. (1978). Against consequentialism. The American Journal of Jurisprudence, 23(1), 21-72.
Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108, 814–834. http://dx.doi.org/10.1037/0033-295X.108.4.814
Haidt, J., & Joseph, C. (2004). Intuitive ethics: How innately prepared intuitions generate culturally variable virtues. Daedalus: Special Issue on Human Nature, 133(4), 55–66.
Haidt, J., & Bjorklund, F. (2008). Social intuitionists answer six questions about moral psychology. In W. Sinnott-Armstrong (Ed.), Moral psychology, Vol. 2: The cognitive science of morality, 181– 218. Cambridge, MA: MIT Press.
Haidt, J. (2012). The righteous mind: Why good people are divided by politics and religion. New York, NY: Vintage.
Harris, S. (2010). The moral landscape: How science can determine human values. New York, NY: Free Press.
Henrich, J., Boyd, R., Bowles, S., Camerer, C., Fehr, E., Gintis, H., McElreath, R., Alvard, M.,…Tracer, D. (2005). 'Economic Man' in cross-cultural perspective: Behavioral experiments in 15 small-scale societies. Behavioral and Brain Sciences, 28(6), 795–855.
Henrich, J. (2009). The evolution of costly displays, cooperation, and religion: Credibility enhancing displays and their implications for cultural evolution. Evolution and Human Behavior, 30, 244–260. http://dx .doi.org/10.1016/j.evolhumbehav.2009.03.005
Hills, M. D. (2002). Kluckhohn and Strodtbeck’s Values Orientation Theory. Online Readings in Psychology and Culture, 4(4). https://doi.org/10.9707/2307-0919.1040
Hofstede, G. (1980). Culture’s consequences: International differences in work-related values. Beverly Hills, CA.: Sage.
Hofstede, G. (2001). Culture’s consequences: Comparing values, behaviors, institutions and organizations across nations. Beverly Hills, CA: Sage.
Hofstede, G., Hofstede, G.-J., & Minkov, M. (2010). Cultures and organizations: Software of the mind. London: McGraw-Hill.
Holyoak, K.J. & Powell, D. (2016). Deontological coherence: A framework for common-sense moral reasoning. Psychological Bulletin, 142(11): 1179-1203.
Hopster, J. (2017). Two accounts of moral objectivity: from attitude independence to standpoint invariance. Ethical Theory and Moral Practice, 20: 763-780.
Ilies, R., Arvey, R.D. & Bouchard, T.J., Jr. (2006). Darwinism, behavioral genetics, and organizational behavior: a review and agenda for future research. Journal of Organizational Behavior, 27, 121-141.
Inglehart, R. (2000). Culture and democracy. In L.E. Harrison and S.P. Huntington: Culture Matters: How values shape human progress, p. 80-97. New York, NY: Basic Books.
Internet Encyclopedia of Philosophy. (2019). Metaethics. https://www.iep.utm.edu/metaethi/
Johnson, D. D. P. (2009). The error of God: Error management theory, religion, and the evolution of cooperation. In S. A. Levin (Ed.), Games, groups, and the global good (p. 169–180). Berlin, Germany: Springer Physica-Verlag. http://dx.doi.org/10.1007/978-3-540-85436-4_10
Kalis, A. (2010). Improving moral judgments: philosophical considerations. Journal of Theoretical and Philosophical Psychology, 30 (2), 94-108.
Kant, I. (1785/2002). Groundwork for the Metaphysics of Morals. New Haven, CT: Yale University Press.
Kim, D., McCalman, D. & Fisher, D. (2012). The sacred/secular divide and the Christian worldview. Journal of Business Ethics, 109, 203-208.
Kinnear, R.T., Kernes, J.L. & Dautheribes, T.M. (2000). A short list of universal moral values. Counseling and Values, 45, 4-17.
Kluckhohn, C. (1951). Values and value-orientations in the theory of action: An exploration in definition and classification. In T. Parsons & E. Shils (Eds.), Toward a general theory of action, p. 388-433, Cambridge, MA: Harvard University Press.
Kluckhohn, F. R. & Strodtbeck, F. L. (1961). Variations in value orientations. Evanston, IL: Row, Peterson.
Kohn, M. L., & Schooler, C. (1983). Work and personality. Norwood, NJ: Ablex.
Lewis, C.S. (1952). Mere Christianity. New York, NY: C.S. Lewis Pte. Ltd.
Loehlin, J.C. (1992). Genes and environment in personality development. Newbury Park, CA: Sage Publications Inc.
McKay, R. & Whitehouse, H. (2015). Religion and morality. Psychological Bulletin, 141(2), 447–473.
March, J.G. & Simon, H.A. (1958). Organizations. New York, NY: John Wiley.
Maslow, A. H. (1965). Eupsychian management. Homewood, IL: Dorsey.
McKirdy, E. (2018). ‘You guys might think I’m crazy.’ Diary of U.S. ‘missionary’ reveals last days on remote island. CNN. November 23. https://www.cnn.com/2018/11/22/asia/north-sentinel-island-john-allen-chau-diary-intl/index.html
Mill, J.S. (1863). Utilitarianism. Kitchener, Ontario, CA: Batoche Books Limited.
Miller, C. (2013). Morality is real, objective, and supernatural. Annals of the New York Academy of Sciences, 74-82.
Olson, J.M., Vernon, P.A., Harris, J.A., & Jang, K.L. (2001). The heritability of attitudes: A study of twins. Journal of Personality and Social Psychology, 80, 845–860.
Osman, M. & Wiegmann, W. (2017). Explaining moral behavior: A minimal moral model. Experimental Psychology, 64 (2), 68-81.
Rawls, J. (1971). A Theory of Justice. Cambridge, MA: The Belknap Press of the Harvard University Press.
Reynolds, C.J. & Conway, P. (2018). Not just bad actions: Affective concern for bad outcomes contributes to moral condemnation of harm in moral dilemmas. Emotion, 18(7), 1007–1023.
Rokeach, M. (1972). Beliefs, Attitudes and Values: A Theory of Organization and Change. San Francisco: Jossey-Bass.
Rokeach, M. (1979). Understanding human values: Individual and societal. New York: The Free Press.
Schwartz, S. H., & Bilsky, W. (1987). Toward a universal psychological structure of human values. Journal of Personality and Social Psychology, 53, 550-562.
Schwartz, S. H. (1992). Universals in the content and structure of values: Theoretical advances and empirical tests in 20 cultures. In M.P. Zanna (Ed.). Advances in experimental social psychology, 25, 1-65. San Diego, CA: Academic Press.
Schwartz, S.H. (2012). An overview of the Schwartz view of basic values. Online Readings in Psychology and Culture, 2(1) https://doi.org/10.9707/2307-0919.1116
Shermer, M. (2013). Morality is real, objective, and natural. Annals of the New York Academy of Sciences, 57-62.
Street, S. (2006). A Darwinian dilemma for realist theories of value. Philosophical Studies, 127(1), 109–166.
Thomason, S.J., Simendinger, E. & Kiernan, D. (2013). Several determinants of successful coopetition in small business. Journal of Small Business and Entrepreneurship, 26 (1), 15-28.
Tosi, H.L. (2009). James March and Herbert Simon, Organizations. In Theories of Organization. Thousand Oaks, CA: Sage Publications. Retrieved March 10, 2019, from https://www.corwin.com/sites/default/files/upm-binaries/27411_7.pdf
United Nations. (2018). International human rights law. United Nations Human Rights Office of the High Commissioner. https://www.ohchr.org/en/professionalinterest/Pages/InternationalLaw.aspx
Van Staveren, I. (2007). Beyond utilitarianism and deontology: Ethics in economics. Review of Political Economy, 19(1), 19–35.
Villegas de Posada, C. & Vargas-Trujillo, E. (2015). Moral reasoning and personal behavior: A meta-analytical review. Review of General Psychology, 19 (4), 408-424.
Walley, K. (2007). Coopetition: An introduction to the subject and an agenda for research. International Studies of Management and Organization, 37(2), 11–31.
Westermarck, E.A. (1906). The Origin and Development of the Moral Ideas. London, England: Macmillan.
Zollo, L., Pellegrini, M.M., and Ciappei, C. (2017). What sparks ethical decision making? The interplay between moral intuition and moral reasoning. Lessons from the Scholastic Doctrine. Journal of Business Ethics, 145, 681-700.
Many businesses have discovered the value of "coopetition," which occurs when competitors simultaneously cooperate and compete with one another to create value (Bengtsson & Kock, 1999; Walley, 2007; Thomason, Simendinger & Kiernan, 2013). Cooperation centers on trust and reciprocity, while competition centers on maximizing one's own self-interest. The combination of the two concepts may help explain the evolution of prosocial behaviors.