
Religious Organizations in the Public Health Paradigm

October 30, 2014

Originally posted on the OUPblog on 26th October, 2014. By Ellen Idler, Director of the Religion and Public Health Collaborative at Emory University. She is the author of Religion as a Social Determinant of Public Health, available on Oxford Scholarship Online.

Religion as a Social Determinant of Public Health

If you think about the big public health challenges of our day — the Ebola virus in Africa, the rising rates of suicide among the middle-aged in the United States, the HIV epidemic everywhere — religions are playing a role. When I speak, I ask audiences, “What was the first thing you heard about the Ebola crisis?”, and they always say, “The missionaries who got it were taken to Emory.” “That makes my point,” I say. “You didn’t know anything about it until that moment, but they did.” Those missionaries, and the faith-based organizations they worked for (Samaritan’s Purse and Serving in Mission), were already on the ground along with other faith-based organizations, volunteering their time, putting their lives in danger, and providing valuable resources of equipment, supplies, and knowledgeable helping hands to try to contain the outbreak.

In another challenge, the crisis of rising suicides among US veterans and Baby Boomers, religion’s role is more in the background, but no less important. Since the sociologist Émile Durkheim first studied the subject in late 19th-century France, researchers have consistently found that individuals with more social ties — particularly ties to religious groups — are better protected from suicide. Religious ties provide caring, support, warmth, and intimacy — the “carrots” of social interaction. They also provide rules for living and guidance for behavior that often require individuals to sacrifice their self-interest for the good of the group. These are the “sticks” of social interaction, which Durkheim argued were just as necessary as the “carrots” in keeping individuals from taking their own lives. So here are two quite different roles that religions play in public health: in the foreground, deploying resources and religious social capital as partners with public health authorities in countries around the world, and in the background, providing the sustenance of social integration and regulation that prevents the tailspin of suicide.

But religions are complicated, and in the HIV epidemic we have seen faith traditions playing all of these roles, and other, less helpful ones as well. One positive thing that religions do — very effectively, through religious ritual and practice — is give individuals a sense of belonging to something larger than themselves; they bestow a social identity that marks individuals as valued members of a group, with all of that group’s rights, privileges, and responsibilities. But group membership by its very nature implies that there are other individuals and groups — outsiders — who are not members and who may be less valued. This is an obvious source of conflict around the world and can lead to violence on a small or large scale. This too, sad to say, is an instance of religions helping to determine the health of populations, but not in a good way. And at a less extreme level, if an individual violates the norms of the group or breaks its rules, the result can be sanctions, punishment, or even expulsion from the group. So in the HIV epidemic, individuals who were first victimized by the disease in many cases experienced a secondary victimization: stigmatization by religious groups that perceived the disease as a sign of forbidden behaviors, and therefore a just punishment.

Public health organizations and religious organizations are both looking to promote the well-being of their communities. In many cases those interests are perfectly aligned and the two institutions function, implicitly or explicitly, as partners. When they do not, it makes sense that two powerful forces should identify all of the ways in which they can work together, finding a way around the contentious issues to leverage each other’s constructive responses. Religion, along with income inequality, education, and political structures, is one of the social determinants of public health in countries around the world, despite its usual exclusion from the public health paradigm.

 

If objective moral reasoning is possible, how does it get started? Sidgwick’s answer is, in brief, that it starts with a self-evident intuition. He does not mean by this, however, the intuitions of what he calls “common sense morality.” To see what he does mean, we must draw a distinction between intuitions that are self-evident truths of reason and a very different kind of intuition. This distinction will become clearer if we look at an objection to the idea of moral intuition as a source of moral truth.

Sidgwick was a contemporary of Charles Darwin, so it is not surprising that the objection was already being raised in his time that an evolutionary view of the origins of our moral judgments would completely discredit them. Sidgwick denied that any theory of the origins of our capacity for making moral judgments could discredit the very idea of morality, because he thought that, whatever the origin of our moral judgments, we would still have to decide what we ought to do, and answering that question is a worthwhile enterprise.

On the other hand, he agreed that some accounts of the origins of particular moral judgments might suggest that they are unlikely to be true, and therefore discredit them. We defend this important insight, and press it further. Many of our common and widely shared moral intuitions are the outcome of evolutionary selection, but the fact that they helped our ancestors to survive and reproduce does not show them to be true.

This might be taken as a ground for skepticism about morality as a whole, but our capacity for reasoning saves morality from this skeptical critique. The ability to reason has, of course, evolved, and clearly confers evolutionary advantages on those who possess it, but it does so by making it possible for us to discover the truth about our world, and this includes the discovery of some non-natural moral truths.

Sidgwick thought that his greatest work was a failure because it concluded by accepting that both egoism and universal benevolence were rational, even though the two point to different conclusions about what we ought to do. We argue that the evolutionary critique of some moral intuitions can be applied to egoism, but not to universal benevolence. The principle of universal benevolence can be seen as self-evident, once we understand that our own good is, from “the point of view of the universe,” of no more importance than the similar good of anyone else. This is a rational insight, not an evolved moral intuition.

In this way, we resolve the so-called “dualism of practical reason.” This leaves us with a utilitarian reason for action that can be presented in the form of a utilitarian principle: we ought to maximize the good generally.

What is this good thing that we should maximize? Is my having a positive attitude towards something enough to make bringing it about good for me? Preference utilitarians have argued that it is, and one of us has, for many years, been well-known as a representative of that view.

Sidgwick, however, rejected such theories, arguing that the good must be not what I actually desire, but what I would desire if I were thinking rationally. He then developed the view that the only things it is rational to desire for their own sake are desirable mental states: pleasure, and the absence of pain.

For those who hold that practical reasoning must start from desires, it is hard to understand the idea of what it would be rational to desire – or at least, that idea can be understood only in relation to other desires that the agent may have, so as to produce a greater harmony of desire.

This leads to a desire-based theory of the good.

But if reason can take us to a more universal perspective, then we can understand the claim that it would be rational for us to desire some goods even if we have no present desire for them. On that basis, it becomes more plausible to argue that the good consists in having certain mental states, rather than in the satisfaction of desires or preferences.

See more at: http://blog.oup.com/2014/06/the-point-of-view-of-the-universe/

 

 

Discover more: the chapter 'Religion: The Invisible Social Determinant' in Religion as a Social Determinant of Public Health is now free and available to read until the end of November. Get access to the full text of this book, as well as almost 1,500 Oxford Religion titles, by recommending OSO to your librarian today.