
David B. Resnik and Kevin C. Elliott

Social responsibility is an essential part of the responsible conduct of research that presents difficult ethical questions for scientists. Recognizing one’s social responsibilities as a scientist is an important first step toward exercising social responsibility, but it is only the beginning, since scientists may confront difficult value questions when deciding how to act responsibly. Ethical dilemmas related to socially responsible science fall into at least three basic categories: 1) dilemmas related to problem selection, 2) dilemmas related to publication and data sharing, and 3) dilemmas related to engaging society. In responding to these dilemmas, scientists must decide how to balance their social responsibilities against other professional commitments and how to avoid compromising their objectivity. In this article, we will examine the philosophical and ethical basis of social responsibility in science, discuss some of the ethical dilemmas related to exercising social responsibility, and make five recommendations to help scientists deal with these issues.

Keywords: social responsibility, scientific research, ethics, politics, objectivity, values

Numerous scientists and philosophers have argued that scientists have a responsibility to address the social implications of their research (Edsall 1975, Shrader-Frechette 1994, Reiser and Bulger 1997, Kitcher 2001, Wing 2002, Beckwith and Huang 2005, Forge 2008, Committee on Science, Engineering, and Public Policy 2009, Douglas 2009, Elliott 2011, Frankel 2012, Børsen et al. 2013, Shamoo and Resnik 2014).1 Many professional codes specifically mention duties related to social responsibility in science (e.g. American Anthropological Association 2012, American Chemical Society 2012, American Society for Microbiology 2005). The National Institutes of Health (NIH) requires that funded students and trainees receive instruction in the responsible conduct of research (RCR), which should include education in social responsibility (National Institutes of Health 2009).

History contains some striking examples of scientists who demonstrated a strong commitment to social responsibility. In 1939, Albert Einstein, at the urging of Hungarian physicist Leo Szilard, wrote a letter to President Roosevelt informing him about Germany’s intent to develop atomic bombs from enriched uranium. Einstein advised Roosevelt to allocate more funds to develop an atomic bomb to counter the threat from Germany. Though Einstein was a lifelong pacifist, he could not ignore the threat to world peace posed by the Nazi regime (Einstein 1939). After the war, Einstein and other physicists advocated using atomic energy only for peaceful purposes (Shamoo and Resnik 2014). In 1962, wildlife biologist Rachel Carson published Silent Spring, a book that warned scientists and the public about the dangers posed by overuse of dichlorodiphenyltrichloroethane (DDT) and other pesticides. Carson’s book helped to launch the modern environmental movement and led to new pesticide regulations (Carson 1962). During the 1970s, pediatrician and child psychiatrist Herbert Needleman conducted important research demonstrating the adverse impacts of lead on human development. Needleman informed the public about health hazards of lead and advocated for regulations to ban it as an ingredient in gasoline and household paint (Shamoo and Resnik 2014).

Acknowledging one’s social responsibilities as a scientist is only the beginning of dealing with the value implications of one’s work, since responsibility requires one to address the moral, political, social, and policy issues at stake. In this article, we will examine the philosophical and ethical basis of social responsibility in science, discuss some of the ethical dilemmas related to exercising social responsibility, and make some recommendations to help scientists deal with these issues.

The current consensus concerning the social responsibilities of scientists stands in sharp contrast to the opinion that prevailed several decades ago, which held that the primary duty of the investigator is to conduct research, and that policymakers, scholars, and the public should deal with the consequences of new knowledge (Resnik 1998, Pielke 2007). The main rationale for this viewpoint was the belief that science is objective: science deals with facts, not values (Ayer 1952, Popper 1959, Snow 1959, Nagel 1961). The objectivity of science has traditionally been understood in two different ways2: 1) science is grounded in mind-independent reality, i.e. it is true or factual;3 and 2) science is value-free, i.e. scientific judgments and decisions are based on evidence and reasoning, not on moral, political, or other values (Longino 1990, Douglas 2004). Our discussion will focus on the second sense of objectivity.

To understand the debate about values in science, it is important to clarify a couple of points. First, one needs to specify what is meant by a ‘value’. A value is something that is desired or sought, such as happiness, economic prosperity, social justice, beauty, or environmental protection. Values may also include epistemic goals, such as knowledge or truth, as well as desired epistemic features of hypotheses, theories, and models, such as empirical support, simplicity, generality, precision, rigor, testability, and explanatory power (Longino 1990, Kitcher 2001, Haack 2003). Proponents of the value-free thesis have long recognized that epistemic values can and should influence scientific judgment and decision-making. The debate about the role of values in science is primarily about the role of non-epistemic (e.g. moral, political, social, or economic) values in scientific judgment and decision-making (Longino 1990, Resnik 1998, 2007, Douglas 2009, Elliott 2011). Accordingly, our paper will focus on these values.

Second, one needs to distinguish between descriptive and normative senses of the notion that science is value-free (Longino 1990). To claim that science is value-free in the descriptive sense is to assert that science is not influenced by non-epistemic values. To claim that science is value-free in the normative sense is to assert that science ought not to be influenced by non-epistemic values. It is important to distinguish between these two senses of the value-free thesis, since one could admit that actual science, as practiced by human beings, is often influenced by non-epistemic values, while still maintaining that scientists ought to minimize the impact of these values on their research (Douglas 2004, Resnik 2007).

Since the 1950s, historians (e.g. Kuhn 1961, 1977), sociologists (e.g. Barnes 1977, Latour and Woolgar 1986), and philosophers (e.g. Rudner 1953, Laudan 1977, Harding 1986) have challenged the notion that science is or ought to be value-free by arguing that non-epistemic values can impact science in many different ways.4 For example, non-epistemic values often influence the decision to conduct research on a particular topic (i.e., problem selection). A pharmaceutical company may decide to fund research on a drug to treat hypertension, as opposed to a vaccine for an infectious disease affecting people in tropical regions, because there is a larger market for the hypertension drug. At the governmental level, funding agencies allocate money to support research on problems the public regards as important or of immediate concern.

Non-epistemic values often play a role in research design as well. For example, research with human subjects should be designed to protect the rights and welfare of participants, and animal experiments should be designed to minimize pain and suffering, wherever possible (Shamoo and Resnik 2014). In some cases, research sponsors have selected designs with an aim toward obtaining a particular result. If a company is interested in producing experimental evidence that its chemical has no adverse effects in a human population, it could try to achieve this goal by conducting a small study lacking adequate statistical power to demonstrate those effects.
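
The point about statistical power can be made concrete with a minimal simulation sketch (our illustration; the effect size, sample sizes, and significance level below are hypothetical). With a modest true effect, a study with 20 participants per group rarely yields a statistically significant result, whereas one with 200 per group usually does:

```python
# Minimal sketch: estimating statistical power by simulation.
# Hypothetical numbers: true effect of 0.3 standard deviations, alpha = 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def estimated_power(n_per_group, effect_size=0.3, alpha=0.05, n_sims=2000):
    """Fraction of simulated two-group studies that detect the effect."""
    detections = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n_per_group)
        exposed = rng.normal(effect_size, 1.0, n_per_group)
        _, p_value = stats.ttest_ind(exposed, control)
        if p_value < alpha:
            detections += 1
    return detections / n_sims

print("n = 20 per group:  power ~", estimated_power(20))   # roughly 0.15
print("n = 200 per group: power ~", estimated_power(200))  # roughly 0.85
```

A null result from the smaller design therefore provides little evidence that the chemical is safe, which is precisely what makes such a design attractive to a sponsor seeking a "no adverse effect" finding.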

Many different values can come into play in data interpretation, since this aspect of science involves drawing conclusions about the scientific or policy significance of one’s results. For example, a sponsor of a clinical trial of a new drug might argue that the data show that its product yields significant benefits that outweigh the risks and that it should therefore be approved for marketing. A toxicologist might argue that data from his or her research on the adverse effects of a chemical tested in mice show that the chemical presents a danger to the human population and should be studied further or regulated.

Decisions and judgments related to accepting or rejecting a theory or hypothesis often involve non-epistemic values, since theories and hypotheses can have significant consequences for society. In a paper that challenged the prevailing orthodoxy concerning the objectivity of science, Rudner (1953) argued that scientists must make value judgments when they accept or reject hypotheses, because the amount of evidence needed to accept a hypothesis depends on the consequences of accepting it. For example, scientists should use very high standards of evidence to accept hypotheses concerning the safety and efficacy of new drugs, because these decisions can have significant implications for human health, but lower standards of evidence may be applied to decisions without significant social implications.
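
Rudner’s point can be sketched with a simple expected-cost comparison (our illustration, not Rudner’s own formalism). Suppose the evidence gives hypothesis H probability p, accepting H when it is false costs C_FP, and rejecting H when it is true costs C_FN. Accepting H has the lower expected cost only when

$$(1-p)\,C_{FP} < p\,C_{FN} \quad\Longleftrightarrow\quad p > \frac{C_{FP}}{C_{FP}+C_{FN}}.$$

The larger the cost of a false positive relative to a false negative (approving an unsafe drug, say), the closer the evidential threshold moves to 1; when little is at stake, a lower threshold suffices.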

Before concluding this section it is important to note that values may operate at a conscious or subconscious level (Resnik 2007, Resnik and Elliott 2013). At a conscious level, a value would impact science by playing a role in deliberate choices that affect research. Some deliberate choices might include designing a study to minimize harm to human subjects, falsifying data in order to maintain grant funding, or not publishing research that could be used to develop a bioweapon (Resnik 2013). At a subconscious level, values might impact science by influencing judgment and reasoning in ways that scientists are not aware of. Value influences might go unnoticed because they are inherent in the institutional, social, and economic context of research. For example, a scientist whose research is sponsored by a pharmaceutical company might make choices pertaining to data analysis or interpretation that are favorable to the company. While the scientist might claim that corporate sponsorship has not impacted her research, she might not even be aware of how it has affected her decision-making. Psychological research has shown that people are often unaware of economic, political, cultural and other biases that influence their judgment and decision-making (Cialdini 1993).

Assuming that the arguments and evidence discussed above support the view that science is not and should not be value-free, a question naturally arises: “What is the proper role of non-epistemic values in science?” This is a complex issue that we cannot answer fully here.5 The gist of our position is that scientists should follow ethical standards and values (such as honesty, openness, fairness, accountability, and respect for human and animal subjects) in the conduct and communication of their research and generally strive for value-neutrality with respect to research outcomes (i.e. data or results). By “value-neutrality” we do not mean that the scientific research is completely value-free; we mean only that research outcomes should not be deliberately biased toward any particular set of competing values in a dispute, especially without making the influences of values transparent (Elliott and Resnik 2014). For example, a researcher who falsifies, distorts, or suppresses data in order to promote an economic or political agenda would be violating the value-neutrality condition, but a researcher who takes steps to protect human subjects from harm probably would not be.

There are at least two arguments for value-neutrality in science. The first is that value-neutrality is an essential feature of scientific methodology and ethics (Haack 2003, Resnik 2007). Procedures, methods, experiments, and tests used in science are designed to minimize bias and promote objectivity. Ethical norms, such as honesty and openness, also promote objectivity. Scientists trust that their colleagues will strive for value-neutrality, and they rely on research published by others with this expectation in mind. Scientists who allow values to skew their research results undermine this trust and hinder the growth of scientific knowledge.

The second argument is that the public justifiably relies on science to provide facts and expert opinions that serve as the basis for fair and effective policies (Resnik 2009, Resnik 2011). Scientific facts and expert opinions can help resolve public policy debates because people view them as independent of particular moral, social, political or other values (Resnik 2007, 2009). Science can serve as a common ground between competing viewpoints. Without some agreement on scientific issues, public policy debates can be difficult to resolve, because they may boil down to conflicts of incommensurable values. Debates about climate change policies have been difficult to resolve, in part, because opposing stakeholders have disputed the scientific facts (Pielke 2007). Opposing sides disagree about whether global warming is occurring and whether human activities are causing global warming. Scientific research related to climate change has become highly politicized, and the objectivity of climate researchers has been called into question (Pielke 2007). Scientists working on climate change, and other issues with implications for social policy, need to strive for value-neutrality to legitimize their research in the minds of the public.

While scientists and the public rightly expect researchers to strive for value-neutrality, one might argue that since values often influence scientific judgments and decisions, it is better for scientists to discuss the values that may impact their reasoning instead of trying to maintain the false appearance of complete value-neutrality (Elliott and Resnik 2014). Reasonable scientists may disagree about the appropriate role of non-epistemic values in science, and these disagreements contribute to the ethical challenges that we discuss below.

Having examined the relationship between science and non-epistemic values, we can now develop the argument for social responsibility. The argument begins with the thesis (defended above) that science is not and should not be value-free. If this is the case, then scientists face two choices: they can either ignore the value implications of their work or they can address them (i.e., attempt to respond to value judgments in an ethically responsible manner). Deciding to ignore the value implications of one’s research would be irresponsible, because behaving responsibly requires one to deal with the implications of one’s conduct (Douglas 2009, Elliott 2010). Since scientists should act responsibly, they should address the value implications of their work.

It is important to understand that this argument only shows that scientists should address the non-epistemic values inherent in their conduct; it does not show that scientists should promote any particular values, such as public health, human rights, environmental protection, or justice, nor that they should follow particular normative theories, such as utilitarianism, Kantianism, or egalitarianism. Based on the argument provided thus far, scientists could therefore address the value implications of their research differently, due to differing value commitments (Elliott and Resnik 2014, Resnik and Elliott 2014).

Although the argument outlined above does not show that scientists have any particular obligations to society, at least three arguments do. First, all people have moral duties to avoid causing harm to others. In science, the obligation to do no harm implies that investigators should not engage in activities, such as some types of dangerous research, which are likely to cause net harm to society (Kitcher 2001). Second, all people have obligations to help others. For example, if you see someone drowning in a pool, you should take some action to help them, such as throwing them a life preserver or calling a lifeguard. It would also be wrong to avoid rescuing someone in danger if one could do so with minimal cost to oneself, especially if one had unique skills or training to be of assistance (Shrader-Frechette 1994, Elliott 2011). Scientists can honor their obligation to help others by engaging in activities that benefit society, such as research or education (Shamoo and Resnik 2014). Third, scientists have obligations to society because they have benefitted, directly or indirectly, from government support of their education and research. Government agencies, such as the NIH and National Science Foundation (NSF), support research and education conducted at universities and colleges. State governments also provide considerable support for scientific research and education by funding universities and colleges and providing land and other resources. Researchers who cause harm or fail to do good may undermine public support for science (Shamoo and Resnik 2014).

Recognizing that one should address the value implications of one’s research is an important first step toward exercising social responsibility, but it is only the beginning, since scientists may still face difficult ethical questions related to acting responsibly. In this section, we will examine three types of dilemmas scientists frequently encounter when considering their social responsibilities.

One kind of ethical challenge related to social responsibility is deciding whether proposed research is worthwhile. This is a question scientists cannot avoid, since engaging in research involves at least an implicit endorsement of its value. Non-epistemic values may have a bearing on the decision to conduct (or not conduct) some types of research. For example, the physicists and engineers who worked on the Manhattan Project faced difficult moral questions concerning their involvement in nuclear weapons research. Many of them conducted this research out of a sense of moral obligation to help the war effort, but they also wanted to promote peaceful uses of nuclear energy (Resnik 1998). A social scientist who is considering whether to conduct a study of the role of race and genetics in intelligence would need to address issues related to its implications for racial prejudice and discrimination (Kitcher 2001).

Problem selection also raises value issues for research sponsors and institutions. Government sponsors must decide whether a research proposal merits funding. As noted earlier, government agencies usually consider not only issues related to scientific design of research but also social implications. Peer review committees may address the social implications of grant proposals during their deliberations. Agency officials must also decide how to prioritize their research investments among different areas of study (Resnik 2001, 2009). For example, the NIH considers the impact of research on public health when allocating funds between different parts of its research portfolio. The NSF requires grant proposals to address the social impacts of research (Shamoo and Resnik 2014). Government agencies must also decide whether to fund research with potentially dangerous consequences for society (see discussion below). While private companies tend to focus on how their research funding decisions will impact profits, they may also consider the social implications of their research investments. Research institutions may need to address value questions when deciding whether to enter into research contracts with private companies, because companies may seek to impose contractual requirements for research funding that interfere with the free and open exchange of scientific data and information (Resnik 2007). Likewise, value questions arise when deciding whether to conduct classified research on campus because classification imposes restrictions on the sharing of research data and information (Resnik 2009, Soranno et al. 2014).

Value questions sometimes arise concerning publication and data sharing because dissemination of knowledge can have good or bad consequences for society. Scientists must sometimes decide whether to publish a study, where to publish, and how to publish it (i.e., whether to withhold some information or include a discussion that softens some of the impact of the results). Similar questions arise in data sharing.

A recent example from virology illustrates the moral conundrums related to publication of potentially dangerous research funded by the NIH. In 2011, two research teams, one led by Ron Fouchier at the Erasmus Medical Center in the Netherlands, and another led by Yoshihiro Kawaoka at the University of Wisconsin-Madison, conducted experiments to genetically modify the H5N1 avian influenza virus so that it could be transmissible by air between mammals, including humans. Currently, people can only contract this lethal virus through direct contact with infected birds. The investigators claimed that the research had the potential to benefit society by providing public health officials with information for monitoring dangerous mutations in the wild to prevent disease outbreaks. They also claimed that the research could be used to develop vaccines or treatments. However, the NIH was concerned that publishing the results of these experiments could lead to a global pandemic as a result of accidental contamination of laboratory workers or deliberate misuse, i.e. terrorism.

The NIH asked the National Science Advisory Board for Biosecurity (NSABB) to review the research and make a recommendation concerning publication. The journals to which the investigators submitted their research, Science and Nature, held up their reviews while the NSABB deliberated. The NSABB recommended in December 2011 that both papers should be published only if key details that would allow someone to replicate the experiments were removed. However, it reversed its decision in March 2012 after the authors submitted fuller papers that provided more information about public health benefits and biosafety measures. The NSABB also obtained more information about the practical and legal difficulties with redacted publication. The journals published the papers shortly after the NSABB made its final recommendation (Resnik 2013).6

Although most of the public discussion of publication issues has focused on potentially dangerous bioscience research, publication and data sharing issues arise in other areas of science. Researchers often remove personal identifiers prior to sharing human subject information with other scientists to protect the confidentiality of participants. It may also be necessary to remove some demographic information in some cases to protect confidentiality, particularly with small studies. At one time, researchers assumed that sharing de-identified genomic data posed no risks to human subjects, but this assumption no longer holds, because statisticians have developed methods of re-identifying individuals in de-identified genomic databases. To deal with these issues, many research institutions require data recipients to sign data use agreements that obligate recipients not to attempt to identify individuals or share data with other researchers without permission (Resnik 2010).
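
As a rough illustration of these de-identification steps, the sketch below (ours; the dataset and column names are hypothetical) drops direct identifiers, coarsens demographic fields, and flags combinations shared by very few participants, which are the ones most vulnerable to re-identification in small studies. It is not a substitute for a data use agreement or formal disclosure review.

```python
# Minimal de-identification sketch with a hypothetical dataset;
# it does not replace data use agreements or formal disclosure review.
import pandas as pd

records = pd.DataFrame({
    "name":      ["A. Smith", "B. Jones", "C. Lee"],
    "zip_code":  ["27709", "27709", "48825"],
    "age":       [34, 37, 61],
    "diagnosis": ["X", "X", "Y"],
})

# 1. Remove direct identifiers before sharing.
shared = records.drop(columns=["name"])

# 2. Coarsen quasi-identifiers: 10-year age bands, 3-digit ZIP prefixes.
shared["age_band"] = (shared["age"] // 10 * 10).astype(str) + "s"
shared["zip3"] = shared["zip_code"].str[:3]
shared = shared.drop(columns=["age", "zip_code"])

# 3. Flag small cells: demographic combinations held by only one or two
#    participants remain vulnerable to re-identification.
cell_sizes = shared.groupby(["zip3", "age_band"]).size()
print(shared)
print(cell_sizes[cell_sizes < 2])
```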

Researchers may also need to take steps to protect communities from harms resulting from publication or data sharing. For example, if researchers are conducting a study of sexually transmitted diseases and sexual abuse in a particular community, they may need to decide whether or how to publish results that could lead to adverse consequences for the community, such as discrimination or bias (Resnik and Kennedy 2010). Researchers may decide to withhold the name and location of a community in publications and refer to it only in demographic terms. Researchers who conduct community-based research have formed community advisory boards to help with study design and recruitment and deal with the potential consequences of publication (Resnik and Kennedy 2010).

Social responsibility implies obligations to help the public address the implications of research. For example, Carson urged society to adopt pesticide regulations and Needleman argued for regulations on lead in gasoline and paint. There are a variety of ways that scientists may engage the public, such as:

  • Discussing the policy implications of research in scholarly articles and commentaries, press releases, and university courses;

  • Providing expert testimony7 on the ethical, social, legal, or policy implications of research;

  • Participating in non-governmental organizations that deal with the value and policy implications of science and technology;

  • Publicly advocating for specific policies related to one’s research through editorials, letters to the editor, public speeches, media interviews, etc.;

  • Blowing the whistle on illegal or unethical activities in industry, government, or academia.

Researchers may face ethical dilemmas when deciding whether or how to engage the public. Scientists who conduct policy-relevant research often must decide whether to draw policy conclusions from their research or let it speak for itself. For example, public health researchers who are studying the impact of needle exchange programs on the incidence of human immunodeficiency virus (HIV) and hepatitis C must decide whether to recommend that communities adopt needle exchange programs. Because value commitments often operate at a subconscious level, scientists who decide not to include any policy recommendations in their published articles or commentaries may unwittingly make value-based assumptions that impact their research design, data analysis, or data interpretation (Elliott and Resnik 2014).

Scientists who conduct policy-relevant research may find themselves in unfamiliar and uncomfortable territory. Most scientists are not accustomed to answering questions about the implications of their research or addressing controversial moral, social, political, or policy debates. They may also lack the education, experience, or disposition to handle these issues effectively. It is therefore easy to understand why some scientists prefer to avoid dealing with the value implications of their research. They may feel more comfortable in the laboratory than in the public spotlight. However, as we stressed earlier, scientists should not abdicate their social responsibilities. Scientists who conduct controversial research must be prepared to deal with value questions.

Scientists who conduct policy-relevant research may also be concerned that their objectivity will be threatened if they engage the public. Threats to objectivity come in two forms. First, although every researcher should be mindful of the potential impact of value-based assumptions, those who conduct policy-relevant science may be especially concerned that their own stances on policy issues may bias their research in subtle ways that they are not aware of. For example, a researcher with a strong commitment to public health may be concerned that she will unknowingly introduce biases into her research on the hazardous effects of a chemical. Researchers who are concerned about inadvertently introducing values into their research should take appropriate measures to reduce bias by seeking criticism and feedback from independent parties before, during and after the research process.

Second, scientists who engage society may risk their reputation for objectivity (Pielke 2007). Research may appear biased to laypeople, policymakers, and other researchers even when there are good reasons to believe that it is reliable and methodologically sound. A salient example of the threats to the reputation for objectivity can be found in climate science, where opposing sides have accused each other of conducting research that has been biased by values (Pielke 2007, Resnik 2012). Those who favor action to prevent or mitigate climate change have argued that scientists who oppose the consensus view8 on climate change have been influenced by industry interests, while those who oppose action to deal with global warming have argued that climate researchers who support the consensus view have been influenced by environmental values (Pielke 2007).

Scientists may therefore be wary of engaging the public because they fear that this will lead people to question their objectivity (Pielke 2007). There is no easy way out of this dilemma, since it is difficult to control other people’s perceptions of one’s conduct or demeanor. However, scientists who exercise social responsibility can help protect their reputations by openly discussing their value commitments and clearly distinguishing between the scientific evidence related to an issue and their personal opinions (Resnik and Elliott 2013, 2014). Scientists who draw policy implications from their research should disclose their own value assumptions and acknowledge that other people might draw different implications from their research. Transparency concerning one’s value assumptions and commitments should be the rule (Elliott and Resnik 2014).

Though most researchers have a strong commitment to objectivity, some may decide to intentionally incorporate moral, social, political, or other values into their research. This decision is controversial, because scientists and the public justifiably expect research to be objective, and any efforts to introduce values into one’s research could undermine its credibility. Nevertheless, some scholars have argued that there are situations in which it is appropriate for scientists to incorporate values into the analysis or interpretation of data, or hypothesis acceptance, as long as they are appropriately transparent about these value commitments (Douglas 2009; Elliott 2011; Elliott and Resnik 2014). This may occur when the evidence about an issue is complex and difficult to interpret and when policymakers need scientists to provide the best assessment of the evidence that they can. For example, scientists who are conducting research on the risks of industrial chemicals may decide to allow a concern for public health to guide their data interpretation (Elliott and Resnik 2014).

Scientists who engage the public may also face backlash from industry or other interests. For example, chemical industry groups and some leading scientists sought to discredit Carson’s work. They argued that her conclusions were invalid and that she did not have sufficient scientific qualifications to assess the safety of pesticides. Some of her opponents described her as an irrational woman (Resnik 2012). Herbert Needleman faced strong opposition from lead manufacturers, including bogus charges of research misconduct. Though Needleman was exonerated, he spent a great deal of time and money fighting these allegations (Shrader-Frechette 2012).

When public engagement threatens a scientist’s personal, financial, or other interests, he or she must choose between pursuing particular avenues for benefiting the public and self-protection. This will often not be an easy choice to make, and scientists may make it differently. Some may choose to sacrifice their careers for a cause that they view as just, while others may seek to promote the good of society in a way that does not threaten their livelihood. For example, a scientist could make an anonymous report to the press concerning a company’s illegal or unethical activities, instead of revealing his or her name in public. We do not claim that there is one correct way to exercise social responsibility when engaging the public. Rather, we wish to call attention to the fact that social responsibility often incurs considerable personal costs or risks.

Finally, engaging the public may involve a substantial expenditure of time and effort. While discussing the value and policy implications of one’s research with a journalist may require only an hour or so of a scientist’s time, providing expert testimony may require several days of work, and publicly advocating for a specific policy could involve an extensive commitment of time. Since these activities may compete with other obligations, responsibilities, and interests, scientists must decide how to balance competing commitments and values. Some scientists may devote considerable time and effort toward clarifying the policy implications of their work, while others may not.

Social responsibility is an essential part of the responsible conduct of research that presents difficult ethical questions for scientists. Recognizing one’s social responsibilities as a scientist is an important first step toward exercising social responsibility, but it is only the beginning. Scientists who exercise social responsibility often face ethical dilemmas concerning their obligations to society. These dilemmas typically arise in three different areas: problem selection, publication and data sharing, and public engagement. Exercising social responsibility sometimes presents hazards for scientists, since they may face public backlash and scrutiny, and may risk compromising their own objectivity or their reputation for objectivity.

To help deal with these issues, we make five recommendations. First, collaborations with scholars who have some experience and expertise in ethics, politics, or public policy may help scientists deal with the value implications of their work. Scientists who are conducting or planning to conduct research that raises controversial issues may wish to consult with ethicists, attorneys, philosophers, or other humanists, or even ask them to be a member of the research team. Some institutions offer research ethics consultation services to help scientists deal with ethical questions related to their work (de Melo-Martin et al. 2007). Organizations that fund and oversee research may also want scholars with expertise in ethics, policy, or politics to participate in the review of research that raises potentially controversial issues. Scientists may also decide to work with public relations officials in deciding how to communicate their research to the public (Watts 2014).

Second, scientists can frequently alleviate or defuse charges that their work is biased by disclosing and discussing their value assumptions and commitments when drawing policy implications from their research. They should distinguish between what their data and results clearly show and the policy conclusions they infer from their data and results. Scientists should be mindful of how their involvement in controversial issues may impact the public’s perception of their work.

Third, education in the responsible conduct of research should include ample time to discuss ethical questions related to exercising social responsibility. Scientists should understand that these are important issues that are not always clear-cut and require thoughtful reflection.

Fourth, scholarly societies can create codes of conduct and universities can create policies that provide basic guidelines for socially responsible practice (see e.g., Kourany 2010).

Fifth, scientific organizations, government agencies, and policy-oriented institutions may find it valuable to create advisory bodies that can reflect on difficult ethical issues related to the practice of socially responsible science. The Asilomar Conference on Recombinant DNA in 1974 provides an excellent example of a meeting organized by scientists to reflect on the social implications of their research (Singer 2001). In recent years, numerous venues have been developed for this sort of reflection, including consensus conferences, government science advisory panels, citizen juries and municipal advisory committees, and “citizen science” initiatives (Kleinman 2000; Elliott 2011). As calls for socially responsible science have increased, it has become clearer that scientists can benefit by working with a broader range of advisors and collaborators.

This article is the work product of an employee or group of employees of the National Institute of Environmental Health Sciences (NIEHS), National Institutes of Health (NIH). However, the statements, opinions or conclusions contained therein do not necessarily represent the statements, opinions or conclusions of NIEHS, NIH, or the United States government.

1We will assume that social responsibility encompasses more than duties to society (as a whole) and includes duties to individuals, groups, communities, and the environment.

2Douglas (2004) distinguishes between eight different senses of scientific objectivity. We will focus on only two here.

3There is a large philosophical literature examining the relationship between science and reality that we will not address here. See Chakravartty (2010).

4There is not sufficient space to review this debate here. For further discussion see Longino (1990), Resnik (2007), Douglas (2009), Elliott (2011).

5For further discussion, see Resnik 2007, 2009, Douglas 2004, 2009, Elliott 2011, Elliott and Resnik 2014.

6The NSABB does not have the legal authority to censor or classify research. It only makes recommendations that other federal agencies may choose to follow.

7Expert testimony includes testimony in a court of law or on government committees or boards.

8The consensus view is that human activities, such as emissions of greenhouse gases and deforestation, are partly responsible for the rise in global temperatures that has occurred in the last hundred years and is expected to continue even if current practices change (Solomon et al. 2007).

David B. Resnik, Bioethicist, National Institute of Environmental Health Sciences, National Institutes of Health, Box 12233, CU 03, Research Triangle Park, NC, 27709, USA. Phone: 919 541 5658, Fax: 919 541 9854.

Kevin C. Elliott, Michigan State University, Associate Professor, Lyman Briggs College, Department of Fisheries and Wildlife, and Department of Philosophy, W-31 Holmes, East Lansing, MI 48825, USA. Phone: 517 432 7374.

  • American Anthropological Association. Principles of Professional Responsibility. 2012. http://ethics.aaanet.org/category/statement/. Accessed 14 July 2014.
  • American Chemical Society. The Chemical Professional’s Code of Conduct. 2012. http://www.acs.org/content/acs/en/careers/profdev/ethics/the-chemical-professionals-code-of-conduct.html. Accessed 14 July 2014.
  • American Society for Microbiology. Code of Ethics. 2005. http://www.asm.org/index.php/governance/code-of-ethics. Accessed 14 July 2014.
  • Ayer AJ. Language, Truth, and Logic. 2nd ed. New York: Dover; 1952.
  • Barnes B. Interests and the Growth of Knowledge. London: Routledge; 1977.
  • Beckwith J, Huang F. Should we make a fuss? A case for social responsibility in science. Nature Biotechnology. 2005;23(12):1479–1480.
  • Børsen T, Antia AN, Glessmer MS. A case study of teaching social responsibility to doctoral students in the climate sciences. Science and Engineering Ethics. 2013;19(4):1491–1504.
  • Carson R. Silent Spring. New York: Houghton-Mifflin; 1962.
  • Chakravartty A. A Metaphysics for Scientific Realism. Cambridge: Cambridge University Press; 2010.
  • Cialdini RB. Influence: The Psychology of Persuasion. New York: Quill William Morrow; 1993.
  • Committee on Science, Engineering, and Public Policy, National Academy of Sciences, National Academy of Engineering, and Institute of Medicine. On Being a Scientist: A Guide to Responsible Conduct in Research. 3rd ed. Washington, DC: National Academies Press; 2009.
  • de Melo-Martín I, Palmer LI, Fins JJ. Viewpoint: developing a research ethics consultation service to foster responsive and responsible clinical research. Academic Medicine. 2007;82(9):900–904.
  • Douglas H. The irreducible complexity of objectivity. Synthese. 2004;138(3):453–473.
  • Douglas H. Science, Policy, and the Value-Free Ideal. Pittsburgh, PA: University of Pittsburgh Press; 2009.
  • Edsall JT. Scientific freedom and responsibility. Science. 1975;188(4189):687–693.
  • Einstein A. Letter to Franklin D. Roosevelt. 2 August 1939.
  • Elliott KC. An ethics of expertise based on informed consent. Science and Engineering Ethics. 2006;12(4):637–661.
  • Elliott KC. Hydrogen fuel-cell vehicles, energy policy, and the ethics of expertise. Journal of Applied Philosophy. 2010;27(4):376–393.
  • Elliott KC. Is a Little Pollution Good for You? Incorporating Societal Values in Environmental Research. New York: Oxford University Press; 2011.
  • Elliott KC. Selective ignorance and agricultural research. Science, Technology, and Human Values. 2013;38(3):328–350.
  • Elliott KC, Resnik DB. Science, policy, and the transparency of values. Environmental Health Perspectives. 2014;122(7):647–650.
  • Forge J. The Responsible Scientist: A Philosophical Inquiry. Pittsburgh, PA: University of Pittsburgh Press; 2008.
  • Frankel MS. Regulating the boundaries of dual-use research. Science. 2012;336(6088):1523–1525.
  • Gilmer PJ, DuBois M. Teaching social responsibility: the Manhattan Project. Commentary on “The Six Domains of Research”. Science and Engineering Ethics. 2002;8(2):206–210.
  • Haack S. Defending Science within Reason. New York: Prometheus; 2003.
  • Harding S. The Science Question in Feminism. Ithaca, NY: Cornell University Press; 1986.
  • Joravsky D. The Lysenko Affair. Chicago: University of Chicago Press; 1986.
  • Kitcher P. Science, Truth, and Democracy. New York: Oxford University Press; 2001.
  • Kleinman D. Science, Technology, and Democracy. Albany: State University of New York Press; 2000.
  • Kourany J. Philosophy of Science after Feminism. New York: Oxford University Press; 2010.
  • Kuhn TS. The Copernican Revolution. Cambridge, MA: Harvard University Press; 1957.
  • Kuhn TS. The Structure of Scientific Revolutions. 2nd ed. Chicago: University of Chicago Press; 1962, 1970.
  • Kuhn TS. The Essential Tension. Chicago: University of Chicago Press; 1977.
  • Latour B, Woolgar S. Laboratory Life: The Construction of Scientific Facts. Princeton, NJ: Princeton University Press; 1986.
  • Laudan L. Progress and Its Problems. Berkeley, CA: University of California Press; 1977.
  • Longino H. Science as Social Knowledge. Princeton, NJ: Princeton University Press; 1990.
  • MacDaniels LH. Some social implications of the scientific method. Science. 1941 Sep 12;94:243–248.
  • Merriam JC. Some responsibilities of science with relation to government. Science. 1934 Dec 28;80:597–601.
  • Nagel E. The Structure of Science. New York: Harcourt, Brace, and World; 1961.
  • National Institutes of Health. Update on the Requirement for Instruction in the Responsible Conduct of Research. 2009. http://grants.nih.gov/grants/guide/notice-files/NOT-OD-10-019.html. Accessed 14 July 2014.
  • National Institutes of Health. Ethical, Legal, and Social Implications Program. 2014. http://www.genome.gov/elsi/. Accessed 14 July 2014.
  • Pielke R. The Honest Broker: Making Sense of Science in Policy and Politics. Cambridge: Cambridge University Press; 2007.
  • Pimple KD. Six domains of research ethics: a heuristic framework for the responsible conduct of research. Science and Engineering Ethics. 2002;8(2):191–205.
  • Popper K. The Logic of Scientific Discovery. London: Hutchinson; 1959.
  • Reiser JM, Bulger RE. The social responsibilities of biological scientists. Science and Engineering Ethics. 1997;3(2):137–143.
  • Resnik DB. The Ethics of Science. New York: Routledge; 1998.
  • Resnik DB. Setting biomedical research priorities: justice, science, and public participation. Kennedy Institute of Ethics Journal. 2001;11(2):181–204.
  • Resnik DB. The Price of Truth: How Money Affects the Norms of Science. New York: Oxford University Press; 2007.
  • Resnik DB. Playing Politics with Science: Balancing Scientific Independence and Government Oversight. New York: Oxford University Press; 2009.
  • Resnik DB. Genomic research data: open vs. restricted access. IRB. 2010;32(1):1–6.
  • Resnik DB. Environmental Health Ethics. Cambridge: Cambridge University Press; 2012.
  • Resnik DB. H5N1 avian flu research and the ethics of knowledge. Hastings Center Report. 2013;43(2):22–33.
  • Resnik DB, Elliott KC. Taking financial relationships into account when assessing research. Accountability in Research. 2013;20(3):184–205.
  • Resnik DB, Elliott KC. Bisphenol A and risk management ethics. Bioethics. 2014 [Epub ahead of print].
  • Resnik DB, Kennedy CE. Balancing scientific and community interests in community-based participatory research. Accountability in Research. 2010;17(4):198–210.
  • Rudner R. The scientist qua scientist makes value judgments. Philosophy of Science. 1953;20(1):1–6.
  • Shamoo AE, Resnik DB. Responsible Conduct of Research. 3rd ed. New York: Oxford University Press; 2014.
  • Shrader-Frechette KS. Ethics of Scientific Research. Boston: Rowman and Littlefield; 1994.
  • Shrader-Frechette KS. Research integrity and conflicts of interest: the case of unethical research-misconduct charges filed by Edward Calabrese. Accountability in Research. 2012;9(4):220–242.
  • Singer M. What did the Asilomar exercise accomplish, what did it leave undone? Perspectives in Biology and Medicine. 2001;44(2):186–191.
  • Snow CP. The Two Cultures and the Scientific Revolution. New York: Cambridge University Press; 1959.
  • Solomon S, Qin D, Manning M, Marquis M, Averyt K, Tignor M, Miller HL Jr, Chen Z. Climate Change 2007: The Physical Science Basis. Working Group I Contribution to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge: Cambridge University Press; 2007.
  • Soranno P, Cheruvelil K, Elliott K, Montgomery G. It’s good to share: why environmental scientists’ ethics are out of date. BioScience. 2014. doi:10.1093/biosci/biu169.
  • Watts S. Society needs more than wonder to respect science. Nature. 2014;508(7495):151.
  • Wing S. Social responsibility and research ethics in community-driven studies of industrialized hog production. Environmental Health Perspectives. 2002;110(5):437–444.
  • Wing S. Objectivity and ethics in environmental health science. Environmental Health Perspectives. 2003;111(14):1809–1818.