20 Compiling a Summary of Research Findings
The goal of research is to build a picture of the world.
The picture that develops will depend on the question asked:[1] Is it a research question? A policy question? A moral question? An engineering question? Something else?
A research question essentially asks, “What is?” What do we know about hypermasculinity? How much violence is there in media? What impact does priming have on the brain? Does having smartphones out during conversations decrease people’s satisfaction with the quality of the interaction? All these questions ask for descriptions of a) what is knowable (verifiable), b) what is known, and c) how it is known (assessment of the method’s soundness).
Moral questions address the issue of “what should be” and need research only to verify statements of fact. For example, consider what research would be useful for answering the question, “Is it wrong to have a television show that portrays Lucifer as an appealing character?” Fox’s announcement of Lucifer, a television series based on the premise that Lucifer is bored with hell and decides to vacation in Los Angeles, drew protests from the Christian right, which found the premise highly offensive. The American Family Association and One Million Moms both started online petitions to cancel Lucifer strictly on moral grounds. The show, both groups argued, “will glorify Satan as a caring, likeable person in human flesh” and fundamentally “mischaracterize Satan.” The show, and the network that sponsors it, will “[disrespect] Christianity and mock the Bible.” Most of these claims were moral judgments, not claims based on empirical evidence.[2] Perhaps the one issue that could be empirically established is whether the character of Lucifer was scripted to be appealing.
Some moral questions need considerable supporting evidence, such as the claim that using Twitter at all is bad because it increases the possibility of death. Here, the statement of fact—“increases the possibility of death”—needs to be supported by evidence. However, searches of several databases turned up evidence of increased fatalities only for using smartphones in dangerous situations, such as taking selfies on a ledge or using Twitter while driving. In the former case, the danger is not Twitter use at all, but taking selfies in hazardous places where the risk of severe injury is high. In the latter case, the danger is not specifically Twitter, but the more general category of reading and texting while driving, which is itself a subcategory of the dangers of “driving while distracted.” Since the evidence does not support the general claim that Twitter use per se increases the chance of dying, the moral statement, as written, is unsupported and the argument invalid. If, however, the moral question were limited to whether texting while driving is bad, then the claim is supported, provided the audience accepts the overarching moral premise that death is bad.
Policy analysts depend extensively on research, both to understand a current situation and to argue for which solution should be adopted.
A literature review for developing a policy to address the impacts of violence in media would at the very least need to look at:
How much violence is shown in media (a research question)?
Is the impact severe enough that the violence should not be tolerated in society (a moral question that may need empirical data)?
What should be done (a policy recommendation that should minimize the harmful impacts and maximize the positive impacts)?
Policy developers usually must work with available research, which means the policy analyst would need to extend findings from a specific research population to the population that would be affected by a proposed policy—a step that can introduce significant distortions if the analyst is not careful. A media study that coded the top-grossing films for aggressive acts, for example, would likely underplay the level and intensity of violence in horror films, since horror films—known to show more intense and graphic violence than many other genres—rarely appear among the top ten grossing films in any given year. We can therefore expect horror film audiences to be exposed to systematically more violence than a study of top-grossing films would indicate. A policy maker asking, “Should the U.S. regulate the level of violence in horror films?” should know that a study based on a list dominated by action and adventure films would likely underestimate the level and explicitness of violence to which horror fans are exposed.
Further, as with all studies, readers need to identify errors the researcher might have introduced in the method, and how those errors bias the study’s findings. Coding categories might be incomplete (content analysis), questions could be biased (survey research), or the instrument measuring subject change might not adequately detect change (experimental method). Some of these are fatal flaws, meaning that the entire study should not be used; others mean that certain questions or certain coding categories are flawed, but that the rest of the findings are still useful.
Constructing a Literature Review
The cost of obtaining verifiable findings is increased precision in the research question, which usually means narrowing a general question. In most cases, however, describing “what is” means developing an answer to a general question. To do this, analysts synthesize many studies, using the findings of each individual research project as one piece of an overall picture.
In the following chapter, we will walk through building a short literature review to address the question, “What do we know about violence in media?”
- And the research that has been done, of course. ↵
- Lucifer is bad (by definition, evil). Portrayal of Lucifer as appealing will decrease the perception that Lucifer is bad. People need to understand that Lucifer is evil to resist his temptation. Therefore, portraying Lucifer as charming is bad. There is no doubt that the television show intended Lucifer Morningstar to be appealing. The Fox press release described Lucifer as “Charming, charismatic and devilishly handsome.” Katherine Sangiorgio, “DC’s Lucifer Pilot Leaks Online Today,” Legion of Leia, August 10, 2015, https://web.archive.org/web/20150814091654/http://legionofleia.com:80/2015/08/dcs-lucifer-pilot-leaks-online-today/. ↵