My research program focuses on an age-old question germane to understanding individual differences in behavior, with implications for scholarship in Human Development, Family Science, Education, Psychology, and the Social Sciences more broadly: When researchers measure how two or more people view the same social experience, why do they often encounter discrepancies in these views? We see this question reflected in our relationships and larger social environments. Discrepant results occur within families, when caregivers differ in their views on child rearing. Discrepant results manifest when evaluating the effectiveness of education and mental health programs, as different outcome measures often point to differing levels of effectiveness (e.g., Schneider, 2020; Weisz et al., 2017). They appear within our institutional structures―plaintiffs and defendants arguing opposing sides of legal matters that are adjudicated in the courtrooms of common law countries, for example. When valued for the insights they reveal, discrepant results beget consensus building and mutually beneficial coalitions among disparate community partners. Yet, too often, discrepant results beget efforts to determine who has the “right” or “most accurate” view of a social experience, even when several views―not just one―each accurately reflect how relationships and social environments impact human behavior (see De Los Reyes, 2024). In fact, scholars can leverage discrepant results to understand individual behavior and how it varies within and across the relationships and social environments that typify day-to-day life (De Los Reyes et al., 2023). Understanding discrepant results and leveraging strategies that embrace and integrate them has implications for multiple scientific disciplines and decision-making settings in societies globally.
In this statement of my research philosophy, I describe the approach I take to addressing questions about interpreting discrepant results, and to using knowledge about these results to understand links among individual behavior, interpersonal relationships, and social environments.
In my scholarship, I gravitate toward areas of scholarly discourse where discrepant results happen often, and where uncertainty remains among scholars as to what these discrepancies reflect. Thus, I have dedicated my career to understanding the discrepant results produced in scholarship about mental health. The “origin story” of scholarship about discrepant results in mental health lies in a basic reality of assessing mental health domains. As with many other lines of discourse in the Social Sciences, scholars lack “gold standard” instruments to assess any one mental health domain. We do not have “one test” for detecting anxiety, or mood concerns, or concerns with sustaining attention, for example. Addressing this challenge requires not one instrument, but rather multiple, rigorously studied instruments. The most often-used and well-understood instruments consist of surveys and interviews administered to people and significant others in their lives, such as caregivers and teachers in the case of youth, or spouses and coworkers in the case of adults. Reports completed by these informants form the backbone of what we know about mental health interventions and special education programs. The data produced by these reports factor into researchers’ decisions about which interventions appear to “work” or produce beneficial effects for those who receive them. More broadly, authoritative bodies use these same data to make high-stakes decisions about interventions, with implications for determining which interventions should be “scaled up” and consumed by the public. Consider those entities tasked with classifying “Evidence-Based Interventions,” such as the United Kingdom’s National Institute for Health and Care Excellence (NICE) guidelines, the American Psychological Association’s Clinical Practice Guidelines, and the U.S. Department of Education’s What Works Clearinghouse (WWC), to name a few.
These entities classify interventions as “evidence-based” after considering many studies, each of which relied on multiple informants’ reports to arrive at the data used to estimate the effects of interventions. Yet, how might one make these classifications accurately, if studies do not reveal the same results about an intervention’s effects?
We all have likely experienced reading the news about a scientific study that produced a given result―the effects of caffeine on memory or health outcomes, for example―and thought, “I read about a study that said the opposite.” It turns out that these same kinds of discrepant results appear in mental health studies. Any two mental health studies often differ in their estimates of anything from the prevalence of mental health conditions to the effects of mental health treatments. Researchers even encounter discrepant results within a single study. Discrepant results factor into what we think we know about how often mental health conditions occur, what causes them, and how to improve mental health. Yet, researchers do not know what to do with discrepant results when they encounter them. The problem does not lie with their methods. Discrepant results appear even when researchers use high-quality instruments to collect data, and they appear in the results produced by controlled experiments and uncontrolled field studies alike. The problem actually lies with how researchers interpret their data, and the decisions they make with those data. My work reveals that discrepant results often contain information pertinent to understanding links among individual behavior, interpersonal relationships, and social environments.
In my work, I seek to reduce uncertainties in research and decision-making generally, by understanding what discrepant results reflect. Along these lines, even when informants such as caregivers, teachers, and youth themselves report on a youth’s mental health using identical instruments (e.g., surveys with parallel item content and response options), these informants nonetheless produce discrepant estimates of the youth’s mental health status. For example, an assessor collecting a report from a caregiver might learn that their child displays oppositional behavior, but the child’s teacher does not corroborate the caregiver’s impressions. At other times, an assessor observes the reverse pattern―a teacher reports that a student in their classroom needs help with anxiety, but the student’s caregiver does not concur. At times, an assessor collects a report from a child, who reports struggling with relatively covert depressive symptoms that reports from adult authority figures fail to capture. My work indicates that these discrepant results often reflect aspects of the social environments in which informants observe the youth’s behavior (e.g., caregivers at home vs. teachers at school; De Los Reyes et al., 2022). This is because when researchers assess any one person’s mental health, they do not randomly select their sources. Much like the journalist preparing a news story, researchers strategically solicit their sources. In fact, in Psychology these sources have a name―structurally different informants. These are informants who harbor unique abilities to observe the person undergoing evaluation within social environments pertinent to understanding that person’s behavior (see Eid et al., 2008). These structural differences among informants signify that discrepant results do not have to pose obstacles to decision-making. In fact, they may reveal learning opportunities that enhance decision-making.
What excites me most about my research program is this: Discrepant results happen in many other areas of scholarly discourse, not just mental health. This means I get a lot of excuses to develop multidisciplinary collaborations with colleagues whose backgrounds and areas of expertise are quite different from my own. Building bridges between my research program and the programs of diverse groups of colleagues helps us all leverage knowledge about the discrepant results we see in our own work, and not only address long-standing problems in mental health research, but also inspire new lines of research for probing discrepant results and how they manifest in multiple areas of discourse.
Using frameworks I published in leading theoretical outlets in Human Development, Education, and Psychology (e.g., Psychological Bulletin, Child Development Perspectives, Exceptional Children, Annual Review of Clinical Psychology, Journal of Youth and Adolescence), I examine how discrepant results in mental health research reveal meaningful information about individual differences in behavior. I take a team scholarship approach akin to the team science approaches that are now commonplace in STEM (see Stokols et al., 2008). That is, I leverage the ubiquitous nature of discrepant results to produce scholarship with a network of colleagues across Human Development, Education, Organizational Behavior, Cognitive Science, Neuroscience, Medicine, and Social Work. I also take a developmentally informed approach that traverses multiple developmental periods, including early childhood, late adolescence, and emerging adulthood. I integrate multi-informant, psychophysiological, observational, and performance-based assessment paradigms, and I leverage these paradigms to test questions using a suite of experimental, controlled observation, naturalistic, and quantitative review designs. The goal of my research program is to understand discrepant results in assessments of mental health, the relationships and social environments that shape them, and how we might harness discrepant results to inform decision-making.