Results 1 - 11 of 11
1.
Cognition ; 250: 105790, 2024 Jun 21.
Article in English | MEDLINE | ID: mdl-38908304

ABSTRACT

Rules help guide our behavior, particularly in complex social contexts. But rules sometimes give us the "wrong" answer. How do we know when it is okay to break the rules? In this paper, we argue that we sometimes use contractualist (agreement-based) mechanisms to determine when a rule can be broken. Our model draws on a theory of social interactions - "virtual bargaining" - that assumes that actors engage in a simulated bargaining process when navigating the social world. We present experimental data suggesting that rule-breaking decisions are sometimes driven by virtual bargaining and show that these data cannot be explained by more traditional rule-based or outcome-based approaches.
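
A minimal sketch of the agreement-based idea described above, not the authors' model: rule-breaking is treated as acceptable when every affected party would agree to it in a simulated bargain, operationalized here as each party being at least as well off as under rule-following. The parties and utility values are hypothetical.

def would_agree_to_break(rule_utils, break_utils):
    """Return True if every party weakly prefers the rule-breaking outcome.

    rule_utils / break_utils: dicts mapping each party to its utility under
    rule-following vs. rule-breaking (hypothetical values).
    """
    return all(break_utils[p] >= rule_utils[p] for p in rule_utils)

# Hypothetical case: breaking the rule helps the actor and leaves the other party no worse off.
rule_utils = {"actor": 0.0, "affected_party": 0.0}
break_utils = {"actor": 1.0, "affected_party": 0.0}
print(would_agree_to_break(rule_utils, break_utils))  # True -> the simulated bargain permits breaking the rule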

2.
Clin Pediatr (Phila) ; 62(4): 321-328, 2023 05.
Article in English | MEDLINE | ID: mdl-36113109

ABSTRACT

This study explored how a community health worker (CHW) within a primary care team with a HealthySteps (HS) Specialist impacted referrals to social determinants of health resources for families with children aged birth to 5 years. Medical charts with documentation of HS comprehensive services between January and June 2018 were reviewed at 3 primary care clinics: 2 with an HS Specialist (HSS Only) and 1 with an HS Specialist and CHW (HSS + CHW). Eighty-six referrals were identified, 78 of which had documented outcomes. Outcomes were categorized as successful, unsuccessful, and not documented. The HSS + CHW group had a higher rate of successful referrals (96%) than the HSS Only group (74%). Statistical analysis (χ2 = 8.37, P = .004) revealed a significant association between the referral outcome and having a CHW on a primary care team with an HS Specialist. Therefore, primary care practices should consider adapting their HS model to include CHWs.
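
The test reported above is a chi-square test of independence on a 2x2 table of referral outcomes by clinic model. The abstract does not report the cell counts, so the sketch below uses hypothetical counts chosen only to roughly match the reported 96% vs. 74% success rates among the 78 referrals with documented outcomes; the computed statistic will therefore differ from the reported χ2 = 8.37.

from scipy.stats import chi2_contingency

# Rows: HSS + CHW, HSS Only; columns: successful, unsuccessful referrals.
# Counts are hypothetical (the abstract reports only percentages and the total of 78).
table = [[26, 1],    # HSS + CHW, ~96% successful
         [38, 13]]   # HSS Only, ~75% successful

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, dof = {dof}")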


Subject(s)
Community Health Workers , Referral and Consultation , Child , Humans , Child, Preschool , Health Resources
3.
J Exp Psychol Gen ; 151(11): 2893-2909, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35862073

ABSTRACT

We ask whether moral judgment in preschool children observes a "means principle." It is well established that young children consider both the consequences and the goals of actions when making moral judgments; much less studied is the question of whether the means used to attain a given goal also matter. By obtaining preschoolers' judgments regarding when, if ever, it is permissible for 1 person to harm another as a means, we show, across 2 experiments, that children (N = 200 across 2 studies; Mage = 5.1 years) use the means principle in their moral judgments. Subjects recognized not only when a harm was being used as a means but also situated that means appropriately with respect to the correct superordinate goal. In this respect, the preschoolers in this sample are like adults across a wide range of cultures. These findings have important implications for the understanding of moral development: young children can use an agent's means, and not just her goal, to make a moral judgment. We discuss the broader issue of whether, in light of emerging evidence for the means principle, there really are any moral universals. (PsycInfo Database Record (c) 2022 APA, all rights reserved).


Subject(s)
Judgment , Morals , Adult , Child, Preschool , Female , Humans
4.
Trends Cogn Sci ; 26(5): 388-405, 2022 05.
Article in English | MEDLINE | ID: mdl-35365430

ABSTRACT

Technological advances are enabling roles for machines that present novel ethical challenges. The study of 'AI ethics' has emerged to confront these challenges, and connects perspectives from philosophy, computer science, law, and economics. Less represented in these interdisciplinary efforts is the perspective of cognitive science. We propose a framework - computational ethics - that specifies how the ethical challenges of AI can be partially addressed by incorporating the study of human moral decision-making. The driver of this framework is a computational version of reflective equilibrium (RE), an approach that seeks coherence between considered judgments and governing principles. The framework has two goals: (i) to inform the engineering of ethical AI systems, and (ii) to characterize human moral judgment and decision-making in computational terms. Working jointly towards these two goals will create the opportunity to integrate diverse research questions, bring together multiple academic communities, uncover new interdisciplinary research topics, and shed light on centuries-old philosophical questions.
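
As a rough illustration of the reflective-equilibrium idea, coherence between considered judgments and governing principles can be cast as an optimization loop: principles are adjusted to fit the judgments, and judgments that remain inconsistent are flagged as candidates for revision. The sketch below is a toy perceptron-style version under that assumption, not the framework proposed in the paper.

import numpy as np

def reflective_equilibrium(cases, judgments, steps=500, lr=0.1):
    """cases: (n, d) feature matrix; judgments: (n,) considered judgments in {-1, +1}."""
    weights = np.zeros(cases.shape[1])           # the "principles" as feature weights
    for _ in range(steps):
        verdicts = np.sign(cases @ weights)      # what the current principles say
        disagree = verdicts != judgments         # cases not yet in equilibrium
        if not disagree.any():
            break                                # coherence reached
        # Revise the principles toward the considered judgments.
        weights += lr * (judgments[disagree] @ cases[disagree])
    return weights, disagree

cases = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
judgments = np.array([1, -1, 1])
weights, unresolved = reflective_equilibrium(cases, judgments)
print(weights, unresolved)  # any remaining disagreements are candidates for revising the judgments themselves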


Subject(s)
Morals , Philosophy , Decision Making , Engineering , Humans , Judgment
5.
Nutrients ; 13(7)2021 Jun 29.
Article in English | MEDLINE | ID: mdl-34210069

ABSTRACT

The purpose of this study was to conduct in-depth individual interviews with 30 African American adolescents with overweight and obesity and their families (caregiver/adolescent dyads) to gain a better understanding of how to integrate stress and coping essential elements into an existing family-based health promotion program for weight loss. Interview data from 30 African American adolescents with overweight and obesity (Mage = 15.30 ± 2.18; MBMI%-ile = 96.7 ± 3.90) were transcribed and coded for themes using inductive and deductive approaches by two independent coders. Inter-rater reliability was acceptable (r = 0.70-0.80) and discrepancies were resolved to 100% agreement. The themes were guided by the Relapse Prevention Model, which focuses on assessing barriers of overall coping capacity in high stress situations that may undermine health behavior change (physical activity, diet, weight loss). Prominent themes included feeling stressed primarily in response to relationship conflicts within the family and among peers, school responsibilities, and negative emotions (anxiety, depression, anger). A mix of themes emerged related to coping strategies ranging from cognitive reframing and distraction to avoidant coping. Recommendations for future programs include addressing sources of stress and providing supportive resources, as well as embracing broader systems such as neighborhoods and communities. Implications for future intervention studies are discussed.


Subject(s)
Adaptation, Psychological , Adolescent Behavior/psychology , Black or African American/psychology , Pediatric Obesity/psychology , Stress, Psychological/psychology , Adolescent , Behavior Therapy , Child , Diet/psychology , Family/psychology , Family Relations/psychology , Female , Health Behavior , Health Promotion/methods , Humans , Male , Pediatric Obesity/therapy , Qualitative Research , Randomized Controlled Trials as Topic , Weight Reduction Programs
6.
Proc Natl Acad Sci U S A ; 117(42): 26158-26169, 2020 10 20.
Article in English | MEDLINE | ID: mdl-33008885

ABSTRACT

To explain why an action is wrong, we sometimes say, "What if everybody did that?" In other words, even if a single person's behavior is harmless, that behavior may be wrong if it would be harmful once universalized. We formalize the process of universalization in a computational model, test its quantitative predictions in studies of human moral judgment, and distinguish it from alternative models. We show that adults spontaneously make moral judgments consistent with the logic of universalization, and report comparable patterns of judgment in children. We conclude that, alongside other well-characterized mechanisms of moral judgment, such as outcome-based and rule-based thinking, the logic of universalization holds an important place in our moral minds.
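
A hedged toy rendering of the universalization logic above, not the authors' quantitative model: an individually harmless action is judged wrong when the simulated outcome of everyone who is interested in the action performing it exceeds a harm threshold. All numbers are hypothetical.

def universalization_judgment(harm_per_actor, n_interested, harm_threshold):
    """Judge an action by simulating 'what if everybody did that?'."""
    collective_harm = harm_per_actor * n_interested
    return "wrong" if collective_harm > harm_threshold else "permissible"

# A single actor causes negligible harm...
print(universalization_judgment(harm_per_actor=0.1, n_interested=1, harm_threshold=5))    # permissible
# ...but universalized over the 100 people who would like to do the same thing, the action is judged wrong.
print(universalization_judgment(harm_per_actor=0.1, n_interested=100, harm_threshold=5))  # wrong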


Subject(s)
Decision Making , Judgment/physiology , Models, Psychological , Moral Development , Morals , Social Perception , Adult , Child , Child, Preschool , Humans , Middle Aged
7.
iScience ; 23(9): 101515, 2020 Sep 25.
Article in English | MEDLINE | ID: mdl-32920489

ABSTRACT

The recent sale of an artificial intelligence (AI)-generated portrait for $432,000 at Christie's art auction has raised questions about how credit and responsibility should be allocated to individuals involved and how the anthropomorphic perception of the AI system contributed to the artwork's success. Here, we identify natural heterogeneity in the extent to which different people perceive AI as anthropomorphic. We find that differences in the perception of AI anthropomorphicity are associated with different allocations of responsibility to the AI system and credit to different stakeholders involved in art production. We then show that perceptions of AI anthropomorphicity can be manipulated by changing the language used to talk about AI-as a tool versus agent-with consequences for artists and AI practitioners. Our findings shed light on what is at stake when we anthropomorphize AI systems and offer an empirical lens to reason about how to allocate credit and responsibility to human stakeholders.

8.
Nat Hum Behav ; 4(2): 134-143, 2020 02.
Article in English | MEDLINE | ID: mdl-31659321

ABSTRACT

When an automated car harms someone, who is blamed by those who hear about it? Here we asked human participants to consider hypothetical cases in which a pedestrian was killed by a car operated under shared control of a primary and a secondary driver and to indicate how blame should be allocated. We find that when only one driver makes an error, that driver is blamed more regardless of whether that driver is a machine or a human. However, when both drivers make errors in cases of human-machine shared-control vehicles, the blame attributed to the machine is reduced. This finding portends a public under-reaction to the malfunctioning artificial intelligence components of automated cars and therefore has a direct policy implication: allowing the de facto standards for shared-control vehicles to be established in courts by the jury system could fail to properly regulate the safety of those vehicles; instead, a top-down scheme (through federal laws) may be called for.


Subject(s)
Accidents, Traffic , Automation , Automobile Driving , Automobiles , Man-Machine Systems , Safety , Social Perception , Accidents, Traffic/legislation & jurisprudence , Adult , Automation/ethics , Automation/legislation & jurisprudence , Automobile Driving/legislation & jurisprudence , Automobiles/ethics , Automobiles/legislation & jurisprudence , Humans , Pedestrians/legislation & jurisprudence , Safety/legislation & jurisprudence
9.
J Exp Psychol Gen ; 147(11): 1728-1747, 2018 Nov.
Article in English | MEDLINE | ID: mdl-30372115

ABSTRACT

The presumption of innocence is not only a bedrock principle of American law, but also a fundamental human right. The psychological underpinnings of this presumption, however, are not well understood. To make progress, one important task is to explain how adults and children infer the goals and intentional structure of complex actions, especially when a single action has more than one salient effect. Many theories of moral judgment have either ignored this intention inference problem or have simply assumed a particular solution without empirical support. We propose that this problem may be solved by appealing to domain-specific prior knowledge that is either built up over the probability of prior intentions or built in as part of core cognition. We further propose a specific solution to this problem in the moral domain: a good intention prior, which entails a rebuttable presumption that if an action has both good and bad effects, the actor intends the good effects and not the bad effects. Finally, in a series of novel experiments we provide the first empirical support - from both adults and preschool children - for the existence of this good intention prior. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
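
The "good intention prior" lends itself to a simple Bayesian reading: the prior probability that the actor intended the good effect is high, and the presumption is rebutted only when the evidence strongly favors the bad intention. The sketch below is a minimal two-hypothesis illustration with hypothetical numbers, not the experimental paradigm used in the paper.

def posterior_good_intention(prior_good, likelihood_given_good, likelihood_given_bad):
    """P(good intention | evidence) via Bayes' rule over two hypotheses."""
    num = prior_good * likelihood_given_good
    denom = num + (1 - prior_good) * likelihood_given_bad
    return num / denom

prior_good = 0.9  # hypothetical strength of the good intention prior

# Ambiguous evidence: the presumption of good intent stands.
print(posterior_good_intention(prior_good, 0.5, 0.5))    # 0.90
# Strong evidence of bad intent: the presumption is rebutted.
print(posterior_good_intention(prior_good, 0.05, 0.95))  # ~0.32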


Subject(s)
Intention , Judgment , Morals , Adult , Child , Child, Preschool , Cognition , Comprehension , Female , Goals , Human Rights , Humans , Male
10.
Cogn Sci ; 42(4): 1229-1264, 2018 05.
Article in English | MEDLINE | ID: mdl-29785732

ABSTRACT

Various theories of moral cognition posit that moral intuitions can be understood as the output of a computational process performed over structured mental representations of human action. We propose that action plan diagrams-"act trees"-can be a useful tool for theorists to succinctly and clearly present their hypotheses about the information contained in these representations. We then develop a methodology for using a series of linguistic probes to test the theories embodied in the act trees. In Study 1, we validate the method by testing a specific hypothesis (diagrammed by act trees) about how subjects are representing two classic moral dilemmas and finding that the data support the hypothesis. In Studies 2-4, we explore possible explanations for discrete and surprising findings that our hypothesis did not predict. In Study 5, we apply the method to a less well-studied case and show how new experiments generated by our method can be used to settle debates about how actions are mentally represented. In Study 6, we argue that our method captures the mental representation of human action better than an alternative approach. A brief conclusion suggests that act trees can be profitably used in various fields interested in complex representations of human action, including law, philosophy, psychology, linguistics, neuroscience, computer science, robotics, and artificial intelligence.
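
One plausible, hypothetical encoding of an "act tree" as a data structure, for illustration only (the paper's diagrams may differ): each node holds an action description, and its children are the means by which that action is accomplished.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ActNode:
    description: str
    children: List["ActNode"] = field(default_factory=list)  # means to this end

def print_tree(node: ActNode, depth: int = 0) -> None:
    print("  " * depth + node.description)
    for child in node.children:
        print_tree(child, depth + 1)

# A trolley-style structure: the superordinate end at the root, the means below it.
tree = ActNode("save five people", [
    ActNode("divert the trolley", [
        ActNode("pull the lever"),
    ]),
])
print_tree(tree)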


Subject(s)
Cognition , Intention , Judgment , Morals , Humans
11.
Australian Dental Journal ; 22(6): 481-486, 1977 Dec.
Article in English | Desastres (Disasters) | ID: des-4032