Results 1 - 6 of 6
1.
Clim Change ; 177(3): 53, 2024.
Article in English | MEDLINE | ID: mdl-38434209

ABSTRACT

Today, a major challenge for climate science is to overcome what is called the "usability gap" between the projections derived from climate models and the needs of end-users. Regional Climate Models (RCMs) are expected to provide usable information concerning a variety of impacts and for a wide range of end-users. It is often assumed that the development of more accurate, more complex RCMs with higher spatial resolution should bring process understanding and better local projections, thus overcoming the usability gap. In this paper, I argue instead that the credibility of climate information should be pursued together with two other criteria of usability, namely salience and legitimacy. Based on the Swiss climate change scenarios, I examine attempts to meet the needs of end-users and outline the trade-off that modellers and users face with respect to the cascade of uncertainty. A conclusion of this paper is that the trade-off between salience and credibility sets the conditions under which RCMs can be deemed adequate for the purposes of addressing the needs of end-users and gearing the communication of the projections toward direct use and action.

2.
Clim Change ; 176(8): 101, 2023.
Article in English | MEDLINE | ID: mdl-37476487

ABSTRACT

Parameterization and parameter tuning are central aspects of climate modeling, and there is widespread consensus that these procedures involve certain subjective elements. Even if the use of these subjective elements is not necessarily epistemically problematic, there is an intuitive appeal to replacing them with more objective (automated) methods, such as machine learning. Relying on several case studies, we argue that, while machine learning techniques may help to improve climate model parameterization in several ways, they still require expert judgment involving subjective elements not so different from those arising in standard parameterization and tuning. The use of machine learning in parameterizations is an art as well as a science and requires careful supervision.
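
The following is a minimal, hypothetical sketch (not taken from the paper) of where expert judgment enters an otherwise automated parameterization: a tiny least-squares "emulator" stands in for a machine-learned subgrid scheme, and an expert-chosen admissible range is imposed on its output. All variable names, formulas, and bounds are invented for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic "training data": coarse-grid state -> subgrid tendency (made up).
state = rng.uniform(0.0, 1.0, size=(200, 2))           # e.g. humidity, stability
tendency = 0.8 * state[:, 0] - 0.3 * state[:, 1] + 0.05 * rng.normal(size=200)

# "Machine-learned" scheme: ordinary least squares on modeller-chosen features.
features = np.column_stack([state, np.ones(len(state))])
coeffs, *_ = np.linalg.lstsq(features, tendency, rcond=None)

def learned_parameterization(x, lower=-0.5, upper=1.0):
    """Emulated tendency, clipped to an expert-chosen physically admissible range."""
    raw = np.column_stack([x, np.ones(len(x))]) @ coeffs
    return np.clip(raw, lower, upper)   # the bounds encode expert judgment

print(learned_parameterization(rng.uniform(0.0, 1.0, size=(3, 2))))

The choice of predictors, of the fitting target, and of the clipping bounds is made by the modeller; automating the fit does not remove those decisions.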

3.
Stud Hist Philos Sci ; 100: 32-38, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37315425

ABSTRACT

Like any science marked by high uncertainty, climate science is characterized by a widespread use of expert judgment. In this paper, we first show that, in climate science, expert judgment is used to overcome uncertainty, thus playing a crucial role in the domain and at times even supplanting models. One is left to wonder to what extent it is legitimate to grant expert judgment such a status of epistemic superiority in the climate context, especially as the production of expert judgment is particularly opaque. To begin answering this question, we highlight the key components of expert judgment. We then argue that the justification for the status and use of expert judgment depends on the competence and the individual subjective features of the expert producing the judgment, since expert judgment involves not only the expert's theoretical and tacit knowledge but also their intuition and values. This goes against the objective ideal in science and against the criteria from social epistemology that largely attempt to remove subjectivity from expertise.


Subject(s)
Intuition, Judgment, Uncertainty, Knowledge
4.
Stud Hist Philos Sci ; 88: 120-127, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34166920

ABSTRACT

Non-epistemic values pervade climate modelling, as is now well documented and widely discussed in the philosophy of climate science. Recently, Parker and Winsberg have drawn attention to what can be termed "epistemic inequality": the risk that climate models might more accurately represent the future climates of the geographical regions prioritised by the values of the modellers. In this paper, we promote value management as a way of overcoming epistemic inequality. We argue that value management can be seriously considered once the value-free ideal and the inductive risk arguments commonly used to frame discussions of value influence in climate science are replaced by alternative social accounts of objectivity. We consider objectivity in Longino's sense as well as strong objectivity in Harding's sense to be relevant options here, because they offer concrete proposals that can guide scientific practice in evaluating and designing so-called multi-model ensembles and, ultimately, improve their capacity to quantify and express uncertainty in climate projections.


Subject(s)
Cultural Diversity, Philosophy, Climate, Climate Change, Philosophy/history, Uncertainty
5.
Stud Hist Philos Sci ; 83: 44-52, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32958280

ABSTRACT

Projections of future climate change cannot rely on a single model. It has become common to rely on multiple simulations generated by Multi-Model Ensembles (MMEs), especially to quantify the uncertainty about what would constitute an adequate model structure. But, as Parker (2018) points out, one of the remaining philosophically interesting questions is: "How can ensemble studies be designed so that they probe uncertainty in desired ways?" This paper offers two interpretations of what General Circulation Models (GCMs) are and of how MMEs made of GCMs should be designed. In the first interpretation, models are combinations of modules and parameterisations; an MME is obtained by "plugging and playing" with interchangeable modules and parameterisations. In the second interpretation, models are aggregations of expert judgements that result from a history of epistemic decisions made by scientists about the choice of representations; an MME is a sampling of expert judgements from modelling teams. We argue that, while the two interpretations draw on distinct domains, philosophy of science and social epistemology respectively, they can be used in a complementary manner to explore ways of designing better MMEs.
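
A toy sketch of the first interpretation, assuming nothing beyond the abstract itself (module names and formulas are invented, and a single scalar "temperature" stands in for the model state): an ensemble is built by taking every combination of interchangeable modules, and the spread across members gives a crude picture of structural uncertainty.

from itertools import product

# Two alternative "schemes" for each of two processes (stand-ins for modules).
convection_schemes = {
    "scheme_A": lambda t: t + 0.10,
    "scheme_B": lambda t: t + 0.15,
}
radiation_schemes = {
    "scheme_X": lambda t: t * 1.02,
    "scheme_Y": lambda t: t * 1.04,
}

def run_model(convection, radiation, initial_temp=15.0, steps=10):
    """Apply the chosen modules repeatedly to a single toy state variable."""
    temp = initial_temp
    for _ in range(steps):
        temp = radiation(convection(temp))
    return temp

# "Plugging and playing": the ensemble is the Cartesian product of module choices.
ensemble = {
    (c_name, r_name): run_model(c_fn, r_fn)
    for (c_name, c_fn), (r_name, r_fn) in product(
        convection_schemes.items(), radiation_schemes.items()
    )
}
spread = max(ensemble.values()) - min(ensemble.values())
print(ensemble)
print(f"ensemble spread: {spread:.2f}")

The second interpretation would instead sample across modelling teams, so ensemble members would differ by histories of epistemic decisions rather than by module swaps.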


Subject(s)
Climate Change, Judgment, Forecasting, Uncertainty
6.
Stud Hist Philos Sci ; 56: 168-174, 2016 Apr.
Article in English | MEDLINE | ID: mdl-27083097

ABSTRACT

Empirical agreement is often used as an important criterion when assessing the validity of scientific models. However, it is by no means a sufficient criterion, as a model can be adjusted to fit the available data even though it is based on hypotheses whose plausibility is known to be questionable. Our aim in this paper is to investigate the uses of empirical agreement within the process of model validation.
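
A hypothetical worked example of this point (not from the paper): a structurally questionable model, here a cubic polynomial fitted to exponentially growing observations, can be tuned to agree closely with the data on the calibration range and still fail badly outside it, so empirical agreement alone does not validate it.

import numpy as np

rng = np.random.default_rng(1)

# "True" process: exponential growth, observed only on a narrow range.
x_obs = np.linspace(0.0, 1.0, 20)
y_obs = np.exp(2.0 * x_obs) + 0.02 * rng.normal(size=x_obs.size)

# A deliberately wrong structure (cubic polynomial) tuned to the observations.
fitted = np.poly1d(np.polyfit(x_obs, y_obs, deg=3))

in_sample_error = np.max(np.abs(fitted(x_obs) - np.exp(2.0 * x_obs)))
out_of_sample_error = abs(fitted(3.0) - np.exp(2.0 * 3.0))

print(f"max error on calibration range: {in_sample_error:.3f}")      # small
print(f"error at x = 3 (extrapolation): {out_of_sample_error:.1f}")  # large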


Subject(s)
Empirical Research; Models, Theoretical; Validation Studies as Topic; Calibration