Results 1 - 3 of 3
1.
Philos Technol; 34(4): 1135-1193, 2021.
Article in English | MEDLINE | ID: mdl-34631392

ABSTRACT

The term 'responsible AI' has been coined to denote AI that is fair and non-biased, transparent and explainable, secure and safe, privacy-preserving, accountable, and beneficial to humankind. Since 2016, a great many organizations have pledged allegiance to such principles. Amongst them are 24 AI companies that did so by posting such a commitment on their website and/or by joining the 'Partnership on AI'. By means of a comprehensive web search, this study addresses two questions: (1) Did the signatory companies actually try to implement these principles in practice, and if so, how? (2) What are their views on the role of other societal actors in steering AI towards the stated principles (the issue of regulation)? It is concluded that three of the largest amongst them have taken valuable steps towards implementation, in particular by developing and open-sourcing new software tools; to them, charges of mere 'ethics washing' do not apply. Moreover, some ten companies from both the USA and Europe have publicly endorsed the position that, apart from self-regulation, AI is in urgent need of governmental regulation. They mostly advocate focussing regulation on high-risk applications of AI, a policy which to them represents a sensible middle course between laissez-faire on the one hand and outright bans on technologies on the other. How standards, ethical codes, and laws will eventually be shaped by these regulatory efforts remains, of course, to be determined.

2.
Mitochondrion; 45: 38-45, 2019 Mar.
Article in English | MEDLINE | ID: mdl-29471047

ABSTRACT

We used a comprehensive metabolomics approach to study the altered urinary metabolome of two cohorts with mitochondrial myopathy, encephalopathy, lactic acidosis, and stroke-like episodes (MELAS) carrying the m.3243A>G mutation. The first cohort was used in an exploratory phase, identifying 36 metabolites that were significantly perturbed by the disease. During the second phase, the 36 selected metabolites completely separated a validation cohort of MELAS patients from their respective control group, suggesting the usefulness of these 36 markers as a diagnostic set. Many of the 36 perturbed metabolites could be linked to an altered redox state, fatty acid catabolism, and one-carbon metabolism. However, our evidence indicates that, of all the metabolic perturbations caused by MELAS, stalled fatty acid oxidation stood out as particularly disturbed. A strength of our study was the use of five different analytical platforms to generate the robust metabolomics data reported here. We show that urine may be a useful source of disease-specific metabolomics data, linking, amongst others, altered one-carbon metabolism to MELAS. The results reported here are important for our understanding of MELAS and might lead to better treatment options for the disease.
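As a rough illustration of how a marker set can "separate" a patient cohort from controls (a minimal sketch with synthetic data, not the study's actual five-platform pipeline; the variable names and group sizes are assumptions):

```python
# Minimal sketch (assumed setup, not the study's actual analysis):
# checking whether a 36-marker panel separates patients from controls via PCA.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_markers = 36  # the 36 selected metabolites

# Synthetic stand-in data: rows are urine samples, columns are metabolite levels.
patients = rng.normal(loc=1.0, size=(20, n_markers))
controls = rng.normal(loc=0.0, size=(20, n_markers))
X = np.vstack([patients, controls])

# Scale each metabolite, then project the samples onto two principal components;
# well-separated group centroids indicate the panel discriminates the groups.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
print("patient centroid:", scores[:20].mean(axis=0))
print("control centroid:", scores[20:].mean(axis=0))
```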


Subject(s)
Acidosis, Lactic/pathology; Biomarkers/analysis; MELAS Syndrome/pathology; Urine/chemistry; Adult; Carbohydrate Metabolism; Cohort Studies; Fatty Acids/metabolism; Female; Humans; Male; Metabolomics; Middle Aged; Young Adult
3.
Philos Technol; 31(4): 525-541, 2018.
Article in English | MEDLINE | ID: mdl-30873341

ABSTRACT

Decision-making assisted by algorithms developed through machine learning is increasingly determining our lives. Unfortunately, full opacity about the process is the norm. Would transparency contribute to restoring accountability for such systems, as is often maintained? Several objections to full transparency are examined: the loss of privacy when datasets become public, the perverse effects of disclosure of the algorithms themselves ("gaming the system" in particular), the potential loss of companies' competitive edge, and the limited gains in answerability to be expected, since sophisticated algorithms are usually inherently opaque. It is concluded that, at least presently, full transparency for oversight bodies alone is the only feasible option; extending it to the public at large is normally not advisable. Moreover, it is argued that algorithmic decisions should preferably become more understandable; to that end, the machine learning models employed should either be interpreted ex post or be interpretable by design ex ante.
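To illustrate the closing distinction between ex-post interpretation and interpretability by design (a hedged sketch, not drawn from the article; scikit-learn, the synthetic dataset, and the model choices are assumptions made for the example):

```python
# Illustrative sketch (hypothetical example, not the article's method):
# contrasting ex-post probing of an opaque model with a model readable by design.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for a decision-making dataset.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)

# Ex post: train an opaque model, then interpret it after the fact
# by measuring how much each feature's shuffling degrades performance.
opaque = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(opaque, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: permutation importance {score:.3f}")

# Ex ante: choose a model whose decision logic is inspectable by design,
# such as a shallow decision tree whose rules can be printed directly.
transparent = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(transparent))
```

The trade-off the abstract gestures at shows up directly here: the random forest typically predicts better but yields only indirect, after-the-fact explanations, whereas the shallow tree exposes its full decision logic at the cost of some accuracy.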
