ABSTRACT
Recent research has helped to cultivate growing awareness that machine-learning systems fueled by big data can create or exacerbate troubling disparities in society. Much of this research comes from outside of the practicing data science community, leaving its members with little concrete guidance for proactively addressing these concerns. This article introduces issues of discrimination to the data science community on its own terms. In it, we tour the familiar data-mining process while providing a taxonomy of common practices that have the potential to produce unintended discrimination. We also survey how discrimination is commonly measured, and suggest how familiar development processes can be augmented to mitigate systems' discriminatory potential. We advocate that data scientists should be intentional about modeling and reducing discriminatory outcomes. Otherwise, their efforts will perpetuate any existing systemic discrimination under a misleading veil of data-driven objectivity.
Subject(s)
Data Interpretation, Statistical; Algorithms; Humans; Machine Learning