ABSTRACT
Artificial intelligence (AI)-generated content detectors are not foolproof and often introduce other problems, as shown by Desaire et al. and Liang et al. in papers recently published in Patterns and Cell Reports Physical Science. Rather than "fighting" AI with more AI, we must develop an academic culture that promotes the creative, ethical use of generative AI.
ABSTRACT
In this work, we investigate how students in fields adjacent to algorithm development perceive fairness, accountability, transparency, and ethics in algorithmic decision-making. Using scenarios, participants (N = 99) rated their agreement with statements covering six constructs related to facets of fairness and justice in algorithmic decision-making; they also defined algorithmic fairness and gave their views on possible causes of unfairness, transparency approaches, and accountability. The findings indicate that "agreeing" with a decision does not mean the person "deserves the outcome"; that perceiving the factors used in decision-making as "appropriate" does not make the system's decision "fair"; and that perceiving a system's decision as "not fair" reduces participants' "trust" in the system. Furthermore, fairness is most often defined as the use of "objective factors," and participants identify the use of "sensitive attributes" as the most likely cause of unfairness.
ABSTRACT
During times of crisis, access to information is crucial. Given the opaque processes behind modern search engines, it is important to understand the extent to which the "picture" of the Covid-19 pandemic accessed by users differs. We explore variations in what users "see" concerning the pandemic through Google image search, using a two-step approach. First, we crowdsource a search task to users in four regions of Europe, asking them to help us create a photo documentary of Covid-19 by providing image search queries. Analysing the queries, we identify five common themes describing information needs. Next, we study three sources of variation - users' information needs, their geo-locations, and their query languages - and analyse their influence on the similarity of results. We find that users see the pandemic differently depending on where they live, as evidenced by only 46% similarity across results. When users expressed a given query in different languages, there was no overlap for most of the results. Our analysis suggests that localisation plays a major role in the (dis)similarity of results, and provides evidence of the diverse "picture" of the pandemic seen through Google.
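The similarity of result sets across locations can be quantified with a set-overlap measure such as the Jaccard index. The abstract does not specify which metric the study used, so the sketch below is illustrative only, with made-up result identifiers standing in for image URLs:

```python
def jaccard(results_a, results_b):
    """Jaccard similarity between two lists of result identifiers."""
    a, b = set(results_a), set(results_b)
    if not a and not b:
        return 1.0  # two empty result sets are trivially identical
    return len(a & b) / len(a | b)

# Hypothetical top-10 results for the same query issued from two regions.
results_region_1 = [f"img{i}" for i in range(10)]        # img0 .. img9
results_region_2 = [f"img{i}" for i in range(4, 14)]     # img4 .. img13

# 6 shared items out of 14 distinct ones -> similarity of about 0.43
print(jaccard(results_region_1, results_region_2))
```

Averaging such pairwise scores over many queries and region pairs would yield an aggregate similarity figure comparable in spirit to the 46% reported.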