1.
Front Robot AI ; 9: 728628, 2022.
Article in English | MEDLINE | ID: mdl-35252360

ABSTRACT

In recent years, two fields have become more prominent in our everyday life: smart cities and service robots. In a smart city, information is collected from distributed sensors around the city into centralised data hubs and used to improve the efficiency of the city's systems and provide better services to citizens. Exploiting major advances in Computer Vision and Machine Learning, service robots have evolved from performing simple tasks to playing the role of hotel concierges, museum guides, waiters in cafes and restaurants, home assistants, automated delivery drones, and more. As digital agents, robots can be prime members of the smart city vision. On the one hand, smart city data can be accessed by robots to gain information that is relevant to the task at hand. On the other hand, robots can act as mobile sensors and actuators on behalf of the smart city, thus contributing to the data acquisition process. However, the connection between service robots and smart cities is surprisingly under-explored. In an effort to stimulate advances in the integration between robots and smart cities, we turned to robot competitions and hosted the first Smart Cities Robotics Challenge (SciRoc). The contest included activities specifically designed to require cooperation between robots and the MK Data Hub, a Smart City data infrastructure. In this article, we report on the competition held in Milton Keynes (UK) in September 2019, focusing in particular on the role played by the MK Data Hub in simulating a Smart City Data Infrastructure for service robots. Additionally, we discuss the feedback we received from the various people involved in the SciRoc Challenge, including participants, members of the public and organisers, and summarise the lessons learnt from this experience.

2.
Hum Comput Interact ; 36(2): 150-201, 2021.
Article in English | MEDLINE | ID: mdl-33867652

ABSTRACT

Digital experiences capture an increasingly large part of life, making them a preferred, if not required, method to describe and theorize about human behavior. Digital media also shape behavior by enabling people to switch between different content easily, and create unique threads of experiences that pass quickly through numerous information categories. Current methods of recording digital experiences provide only partial reconstructions of digital lives that weave, often within seconds, among multiple applications, locations, functions and media. We describe an end-to-end system for capturing and analyzing the "screenome" of life in media, i.e., the record of individual experiences represented as a sequence of screens that people view and interact with over time. The system includes software that collects screenshots, extracts text and images, and allows searching of a screenshot database. We discuss how the system can be used to elaborate current theories about psychological processing of technology, and suggest new theoretical questions that are enabled by multiple time scale analyses. Capabilities of the system are highlighted with eight research examples that analyze screens from adults who have generated data within the system. We end with a discussion of future uses, limitations, theory and privacy.
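The abstract describes a "screenome" as a time-ordered sequence of screens with extracted text, stored in a searchable database. A minimal sketch of that data model is shown below; the `Screen` and `Screenome` names, their fields, and the `app_switches` metric are illustrative assumptions for exposition, not the authors' actual schema or API.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Screen:
    """One captured screen: a timestamp, the foreground app, and OCR-extracted text."""
    timestamp: datetime
    app: str
    text: str

@dataclass
class Screenome:
    """A person's screenome: the ordered sequence of screens they viewed."""
    screens: list[Screen] = field(default_factory=list)

    def add(self, screen: Screen) -> None:
        self.screens.append(screen)

    def search(self, query: str) -> list[Screen]:
        """Case-insensitive full-text search over the extracted screen text."""
        q = query.lower()
        return [s for s in self.screens if q in s.text.lower()]

    def app_switches(self) -> int:
        """Count transitions between different apps in the sequence,
        one simple multiple-time-scale measure of content switching."""
        return sum(1 for a, b in zip(self.screens, self.screens[1:])
                   if a.app != b.app)
```

In the real system, `text` would come from running OCR on periodic screenshots; here it stands in for that extraction step so the sequence-and-search structure is visible on its own.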
