ABSTRACT
Understanding the COVID-19 pandemic is a multidisciplinary effort that requires a significant number of variables. This dataset comprises (i) sociodemographic characteristics, compiled from 35 datasets obtained from UN Data; (ii) mobility metrics that can assist the analysis of social distancing, from the Google Community Mobility Reports; and (iii) daily counts of COVID-19 cases and deaths, from the European Centre for Disease Prevention and Control and the Johns Hopkins University Center for Systems Science and Engineering. This unified dataset ranges from February 15, 2020 to May 7, 2020, a total of 83 days, and is provided as a collection of time series for 131 countries with 192 variables. The pipeline to preprocess and generate the dataset, along with the dataset itself, is versioned with the Data Version Control (DVC) tool and is thus easily reproducible.
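As a rough illustration of the unification described above, the three kinds of sources can be joined on country and date, with static sociodemographic variables broadcast over the daily time series. The column names and values below are invented for this sketch; the actual 192 variables come from the sources listed in the abstract.

```python
import pandas as pd

# Toy stand-ins for the three source types (assumed column names).
cases = pd.DataFrame({
    "country": ["Brazil", "Brazil"],
    "date": pd.to_datetime(["2020-02-15", "2020-02-16"]),
    "new_cases": [0, 1],
    "new_deaths": [0, 0],
})
mobility = pd.DataFrame({
    "country": ["Brazil", "Brazil"],
    "date": pd.to_datetime(["2020-02-15", "2020-02-16"]),
    "workplaces_pct_change": [2.0, -1.5],
})
demographics = pd.DataFrame({
    "country": ["Brazil"],
    "population": [211_000_000],
})

# Daily sources join on (country, date); static sociodemographic
# variables join on country alone and repeat for every day.
unified = (cases
           .merge(mobility, on=["country", "date"], how="left")
           .merge(demographics, on="country", how="left"))
print(unified.shape)  # one row per country-day, columns from all sources
```

With real data, left joins keep every country-day from the case counts even when a mobility report is missing for that day, which is why the unified series can cover all 83 days per country.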
ABSTRACT
Technological innovations in the hardware of RGB-D sensors have allowed the acquisition of 3D point clouds in real time. Consequently, various applications related to the 3D world have arisen and are receiving increasing attention from researchers. Nevertheless, one of the main remaining problems is the demand for computationally intensive processing, which requires optimized approaches to 3D vision modeling, especially when tasks must be performed in real time. A previously proposed multi-resolution 3D model known as foveated point clouds is a possible solution to this problem. However, that model is limited to a single foveated structure with context-dependent mobility. In this work, we propose a new solution for data reduction and feature detection using multifoveation in the point cloud. Applying several foveated structures, however, considerably increases processing, since regions where distinct structures intersect are processed multiple times. To solve this problem, the current proposal introduces an approach that avoids processing redundant regions, which results in even further reduced processing time. This approach can be used to identify objects in 3D point clouds, one of the key tasks for real-time applications such as robot vision, with efficient synchronization allowing the validation of the model and the verification of its applicability in the context of computer vision. Experimental results demonstrate a performance gain of at least 27.21% in processing time while retaining the main features of the original model and maintaining the recognition quality rate in comparison with state-of-the-art 3D object recognition methods.
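The redundancy-avoidance idea can be sketched as assigning each point to a single foveated structure (here, its nearest fovea) before subsampling, so that intersections between structures are processed only once. The function below is a minimal numpy sketch with an assumed two-level parameterization (full resolution inside each fovea, sparse outside); it is an illustration of the principle, not the paper's actual multi-level model.

```python
import numpy as np

def multifoveate(points, foveae, radii, rates, seed=0):
    """Subsample a point cloud with several foveated structures.

    points : (N, 3) array of 3D points
    foveae : (F, 3) fovea centers
    radii  : (F,) radius of each high-resolution region
    rates  : (keep_prob_inside, keep_prob_outside), assumed two levels

    Each point is assigned once, to its nearest fovea, so regions where
    foveated structures intersect are never processed twice.
    """
    rng = np.random.default_rng(seed)
    # Distance from every point to every fovea center: shape (N, F).
    d = np.linalg.norm(points[:, None, :] - foveae[None, :, :], axis=2)
    nearest = d.argmin(axis=1)  # one owning structure per point
    inside = d[np.arange(len(points)), nearest] <= radii[nearest]
    keep_p = np.where(inside, rates[0], rates[1])
    return points[rng.random(len(points)) < keep_p]

# Two foveae over a synthetic cloud in the unit cube.
pts = np.random.default_rng(1).uniform(-1, 1, size=(10_000, 3))
foveae = np.array([[0.5, 0.5, 0.0], [-0.5, -0.5, 0.0]])
out = multifoveate(pts, foveae, radii=np.array([0.3, 0.3]),
                   rates=(1.0, 0.1))
print(len(out), "of", len(pts), "points kept")
```

Because the nearest-fovea assignment partitions the cloud, the subsampling pass touches each point exactly once regardless of how many foveated regions overlap, which is the source of the processing-time reduction described above.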
ABSTRACT
Wearable computing is a form of ubiquitous computing that offers flexible and useful tools for users. Specifically, glove-based systems have been used over the last 30 years in a variety of applications, mostly focused on sensing attributes of people, such as finger bending and heart rate. In contrast, we propose in this work a novel flexible and reconfigurable instrumentation platform in the form of a glove, which can be used to analyze and measure attributes of fruits simply by pointing at or touching them. An architecture for such a platform is designed, and its application to intuitive fruit grading is also presented, including experimental results for several fruits.