Results 1 - 9 of 9
1.
Heliyon ; 10(11): e32297, 2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38947432

ABSTRACT

The authentication process involves all supply chain stakeholders and is also adopted to verify food quality and safety. Food authentication tools are an essential part of traceability systems, as they provide information on the credibility of origin, species/variety identity, geographical provenance, and production entities. Moreover, these systems are useful for evaluating the effect of transformation processes, conservation strategies, and the reliability of packaging and distribution flows on food quality and safety. In this manuscript, we identified the innovative characteristics of food authentication systems that respond to market challenges, such as simplicity, high sensitivity, and non-destructive operation during authentication procedures. We also discussed the potential of current identification systems based on molecular markers (chemical, biochemical, genetic) and the effectiveness of new technologies, with reference to the miniaturized systems offered by nanotechnologies and to computer vision systems linked to artificial intelligence processes. This overview emphasizes the importance of convergent technologies in food authentication, supporting molecular markers with the technological innovation offered by emerging technologies derived from biotechnologies and informatics. The potential of these strategies was evaluated on real examples of high-value food products. Technological innovation can therefore strengthen the system of molecular markers to meet current market needs; however, food production processes are in profound evolution. Food 3D-printing and the introduction of new raw materials open new challenges for food authentication, which will require both an update of the current regulatory framework and the development and adoption of new analytical systems.

2.
J Imaging ; 9(10)2023 Oct 20.
Article in English | MEDLINE | ID: mdl-37888340

ABSTRACT

Data augmentation is a fundamental technique in machine learning that plays a crucial role in expanding the size of training datasets. By applying various transformations or modifications to existing data, data augmentation enhances the generalization and robustness of machine learning models. In recent years, the development of several libraries has simplified the utilization of diverse data augmentation strategies across different tasks. This paper focuses on the most widely adopted libraries specifically designed for data augmentation in computer vision tasks. Here, we aim to provide a comprehensive survey of publicly available data augmentation libraries, helping practitioners navigate these resources effectively. Through a curated taxonomy, we present an organized classification of the different approaches employed by these libraries, along with accompanying application examples. By examining the techniques of each library, practitioners can make informed decisions in selecting the most suitable augmentation techniques for their computer vision projects. To ensure the accessibility of this valuable information, a dedicated public website named DALib has been created. This website serves as a centralized repository where the taxonomy, methods, and examples associated with the surveyed data augmentation libraries can be explored. By offering this comprehensive resource, we aim to empower practitioners and contribute to the advancement of computer vision research and applications through effective utilization of data augmentation techniques.
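As a concrete illustration of the kind of transformations such libraries apply, here is a minimal, library-free augmentation sketch in NumPy; the specific operations and parameters (horizontal flip, random 90-degree rotation, Gaussian noise with sigma = 5) are illustrative assumptions, not taken from any surveyed library:

```python
import numpy as np

def augment(image, rng):
    """Apply a random combination of simple augmentations to an HxWxC uint8 image."""
    out = image.copy()
    if rng.random() < 0.5:                    # random horizontal flip
        out = out[:, ::-1, :]
    k = int(rng.integers(0, 4))               # random rotation by k * 90 degrees
    out = np.rot90(out, k)
    noise = rng.normal(0.0, 5.0, out.shape)   # additive Gaussian pixel noise
    out = np.clip(out.astype(float) + noise, 0, 255).astype(np.uint8)
    return out

rng = np.random.default_rng(0)
img = np.zeros((32, 32, 3), dtype=np.uint8)
batch = [augment(img, rng) for _ in range(8)]  # expand one sample into eight variants
```

Real libraries compose many more such operators (crops, color jitter, elastic warps) behind a common pipeline API, which is exactly what the surveyed taxonomy organizes.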

3.
Sensors (Basel) ; 22(10)2022 May 18.
Article in English | MEDLINE | ID: mdl-35632241

ABSTRACT

In the last few years, Augmented Reality, Virtual Reality, and Artificial Intelligence (AI) have been increasingly employed in different application domains. Among them, the retail market offers the opportunity to let people check the appearance of accessories, makeup, hairstyles, hair color, and clothes on themselves by exploiting virtual try-on applications. In this paper, we propose an eyewear virtual try-on experience based on a framework that leverages advanced deep-learning-based computer vision techniques. The virtual try-on is performed on a 3D face reconstructed from a single input image. In designing our system, we started by studying the underlying architecture, components, and their interactions. Then, we assessed and compared existing face reconstruction approaches. To this end, we performed extensive analysis and experiments to evaluate their design, complexity, geometry reconstruction errors, and reconstructed texture quality. The experiments allowed us to select the most suitable approach for our proposed try-on framework. Our system considers actual glasses and face sizes to provide a realistic fit estimation using a markerless approach. The user interacts with the system through a web application optimized for desktop and mobile devices. Finally, we performed a usability study that showed an above-average usability score for our eyewear virtual try-on application.


Subject(s)
Augmented Reality , Virtual Reality , Artificial Intelligence , Humans , Software
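The geometry reconstruction errors compared in this entry are typically computed as a vertex-wise distance between the reconstructed and ground-truth face meshes. A minimal sketch of one such metric, a translation-invariant RMSE assuming a known vertex correspondence (the metric choice here is an assumption for illustration, not the paper's exact evaluation protocol):

```python
import numpy as np

def reconstruction_rmse(pred, gt):
    """Root-mean-square vertex error after removing translation.
    pred, gt: (N, 3) arrays of corresponding 3D vertices."""
    pred_c = pred - pred.mean(axis=0)   # center both meshes at their centroids
    gt_c = gt - gt.mean(axis=0)
    return float(np.sqrt(np.mean(np.sum((pred_c - gt_c) ** 2, axis=1))))

gt = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
pred = gt + np.array([0.1, 0.1, 0.1])   # a pure translation: error should vanish
err = reconstruction_rmse(pred, gt)
```

Published benchmarks often add a rigid (rotation + scale) alignment step before measuring, which this sketch omits for brevity.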
4.
Sensors (Basel) ; 21(22)2021 Nov 09.
Article in English | MEDLINE | ID: mdl-34833529

ABSTRACT

Smart mirrors are devices that can display any kind of information and can interact with the user through touch and voice commands. Different kinds of smart mirrors exist: general-purpose, medical, fashion, and other task-specific ones. General-purpose smart mirrors are suitable for home environments, but the existing ones offer similar, limited functionalities. In this paper, we present a general-purpose smart mirror that integrates several functionalities, standard and advanced, to support users in their everyday life. Among the advanced functionalities are the capability of detecting a person's emotions, short- and long-term monitoring and analysis of those emotions, a double authentication protocol to preserve privacy, and the integration of Alexa Skills to extend the applications of the smart mirror. We exploit deep learning techniques to develop most of the smart functionalities. The effectiveness of the device is demonstrated by the performance of the implemented functionalities and by a usability evaluation with real users.


Subject(s)
Emotions , Voice , Humans , Privacy
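The short- and long-term emotion monitoring described above can be sketched with two sliding averages over a per-frame emotion score. The window sizes and the scalar `score` representation (e.g. valence in [-1, 1]) are illustrative assumptions, not details from the paper:

```python
from collections import deque

class EmotionTrend:
    """Track short- and long-term averages of a per-frame emotion score."""
    def __init__(self, short_window=5, long_window=50):
        self.short = deque(maxlen=short_window)   # recent mood
        self.long = deque(maxlen=long_window)     # overall mood
    def update(self, score):
        self.short.append(score)
        self.long.append(score)
    def averages(self):
        return (sum(self.short) / len(self.short),
                sum(self.long) / len(self.long))

trend = EmotionTrend()
for s in [0.2, 0.4, 0.6, 0.8, 1.0, 1.0, 1.0]:   # hypothetical valence stream
    trend.update(s)
short_avg, long_avg = trend.averages()
```

Comparing the two averages is one simple way a mirror could flag a recent mood shift against the long-term baseline.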
5.
Data Brief ; 29: 105041, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31993461

ABSTRACT

This article presents a dataset with 4000 synthetic images portraying five 3D models from different viewpoints under varying lighting conditions. Depth of field and motion blur have also been used to generate realistic images. For each object, 8 scenes with different combinations of lighting, depth of field and motion blur are created and images are taken from 100 points of view. Data also includes information about camera intrinsic and extrinsic calibration parameters for each image as well as the ground truth geometry of the 3D models. The images were rendered using Blender. The aim of this dataset is to allow evaluation and comparison of different solutions for 3D reconstruction of objects starting from a set of images taken under different realistic acquisition setups.
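The camera intrinsic and extrinsic calibration parameters shipped with each image allow a standard pinhole projection from model space to pixels, which is the basic operation any 3D-reconstruction evaluation on this dataset relies on. A minimal sketch (the intrinsics values below are illustrative, not taken from the dataset):

```python
import numpy as np

def project(points, K, R, t):
    """Project world-space 3D points (N, 3) to pixel coordinates with a
    pinhole model: x = K [R | t] X, followed by perspective division."""
    cam = points @ R.T + t          # world -> camera coordinates (extrinsics)
    uv = cam @ K.T                  # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]   # perspective divide

K = np.array([[800.,   0., 320.],   # illustrative intrinsics: focal length
              [  0., 800., 240.],   # 800 px, principal point (320, 240)
              [  0.,   0.,   1.]])
R, t = np.eye(3), np.zeros(3)       # camera at the world origin
pix = project(np.array([[0., 0., 2.]]), K, R, t)  # a point on the optical axis
```

A point on the optical axis projects to the principal point, which makes this a handy sanity check when loading the dataset's calibration files.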

6.
Data Brief ; 23: 103700, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30828597

ABSTRACT

The two databases described here were generated to evaluate the role of affective content in assessing image quality (Corchs et al., 2018) [1]. The databases are composed of JPEG-compressed images together with the subjective quality scores collected during psychophysical experiments. To reduce interference in quality perception due to image semantics, we restricted the semantic content to close-ups of face images and considered only two emotion categories (happy and sad). We selected 23 high-quality images with happy faces and 23 with sad faces. As for image quality, we considered JPEG distortion with 4 levels of compression, corresponding to q-factors 10, 15, 20, and 30. The first image database, hereafter called MMSP-FaceA, is thus composed of 230 images: (23 + 23) × 5 quality levels (including the original high-quality pristine images). To better isolate interference in quality perception due to affective content, we generated a second image database in which the background of the images belonging to MMSP-FaceA has been cut off. This second image database is labelled MMSP-FaceB. Psychophysical experiments were conducted on a controlled web-based interface, where participants rated the image quality of the two databases on a five-point scale. The two final databases, MMSP-FaceA and MMSP-FaceB, are thus composed of 230 images each, together with the raw quality scores assigned by the observers, and are available at our laboratory web site: www.mmsp.unimib.it/download.
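Raw subjective ratings such as those distributed with MMSP-FaceA/B are commonly collapsed into a mean opinion score (MOS) per image. A minimal sketch of that aggregation (plain averaging is a common convention in quality assessment, not necessarily the authors' exact processing, which may include outlier rejection):

```python
import numpy as np

def mean_opinion_scores(raw):
    """Collapse raw five-point ratings, shaped (observers, images),
    into one mean opinion score (MOS) per image."""
    raw = np.asarray(raw, dtype=float)
    return raw.mean(axis=0)

# hypothetical ratings: 3 observers x 2 images on a 1-5 scale
raw = [[5, 2],
       [4, 1],
       [5, 3]]
mos = mean_opinion_scores(raw)
```

The resulting MOS vector is what objective quality metrics are then correlated against.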

7.
IEEE J Biomed Health Inform ; 21(3): 588-598, 2017 05.
Article in English | MEDLINE | ID: mdl-28114043

ABSTRACT

We propose a new dataset for the evaluation of food recognition algorithms that can be used in dietary monitoring applications. Each image depicts a real canteen tray with dishes and foods arranged in different ways. Each tray contains multiple instances of food classes. The dataset contains 1027 canteen trays for a total of 3616 food instances belonging to 73 food classes. The food on the tray images has been manually segmented using carefully drawn polygonal boundaries. We have benchmarked the dataset by designing an automatic tray analysis pipeline that takes a tray image as input, finds the regions of interest, and predicts the corresponding food class for each region. We experimented with three different classification strategies, also using several visual descriptors. We achieve about 79% food and tray recognition accuracy using convolutional-neural-network-based features. The dataset, as well as the benchmark framework, is available to the research community.


Subject(s)
Databases, Factual , Food/classification , Image Processing, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Algorithms , Humans , Neural Networks, Computer
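The classification stage of such a tray-analysis pipeline assigns each segmented region a food class from its feature vector. A minimal stand-in using a nearest-centroid rule over toy 2D descriptors (the classifier, features, and class names here are illustrative assumptions, not the paper's CNN-based strategies):

```python
import numpy as np

def nearest_centroid_predict(train_feats, train_labels, test_feats):
    """Assign each test feature vector the class of the nearest class centroid."""
    labels = np.array(train_labels)
    classes = sorted(set(train_labels))
    centroids = np.stack([train_feats[labels == c].mean(axis=0) for c in classes])
    # pairwise distances: (n_test, n_classes)
    d = np.linalg.norm(test_feats[:, None, :] - centroids[None, :, :], axis=2)
    return [classes[i] for i in d.argmin(axis=1)]

# toy "descriptors" for two hypothetical food classes
train = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.0, 0.9]])
labels = ["pasta", "pasta", "salad", "salad"]
pred = nearest_centroid_predict(train, labels,
                                np.array([[0.05, 0.0], [0.9, 1.0]]))
```

In the actual benchmark the descriptors would be CNN features extracted per region, with stronger classifiers replacing the centroid rule.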
8.
PLoS One ; 11(6): e0157986, 2016.
Article in English | MEDLINE | ID: mdl-27336469

ABSTRACT

The aim of this work is to predict the complexity perception of real-world images. We propose a new complexity measure in which different image features, based on spatial, frequency, and color properties, are linearly combined. To find the optimal set of weighting coefficients, we applied Particle Swarm Optimization. The optimal linear combination is the one that best fits the subjective data obtained in an experiment where observers evaluated the complexity of real-world scenes on a web-based interface. To test the proposed complexity measure, we performed a second experiment on a different database of real-world scenes, where the previously obtained linear combination is correlated with the new subjective data. Our complexity measure outperforms not only each single visual feature but also two visual clutter measures frequently used in the literature to predict image complexity. To analyze the usefulness of our proposal, we also considered two different sets of stimuli composed of real texture images. Tuning the parameters of our measure for this kind of stimuli, we obtained a linear combination that still outperforms the single measures. In conclusion, our measure, properly tuned, can predict the complexity perception of different kinds of images.


Subject(s)
Models, Theoretical , Visual Perception , Adult , Aged , Algorithms , Female , Humans , Male , Middle Aged , Photic Stimulation , Young Adult
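The weight-fitting step can be sketched with a minimal global-best PSO that minimizes the squared error between the linear feature combination and the subjective scores. All hyperparameters below (swarm size, inertia, acceleration coefficients, iteration count) are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def pso_fit(features, targets, n_particles=30, iters=200, seed=0):
    """Fit weights w so that features @ w approximates the target scores,
    using a minimal global-best particle swarm."""
    rng = np.random.default_rng(seed)
    dim = features.shape[1]
    cost = lambda w: np.mean((features @ w - targets) ** 2)
    pos = rng.uniform(-1, 1, (n_particles, dim))      # particle positions
    vel = np.zeros_like(pos)
    pbest = pos.copy()                                # personal bests
    pbest_cost = np.array([cost(w) for w in pos])
    gbest = pbest[pbest_cost.argmin()].copy()         # global best
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        costs = np.array([cost(w) for w in pos])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest

# toy data: subjective scores are an exact linear mix of two features
X = np.array([[1., 0.], [0., 1.], [1., 1.], [2., 1.]])
y = X @ np.array([0.3, 0.7])
w = pso_fit(X, y)
```

With real data the objective would instead be a correlation with observer scores, but the swarm update itself is the same.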
9.
IEEE Trans Image Process ; 17(12): 2381-92, 2008 Dec.
Article in English | MEDLINE | ID: mdl-19004710

ABSTRACT

In this work, we investigate how illuminant estimation techniques can be improved by taking into account automatically extracted information about the content of the images. We considered indoor/outdoor classification because images of these classes present different content and are usually taken under different illumination conditions. We designed different strategies for the selection and tuning of the most appropriate algorithm (or combination of algorithms) for each class. We also considered the adoption of an uncertainty class, which corresponds to images for which the indoor/outdoor classifier is not confident enough. The illuminant estimation algorithms considered here are derived from the framework recently proposed by Van de Weijer and Gevers. We present a procedure to automatically tune the algorithms' parameters. We tested the proposed strategies on a suitable subset of the widely used Funt and Ciurea dataset. Experimental results clearly demonstrate that classification-based strategies outperform general-purpose algorithms.


Subject(s)
Algorithms , Artificial Intelligence , Color , Colorimetry/instrumentation , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Reproducibility of Results , Sensitivity and Specificity
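The Van de Weijer-Gevers framework expresses illuminant estimates as Minkowski norms over (derivatives of) the image. A minimal sketch of its zeroth-order instance, the Shades-of-Grey family, where p = 1 gives Grey-World and large p approaches Max-RGB (the choice p = 6 is a commonly reported setting, not necessarily one of the per-class tunings from the paper):

```python
import numpy as np

def estimate_illuminant(image, p=6):
    """Minkowski-norm illuminant estimate over an HxWx3 image:
    e_c proportional to (mean over pixels of f_c ** p) ** (1 / p)."""
    flat = image.reshape(-1, 3).astype(float)
    e = np.power(np.mean(np.power(flat, p), axis=0), 1.0 / p)
    return e / np.linalg.norm(e)     # unit-length illuminant colour

# a uniform grey scene under a reddish illuminant
img = np.ones((8, 8, 3)) * np.array([200., 100., 100.])
est = estimate_illuminant(img)
```

The classification-based strategies in the paper would pick p (and the derivative order and smoothing of the full framework) differently for indoor versus outdoor images.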