Results 1 - 11 of 11
1.
J Opt Soc Am A Opt Image Sci Vis ; 39(6): B1-B10, 2022 Jun 01.
Article in English | MEDLINE | ID: mdl-36215522

ABSTRACT

Blind image quality assessment (BIQA) of authentically distorted images is a challenging problem due to the lack of a reference image and the coexistence of blends of distortions with unknown characteristics. In this article, we present a convolutional neural network-based BIQA model. It encodes the input image into multi-level features to estimate the perceptual quality score. The proposed model is designed to predict the image quality score, but it is trained by jointly treating image quality assessment as a classification, regression, and pairwise ranking problem. Experimental results on three different datasets of authentically distorted images show that the proposed method achieves results comparable to state-of-the-art methods in intra-dataset experiments and is more effective in cross-dataset experiments.
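
A minimal illustrative sketch (not the authors' code) of the joint training idea described above: a CNN backbone encodes the image, and the training objective combines a regression loss on the quality score, a classification loss on coarse quality bins, and a pairwise ranking loss between two images. The ResNet-18 backbone, the number of bins, and the margin value are assumptions made for illustration only.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class MultiTaskBIQA(nn.Module):
    def __init__(self, num_bins=5):
        super().__init__()
        backbone = models.resnet18(weights=None)        # stand-in feature extractor
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        self.score_head = nn.Linear(512, 1)             # regression: quality score
        self.class_head = nn.Linear(512, num_bins)      # classification: quality bins

    def forward(self, x):
        f = self.features(x).flatten(1)
        return self.score_head(f).squeeze(1), self.class_head(f)

def joint_loss(model, img_a, img_b, mos_a, mos_b, bin_a, bin_b, margin=0.1):
    sa, ca = model(img_a)
    sb, cb = model(img_b)
    reg = nn.functional.mse_loss(sa, mos_a) + nn.functional.mse_loss(sb, mos_b)
    cls = nn.functional.cross_entropy(ca, bin_a) + nn.functional.cross_entropy(cb, bin_b)
    # ranking: the image with the higher subjective score should get the higher prediction
    target = torch.sign(mos_a - mos_b)
    rank = nn.functional.margin_ranking_loss(sa, sb, target, margin=margin)
    return reg + cls + rank
```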


Subject(s)
Algorithms; Image Processing, Computer-Assisted; Image Processing, Computer-Assisted/methods; Neural Networks, Computer
2.
IEEE Trans Image Process ; 31: 5009-5024, 2022.
Article in English | MEDLINE | ID: mdl-35867369

ABSTRACT

The aesthetic quality of an image is defined as the measure or appreciation of the beauty of an image. Aesthetics is inherently a subjective property, but certain factors influence it, such as the semantic content of the image, the attributes describing its artistic aspect, and the photographic setup used for the shot. In this paper, we propose a method for the automatic prediction of the aesthetics of an image based on the analysis of its semantic content, artistic style, and composition. The proposed network includes: a pre-trained network for semantic feature extraction (the Backbone); a Multi-Layer Perceptron (MLP) network that relies on the Backbone features to predict image attributes (the AttributeNet); and a self-adaptive Hypernetwork that exploits the attribute priors encoded in the embedding generated by the AttributeNet to predict the parameters of the target network dedicated to aesthetic estimation (the AestheticNet). Given an image, the proposed multi-network predicts style and composition attributes as well as an aesthetic score distribution. Results on three benchmark datasets demonstrate the effectiveness of the proposed method, while the ablation study gives a better understanding of the proposed network.
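
An illustrative sketch (not the authors' implementation) of the hypernetwork idea: the attribute embedding produced by an AttributeNet-like module is mapped to the weights and bias of a small target layer that turns backbone features into an aesthetic score distribution. All dimensions and names are placeholders.

```python
import torch
import torch.nn as nn

class HyperAestheticHead(nn.Module):
    def __init__(self, feat_dim=512, attr_dim=64, num_score_bins=10):
        super().__init__()
        self.feat_dim = feat_dim
        self.num_score_bins = num_score_bins
        # hypernetwork: attribute embedding -> parameters of the aesthetic layer
        self.hyper = nn.Linear(attr_dim, feat_dim * num_score_bins + num_score_bins)

    def forward(self, backbone_feat, attr_embedding):
        params = self.hyper(attr_embedding)
        w = params[:, : self.feat_dim * self.num_score_bins]
        b = params[:, self.feat_dim * self.num_score_bins :]
        w = w.view(-1, self.num_score_bins, self.feat_dim)          # per-sample weights
        logits = torch.bmm(w, backbone_feat.unsqueeze(2)).squeeze(2) + b
        return logits.softmax(dim=1)                                 # aesthetic score distribution
```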

3.
Sensors (Basel) ; 21(22)2021 Nov 09.
Article in English | MEDLINE | ID: mdl-34833529

ABSTRACT

Smart mirrors are devices that can display any kind of information and can interact with the user through touch and voice commands. Different kinds of smart mirrors exist: general-purpose, medical, fashion, and other task-specific ones. General-purpose smart mirrors are suitable for home environments, but the existing ones offer similar and limited functionalities. In this paper, we present a general-purpose smart mirror that integrates several functionalities, standard and advanced, to support users in their everyday life. The advanced functionalities include the capability of detecting a person's emotions, the short- and long-term monitoring and analysis of these emotions, a double authentication protocol to preserve privacy, and the integration of Alexa Skills to extend the applications of the smart mirror. We exploit deep learning techniques to develop most of the smart functionalities. The effectiveness of the device is demonstrated by the performance of the implemented functionalities and by a usability evaluation with real users.


Subject(s)
Emotions; Voice; Humans; Privacy
4.
J Imaging ; 7(3)2021 Mar 13.
Article in English | MEDLINE | ID: mdl-34460711

ABSTRACT

Methods for No-Reference Video Quality Assessment (NR-VQA) of consumer-produced video content are widely investigated, owing to the spread of databases containing videos affected by natural distortions. In this work, we design an effective and efficient method for NR-VQA. The proposed method exploits a novel sampling module capable of selecting a predetermined number of frames from the whole video sequence on which to base the quality assessment. It encodes both the quality attributes and the semantic content of the video frames using two lightweight Convolutional Neural Networks (CNNs). Then, it estimates the quality score of the entire video using a Support Vector Regressor (SVR). We compare the proposed method against several relevant state-of-the-art methods using four benchmark databases containing user-generated videos (CVD2014, KoNViD-1k, LIVE-Qualcomm, and LIVE-VQC). The results show that, at a substantially lower computational cost, the proposed method predicts subjective video quality in line with state-of-the-art methods on individual databases and generalizes better than existing methods in a cross-database setup.
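
A minimal sketch of the pipeline described above, under the assumption of evenly spaced frame sampling, two MobileNetV2 feature branches as stand-ins for the quality and semantic CNNs, average pooling over frames, and an RBF SVR on the pooled descriptor; none of these specific choices is taken from the paper.

```python
import numpy as np
import torch
import torchvision.models as models
from sklearn.svm import SVR

quality_cnn = models.mobilenet_v2(weights=None).features    # stand-in quality branch
semantic_cnn = models.mobilenet_v2(weights=None).features   # stand-in semantic branch

def video_descriptor(frames, num_samples=8):
    # frames: tensor (T, 3, H, W); keep only num_samples evenly spaced frames
    idx = torch.linspace(0, frames.shape[0] - 1, num_samples).long()
    x = frames[idx]
    with torch.no_grad():
        q = quality_cnn(x).mean(dim=(2, 3))      # (num_samples, Cq) quality features
        s = semantic_cnn(x).mean(dim=(2, 3))     # (num_samples, Cs) semantic features
    f = torch.cat([q, s], dim=1)                 # per-frame features
    return f.mean(dim=0).numpy()                 # average pooling over frames

# usage sketch: X_train is a list of frame tensors, y_train the subjective scores (MOS)
# svr = SVR(kernel="rbf").fit(np.stack([video_descriptor(v) for v in X_train]), y_train)
```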

5.
Sensors (Basel) ; 21(4)2021 Feb 12.
Article in English | MEDLINE | ID: mdl-33673052

ABSTRACT

The automatic assessment of the aesthetic quality of a photo is a challenging and extensively studied problem. Most existing works focus on the aesthetic quality assessment of photos regardless of the depicted subject and mainly use features extracted from the entire image. It has been observed that the performance of generic-content aesthetic assessment methods decreases significantly when applied to images depicting faces. This paper introduces a method for evaluating the aesthetic quality of images with faces by encoding both the properties of the entire image and specific aspects of the face. Three different convolutional neural networks are exploited to encode information regarding perceptual quality, global image aesthetics, and facial attributes; a model is then trained to combine these features to explicitly predict the aesthetics of images containing faces. Experimental results show that our approach outperforms state-of-the-art methods for both binary (i.e., low/high) and continuous aesthetic score prediction on four different image databases.
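
An illustrative sketch (not the authors' code) of the feature-combination step: features from three frozen encoders (perceptual quality, global aesthetics, facial attributes) are concatenated and fed to a small regressor that predicts the aesthetic score. The encoders, feature dimensions, and the choice of a face crop as the input of the attribute branch are assumptions for illustration.

```python
import torch
import torch.nn as nn

class FaceAestheticRegressor(nn.Module):
    def __init__(self, quality_enc, aesthetic_enc, face_enc, dims=(512, 512, 256)):
        super().__init__()
        self.encoders = nn.ModuleList([quality_enc, aesthetic_enc, face_enc])
        self.regressor = nn.Sequential(
            nn.Linear(sum(dims), 256), nn.ReLU(), nn.Linear(256, 1)
        )

    def forward(self, image, face_crop):
        with torch.no_grad():                        # encoders are pre-trained and frozen
            f = [self.encoders[0](image),            # perceptual quality of the whole image
                 self.encoders[1](image),            # global image aesthetics
                 self.encoders[2](face_crop)]        # facial attributes of the face region
        return self.regressor(torch.cat(f, dim=1)).squeeze(1)
```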

6.
J Imaging ; 6(8)2020 Jul 30.
Article in English | MEDLINE | ID: mdl-34460689

ABSTRACT

We introduce a no-reference method for assessing the quality of videos affected by in-capture distortions due to camera hardware and processing software. The proposed method encodes both the quality attributes and the semantic content of each video frame using two Convolutional Neural Networks (CNNs) and then estimates the quality score of the whole video using a Recurrent Neural Network (RNN), which models the temporal information. Extensive experiments conducted on four benchmark databases containing in-capture distortions (CVD2014, KoNViD-1k, LIVE-Qualcomm, and LIVE-VQC) demonstrate the effectiveness of the proposed method and its ability to generalize in a cross-database setup.
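
A minimal sketch of the temporal-modeling step, assuming per-frame features have already been extracted by the two CNNs and that a GRU stands in for the recurrent network; the feature and hidden dimensions are placeholders, not values from the paper.

```python
import torch
import torch.nn as nn

class CnnRnnVQA(nn.Module):
    def __init__(self, frame_feat_dim=1280, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(frame_feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, frame_feats):
        # frame_feats: (batch, num_frames, frame_feat_dim), already extracted by the CNNs
        _, h = self.rnn(frame_feats)
        return self.head(h[-1]).squeeze(1)   # one quality score per video
```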

7.
Sensors (Basel) ; 18(8)2018 Aug 14.
Article in English | MEDLINE | ID: mdl-30110891

ABSTRACT

We present a multi-task learning-based convolutional neural network (MTL-CNN) able to estimate multiple tags describing face images simultaneously. In total, the model can estimate up to 74 different face attributes belonging to three distinct recognition tasks: age group, gender, and visual attributes (such as hair color, face shape, and the presence of makeup). The proposed model shares all the CNN parameters among tasks and handles task-specific estimation through two components: (i) a gating mechanism that controls the sharing of activations and adaptively routes them across the different face attributes; (ii) a module that post-processes the predictions to take into account the correlations among face attributes. The model is trained by fusing multiple databases, to increase the number of face attributes that can be estimated, and by using a center loss to disentangle the representations of the face attributes in the embedding space. Extensive experiments validate the effectiveness of the proposed approach.
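
An illustrative sketch (not the authors' architecture) of one way such a gating mechanism can route shared activations to task-specific heads: a learned sigmoid gate modulates the shared CNN features per task before each head. The number of classes per task is a placeholder.

```python
import torch
import torch.nn as nn

class GatedMultiTaskHeads(nn.Module):
    def __init__(self, feat_dim=512, task_classes=(4, 2, 68)):  # age groups, gender, visual attributes (placeholders)
        super().__init__()
        self.gates = nn.ModuleList([nn.Linear(feat_dim, feat_dim) for _ in task_classes])
        self.heads = nn.ModuleList([nn.Linear(feat_dim, c) for c in task_classes])

    def forward(self, shared_feat):
        out = []
        for gate, head in zip(self.gates, self.heads):
            g = torch.sigmoid(gate(shared_feat))    # per-task gating of the shared activations
            out.append(head(shared_feat * g))
        return out                                   # one logit tensor per task
```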


Subject(s)
Face; Neural Networks, Computer; Databases, Factual; Deep Learning; Face/anatomy & histology
8.
Rev Sci Instrum ; 87(2): 02A510, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26931918

ABSTRACT

An experimental campaign aiming to investigate electron cyclotron resonance (ECR) plasma X-ray emission has recently been carried out at the ECRIS (Electron Cyclotron Resonance Ion Sources) laboratory of Atomki, as part of a collaboration between the Debrecen and Catania ECR teams. In a first series of measurements, X-ray spectroscopy was performed with silicon drift detectors and high-purity germanium detectors, characterizing the volumetric plasma emission. A purpose-built collimation system allowed direct plasma density evaluation, performed on-line during beam extraction and charge-state distribution characterization. A campaign correlating the plasma density and temperature with the output charge states and the beam intensity for different pumping wave frequencies, different magnetic field profiles, and single-gas/gas-mixing configurations was carried out. The results reveal a surprisingly good agreement between warm-electron density fluctuations, output beam currents, and the calculated electromagnetic modal density of the plasma chamber. A charge-coupled device (CCD) camera coupled to a small pin-hole for X-ray imaging was also installed, and numerous X-ray images were taken in order to study the peculiarities of the ECRIS plasma structure.

9.
Rev Sci Instrum ; 87(2): 02B904, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26932076

ABSTRACT

A cheap and efficient diagnostic system for beam monitoring has recently been developed at INFN-LNS in Catania. It consists of a high-sensitivity CCD camera, which detects the light produced by an ion beam hitting the surface of a scintillating screen, and a frame grabber for image acquisition. A scintillating screen, developed at INFN-LNS and consisting of a 2 µm BaF2 layer evaporated on an aluminium plate, has been tested using ²⁰Ne and ⁴⁰Ar beams in the keV energy range. The CAESAR ECR ion source has been used to investigate the influence of frequency and magnetic field tuning, of the injected microwave power, and of the focusing solenoids along the low-energy beam transport line on the beam shape and current. These tests will allow a better understanding of the interplay between plasma and beam dynamics and, moreover, will help improve the transport efficiency along the low-energy beam line and the matching with the superconducting cyclotron, which is particularly relevant in view of the expected upgrade of the machine.

10.
Rev Sci Instrum ; 87(2): 02B909, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26932081

ABSTRACT

The development of Electron Cyclotron Resonance Ion Sources (ECRISs) is strictly related to the availability of new diagnostic tools, as the existing ones are not adequate for such compact machines and their plasma characteristics. Microwave interferometry is a non-invasive method for plasma diagnostics and represents the best candidate for plasma density measurement in a hostile environment. Interferometry in ECRISs is a challenging task, mainly because of their compact size. The typical density of ECR plasmas is in the range 10¹¹-10¹³ cm⁻³, which requires a probing beam wavelength of the order of a few centimetres, comparable to the chamber radius. The paper describes the design of a microwave interferometer developed at the LNS-INFN laboratories, based on the so-called "frequency sweep" method to filter out the multipath contribution in the detected signals. The measurement technique and the preliminary (calibration) results obtained during the experimental tests are presented.
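
As a back-of-the-envelope check (not taken from the paper), the centimetre-scale probing wavelength follows from the requirement that the probing frequency exceed the plasma frequency; for a representative density of $n_e = 10^{12}\,\mathrm{cm^{-3}}$:

\[
  f_p = \frac{1}{2\pi}\sqrt{\frac{n_e e^2}{\varepsilon_0 m_e}}
      \approx 8.98\,\mathrm{kHz}\times\sqrt{n_e\,[\mathrm{cm^{-3}}]}
      \approx 9\,\mathrm{GHz}
  \quad\Rightarrow\quad
  \lambda = \frac{c}{f_p} \approx 3.3\,\mathrm{cm},
\]

which is indeed comparable to the plasma chamber radius of a compact ECRIS.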

11.
Rev Sci Instrum ; 85(2): 02A742, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24593476

ABSTRACT

The Catania VIS 2.46 GHz source has been installed on a test stand at Best Cyclotron Systems in Vancouver, Canada, as part of the DAEδALUS and IsoDAR R&D program. Studies to date include optimization of the H2+/p ratio and emittance measurements. Inflection, capture, and acceleration tests will be conducted when a small test cyclotron is completed.
