Results 1 - 20 of 21
1.
Comput Methods Programs Biomed ; 234: 107504, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37004267

ABSTRACT

BACKGROUND AND OBJECTIVE: The functions of an organism and its biological processes result from the expression of genes and proteins. Therefore, quantifying and predicting mRNA and protein levels is a crucial aspect of scientific research. Concerning the prediction of mRNA levels, the available approaches use the sequence upstream and downstream of the Transcription Start Site (TSS) as input to neural networks. State-of-the-art models (e.g., Xpresso and Basenji) predict mRNA levels using Convolutional (CNN) or Long Short-Term Memory (LSTM) networks. However, CNN predictions depend on the convolutional kernel size, and LSTMs struggle to capture long-range dependencies in the sequence. Concerning the prediction of protein levels, as far as we know, there is no model that predicts protein levels from gene or protein sequences. METHODS: Here, we exploit a new model type (called Perceiver) for mRNA and protein level prediction, using a Transformer-based architecture with an attention module to attend to long-range interactions in the sequences. In addition, the Perceiver model overcomes the quadratic complexity of standard Transformer architectures. This work's contributions are: (1) the DNAPerceiver model, which predicts mRNA levels from the sequence upstream and downstream of the TSS; (2) the ProteinPerceiver model, which predicts protein levels from the protein sequence; (3) the Protein&DNAPerceiver model, which predicts protein levels from TSS and protein sequences. RESULTS: The models are evaluated on cell lines, mice, glioblastoma, and lung cancer tissues. The results show the effectiveness of Perceiver-type models in predicting mRNA and protein levels. CONCLUSIONS: This paper presents a Perceiver architecture for mRNA and protein level prediction. In the future, adding regulatory and epigenetic information to the model could improve mRNA and protein level predictions. The source code is freely available at https://github.com/MatteoStefanini/DNAPerceiver.
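
The key idea behind the Perceiver's sub-quadratic complexity is a small set of latent vectors that cross-attend to the long input sequence, so the attention cost grows linearly with sequence length. The following is a minimal NumPy sketch of that bottleneck; all names and shapes are illustrative and not taken from the paper's code.

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def perceiver_cross_attention(latents, tokens, Wq, Wk, Wv):
    """N latents attend to M sequence tokens: cost is O(N*M), not O(M^2)."""
    q, k, v = latents @ Wq, tokens @ Wk, tokens @ Wv
    scores = q @ k.T / np.sqrt(q.shape[-1])   # (N, M) attention logits
    return softmax(scores) @ v                # (N, d) compressed summary

rng = np.random.default_rng(0)
d, N, M = 32, 16, 2000                        # latent dim, latents, tokens
Ws = [rng.normal(size=(d, d)) * 0.1 for _ in range(3)]
out = perceiver_cross_attention(rng.normal(size=(N, d)),
                                rng.normal(size=(M, d)), *Ws)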


Subject(s)
DNA , Neural Networks, Computer , Animals , Mice , Algorithms , Proteins/genetics , RNA, Messenger/genetics
2.
Sensors (Basel) ; 23(3)2023 Jan 23.
Article in English | MEDLINE | ID: mdl-36772326

ABSTRACT

Research related to the fashion and e-commerce domains is gaining attention in the computer vision and multimedia communities. Following this trend, this article tackles the task of generating fine-grained and accurate natural language descriptions of fashion items, a recently proposed and under-explored challenge that is still far from being solved. To overcome the limitations of previous approaches, a transformer-based captioning model was designed with the integration of an external textual memory that can be accessed through k-nearest-neighbor (kNN) searches. From an architectural point of view, the proposed transformer model can read and retrieve items from the external memory through cross-attention operations, and tune the flow of information coming from the external memory thanks to a novel fully attentive gate. Experimental analyses were carried out on the fashion captioning dataset (FACAD), which contains more than 130k fine-grained descriptions, validating the effectiveness of the proposed approach and architectural strategies in comparison with carefully designed baselines and state-of-the-art approaches. The presented method consistently outperforms all compared approaches, demonstrating its effectiveness for fashion image captioning.
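
In a retrieval-augmented decoder of this kind, each generation step can look up the memory entries closest to the current state and gate how much retrieved information enters the hidden representation. A minimal NumPy sketch of the two pieces, kNN lookup and a sigmoid gate, follows; the mean-pooling of retrieved entries and the single gate matrix Wg are illustrative assumptions, not the paper's exact design.

import numpy as np

def knn_retrieve(query, memory_keys, memory_values, k=5):
    """Return the k memory values whose keys are closest to the query."""
    dist = np.linalg.norm(memory_keys - query, axis=1)
    return memory_values[np.argsort(dist)[:k]]

def attentive_gate(hidden, retrieved, Wg):
    """Blend the decoder state with retrieved memory via a learned gate.
    Mean-pooling and the single gate matrix Wg are simplifications."""
    ctx = retrieved.mean(axis=0)
    g = 1.0 / (1.0 + np.exp(-(np.concatenate([hidden, ctx]) @ Wg)))
    return g * hidden + (1.0 - g) * ctx

rng = np.random.default_rng(0)
d, n = 8, 100
keys, vals = rng.normal(size=(n, d)), rng.normal(size=(n, d))
h = rng.normal(size=d)
fused = attentive_gate(h, knn_retrieve(h, keys, vals),
                       rng.normal(size=(2 * d, d)))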

3.
IEEE Trans Pattern Anal Mach Intell ; 45(1): 539-559, 2023 01.
Article in English | MEDLINE | ID: mdl-35130142

ABSTRACT

Connecting Vision and Language plays an essential role in Generative Intelligence. For this reason, large research efforts have been devoted to image captioning, i.e., describing images with syntactically and semantically meaningful sentences. Starting from 2015, the task has generally been addressed with pipelines composed of a visual encoder and a language model for text generation. During these years, both components have evolved considerably through the exploitation of object regions and attributes, the introduction of multi-modal connections, fully-attentive approaches, and BERT-like early-fusion strategies. However, despite the impressive results, research in image captioning has not reached a conclusive answer yet. This work aims to provide a comprehensive overview of image captioning approaches, from visual encoding and text generation to training strategies, datasets, and evaluation metrics. In this respect, we quantitatively compare many relevant state-of-the-art approaches to identify the most impactful technical innovations in architectures and training strategies. Moreover, many variants of the problem and its open challenges are discussed. The final goal of this work is to serve as a tool for understanding the existing literature and highlighting future directions for a research area where Computer Vision and Natural Language Processing can find an optimal synergy.
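
At inference time, the encoder-decoder pipeline the survey describes reduces to conditioning a language model on a visual encoding and emitting tokens until an end marker. A toy NumPy sketch of greedy decoding under that scheme follows; the single-layer recurrence and all weights are stand-ins for whatever encoder and language model a concrete captioner uses.

import numpy as np

rng = np.random.default_rng(0)
V, d = 10, 16                        # toy vocabulary size and state size
E = rng.normal(size=(V, d)) * 0.1    # token embeddings
Wx, Wh = rng.normal(size=(d, d)) * 0.1, rng.normal(size=(d, d)) * 0.1
Wo = rng.normal(size=(d, V)) * 0.1   # projection to vocabulary logits

def greedy_caption(image_feat, bos=0, eos=1, max_len=10):
    """Greedy decoding: a recurrent language model conditioned on a
    visual encoding emits the most likely token at each step."""
    h, tok, out = np.tanh(image_feat), bos, []
    for _ in range(max_len):
        h = np.tanh(E[tok] @ Wx + h @ Wh)   # one decoder step
        tok = int(np.argmax(h @ Wo))        # most likely next token
        if tok == eos:
            break
        out.append(tok)
    return out

print(greedy_caption(rng.normal(size=d)))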


Subject(s)
Deep Learning , Algorithms , Benchmarking , Language , Natural Language Processing
4.
Rev Med Suisse ; 18(797): 1788-1791, 2022 Sep 28.
Article in French | MEDLINE | ID: mdl-36170130

ABSTRACT

According to the latest recommendations, there is no longer a place for the use of short-acting beta-2 agonists alone in the treatment of chronic asthma, due to an increased risk of severe exacerbations and exacerbation-related mortality. Current management of asthma is based on the use of inhaled corticosteroids in combination with formoterol as both maintenance and rescue treatment, thanks to the rapid and prolonged action of formoterol. General practitioners must evaluate, in collaboration with the patient, the treatable factors linked to poor asthma control. They should provide patients with a written treatment plan to help them recognize and manage asthma exacerbations. Referral to a pulmonary specialist is currently reserved for advanced stages of the disease and cases of diagnostic doubt.


Subject(s)
Anti-Asthmatic Agents , Asthma , Administration, Inhalation , Adrenal Cortex Hormones/therapeutic use , Anti-Asthmatic Agents/therapeutic use , Asthma/diagnosis , Asthma/drug therapy , Bronchodilator Agents/therapeutic use , Ethanolamines/therapeutic use , Formoterol Fumarate/therapeutic use , Humans
5.
IEEE Trans Pattern Anal Mach Intell ; 44(4): 2216-2227, 2022 Apr.
Article in English | MEDLINE | ID: mdl-33048673

ABSTRACT

In this article, we introduce a new self-supervised, semi-parametric approach for synthesizing novel views of a vehicle starting from a single monocular image. Unlike parametric (i.e., entirely learning-based) methods, we show how a-priori geometric knowledge about the object and the 3D world can be successfully integrated into a deep-learning-based image generation framework. As this geometric component is not learnt, we call our approach semi-parametric. In particular, we exploit man-made object symmetry and piece-wise planarity to integrate rich a-priori visual information into the novel viewpoint synthesis process. An Image Completion Network (ICN) is then trained to generate a realistic image starting from this geometric guidance. This careful blend of parametric and non-parametric components allows us to i) operate in a real-world scenario, ii) preserve high-frequency visual information such as textures, iii) handle truly arbitrary 3D roto-translations of the input, and iv) perform shape transfer to completely different 3D models. Finally, we show that our approach can be easily complemented with synthetic data and extended to other rigid objects with completely different topology, even in the presence of concave structures and holes (e.g., chairs). A comprehensive experimental analysis against state-of-the-art competitors shows the efficacy of our method from both a quantitative and a perceptual point of view. Supplementary material, animated results, code, and data are available at: https://github.com/ndrplz/semiparametric.
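
Piece-wise planarity is what makes the geometric guidance cheap: pixels lying on a plane map between views through a single 3x3 homography. A minimal NumPy sketch of inverse-warping a planar patch is given below, assuming H is already known; the real pipeline must also estimate the planes and inpaint what the warp cannot explain (the ICN's job).

import numpy as np

def warp_planar_patch(img, H, out_shape):
    """Inverse-warp a planar region: for each output pixel, map back
    through H^-1 and sample the source with nearest-neighbor lookup."""
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])  # homogeneous
    src = np.linalg.inv(H) @ pts
    sx, sy = (src[:2] / src[2]).round().astype(int)
    ok = (0 <= sx) & (sx < img.shape[1]) & (0 <= sy) & (sy < img.shape[0])
    out = np.zeros(out_shape, dtype=img.dtype)
    out[ys.ravel()[ok], xs.ravel()[ok]] = img[sy[ok], sx[ok]]
    return out

shift = np.array([[1.0, 0, 20], [0, 1.0, 0], [0, 0, 1.0]])  # pure translation
patch = warp_planar_patch(np.eye(64), shift, (64, 64))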

6.
Neural Netw ; 144: 334-341, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34547671

ABSTRACT

Recurrent Neural Networks with Long Short-Term Memory (LSTM) make use of gating mechanisms to mitigate exploding and vanishing gradients when learning long-term dependencies. For this reason, LSTMs and other gated RNNs are widely adopted, being the de facto standard for many sequence modeling tasks. Although the memory cell inside the LSTM contains essential information, it is not allowed to influence the gating mechanism directly. In this work, we improve the gates by including information coming from the internal cell state. The proposed modification, named Working Memory Connection, consists of adding a learnable nonlinear projection of the cell content into the network gates. This modification fits into the classical LSTM gates without any assumption on the underlying task, and is particularly effective when dealing with longer sequences. Previous research efforts in this direction, which go back to the early 2000s, could not bring a consistent improvement over the vanilla LSTM. As part of this paper, we identify a key issue tied to previous connections that heavily limits their effectiveness, hence preventing a successful integration of the knowledge coming from the internal cell state. We show through extensive experimental evaluation that Working Memory Connections consistently improve the performance of LSTMs on a variety of tasks. Numerical results suggest that the cell state contains useful information that is worth including in the gate structure.
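
Concretely, a Working Memory Connection adds a learnable nonlinear projection of the cell state to each gate's pre-activation. A single-step NumPy sketch follows; the shared projection matrix P and the peephole-style choice of which gates see the old versus the updated cell are assumptions for illustration, not the paper's exact wiring.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step_wmc(x, h, c, W, U, P, b):
    """One LSTM step with a Working-Memory-style connection: gates
    receive tanh(P @ cell) on top of the usual input/recurrent terms."""
    zi, zf, zo, zg = np.split(W @ x + U @ h + b, 4)
    m = np.tanh(P @ c)                       # projected old cell content
    i, f = sigmoid(zi + m), sigmoid(zf + m)
    c_new = f * c + i * np.tanh(zg)
    o = sigmoid(zo + np.tanh(P @ c_new))     # output gate sees new cell
    return o * np.tanh(c_new), c_new

rng = np.random.default_rng(0)
dx, dh = 5, 8
h, c = lstm_step_wmc(rng.normal(size=dx), np.zeros(dh), np.zeros(dh),
                     rng.normal(size=(4 * dh, dx)) * 0.1,
                     rng.normal(size=(4 * dh, dh)) * 0.1,
                     rng.normal(size=(dh, dh)) * 0.1,
                     np.zeros(4 * dh))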


Subject(s)
Memory, Long-Term , Memory, Short-Term , Knowledge , Learning , Neural Networks, Computer
7.
Sensors (Basel) ; 21(3)2021 Jan 31.
Article in English | MEDLINE | ID: mdl-33572608

ABSTRACT

Nowadays, we are witnessing the wide diffusion of active depth sensors. However, the generalization capabilities and performance of deep face recognition approaches based on depth data are hindered by the differences between sensor technologies and by the currently available depth-based datasets, which are limited in size and acquired with a single device. In this paper, we present an analysis of the use of depth maps, as obtained by active depth sensors, and deep neural architectures for the face recognition task. We compare different depth data representations (depth and normal images, voxels, point clouds), deep models (two-dimensional and three-dimensional Convolutional Neural Networks, PointNet-based networks), and pre-processing and normalization techniques in order to determine the configuration that maximizes recognition accuracy and generalizes best to unseen data and novel acquisition settings. Extensive intra- and cross-dataset experiments, performed on four public databases, suggest that representations and methods based on normal images and point clouds perform and generalize better than the other 2D and 3D alternatives. Moreover, we propose a novel challenging dataset, namely MultiSFace, to specifically analyze the influence of depth map quality and acquisition distance on face recognition accuracy.
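
A normal image, one of the best-performing representations in the study, can be derived directly from a depth map via its spatial gradient. Below is a small NumPy sketch of one common finite-difference recipe; it is a generic conversion, not necessarily the exact one used in the paper.

import numpy as np

def depth_to_normals(depth):
    """Turn a metric depth map (H x W) into a surface-normal image.
    Normals come from the depth gradient and are rescaled to [0, 1]
    so they can be fed to a CNN like a three-channel image."""
    dzdx = np.gradient(depth, axis=1)
    dzdy = np.gradient(depth, axis=0)
    n = np.stack([-dzdx, -dzdy, np.ones_like(depth)], axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    return (n + 1.0) / 2.0

normals = depth_to_normals(np.random.default_rng(0).normal(1.0, 0.05, (64, 64)))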


Subject(s)
Facial Recognition , Neural Networks, Computer , Algorithms , Databases, Factual
8.
IEEE Trans Pattern Anal Mach Intell ; 42(3): 596-609, 2020 03.
Article in English | MEDLINE | ID: mdl-30530311

ABSTRACT

Depth cameras enable reliable solutions for people monitoring and behavior understanding, especially when unstable or poor illumination renders common RGB sensors unusable. Therefore, we propose a complete framework for the estimation of head and shoulder pose based on depth images only. A head detection and localization module is also included, in order to develop a complete end-to-end system. The core element of the framework is a Convolutional Neural Network, called POSEidon+, that receives three types of images as input and provides the 3D angles of the pose as output. Moreover, a Face-from-Depth component based on a Deterministic Conditional GAN model is able to hallucinate a face from the corresponding depth image. We empirically demonstrate that this positively impacts system performance. We test the proposed framework on two public datasets, namely Biwi Kinect Head Pose and ICT-3DHP, and on Pandora, a new challenging dataset mainly inspired by the automotive setup. Experimental results show that our method outperforms several recent state-of-the-art works based on both intensity and depth input data, running in real time at more than 30 frames per second.


Subject(s)
Face , Head , Imaging, Three-Dimensional/methods , Neural Networks, Computer , Posture/physiology , Algorithms , Automated Facial Recognition , Databases, Factual , Face/anatomy & histology , Face/diagnostic imaging , Female , Head/anatomy & histology , Head/diagnostic imaging , Humans , Male , Pattern Recognition, Automated , Shoulder/anatomy & histology , Shoulder/diagnostic imaging
9.
Sensors (Basel) ; 19(15)2019 Jul 31.
Article in English | MEDLINE | ID: mdl-31370165

ABSTRACT

Face verification is the task of checking whether two given images contain the face of the same person. In this work, we propose a fully-convolutional Siamese architecture to tackle this task, achieving state-of-the-art results on three publicly released datasets, namely Pandora, the High-Resolution Range-based Face Database (HRRFaceD), and CurtinFaces. The proposed method takes depth maps as input, since depth cameras have proven to be more reliable under different illumination conditions. Thus, the system is able to work even in the total or partial absence of external light sources, which is a key feature for automotive applications. From the algorithmic point of view, we propose a fully-convolutional architecture with a limited number of parameters, capable of dealing with the small amount of depth data available for training and able to run in real time even on a CPU and embedded boards. The experimental results show accuracy acceptable for exploitation in real-world applications with in-board cameras. Finally, exploiting the presence of faces occluded by various head garments and extreme head poses in the Pandora dataset, we also successfully test the proposed system under strong visual occlusions. The excellent results obtained confirm the efficacy of the proposed method.
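
At inference time, a Siamese verifier reduces to embedding both depth maps with the same network and thresholding the distance between the embeddings. The sketch below shows that decision rule with a toy stand-in embedding; the real system would use the trained fully-convolutional network, and the threshold is illustrative.

import numpy as np

def verify(embed, depth_a, depth_b, threshold=1.0):
    """Siamese verification: shared embedding + distance threshold."""
    return np.linalg.norm(embed(depth_a) - embed(depth_b)) < threshold

# toy stand-in for the shared network: global depth statistics
embed = lambda d: np.array([d.mean(), d.std(), np.median(d)])

rng = np.random.default_rng(0)
face = rng.normal(1.0, 0.1, (64, 64))
print(verify(embed, face, face + 0.01))                      # True: same
print(verify(embed, face, rng.normal(2.0, 0.5, (64, 64))))   # likely False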

10.
IEEE Trans Pattern Anal Mach Intell ; 41(7): 1720-1733, 2019 07.
Article in English | MEDLINE | ID: mdl-29994193

ABSTRACT

In this work, we aim to predict the driver's focus of attention. The goal is to estimate what a person would pay attention to while driving, and which part of the scene around the vehicle is more critical for the task. To this end, we propose a new computer vision model based on a multi-branch deep architecture that integrates three sources of information: raw video, motion, and scene semantics. We also introduce DR(eye)VE, the largest dataset of driving scenes for which eye-tracking annotations are available. This dataset features more than 500,000 registered frames, matching ego-centric views (from glasses worn by drivers) and car-centric views (from a roof-mounted camera), further enriched by other sensor measurements. Results highlight that several attention patterns are shared across drivers and can be reproduced to some extent. The indication of which elements in the scene are likely to capture the driver's attention may benefit several applications in the context of human-vehicle interaction and driver attention analysis.

11.
Article in English | MEDLINE | ID: mdl-29994710

ABSTRACT

Data-driven saliency has recently gained a lot of attention thanks to the use of Convolutional Neural Networks for predicting gaze fixations. In this paper, we go beyond standard approaches to saliency prediction, in which gaze maps are computed with a feed-forward network, and present a novel model which can predict accurate saliency maps by incorporating neural attentive mechanisms. The core of our solution is a Convolutional LSTM that focuses on the most salient regions of the input image to iteratively refine the predicted saliency map. Additionally, to tackle the center bias typical of human eye fixations, our model can learn a set of prior maps generated with Gaussian functions. We show, through an extensive evaluation, that the proposed architecture outperforms the current state of the art on public saliency prediction datasets. We further study the contribution of each key component to demonstrate their robustness in different scenarios.
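
The learned priors are simply 2D Gaussians whose means and variances become trainable parameters. A NumPy sketch of generating one such map follows; the normalized coordinates and the specific values are illustrative.

import numpy as np

def gaussian_prior_map(h, w, mu, sigma):
    """One prior map: an axis-aligned 2D Gaussian over the image grid.
    In the model, mu and sigma would be learned; values here are toys."""
    ys, xs = np.mgrid[0:h, 0:w]
    gx = (xs / (w - 1) - mu[0]) ** 2 / (2 * sigma[0] ** 2)
    gy = (ys / (h - 1) - mu[1]) ** 2 / (2 * sigma[1] ** 2)
    return np.exp(-(gx + gy))

center_bias = gaussian_prior_map(60, 80, mu=(0.5, 0.5), sigma=(0.25, 0.25))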

12.
PLoS One ; 11(7): e0158748, 2016.
Article in English | MEDLINE | ID: mdl-27415814

ABSTRACT

Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how an animal is coping with its environment, and behavioural indicators are thus among the preferred parameters for assessing welfare. However, behavioural recording (usually from video) can be very time consuming, and the accuracy and reliability of the output depend on the experience and background of the observers. Recent advances in video technology and computer image processing provide the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection, and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel, and patterns of movement that can later be labelled as needed. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, deviations from normal behaviour over time or between individuals can be assessed. The software's accuracy in detecting the dogs' behaviour was checked through a validation process. An automatic behaviour recognition system, independent of human subjectivity, could add scientific knowledge on animals' quality of life in confinement while saving time and resources. This 3D framework was designed to be invariant to the dog's shape and size and could be extended to farm, laboratory, and zoo quadrupeds in artificial housing. The computer vision technique applied in this software is innovative in non-human animal behaviour science. Further improvements and validation are needed, and future applications and limitations are discussed.


Subject(s)
Animal Welfare , Animals, Domestic , Behavior, Animal/physiology , Image Processing, Computer-Assisted/methods , Movement/physiology , Quality of Life , Software , Animals , Dogs , Environment , Pilot Projects , Reproducibility of Results
13.
IEEE Trans Pattern Anal Mach Intell ; 38(5): 995-1008, 2016 May.
Article in English | MEDLINE | ID: mdl-27046841

ABSTRACT

Modern crowd theories agree that collective behavior is the result of the underlying interactions among small groups of individuals. In this work, we propose a novel algorithm for detecting social groups in crowds by means of a Correlation Clustering procedure on people's trajectories. The affinity between crowd members is learned through an online formulation of the Structural SVM framework and a set of specifically designed features characterizing both their physical and social identity, inspired by Proxemic theory, Granger causality, DTW, and heat maps. To adhere to sociological observations, we introduce a loss function (G-MITRE) able to deal with the complexity of evaluating group detection performance. We show that our algorithm achieves state-of-the-art results when relying both on ground truth trajectories and on tracklets previously extracted by available detector/tracker systems.
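
Given the learned pairwise affinities, Correlation Clustering partitions the crowd so that positive affinities fall within groups and negative ones across groups; the exact problem is NP-hard, so greedy heuristics are common. Below is a small greedy NumPy sketch over a signed affinity matrix, offered as an illustration rather than the paper's actual solver.

import numpy as np

def greedy_correlation_clustering(A):
    """Greedy agglomeration on a signed affinity matrix A (n x n):
    repeatedly merge the two clusters with the largest positive total
    affinity until no merge has positive gain."""
    clusters = [[i] for i in range(len(A))]
    while True:
        best, pair = 0.0, None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                gain = sum(A[a, b] for a in clusters[i] for b in clusters[j])
                if gain > best:
                    best, pair = gain, (i, j)
        if pair is None:
            return clusters
        i, j = pair
        clusters[i] += clusters.pop(j)

A = np.array([[0, 2, -1], [2, 0, -1], [-1, -1, 0]], dtype=float)
print(greedy_correlation_clustering(A))   # [[0, 1], [2]]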

14.
Sensors (Basel) ; 16(2): 237, 2016 Feb 17.
Article in English | MEDLINE | ID: mdl-26901197

ABSTRACT

Augmented user experiences in the cultural heritage domain are in increasing demand from the new digital-native tourists of the 21st century. In this paper, we propose a novel solution that aims to assist the visitor during an outdoor tour of a cultural site using the unique first-person perspective of wearable cameras. In particular, the approach exploits computer vision techniques to retrieve details of the site, using a robust descriptor based on the covariance of local features. Using a lightweight wearable board, the solution can localize the user with respect to the 3D point cloud of the historical landmark and provide information about the detail the user is currently looking at. Experimental results validate the method both in terms of accuracy and computational effort. Furthermore, user evaluation based on real-world experiments shows that the proposal is deemed effective in enriching a cultural experience.
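
A region covariance descriptor summarizes an image region by the covariance matrix of its per-pixel feature vectors, which makes its size independent of how many pixels the region contains. A generic NumPy sketch follows; the choice of per-pixel features is an assumption, since the paper builds its covariance on its own local features.

import numpy as np

def covariance_descriptor(features):
    """Covariance of per-pixel feature vectors (P pixels x F features);
    the upper triangle is returned as a compact fixed-size vector."""
    C = np.cov(features, rowvar=False)            # (F, F)
    return C[np.triu_indices_from(C)]

# toy region: 500 pixels, 5 features each (e.g., x, y, I, |Ix|, |Iy|)
feats = np.random.default_rng(0).normal(size=(500, 5))
print(covariance_descriptor(feats).shape)         # (15,)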


Subject(s)
Optical Devices , Algorithms , Humans , Image Processing, Computer-Assisted , User-Computer Interface
15.
IEEE Trans Pattern Anal Mach Intell ; 36(7): 1442-68, 2014 Jul.
Article in English | MEDLINE | ID: mdl-26353314

ABSTRACT

A large variety of trackers has been proposed in the literature during the last two decades, with mixed success. Object tracking in realistic scenarios is a difficult problem and therefore remains one of the most active areas of research in computer vision. A good tracker should perform well in a large number of videos involving illumination changes, occlusion, clutter, camera motion, low contrast, specularities, and at least six more aspects. However, the performance of proposed trackers has typically been evaluated on fewer than ten videos or on special-purpose datasets. In this paper, we aim to evaluate trackers systematically and experimentally on 315 video fragments covering the above aspects. We selected a set of nineteen trackers to include a wide variety of algorithms often cited in the literature, supplemented with trackers appearing in 2010 and 2011 for which the code was publicly available. We demonstrate that trackers can be evaluated objectively with survival curves, Kaplan-Meier statistics, and Grubbs testing. We find that, in evaluation practice, the F-score is as effective as the object tracking accuracy (OTA) score. The analysis under a large variety of circumstances provides objective insight into the strengths and weaknesses of trackers.
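
Survival analysis treats each video as a trial: the "lifetime" is how long the tracker keeps the target, and videos where it never fails are censored observations. A minimal NumPy implementation of the Kaplan-Meier estimator is sketched below with made-up data.

import numpy as np

def kaplan_meier(durations, failed):
    """Kaplan-Meier survival curve. durations: frames until failure or
    end of video; failed: True if failure observed, False if censored
    (the tracker survived the whole clip)."""
    t = np.sort(np.unique(durations[failed]))
    at_risk = np.array([(durations >= ti).sum() for ti in t])
    deaths = np.array([((durations == ti) & failed).sum() for ti in t])
    return t, np.cumprod(1.0 - deaths / at_risk)

rng = np.random.default_rng(0)
dur = rng.integers(10, 300, size=50)     # hypothetical per-video lifetimes
obs = rng.random(50) < 0.8               # about 20% censored
times, survival = kaplan_meier(dur, obs)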

16.
IEEE Trans Pattern Anal Mach Intell ; 34(8): 1589-604, 2012 Aug.
Article in English | MEDLINE | ID: mdl-22184258

ABSTRACT

The common paradigm employed for object detection is the sliding window (SW) search. This approach generates grid-distributed patches, at all possible positions and sizes, which are evaluated by a binary classifier. The tradeoff between computational burden and detection accuracy is the real weak point of sliding windows, and several methods have been proposed to speed up the search, such as adding complementary features. We propose a paradigm that differs from any previous approach, since it casts object detection as a statistics-based search that uses Monte Carlo sampling to estimate the likelihood density function with Gaussian kernels. The estimation relies on a multistage strategy in which the proposal distribution is progressively refined by taking into account the feedback of the classifiers. The method can easily be plugged into a Bayesian-recursive framework to exploit the temporal coherency of the target objects in videos. Several tests on pedestrian and face detection, both on images and videos, with different types of classifiers (cascades of boosted classifiers, soft cascades, and SVMs) and features (covariance matrices, Haar-like features, integral channel features, and histograms of oriented gradients) demonstrate that the proposed method provides higher detection rates and accuracy, as well as a lower computational burden, than sliding window detection.
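
The multistage refinement loop can be sketched compactly: sample candidate windows from the current proposal, score them with the classifier, and refit a Gaussian-kernel density around high-scoring samples as the next proposal. The NumPy toy below illustrates the idea; positions are 1D for brevity, and the classifier, bandwidth, and stage count are invented for the example.

import numpy as np

def mc_search(score, n_stages=3, n_samples=200, bandwidth=8.0, seed=0):
    """Multistage Monte Carlo search over (1D) window centers."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0, 640, n_samples)                 # stage 0: uniform
    for _ in range(n_stages):
        w = np.maximum(np.array([score(v) for v in x]), 0)
        w /= w.sum()                                   # classifier feedback
        centers = rng.choice(x, n_samples, p=w)        # resample winners
        x = centers + rng.normal(0, bandwidth, n_samples)  # Gaussian kernel
    return x

# toy classifier with a peak response around position 400
samples = mc_search(lambda v: np.exp(-((v - 400) / 30.0) ** 2))
print(samples.mean())   # concentrates near 400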


Subject(s)
Algorithms , Image Processing, Computer-Assisted/methods , Video Recording/methods , Biometric Identification/methods , Humans , Monte Carlo Method , Walking/classification
17.
IEEE Trans Image Process ; 19(6): 1596-609, 2010 Jun.
Article in English | MEDLINE | ID: mdl-20227983

ABSTRACT

In this paper, we define a new paradigm for eight-connectivity labeling, which employs a general approach that improves neighborhood exploration and minimizes the number of memory accesses. First, we exploit and extend the decision table formalism, introducing OR-decision tables in which multiple alternative actions are managed. An automatic procedure synthesizes the optimal decision tree from the decision table, providing the most effective order in which to evaluate conditions. Second, we propose a new scanning technique that moves over the image on a 2 x 2 pixel grid, optimized by the automatically generated decision tree. An extensive comparison with state-of-the-art approaches is proposed, on both synthetic and real datasets. The synthetic dataset is composed of random images of different sizes and densities, while the real datasets are an artistic image analysis dataset, a document analysis dataset for text detection and recognition, and a standard-resolution dataset for picture segmentation tasks. The algorithm provides an impressive speedup over state-of-the-art algorithms.
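
For reference, the baseline these decision-tree methods accelerate is the classic two-pass eight-connectivity labeling with union-find equivalence handling, sketched below in plain Python/NumPy. The paper's contribution is to replace the naive neighborhood test with an optimal decision tree over a 2 x 2 grid; this sketch shows only the unoptimized baseline.

import numpy as np

def label_8(img):
    """Two-pass 8-connectivity labeling of a binary image (union-find)."""
    h, w = img.shape
    labels = np.zeros((h, w), dtype=int)
    parent = [0]                               # parent[0]: background

    def find(a):
        while parent[a] != a:
            a = parent[a]
        return a

    for y in range(h):
        for x in range(w):
            if not img[y, x]:
                continue
            neigh = [labels[y + dy, x + dx]
                     for dy, dx in ((-1, -1), (-1, 0), (-1, 1), (0, -1))
                     if 0 <= y + dy and 0 <= x + dx < w and labels[y + dy, x + dx]]
            if not neigh:
                parent.append(len(parent))     # new provisional label
                labels[y, x] = len(parent) - 1
            else:
                labels[y, x] = min(neigh)
                for n in neigh:                # record equivalences
                    ra, rb = find(n), find(labels[y, x])
                    parent[max(ra, rb)] = min(ra, rb)
    for y in range(h):                         # second pass: resolve
        for x in range(w):
            if labels[y, x]:
                labels[y, x] = find(labels[y, x])
    return labels

print(label_8(np.array([[1, 0, 1], [0, 1, 0], [1, 0, 0]])))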


Subject(s)
Algorithms , Decision Support Techniques , Image Interpretation, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Product Labeling/methods , Image Enhancement/methods , Reproducibility of Results , Sensitivity and Specificity
18.
IEEE Trans Pattern Anal Mach Intell ; 30(2): 354-60, 2008 Feb.
Article in English | MEDLINE | ID: mdl-18084065

ABSTRACT

This paper presents a novel and robust approach to consistent labeling for people surveillance in multi-camera systems. A general framework, scalable to any number of cameras with overlapping views, is devised. An off-line training process automatically computes the ground-plane homography and recovers the epipolar geometry. When a new object is detected in any one camera, hypotheses for potential matching objects in the other cameras are established. Each hypothesis is evaluated using a prior and a likelihood value. The prior accounts for the positions of the potential matching objects, while the likelihood is computed by warping the vertical axis of the new object onto the field of view of the other cameras and measuring the extent of the match. In the likelihood, two contributions (forward and backward) are considered so as to correctly handle the case of groups of people merged into single objects. Finally, a maximum a posteriori approach estimates the best label assignment for the new object. Comparisons with other methods based on homography and extensive outdoor experiments demonstrate that the proposed approach is accurate and robust in coping with segmentation errors and in disambiguating groups.

19.
Comput Med Imaging Graph ; 28(4): 185-201, 2004 Jun.
Article in English | MEDLINE | ID: mdl-15121208

ABSTRACT

In the last decade, computerized tomography (CT) has become the most frequently used imaging modality for correct pre-operative implant planning. In this work, we present an image analysis and computer vision approach able to identify, from the reconstructed 3D data set, the optimal cutting plane specific to each implant to be planned, in order to obtain the best view of the implant site and correct measurements. If the patient requires multiple implants, different cutting planes are automatically identified, and the axial and cross-sectional images can be re-oriented according to each of them. We describe the algorithms defined to recognize 3D markers (each aligned with a missing tooth for which an implant has to be planned) in the reconstructed 3D space, and report results on real exams in terms of effectiveness, precision, and reproducibility of the measurements.


Subject(s)
Dental Implants , Image Processing, Computer-Assisted , Surgery, Computer-Assisted , Tomography, X-Ray Computed/methods , Algorithms , Italy , Preoperative Care , Reproducibility of Results
20.
Dermatology ; 208(1): 21-6, 2004.
Article in English | MEDLINE | ID: mdl-14730232

ABSTRACT

BACKGROUND: Identification of dark areas inside a melanocytic lesion (ML) is of great importance for melanoma diagnosis, both during clinical examination and when employing programs for automated image analysis. OBJECTIVE: The aim of our study was to compare two different methods for the automated identification and description of dark areas in epiluminescence microscopy images of MLs and to evaluate their diagnostic capability. METHODS: Two methods for the automated extraction of 'absolute' (ADAs) and 'relative' dark areas (RDAs), together with a set of parameters for their description, were developed and tested on 339 images of MLs acquired by means of a polarized-light videomicroscope. RESULTS: Significant differences in dark area distribution between melanomas and nevi were observed with both methods, permitting good discrimination of MLs (diagnostic accuracy = 74.6% and 71.2% for ADAs and RDAs, respectively). CONCLUSIONS: Both methods for the automated identification of dark areas are useful for melanoma diagnosis and can be implemented in programs for image analysis.
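
The distinction between 'absolute' and 'relative' dark areas suggests two thresholding rules: a fixed intensity cutoff versus a cutoff relative to the lesion's own brightness. The NumPy sketch below illustrates that reading; the thresholds and the relative rule are assumptions for illustration, since the abstract does not spell out the paper's formulas.

import numpy as np

def dark_areas(lesion, absolute_thr=60, relative_frac=0.6):
    """Binary masks of dark pixels in a grayscale lesion image (0-255):
    ADA uses a fixed cutoff, RDA a cutoff relative to the lesion mean.
    Both rules are illustrative assumptions."""
    ada = lesion < absolute_thr
    rda = lesion < relative_frac * lesion.mean()
    return ada, rda

img = np.random.default_rng(0).integers(0, 256, (128, 128))
ada, rda = dark_areas(img)
print(ada.mean(), rda.mean())    # fraction of pixels flagged as dark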


Subject(s)
Image Interpretation, Computer-Assisted , Melanoma/diagnosis , Skin Neoplasms/diagnosis , Diagnosis, Differential , Humans , Microscopy, Video , Nevus, Pigmented/diagnosis , Sensitivity and Specificity