Results 1 - 5 of 5
1.
Head Neck ; 45(8): 1885-1893, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37222027

ABSTRACT

OBJECTIVE: Little information is available about deep learning methods applied to ultrasound images of salivary gland tumors. We aimed to compare the accuracy of an ultrasound-trained model with that of models trained on computed tomography (CT) or magnetic resonance imaging (MRI). MATERIALS AND METHODS: Six hundred and thirty-eight patients were included in this retrospective study, with 558 benign and 80 malignant salivary gland tumors. A total of 500 images (250 benign and 250 malignant) were used for the training and validation sets, and 62 images (31 benign and 31 malignant) for the test set. Both machine learning and deep learning were used in our model. RESULTS: The test accuracy, sensitivity, and specificity of our final model were 93.5%, 100%, and 87%, respectively. There was no overfitting in our model, as the validation accuracy was similar to the test accuracy. CONCLUSIONS: The sensitivity and specificity were comparable with those reported for MRI- and CT-based artificial intelligence models.
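The reported test metrics are internally consistent with a confusion matrix implied by the abstract (31 benign and 31 malignant test images); the counts below are inferred for illustration, not taken from the paper:

```python
def classification_metrics(tp, fn, tn, fp):
    """Compute accuracy, sensitivity, and specificity from confusion-matrix counts."""
    total = tp + fn + tn + fp
    accuracy = (tp + tn) / total
    sensitivity = tp / (tp + fn)   # true-positive rate on malignant cases
    specificity = tn / (tn + fp)   # true-negative rate on benign cases
    return accuracy, sensitivity, specificity

# Inferred counts: 100% sensitivity means all 31 malignant test images were
# detected; 87% specificity implies 27 of the 31 benign images were correct.
acc, sens, spec = classification_metrics(tp=31, fn=0, tn=27, fp=4)
print(f"accuracy={acc:.1%}, sensitivity={sens:.0%}, specificity={spec:.1%}")
```

With these counts, accuracy works out to 58/62 ≈ 93.5%, matching the abstract.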


Subject(s)
Artificial Intelligence , Salivary Gland Neoplasms , Humans , Retrospective Studies , Neural Networks, Computer , Ultrasonography/methods , Salivary Gland Neoplasms/diagnostic imaging
2.
Sensors (Basel) ; 23(8)2023 Apr 13.
Article in English | MEDLINE | ID: mdl-37112304

ABSTRACT

Nuts are a cornerstone of industrial construction, especially A-grade nuts, which alone may be used in power plants, precision instruments, aircraft, and rockets. However, the traditional nut inspection method is to operate a measuring instrument manually, so the quality of A-grade nuts cannot be guaranteed. In this work, a machine-vision-based inspection system is proposed that performs real-time geometric inspection of nuts before and after tapping on the production line. In order to automatically screen out A-grade nuts on the production line, the proposed system performs seven inspections, measuring parallelism, opposite-side length, straightness, radius, roundness, concentricity, and eccentricity. To shorten the overall detection time of nut production, the program needed to be accurate and uncomplicated. By modifying the Hough line and Hough circle transforms, the algorithm became faster and more suitable for nut detection. The optimized Hough line and Hough circle transforms can be used for all measurements in the testing process.
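Two of the geometric checks named above, roundness and concentricity, can be estimated directly from the edge points that a Hough-style detector returns; the sketch below uses a synthetic circle and an invented nominal center, not data from the paper:

```python
import numpy as np

def roundness_and_concentricity(edge_pts, nominal_center):
    """Estimate roundness (radial spread) and concentricity (center offset)
    from 2-D edge points sampled on a circular feature."""
    pts = np.asarray(edge_pts, dtype=float)
    center = pts.mean(axis=0)                     # centroid as fitted center
    radii = np.linalg.norm(pts - center, axis=1)
    roundness = radii.max() - radii.min()         # width of the radial band
    concentricity = np.linalg.norm(center - np.asarray(nominal_center, float))
    return roundness, concentricity

# Synthetic example: an ideal circle of radius 5 centered at (10, 10).
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
circle = np.c_[10 + 5 * np.cos(theta), 10 + 5 * np.sin(theta)]
r, c = roundness_and_concentricity(circle, nominal_center=(10, 10))
print(r, c)  # both near zero for an ideal, centered circle
```

A real inspection would compare these values against per-grade tolerances; the pass/fail thresholds are not given in the abstract.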

3.
Sensors (Basel) ; 20(9)2020 May 11.
Article in English | MEDLINE | ID: mdl-32403333

ABSTRACT

The fiducial-mark-based alignment process is one of the most critical steps in printed circuit board (PCB) manufacturing. In the alignment process, a machine vision technique is used to detect the fiducial marks and then adjust the position of the vision system so that it is aligned with the PCB. The present study proposed an embedded PCB alignment system in which a rotation, scale, and translation (RST) template-matching algorithm was employed to locate the marks on the PCB surface. The coordinates and angles of the detected marks were then compared with user-set reference values, and the difference between them was used to adjust the position of the vision system accordingly. To improve the positioning accuracy, the angle and location matching were performed in refinement processes. To reduce the matching time, the present study accelerated the rotation matching by eliminating weak features in the scanning process and converting the normalized cross-correlation (NCC) formula into a sum of products. Moreover, the scanning time was reduced by implementing the entire RST process in parallel on the threads of a graphics processing unit (GPU) and by applying hash functions to find refined positions in the refinement matching process. The experimental results showed that the resulting matching time was around 32× faster than that achieved on a conventional central processing unit (CPU) for a test image size of 1280 × 960 pixels. Furthermore, the alignment process achieved a considerable result, with a tolerance of 36.4 µm.
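The NCC score at the heart of the matching step can be arranged so the template's mean and energy are computed once, leaving only sums of products per window position. A minimal NumPy sketch of plain NCC template matching (the image, template, and sizes are invented for illustration, and no GPU or hashing acceleration is attempted):

```python
import numpy as np

def ncc_map(image, template):
    """Normalized cross-correlation of a template over an image; the template
    statistics (mean, energy) are precomputed outside the scan loop."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t * t).sum())               # template energy, fixed
    out = np.empty((image.shape[0] - th + 1, image.shape[1] - tw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            win = image[y:y + th, x:x + tw]
            win_zero = win - win.mean()
            denom = t_norm * np.sqrt((win_zero * win_zero).sum())
            out[y, x] = (win_zero * t).sum() / denom if denom > 0 else 0.0
    return out

rng = np.random.default_rng(0)
img = rng.random((40, 40))
tmpl = img[12:20, 5:13].copy()                    # template cut from the image
scores = ncc_map(img, tmpl)
best = np.unravel_index(np.argmax(scores), scores.shape)
print(best)  # (12, 5): the template matches exactly at its source location
```

Because `t` is zero-mean, the numerator reduces to a raw sum of products over the window, which is the form the paper's acceleration exploits.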

4.
IEEE Trans Image Process ; 25(8): 3546-61, 2016 08.
Article in English | MEDLINE | ID: mdl-27244737

ABSTRACT

A facial sketch synthesis system is proposed, featuring a 2D direct combined model (2DDCM)-based face-specific Markov network. In contrast to existing facial sketch synthesis systems, the proposed scheme aims to synthesize sketches that reproduce the unique drawing style of a particular artist, where this drawing style is learned from a data set consisting of a large number of pairwise image/sketch training samples. The synthesis system comprises three modules, namely, a global module, a local module, and an enhancement module. The global module applies a 2DDCM approach to synthesize the global facial geometry and texture of the input image. The detailed texture is then added to the synthesized sketch in a local patch-based manner using a parametric 2DDCM model and a non-parametric Markov random field (MRF) network. Notably, the MRF approach gives the synthesized results an appearance more consistent with the drawing style of the training samples, while the 2DDCM approach enables the synthesis of outcomes with a more derivative style. As a result, the similarity between the synthesized sketches and the input images is greatly improved. Finally, a post-processing operation is performed to enhance the shadowed regions of the synthesized image by adding strong lines or curves to emphasize the lighting conditions. The experimental results confirm that the synthesized facial images are in good qualitative and quantitative agreement with the input images as well as the ground-truth sketches provided by the same artist. The representational power of the proposed framework is demonstrated by synthesizing facial sketches from input images with a wide variety of facial poses, lighting conditions, and races, even when such images are not included in the training data set. Moreover, the practical applicability of the proposed framework is demonstrated by means of automatic facial recognition tests.
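The local module's MRF idea — pick, for each image patch, a candidate sketch patch that both matches the input (data cost) and agrees with its neighbors (smoothness cost) — can be illustrated on a 1-D chain of patches, where exact MAP inference reduces to dynamic programming. The cost matrices below are toy values of my own invention, not the paper's:

```python
import numpy as np

def chain_mrf_map(data_cost, smooth_cost):
    """Exact MAP labeling on a chain MRF via dynamic programming.
    data_cost[i, k]: cost of assigning node i the candidate k.
    smooth_cost[k, l]: cost of adjacent candidates (k, l)."""
    n, m = data_cost.shape
    best = data_cost[0].copy()
    back = np.zeros((n, m), dtype=int)
    for i in range(1, n):
        total = best[:, None] + smooth_cost       # rows: prev label, cols: cur
        back[i] = total.argmin(axis=0)
        best = total.min(axis=0) + data_cost[i]
    labels = np.zeros(n, dtype=int)
    labels[-1] = best.argmin()
    for i in range(n - 1, 0, -1):                 # trace the optimal path back
        labels[i - 1] = back[i, labels[i]]
    return labels

# Toy example: 4 patches, 3 candidate sketch patches each; the smoothness
# term penalizes switching candidates between neighboring patches.
data = np.array([[0.0, 1.0, 2.0],
                 [2.0, 0.0, 2.0],
                 [2.0, 0.0, 2.0],
                 [2.0, 1.0, 0.0]])
smooth = 0.6 * (1 - np.eye(3))
print(chain_mrf_map(data, smooth))  # [0 1 1 2]
```

On the 2-D patch grid of a real face, the MRF is loopy and is typically solved approximately (e.g. by belief propagation) rather than by this exact chain recursion.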


Subject(s)
Algorithms , Face , Pattern Recognition, Automated , Art , Humans , Lighting
5.
IEEE Trans Syst Man Cybern B Cybern ; 40(4): 1158-69, 2010 Aug.
Article in English | MEDLINE | ID: mdl-19933007

ABSTRACT

Automatically locating multiple feature points (i.e., the shape) in a facial image and then synthesizing the corresponding facial sketch are highly challenging since facial images typically exhibit a wide range of poses, expressions, and scales, and have differing degrees of illumination and/or occlusion. When the facial sketches are to be synthesized in the unique sketching style of a particular artist, the problem becomes even more complex. To resolve these problems, this paper develops an automatic facial sketch synthesis system based on a novel direct combined model (DCM) algorithm. The proposed system executes three cascaded procedures, namely, 1) synthesis of the facial shape from the input texture information (i.e., the facial image); 2) synthesis of the exaggerated facial shape from the synthesized facial shape; and 3) synthesis of a sketch from the original input image and the synthesized exaggerated shape. Previous proposals for reconstructing facial shapes and synthesizing the corresponding facial sketches are heavily reliant on the quality of the texture reconstruction results, which, in turn, are highly sensitive to occlusion and lighting effects in the input image. However, the DCM approach proposed in this paper accurately reconstructs the facial shape and then produces lifelike synthesized facial sketches without the need to recover occluded feature points or to restore the texture information lost as a result of unfavorable lighting conditions. Moreover, the DCM approach is capable of synthesizing facial sketches from input images with a wide variety of facial poses, gaze directions, and facial expressions even when such images are not included within the original training data set.
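The key property of a combined model — shape and texture modeled jointly so that shape can be read off from texture alone, without first reconstructing the texture — can be conveyed with a much-simplified joint-PCA stand-in. Everything here (synthetic data, the linear shape/texture relation, the least-squares projection) is my own illustration, not the paper's DCM algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic training data: shape is a hidden linear function of texture.
n, d_tex, d_shape = 200, 12, 6
texture = rng.normal(size=(n, d_tex))
true_map = rng.normal(size=(d_tex, d_shape))
shape = texture @ true_map

# Combined model: stack texture and shape into one vector per sample and
# keep the leading principal directions of the joint distribution.
joint = np.hstack([texture, shape])
mean = joint.mean(axis=0)
_, _, vt = np.linalg.svd(joint - mean, full_matrices=False)
basis = vt[:d_tex]                       # leading joint components
b_tex, b_shape = basis[:, :d_tex], basis[:, d_tex:]

def synthesize_shape(tex):
    """Fit joint-model coefficients to the texture part, then read off shape."""
    coeffs = np.linalg.lstsq(b_tex.T, tex - mean[:d_tex], rcond=None)[0]
    return mean[d_tex:] + coeffs @ b_shape

test_tex = rng.normal(size=d_tex)
err = np.linalg.norm(synthesize_shape(test_tex) - test_tex @ true_map)
print(err)  # near zero: shape recovered from texture alone
```

The robustness to occlusion claimed in the abstract comes from fitting the coefficients only to the observed texture entries; the toy above uses fully observed texture for simplicity.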


Subject(s)
Algorithms , Artificial Intelligence , Biometry/methods , Computer Graphics , Face/anatomy & histology , Image Interpretation, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Humans , Image Enhancement/methods , Reproducibility of Results , Sensitivity and Specificity