Results 1 - 4 of 4
1.
Phys Eng Sci Med ; 2024 Apr 04.
Article in English | MEDLINE | ID: mdl-38573489

ABSTRACT

Following the great success of various deep learning methods in image and object classification, the biomedical image processing community has been flooded with their applications to various automatic diagnosis tasks. Unfortunately, most deep learning-based classification attempts in the literature focus solely on achieving extreme accuracy scores, without considering interpretability or patient-wise separation of training and test data. For example, most lung nodule classification papers using deep learning randomly shuffle the data and split it into training, validation, and test sets, so that some images from a person's Computed Tomography (CT) scan end up in the training set while other images of the same person end up in the validation or test sets. This can result in misleading reported accuracy rates and in the learning of irrelevant features, ultimately reducing the real-life usability of such models. When deep neural networks trained with this traditional, unfair data shuffling are challenged with images from new patients, the trained models perform poorly. In contrast, deep neural networks trained with strict patient-level separation maintain their accuracy rates even when images from new patients are tested. Heat map visualizations of the activations of networks trained with strict patient-level separation also indicate a higher degree of focus on the relevant nodules. We argue that the research question posed in the title has a positive answer only if the deep neural networks are trained on images of patients that are strictly isolated from the validation and test patient sets.
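The patient-level separation this abstract advocates amounts to splitting by patient ID rather than by individual image. A minimal sketch of such a split (the function name and data layout are illustrative assumptions, not the paper's code):

```python
import random

def patient_level_split(image_ids, patient_of, test_frac=0.2, seed=0):
    """Split images into train/test sets so that no patient's images
    appear in both sets (strict patient-level separation)."""
    patients = sorted(set(patient_of[i] for i in image_ids))
    rng = random.Random(seed)
    rng.shuffle(patients)
    n_test = max(1, int(len(patients) * test_frac))
    test_patients = set(patients[:n_test])
    train = [i for i in image_ids if patient_of[i] not in test_patients]
    test = [i for i in image_ids if patient_of[i] in test_patients]
    return train, test
```

Because the shuffle is over patients, not images, no CT slice of a held-out patient can leak into the training set, which is exactly the failure mode the abstract describes for naive image-level shuffling.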

2.
IEEE Trans Syst Man Cybern B Cybern ; 37(4): 937-51, 2007 Aug.
Article in English | MEDLINE | ID: mdl-17702291

ABSTRACT

The common vector (CV) method is a linear subspace classifier that discriminates between classes of data sets, such as those arising in image and word recognition. The method represents each class by a subspace during classification, modeled so that the features common to all samples in the corresponding class are extracted. To accomplish this, the method eliminates the feature components lying along the eigenvectors corresponding to the nonzero eigenvalues of the covariance matrix of each class. In this paper, we introduce a variation of the CV method, referred to as the modified CV (MCV) method. We then propose a novel approach for applying the MCV method in a nonlinearly mapped, higher-dimensional feature space: all samples are mapped into the higher-dimensional space using a kernel mapping function, and the MCV method is applied in the mapped space. Under certain conditions, each class gives rise to a unique CV, and the method guarantees a 100% recognition rate on the training set data. Moreover, experiments on several test cases show that the generalization performance of the proposed kernel method is comparable to that of other linear subspace classifier methods as well as the kernel-based nonlinear subspace method. Although neither the MCV method nor its kernel counterpart outperformed the support vector machine (SVM) classifier in most of the reported experiments, applying our proposed methods is simpler than applying the multiclass SVM classifier, and no parameters need to be adjusted in our approach.


Subject(s)
Algorithms , Artificial Intelligence , Decision Support Techniques , Image Interpretation, Computer-Assisted/methods , Models, Theoretical , Nonlinear Dynamics , Pattern Recognition, Automated/methods , Computer Simulation
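The common vector construction described in this abstract can be sketched numerically: remove from a class sample its components along the eigenvectors of the class scatter matrix with nonzero eigenvalues. A minimal NumPy sketch under that reading (function and variable names are illustrative, not the paper's implementation):

```python
import numpy as np

def common_vector(X):
    """X: (n, d) samples of one class with n <= d.
    Removes from a sample its components along the eigenvectors of the
    class scatter matrix with nonzero eigenvalues; by the CV theory the
    result is the same whichever sample of the class is used."""
    mu = X.mean(axis=0)
    scatter = (X - mu).T @ (X - mu)                  # d x d class scatter
    eigvals, eigvecs = np.linalg.eigh(scatter)
    B = eigvecs[:, eigvals > 1e-10 * eigvals.max()]  # nonzero-eigenvalue directions
    x = X[0]
    return x - B @ (B.T @ x)                         # orthogonal-complement projection
```

Each class then contributes one such vector, and classification assigns a test sample to the class whose common vector is closest after the same projection.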
3.
IEEE Trans Pattern Anal Mach Intell ; 27(1): 4-13, 2005 Jan.
Article in English | MEDLINE | ID: mdl-15628264

ABSTRACT

In face recognition tasks, the dimension of the sample space is typically larger than the number of samples in the training set. As a consequence, the within-class scatter matrix is singular and the Linear Discriminant Analysis (LDA) method cannot be applied directly. This is known as the "small sample size" problem. In this paper, we propose a new face recognition method called the Discriminative Common Vector method, based on a variation of Fisher's Linear Discriminant Analysis for the small sample size case. Two different algorithms are given to extract the discriminative common vectors that represent each person in the training set of the face database. One algorithm uses the within-class scatter matrix of the training samples, while the other uses subspace methods and the Gram-Schmidt orthogonalization procedure to obtain the discriminative common vectors. The discriminative common vectors are then used to classify new faces. The proposed method yields an optimal solution for maximizing the modified Fisher's Linear Discriminant criterion given in the paper. Our test results show that the Discriminative Common Vector method is superior to other methods in terms of recognition accuracy, efficiency, and numerical stability.


Subject(s)
Algorithms , Artificial Intelligence , Discriminant Analysis , Face/anatomy & histology , Pattern Recognition, Automated/methods , Photography/methods , Signal Processing, Computer-Assisted , History, Ancient , Humans , Image Interpretation, Computer-Assisted , Principal Component Analysis , Sample Size
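The "small sample size" singularity this abstract addresses is easy to demonstrate: with far fewer samples than dimensions, the within-class scatter matrix has rank at most the total sample count minus the number of classes, so it cannot be inverted as classical LDA requires. A small NumPy illustration (the dimensions are arbitrary, chosen only to mimic the image-space regime):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_per_class, n_classes = 100, 5, 3        # dimension far exceeds sample count
Sw = np.zeros((d, d))                        # within-class scatter matrix
for _ in range(n_classes):
    X = rng.standard_normal((n_per_class, d))
    Xc = X - X.mean(axis=0)                  # center each class
    Sw += Xc.T @ Xc
rank = np.linalg.matrix_rank(Sw)
# rank is at most n_classes * (n_per_class - 1) = 12, far below d = 100,
# so Sw is singular and the inverse needed by classical LDA does not exist
```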
4.
Biomed Mater Eng ; 13(2): 159-66, 2003.
Article in English | MEDLINE | ID: mdl-12775906

ABSTRACT

In this work, the cross-sectional areas of the vocal tract are determined for the lossy and lossless cases using pole-zero models obtained from the electrical equivalent-circuit model of the vocal tract and a system identification method. The cross-sectional areas are then used to compare the two cases. In the lossy case, the internal losses due to wall vibration, heat conduction, air friction, and viscosity are considered; that is, the complex poles and zeros obtained from the models are used directly. In the lossless case, by contrast, only the imaginary parts of these poles and zeros are used. The vocal tract shapes obtained for the lossy case are close to the actual ones.


Subject(s)
Algorithms , Glottis/physiology , Models, Biological , Speech Acoustics , Speech Production Measurement/methods , Anatomy, Cross-Sectional/methods , Elasticity , Glottis/anatomy & histology , Humans , Larynx/anatomy & histology , Larynx/physiology , Pressure , Speech/physiology , Speech Production Measurement/instrumentation , Viscosity , Vocal Cords/anatomy & histology , Vocal Cords/physiology
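This abstract derives the areas from pole-zero models of an equivalent circuit. As background for how area functions relate to an acoustic model at all, the classic lossless concatenated-tube model recovers successive cross-sectional areas from reflection coefficients; the sketch below shows that standard relation, not the paper's circuit-based method:

```python
def tube_areas(reflection_coeffs, a0=1.0):
    """Lossless concatenated-tube model: given reflection coefficients
    r_k = (A_k - A_{k+1}) / (A_k + A_{k+1}) at each tube junction,
    recover the successive cross-sectional areas starting from area a0."""
    areas = [a0]
    for r in reflection_coeffs:
        areas.append(areas[-1] * (1 - r) / (1 + r))
    return areas
```

Only area ratios are determined by the reflection coefficients, which is why the first area `a0` must be fixed by convention; the paper's lossy formulation refines this picture by keeping the real parts of the poles and zeros.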