Results 1 - 20 of 23
1.
J Imaging ; 10(6)2024 May 29.
Article in English | MEDLINE | ID: mdl-38921610

ABSTRACT

Accurate and robust 3D human modeling from a single image presents significant challenges. Existing methods have shown potential, but they often fail to generate reconstructions that match the level of detail in the input image, and they particularly struggle with loose clothing. They typically employ parameterized human models to constrain the reconstruction process, ensuring the results do not deviate too far from the model and produce anomalies; however, this also limits the recovery of loose clothing. To address this issue, we propose an end-to-end method called IHRPN for reconstructing clothed humans from a single 2D image. The method includes an image semantic feature extraction module designed to achieve pixel-to-model-space consistency and to improve robustness to loose clothing. We extract features from the input image to infer and recover the SMPL-X mesh, and then combine it with a normal map to guide an implicit function that reconstructs the complete clothed human. Unlike traditional methods, we use local features for implicit surface regression. Experimental results show that IHRPN performs well on the CAPE and AGORA datasets, and its reconstruction of loose clothing is noticeably more accurate and robust.

2.
Sci Rep ; 14(1): 8307, 2024 Apr 09.
Article in English | MEDLINE | ID: mdl-38594404

ABSTRACT

Due to their antiquity and the difficulty of excavation, the Terracotta Warriors have suffered varying degrees of damage. Using point clouds to repair damaged Terracotta Warriors and restore them to their original appearance has long been a hot topic in cultural relic protection. However, the output of existing point cloud completion methods often lacks diversity. Probability-based models, represented by Denoising Diffusion Probabilistic Models, have recently achieved great success on images and point clouds and can output a variety of results; one drawback of diffusion models, though, is that their large number of sampling steps makes generation slow. To address this issue, we propose a new neural network for completing Terracotta Warrior fragments. During the reverse diffusion stage, we first reduce the number of sampling steps to generate a coarse result. This preliminary outcome is then refined by a multi-scale refinement network. Additionally, we introduce a novel approach called Partition Attention Sampling to enhance the representation capability of features. The effectiveness of the proposed model is validated in experiments on a real Terracotta Warriors dataset and a public dataset. The experimental results demonstrate that our model is competitive with existing models.
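The coarse stage described above, reducing the number of reverse-diffusion sampling steps, can be sketched in a few lines. This is a minimal illustration, not the paper's network; `strided_schedule`, `coarse_reverse`, and the toy `denoise` callable are hypothetical names.

```python
def strided_schedule(total_steps, num_samples):
    """Pick a descending subset of timesteps, e.g. 1000 -> 50,
    so the reverse process runs far fewer denoising updates."""
    stride = total_steps // num_samples
    return list(range(total_steps - 1, -1, -stride))[:num_samples]

def coarse_reverse(x, steps, denoise):
    """Run the (stub) reverse update only at the selected timesteps;
    the coarse output would then go to a refinement network."""
    for t in steps:
        x = denoise(x, t)
    return x

steps = strided_schedule(1000, 50)                          # 50 of 1000 steps
coarse = coarse_reverse(8.0, steps, lambda x, t: 0.9 * x)   # toy denoiser
```

The refinement network would then consume `coarse` to recover fine detail lost to the shortened schedule.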

3.
Opt Express ; 31(6): 9496-9514, 2023 Mar 13.
Article in English | MEDLINE | ID: mdl-37157519

ABSTRACT

The dense point clouds of Terracotta Warriors obtained by a 3D scanner contain a large amount of redundant data, which reduces the efficiency of transmission and subsequent processing. To address the problems that points produced by conventional sampling methods cannot be learned through a network and are unrelated to downstream tasks, an end-to-end, task-driven, learnable down-sampling method named TGPS is proposed. First, a point-based Transformer unit is used to embed the features, and a mapping function is used to extract the input point features to dynamically describe the global features. Then, the inner product of the global feature with each point feature is used to estimate each point's contribution to the global feature. The contribution values are sorted in descending order for different tasks, and the point features with high similarity to the global features are retained. To further learn rich local representations, the Dynamic Graph Attention Edge Convolution (DGA EConv), which incorporates a graph convolution operation, is proposed as a neighborhood graph for local feature aggregation. Finally, networks for the downstream tasks of point cloud classification and reconstruction are presented. Experiments show that the method performs down-sampling under the guidance of the global features. The proposed TGPS-DGA-Net achieves the best point cloud classification accuracy on both real-world Terracotta Warrior fragments and public datasets.
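The contribution estimate at the heart of the sampling step, the inner product of each point feature with the global feature followed by keeping the top scorers, can be sketched as below. This is a simplified stand-in for TGPS with hypothetical names, not the paper's implementation:

```python
def downsample_by_contribution(point_feats, global_feat, k):
    """Score each point feature by its inner product with the global
    feature and keep the indices of the k highest-scoring points."""
    scores = [sum(p * g for p, g in zip(f, global_feat)) for f in point_feats]
    order = sorted(range(len(point_feats)), key=lambda i: scores[i], reverse=True)
    return order[:k]

feats = [[2.0, 0.0], [0.0, 5.0], [1.0, 1.0]]
kept = downsample_by_contribution(feats, [1.0, 0.0], k=2)  # top-2 indices
```

In the paper this selection is differentiable and task-driven; here the ranking alone conveys the idea.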

4.
PLoS One ; 18(1): e0280073, 2023.
Article in English | MEDLINE | ID: mdl-36607995

ABSTRACT

Unsupervised image-to-image translation (UI2I) tasks aim to find a mapping between the source and target domains from unpaired training data. Previous methods cannot effectively capture the differences between the source and target domains at different scales; they often produce generated images of poor quality, with noise, distortion, and other artifacts that do not match human visual perception, and they have high time complexity. To address these problems, we propose a multi-scale training structure and a progressive-growth generator to solve the UI2I task. Our method refines the generated images from global structures to local details by continuously adding new convolution blocks, and it shares network parameters both across scales and within the same scale of the network. Finally, we propose a new Cross-CBAM mechanism (CRCBAM), which uses a multi-layer spatial-attention and channel-attention cross structure to generate more refined style images. Experiments on our collected Opera Face dataset and on the open datasets Summer↔Winter, Horse↔Zebra, and Photo↔Van Gogh show that the proposed algorithm is superior to other state-of-the-art algorithms.


Subjects
Learning; Visual Perception; Humans; Animals; Horses; Algorithms; Seasons; Translations; Image Processing, Computer-Assisted
5.
IEEE Trans Vis Comput Graph ; 29(1): 429-439, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36179001

ABSTRACT

We present PuzzleFixer, an immersive interactive system for experts to rectify defective reassembled 3D objects. Reassembling the fragments of a broken object to restore its original state is the prerequisite of many analytical tasks such as cultural relics analysis and forensic reasoning. While existing computer-aided methods can automatically reassemble fragments, they often derive incorrect objects due to the complex and ambiguous fragment shapes, so experts usually need to refine the object manually. Prior advances in immersive technologies provide realistic perception and direct interactions for visualizing and interacting with 3D fragments. However, few studies have investigated the refinement of reassembled objects. The specific challenges are: 1) the set of fragment combinations is too large to determine the correct matches, and 2) the geometry of the fragments is too complex to align them properly. To tackle the first challenge, PuzzleFixer leverages dimensionality reduction and clustering techniques, allowing users to review possible match categories, select the matches with reasonable shapes, and drill down to shapes to correct the corresponding faces. For the second challenge, PuzzleFixer embeds the object with node-link networks to augment the perception of match relations. Specifically, it instantly visualizes matches with graph edges and provides force feedback to facilitate efficient alignment interactions. To demonstrate the effectiveness of PuzzleFixer, we conducted an expert evaluation based on two case studies with real-world artifacts and collected feedback through post-study interviews. The results suggest that our system is suitable and efficient for experts to refine incorrectly reassembled objects.
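The match-grouping idea above, clustering candidate fragment matches so experts review categories rather than individual pairs, can be illustrated with a plain k-means over already-reduced 2D embeddings. This is a generic sketch under that assumption, not PuzzleFixer's actual pipeline:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's k-means over coordinate tuples; returns the final
    centers and the list of points assigned to each center."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        for j, cl in enumerate(clusters):
            if cl:  # keep the old center if a cluster goes empty
                centers[j] = tuple(sum(xs) / len(cl) for xs in zip(*cl))
    return centers, clusters

# two well-separated groups of candidate-match embeddings
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers, clusters = kmeans(pts, 2)
```

Each resulting cluster would correspond to one reviewable match category in the interface.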

6.
J Opt Soc Am A Opt Image Sci Vis ; 39(12): 2343-2353, 2022 Dec 01.
Article in English | MEDLINE | ID: mdl-36520758

ABSTRACT

Although many recent deep learning methods have achieved good performance in point cloud analysis, most of them are built upon the heavy cost of manual labeling. Unsupervised representation learning methods have attracted increasing attention due to their high label efficiency, but how to learn more useful representations from unlabeled 3D point clouds is still a challenging problem. Addressing this problem, we propose a novel unsupervised learning approach for point cloud analysis, named ULD-Net, consisting of an equivariant-crop (equiv-crop) module to achieve dense similarity learning. We propose dense similarity learning that maximizes consistency across two randomly transformed global-local views at both the instance level and the point level. To build feature correspondence between global and local views, the equiv-crop is proposed to transform features from the global scope to the local scope. Unlike previous methods that require complicated designs, such as negative pairs and momentum encoders, our ULD-Net benefits from a simple Siamese network that relies solely on a stop-gradient operation to prevent the network from collapsing. We also utilize a feature separability constraint for more representative embeddings. Experimental results show that our ULD-Net achieves the best results among context-based unsupervised methods and performance comparable to supervised models in shape classification and segmentation tasks. On the linear support vector machine classification benchmark, our ULD-Net surpasses the best context-based method, spatiotemporal self-supervised representation learning (STRL), by 1.1% overall accuracy. On tasks with fine-tuning, our ULD-Net outperforms STRL under fully supervised and semisupervised settings: in particular, a 0.1% accuracy gain on the ModelNet40 classification benchmark and a 0.6% mean intersection over union gain on the ShapeNet part segmentation benchmark.

7.
Sensors (Basel) ; 22(23)2022 Dec 05.
Article in English | MEDLINE | ID: mdl-36502189

ABSTRACT

To address the problem that the fixed convolutional kernel of a standard convolutional neural network, together with the isotropy of features, makes feature learning on 3D point cloud data ineffective, this paper proposes a point cloud processing method based on a graph convolution multilayer perceptron, named GC-MLP. Unlike traditional local aggregation operations, the algorithm generates an adaptive kernel from the dynamically learned features of points, so that it can adapt to the structure of the object: the algorithm first adaptively assigns different weights to adjacent points according to the different relationships captured between the points. Local information interaction is then performed with the convolutional layers through a weight-sharing multilayer perceptron. Experimental results show that, on different task benchmark datasets (including the ModelNet40, ShapeNet Part, and S3DIS datasets), our proposed algorithm achieves state-of-the-art performance for both point cloud classification and segmentation tasks.
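The adaptive weighting step, assigning each neighboring point a weight based on its relationship to the center point, can be sketched with a softmax over negative squared distances. This is a simplified, hand-crafted stand-in for GC-MLP's learned kernel, with hypothetical names:

```python
import math

def adaptive_weights(center, neighbors):
    """Softmax over negative squared distances: closer neighbors
    receive larger aggregation weights."""
    scores = [-sum((c - n) ** 2 for c, n in zip(center, nb)) for nb in neighbors]
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

w = adaptive_weights((0.0, 0.0), [(1.0, 0.0), (3.0, 0.0)])  # w[0] dominates
```

In the paper the relation-to-weight mapping is itself learned; the fixed softmax here only conveys the aggregation pattern.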


Subjects
Algorithms; Neural Networks, Computer; Benchmarking; Cloud Computing; Learning
8.
J Opt Soc Am A Opt Image Sci Vis ; 39(6): 1085-1094, 2022 Jun 01.
Article in English | MEDLINE | ID: mdl-36215539

ABSTRACT

The success of deep neural networks usually relies on massive amounts of manually labeled data, which is both expensive and difficult to obtain for many real-world datasets. In this paper, a novel unsupervised representation learning network, UMA-Net, is proposed for downstream 3D object classification. First, a multi-scale shell-based encoder is proposed, which is able to extract local features from different scales in a simple yet effective manner. Second, an improved angular loss is presented to provide a good metric for measuring the similarity between local features and global representations. Subsequently, a self-reconstruction loss is introduced to ensure the global representations do not deviate from the input data. Additionally, the output point clouds are generated by the proposed cross-dim-based decoder. Finally, a linear classifier is trained on the global representations obtained from the pre-trained model. The performance of this model is evaluated on ModelNet40 and applied to the real-world 3D Terracotta Warrior fragments dataset. Experimental results demonstrate that our model achieves comparable performance and narrows the gap between unsupervised and supervised learning approaches in downstream object classification tasks. Moreover, this is the first attempt to apply unsupervised representation learning to 3D Terracotta Warrior fragments. We hope this success can provide a new avenue for the virtual protection of cultural relics.


Subjects
Neural Networks, Computer
9.
Sci Rep ; 12(1): 9450, 2022 Jun 08.
Article in English | MEDLINE | ID: mdl-35676310

ABSTRACT

To obtain a higher simplification rate while retaining geometric features, a simplification framework for point clouds is proposed. First, multi-angle images of the original point cloud are obtained with a virtual camera. Then, feature lines of each image are extracted by a deep neural network. Next, according to the proposed mapping relationship between the acquired 2D feature lines and the original point cloud, feature points of the point cloud are extracted automatically. Finally, the simplified point cloud is obtained by fusing the feature points with simplified non-feature points. The proposed simplification method is applied to four data sets and compared with six other algorithms. The experimental results demonstrate that our method is superior in terms of both geometric feature retention and simplification rate.
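The final fusion step, simplifying the non-feature points and merging them back with the preserved feature points, can be sketched with a voxel-grid average standing in for the paper's learned simplification. The helper names are hypothetical:

```python
def voxel_simplify(points, cell):
    """Collapse all points that fall into the same grid cell of size
    `cell` to their centroid, reducing redundancy in flat regions."""
    buckets = {}
    for p in points:
        key = tuple(int(c // cell) for c in p)
        buckets.setdefault(key, []).append(p)
    return [tuple(sum(xs) / len(b) for xs in zip(*b)) for b in buckets.values()]

feature_pts = [(0.0, 0.0, 1.0)]                                  # kept untouched
non_feature = [(0.1, 0.1, 0.1), (0.2, 0.2, 0.2), (5.0, 5.0, 5.0)]
simplified = feature_pts + voxel_simplify(non_feature, cell=1.0)
```

Feature points bypass the simplifier entirely, which is what preserves the sharp geometry.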

10.
Appl Opt ; 61(6): C80-C88, 2022 Feb 20.
Article in English | MEDLINE | ID: mdl-35201001

ABSTRACT

This study proposes a novel, to the best of our knowledge, transformer-based end-to-end network (TDNet) for point cloud denoising based on encoder-decoder architecture. The encoder is based on the structure of a transformer in natural language processing (NLP). Even though points and sentences are different types of data, the NLP transformer can be improved to be suitable for a point cloud because the point can be regarded as a word. The improved model facilitates point cloud feature extraction and transformation of the input point cloud into the underlying high-dimensional space, which can characterize the semantic relevance between points. Subsequently, the decoder learns the latent manifold of each sampled point from the high-dimensional features obtained by the encoder, finally achieving a clean point cloud. An adaptive sampling approach is introduced during denoising to select points closer to the clean point cloud to reconstruct the surface. This is based on the view that a 3D object is essentially a 2D manifold. Extensive experiments demonstrate that the proposed network is superior in terms of quantitative and qualitative results for synthetic data sets and real-world terracotta warrior fragments.

11.
Comput Methods Programs Biomed ; 215: 106645, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35091228

ABSTRACT

BACKGROUND: The development of cone-beam X-ray luminescence computed tomography (CB-XLCT) has enabled quantitative in-depth biological imaging, but with a severely ill-posed and ill-conditioned inverse problem. Although a predefined permissible source region (PSR) is widely used to alleviate this problem in CB-XLCT imaging, obtaining an accurate PSR remains a challenge for inverse reconstruction. METHODS: We propose optimized prior knowledge via a sparse non-convex approach (OPK_SNCA) for CB-XLCT imaging. First, a non-convex Lp-norm optimization model was employed to cope with the inverse problem, and an iteratively reweighted split augmented Lagrangian shrinkage algorithm was developed to obtain a group of sparse solutions based on different non-convex p values. Second, a series of permissible regions (PRs) with different discretized meshes was obtained, and an intersection operation was applied to the group of PRs to get a reasonable PSR. The final PSR was then adopted as optimized prior knowledge to enhance the quality of the inverse reconstruction. RESULTS: Both simulation experiments and an in vivo experiment were performed to evaluate the efficiency and robustness of the proposed method. CONCLUSIONS: The experimental results demonstrated that our proposed method can significantly improve the imaging quality of the distribution of X-ray-excitable nanophosphors for CB-XLCT.
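The intersection step described in METHODS, combining the permissible regions obtained for different non-convex p values into one conservative PSR, amounts to a set intersection over mesh-node ids. A schematic with hypothetical node-id sets:

```python
def intersect_regions(regions):
    """Intersect permissible regions (sets of mesh-node ids) obtained
    from reconstructions with different non-convex p values; only nodes
    deemed permissible by every reconstruction survive into the PSR."""
    psr = set(regions[0])
    for r in regions[1:]:
        psr &= set(r)
    return psr

psr = intersect_regions([{1, 2, 3, 4}, {2, 3, 4, 7}, {2, 3, 9}])
```

The resulting conservative region then constrains the final reconstruction, shrinking the solution space of the ill-posed problem.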


Subjects
Image Processing, Computer-Assisted; Luminescence; Algorithms; Cone-Beam Computed Tomography; Phantoms, Imaging; X-Rays
12.
Entropy (Basel) ; 23(12)2021 Nov 23.
Article in English | MEDLINE | ID: mdl-34945867

ABSTRACT

Automatically selecting a set of representative views of a 3D virtual cultural relic is crucial for constructing smart museums. There is no consensus regarding the definition of a good view in computer graphics; the same is true of multiple views. View-based methods play an important role in the field of 3D shape retrieval and classification, but it is still difficult to select views that not only conform to subjective human preferences but also provide a good feature description. In this study, we define two novel measures based on information entropy, named depth variation entropy and depth distribution entropy. These measures quantify the amount of information about the depth swings and the distinct depth quantities of each view. First, a canonical-pose 3D cultural relic was generated using principal component analysis. A set of depth maps was then captured by orthographic cameras placed at the dense vertices of a geodesic unit-sphere obtained by subdividing the regular unit-octahedron. Afterwards, the two measures were calculated separately on the depth maps gained from the vertices, and the results on each one-eighth sphere form a group. The views with maximum entropy of depth variation and depth distribution were selected, and further scattered viewpoints were selected. Finally, the threshold word histogram derived from the vector quantization of salient local descriptors on the selected depth maps represented the 3D cultural relic. The viewpoints obtained by the proposed method are independent of the 3D model's initial pose, which eliminates the steps of manually adjusting the model's pose and provides acceptable display views for people. In addition, it was verified on several datasets that the proposed method, which uses the Bag-of-Words mechanism and a deep convolutional neural network, also performs well in retrieval and classification when dealing with only four views.
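The depth distribution entropy can be illustrated as the Shannon entropy of a histogram over a view's depth values; this is a simplified reading of the measure defined above, and `bins` is an assumed parameter:

```python
import math

def depth_distribution_entropy(depths, bins=16):
    """Shannon entropy (bits) of the histogram of depth values:
    flat views score low, views with varied depth score high."""
    lo, hi = min(depths), max(depths)
    width = (hi - lo) / bins or 1.0   # guard against a constant depth map
    counts = [0] * bins
    for d in depths:
        counts[min(int((d - lo) / width), bins - 1)] += 1
    n = len(depths)
    return -sum((c / n) * math.log2(c / n) for c in counts if c)
```

A constant depth map scores 0 bits, while depths spread evenly over all 16 bins score log2(16) = 4 bits, matching the intuition that informative views show more depth variety.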

13.
Sci Rep ; 11(1): 22573, 2021 Nov 19.
Article in English | MEDLINE | ID: mdl-34799593

ABSTRACT

Geometry images parameterise a mesh over a square domain and store the information in a single chart. A one-to-one correspondence between the 2D plane and the 3D model is convenient for processing 3D models. However, in existing geometry images, the parameterised vertices are not all located at intersections of the gridlines. Thus, errors are unavoidable when a 3D mesh is reconstructed from the chart. In this paper, we propose parameterising the surface onto a novel geometry image that preserves the topological neighbourhood information at integer coordinate points on a 2D grid and ensures that the shape of the reconstructed 3D mesh is not changed by the supplemented image data. We find a collection of edges that opens the mesh into a simply connected surface with a single boundary. A point distribution with approximately blue-noise spectral characteristics is computed by capacity-constrained Delaunay triangulation without retriangulation. We move the vertices to the constrained mesh intersections, adjust the degenerate triangles on a regular grid, and fill the blank part by performing a local affine transformation between each triangle in the mesh and the image. Unlike other geometry images, the proposed method yields no error in the reconstructed surface model when floating-point data are stored in the image. High reconstruction accuracy is achieved when the xyz positions are stored in a 16-bit format in each image channel, because topology-preserving geometry images contain only rounding errors, not sampling errors. The method performs a one-to-one mapping between the 3D surface mesh and the points in the 2D image, while foldovers do not appear in the 2D triangular mesh, maintaining the topological structure. This also shows the potential of using 2D image processing algorithms to process 3D models.

14.
Entropy (Basel) ; 22(2)2020 Feb 22.
Article in English | MEDLINE | ID: mdl-33286026

ABSTRACT

Increasingly popular online museums have significantly changed the way people acquire cultural knowledge, and they generate abundant amounts of cultural relics data. In recent years, researchers have used deep learning models, which can automatically extract complex features and have rich representation capabilities, to implement named-entity recognition (NER). However, the lack of labeled data in the field of cultural relics makes it difficult for deep learning models that rely on labeled data to achieve excellent performance. To address this problem, this paper proposes a semi-supervised deep learning model named SCRNER (Semi-supervised model for Cultural Relics' Named Entity Recognition) that utilizes a bidirectional long short-term memory (BiLSTM) and conditional random fields (CRF) model trained on a small amount of labeled data and abundant unlabeled data to attain effective performance. For semi-supervised sample selection, we propose a repeated-labeling (relabeled) strategy that selects samples of high confidence to enlarge the training set iteratively. In addition, we use Embeddings from Language Models (ELMo) representations to dynamically acquire word representations as the model input, addressing the blurred boundaries of cultural object names and the Chinese-specific characteristics of texts in the field of cultural relics. Experimental results demonstrate that our proposed model, trained on limited labeled data, achieves effective performance in the task of named-entity recognition of cultural relics.
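The relabeled strategy, iteratively adding high-confidence predictions on unlabeled text to the training set, can be sketched as follows. Here `predict` stands in for the BiLSTM-CRF tagger and is a hypothetical callable returning a (label, confidence) pair:

```python
def select_confident(unlabeled, predict, threshold=0.95):
    """Keep only samples whose predicted label confidence passes the
    threshold; these (sample, label) pairs are added to the training
    set for the next self-training round."""
    added = []
    for x in unlabeled:
        label, conf = predict(x)
        if conf >= threshold:
            added.append((x, label))
    return added

# toy tagger: confident only on short strings containing "ding"
toy_predict = lambda s: ("RELIC" if "ding" in s else "O",
                         0.99 if len(s) < 6 else 0.5)
picked = select_confident(["ding", "a very long sample"], toy_predict)
```

Repeating this loop, retrain, re-predict, re-select, is what gradually enlarges the labeled set.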

15.
Entropy (Basel) ; 22(10)2020 Oct 16.
Article in English | MEDLINE | ID: mdl-33286937

ABSTRACT

Knowledge graph completion can make knowledge graphs more complete, which is a meaningful research topic. However, existing methods do not make full use of entity semantic information. Another challenge is that deep models require large-scale manually labelled data, which greatly increases manual labour. To alleviate the scarcity of labelled data in the field of cultural relics and capture the rich semantic information of entities, this paper proposes a model based on Bidirectional Encoder Representations from Transformers (BERT) with entity-type information for knowledge graph completion on Chinese texts about cultural relics. In this work, the knowledge graph completion task is treated as a classification task: the entities, relations, and entity-type information are integrated into a textual sequence, and Chinese characters are used as token units, with the input representation constructed by summing token, segment, and position embeddings. A small amount of labelled data is used to pre-train the model, and then a large amount of unlabelled data is used to fine-tune the pre-trained model. The experimental results show that the BERT-KGC model with entity-type information enriches the semantic information of the entities, reducing the ambiguity of the entities and relations to some degree, and achieves more effective performance than the baselines in triple classification, link prediction, and relation prediction tasks using 35% of the labelled cultural relics data.

16.
Entropy (Basel) ; 22(11)2020 Nov 13.
Article in English | MEDLINE | ID: mdl-33287058

ABSTRACT

Computer-aided classification serves as the basis of virtual cultural relic management and display. The majority of existing cultural relic classification methods require labelled samples; however, in practical applications, category labels are often missing or the samples of different categories are unevenly distributed. To solve this problem, we propose a 3D cultural relic classification method based on a low-dimensional descriptor and unsupervised learning. First, the scale-invariant heat kernel signature (Si-HKS) was computed. The heat kernel signature denotes the heat flow between any two vertices across a 3D shape, with heat diffusion governed by the heat equation. Second, the Bag-of-Words (BoW) mechanism was utilized to transform the Si-HKS descriptor into a low-dimensional feature tensor, named the SiHKS-BoW descriptor, which is related to entropy. Finally, we applied an unsupervised learning algorithm, called MKDSIF-FCM, to conduct the classification task. A dataset consisting of 3D models from 41 Tang tri-color Hu terracotta figurines was utilized to validate the effectiveness of the proposed method. A series of experiments demonstrated that the SiHKS-BoW descriptor along with the MKDSIF-FCM algorithm showed the best classification accuracy, up to 99.41%, offering a solution for real cases with absent category labels and unevenly distributed categories. The present work promotes the application of virtual reality in digital projects and enriches the content of digital archaeology.
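The Bag-of-Words step, quantizing Si-HKS descriptors against a learned codebook into a low-dimensional histogram, can be sketched with nearest-codeword assignment. This is generic BoW quantization, not the exact SiHKS-BoW construction:

```python
def bow_histogram(descriptors, codebook):
    """Assign each descriptor to its nearest codeword (squared Euclidean
    distance) and return the normalized occurrence histogram."""
    hist = [0] * len(codebook)
    for d in descriptors:
        j = min(range(len(codebook)),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(d, codebook[c])))
        hist[j] += 1
    total = sum(hist)
    return [h / total for h in hist]

h = bow_histogram([(0, 1), (1, 0), (9, 9)], codebook=[(0, 0), (10, 10)])
```

The histogram's length equals the codebook size, which is what makes the final descriptor low-dimensional regardless of how many raw descriptors a shape produces.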

17.
J Opt Soc Am A Opt Image Sci Vis ; 37(11): 1711-1720, 2020 Nov 01.
Article in English | MEDLINE | ID: mdl-33175747

ABSTRACT

The emergence of the three-dimensional (3D) scanner has greatly benefited archeology: cultural heritage artifacts can now be stored in computers and presented on the Internet. Because many Terracotta Warriors have been found predominantly in fragments, the pre-processing of these fragments is very important. The raw point cloud of the fragments contains many redundant points, requiring excessive storage space and much post-processing time. Thus, an effective point cloud simplification method is proposed for 3D Terracotta Warrior fragments. First, an algorithm for extracting feature points based on local structure is proposed. By constructing a k-dimensional (k-d) tree to establish the k-nearest neighbourhood of the point cloud and comparing the feature discriminant parameter with a characteristic threshold, the feature points are separated from the non-feature points. Second, a deep neural network is constructed to simplify the non-feature points. Finally, the feature points and the simplified non-feature points are merged to form the complete simplified point cloud. Experiments were designed and conducted with public point cloud data and real-world Terracotta Warrior fragment data. Excellent simplification results were obtained, indicating that geometric features can be preserved very well.
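The feature/non-feature split, comparing a per-point discriminant against a threshold over the k-nearest neighbourhood, can be sketched with brute-force k-NN and the distance to the neighbour centroid as a stand-in discriminant. The paper's exact discriminant parameter is not specified here, so this is an illustrative assumption:

```python
def knn_discriminant(points, k):
    """For each point, the distance to the centroid of its k nearest
    neighbours: near zero on flat or linear regions, large at sharp
    features or isolated points."""
    scores = []
    for i, p in enumerate(points):
        nbrs = sorted((q for j, q in enumerate(points) if j != i),
                      key=lambda q: sum((a - b) ** 2 for a, b in zip(p, q)))[:k]
        centroid = tuple(sum(xs) / k for xs in zip(*nbrs))
        scores.append(sum((a - b) ** 2 for a, b in zip(p, centroid)) ** 0.5)
    return scores

pts = [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0), (2, 3)]
s = knn_discriminant(pts, k=2)   # the off-line point scores highest
```

Thresholding these scores yields the feature set (kept verbatim) and the non-feature set (handed to the simplifier).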

18.
Biomed Opt Express ; 11(7): 3717-3732, 2020 Jul 01.
Article in English | MEDLINE | ID: mdl-33014562

ABSTRACT

Cone-beam X-ray luminescence computed tomography (CB-XLCT) has emerged as a novel hybrid technique for early detection of small tumors in vivo. However, severe ill-posedness is still a challenge for CB-XLCT imaging. In this study, an adaptive shrinking reconstruction framework that requires no prior information is proposed for CB-XLCT. During reconstruction, the mesh nodes with a higher probability of contributing to the target distribution are automatically selected for imaging. In particular, an adaptive shrinking function is designed to automatically control the permissible source region at a multi-scale rate. Both 3D digital-mouse and in vivo experiments were carried out to test the performance of our method. The results indicate that the proposed framework can dramatically improve the imaging quality of CB-XLCT.

19.
Biomed Res Int ; 2020: 8608209, 2020.
Article in English | MEDLINE | ID: mdl-32420376

ABSTRACT

Sex estimation from the skull is a prominent research topic in forensic anthropology, with important applications in criminal investigation, archeology, and anthropology. It is crucial in forensic investigations, whether in legal situations involving living people or in identifying mortal remains. The aim of this study is to establish a skull-based sex estimation model for the Chinese population, providing a scientific reference for the practical application of forensic medicine and anthropology. Taking the supraorbital margin and frontal bone of the skull as the research objects, we propose an objective method for skull sex estimation using the wavelet and Fourier transforms. First, the supraorbital margin and frontal bone were quantified by the wavelet and Fourier transforms; the extracted features were then classified by a support vector machine (SVM), and the model was tested. The experimental results show that the accuracy of sex discrimination is 90.9% for males and 94.4% for females, which is higher than that of morphological and measurement methods. Compared with traditional methods, this approach has a stronger theoretical basis, greater objectivity, and a higher accuracy rate.
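The Fourier quantification step can be illustrated with classic Fourier descriptors: sample the supraorbital contour, treat each (x, y) sample as a complex number, and keep low-order DFT magnitudes as shape features for the SVM. This is a generic sketch, not the paper's exact feature definition:

```python
import cmath, math

def fourier_descriptor(contour, n_coeffs):
    """Magnitudes of the first DFT coefficients of a closed contour;
    magnitudes are invariant to rotation and starting point."""
    z = [complex(x, y) for x, y in contour]
    n = len(z)
    return [abs(sum(z[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)) / n)
            for k in range(n_coeffs)]

circle = [(math.cos(2 * math.pi * t / 32), math.sin(2 * math.pi * t / 32))
          for t in range(32)]
fd = fourier_descriptor(circle, 4)   # energy concentrates at k = 1
```

A unit circle puts all its energy in the k = 1 coefficient, while a more sinuous contour spreads energy into higher harmonics, which is what lets the classifier separate shape families.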


Subjects
Forensic Anthropology; Sex Determination by Skeleton; Skull/diagnostic imaging; Adult; Female; Fourier Analysis; Humans; Male
20.
Comput Math Methods Med ; 2019: 9163547, 2019.
Article in English | MEDLINE | ID: mdl-30774706

ABSTRACT

Sex determination from skeletons is a significant step in forensic anthropology. Previous skeletal sex assessments relied on anthropologists' subjective visual analysis of sexually dimorphic features. In this paper, we propose an improved backpropagation neural network (BPNN) to determine sex from the skull. It adds a momentum term to improve the convergence speed and to avoid falling into local minima. A regularization operator is used to ensure the stability of the algorithm, and the AdaBoost ensemble algorithm is used to improve the generalization ability of the model. 267 skulls were used in the experiment, of which 153 were female and 114 were male. Six characteristics of the skull, obtained by computer-aided measurement, are used as the network inputs. Two BPNN structures were tested, namely [6; 6; 2] and [6; 12; 2], of which the [6; 12; 2] model has the better average accuracy. With η = 0.5 and α = 0.9, the classification accuracy is best: the accuracy of the training stage is 97.232% with a mean squared error (MSE) of 0.01, and the accuracy of the testing stage is 96.764% with an MSE of 1.016. Compared with traditional methods, the model has stronger learning ability, faster convergence, and higher classification accuracy.
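The momentum term works by accumulating a velocity so updates keep moving through shallow regions instead of stalling in local minima. A minimal illustration on f(w) = w²; the learning rate and momentum values here are arbitrary placeholders, not the paper's η and α:

```python
def minimize_quadratic(w0, lr=0.1, momentum=0.9, iters=200):
    """Classical momentum update: v <- momentum*v - lr*grad(w); w <- w + v.
    For f(w) = w^2 the gradient is 2w, so the iterate decays toward 0."""
    w, v = w0, 0.0
    for _ in range(iters):
        v = momentum * v - lr * (2.0 * w)
        w = w + v
    return w

w_final = minimize_quadratic(1.0)   # spirals in toward the minimum at 0
```

With momentum, each update blends the previous velocity with the fresh gradient, which both smooths oscillations and speeds convergence, the two benefits the abstract claims.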


Subjects
Sex Determination by Skeleton/methods; Skull/anatomy & histology; Adolescent; Adult; Aged; Aged, 80 and over; Algorithms; Cephalometry/methods; Cephalometry/statistics & numerical data; Discriminant Analysis; Female; Forensic Anthropology/methods; Forensic Anthropology/statistics & numerical data; Humans; Imaging, Three-Dimensional; Male; Middle Aged; Models, Anatomic; Neural Networks, Computer; Sex Characteristics; Sex Determination by Skeleton/statistics & numerical data; Skull/diagnostic imaging; Tomography, X-Ray Computed; Young Adult