1.
Neural Netw ; 179: 106524, 2024 Jul 10.
Article in English | MEDLINE | ID: mdl-39029299

ABSTRACT

Human pose estimation typically encompasses three categories: heatmap-, regression-, and integral-based methods. While integral-based methods possess advantages such as end-to-end learning, full-convolution learning, and freedom from quantization errors, they have garnered comparatively little attention due to inferior performance. In this paper, we revisit integral-based approaches for human pose estimation and propose a novel implicit heatmap learning framework. The framework learns the true distribution of keypoints from the perspective of maximum likelihood estimation, aiming to mitigate the shape and variance ambiguity inherent in implicit heatmaps. Specifically, Simple Implicit Heatmap Normalization (SIHN) is first introduced to calculate implicit heatmaps as an efficient and effective representation for keypoint localization, replacing the vanilla softmax normalization method. Because implicit heatmaps may introduce variance and shape ambiguity by their very nature, we propose a Differentiable Spatial-to-Distributive Transform (DSDT) method to map those implicit heatmaps onto the transformation coefficients of a deformed distribution. The deformed distribution is predicted by a likelihood-based generative model to resolve the shape ambiguity effectively, and the transformation coefficients are learned by a regression model to resolve the variance ambiguity. Additionally, to expedite the acquisition of precise shape representations during training, we introduce a Wasserstein Distance-based Constraint (WDC) to ensure stable and reasonable supervision during the initial generation of implicit heatmaps. Experimental results on both the MSCOCO and MPII datasets demonstrate the effectiveness of our proposed method, which achieves competitive performance against heatmap-based approaches while maintaining the advantages of integral-based approaches. Our source code and pre-trained models are available at https://github.com/ducongju/IHL.
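For readers unfamiliar with integral-based decoding, here is a minimal NumPy sketch of soft-argmax keypoint localization using the vanilla softmax normalization that SIHN is said to replace; the function name and toy scores are illustrative assumptions, not code from the linked repository.

```python
import numpy as np

def soft_argmax_2d(logits):
    """Integral-based keypoint decoding: normalize a score map into an
    implicit heatmap, then take its expected (x, y) coordinate."""
    h, w = logits.shape
    flat = logits.reshape(-1)
    flat = flat - flat.max()                                      # numerical stability
    heatmap = (np.exp(flat) / np.exp(flat).sum()).reshape(h, w)   # vanilla softmax
    ys, xs = np.mgrid[0:h, 0:w]
    # Expected coordinate under the implicit heatmap; differentiable w.r.t. logits.
    return float((heatmap * xs).sum()), float((heatmap * ys).sum())

# Toy usage: a peaked score map decodes near its peak at (x=30, y=20);
# the softmax tail pulls the estimate slightly toward the grid center.
scores = np.zeros((64, 48))
scores[20, 30] = 10.0
print(soft_argmax_2d(scores))
```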

2.
Int J Biol Macromol ; 266(Pt 2): 131109, 2024 May.
Article in English | MEDLINE | ID: mdl-38531520

ABSTRACT

Water buffalo is the only mammal found so far to degrade lignin, and laccase plays an indispensable role in lignin degradation. In this study, multiple laccase genes were amplified from the lignin-degrading bacteria Bacillus cereus and Ochrobactrum pseudintermedium derived from the water buffalo rumen. Subsequently, the corresponding recombinant plasmids were transformed into the E. coli expression system BL21 (DE3) for induced expression with isopropyl β-D-thiogalactopyranoside (IPTG). After preliminary screening, protein purification and enzyme activity assays, Lac3833, which showed soluble expression and high enzyme activity, was selected for characterization, especially of its lignin-degrading ability. The results showed that the optimum reaction temperature of Lac3833 was 40 °C for different substrates. The relative activity of Lac3833 was highest at pH 4.5 with ABTS as substrate and at pH 5.5 with 2,6-DMP or guaiacol. Additionally, Lac3833 maintained high enzyme activity across different temperatures and pH values and in solutions containing Na+, K+, Mg2+, Ca2+ and Mn2+. Importantly, compared with the negative control, treatment with recombinant laccase Lac3833 significantly degraded lignin. In conclusion, this is a pioneering study producing a recombinant laccase with lignin-degrading ability from bacteria of the water buffalo rumen, which will provide new insights for the exploitation of more lignin-degrading enzymes.


Subject(s)
Buffaloes , Cloning, Molecular , Laccase , Lignin , Recombinant Proteins , Rumen , Temperature , Animals , Laccase/genetics , Laccase/metabolism , Lignin/metabolism , Rumen/microbiology , Recombinant Proteins/metabolism , Recombinant Proteins/genetics , Hydrogen-Ion Concentration , Gene Expression , Escherichia coli/genetics , Escherichia coli/metabolism , Bacteria/enzymology , Bacteria/genetics , Substrate Specificity
3.
Neural Netw ; 160: 164-174, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36657330

ABSTRACT

Existing face super-resolution methods depend on deep convolutional networks (DCN) to recover high-quality reconstructed images. They either acquire information in a single space by designing complex models for direct reconstruction, or employ additional networks to extract multiple priors to enhance the feature representation. However, it remains challenging for existing methods to perform well because they cannot learn complete and uniform representations. To this end, we propose a self-attention learning network (SLNet) for three-stage face super-resolution, which fully explores the interdependence of low- and high-level spaces to compensate for the information used for reconstruction. Firstly, SLNet uses a hierarchical feature learning framework to obtain shallow information in the low-level space. Then, the shallow information, which carries cumulative errors introduced by the DCN, is improved under high-resolution (HR) supervision, yielding an intermediate reconstruction result and a strong intermediate benchmark. Finally, the improved feature representation is further enhanced in the high-level space by a multi-scale context-aware encoder-decoder for facial reconstruction. The features in both spaces are explored progressively from coarse to fine reconstruction information. The experimental results show that SLNet achieves competitive performance compared to state-of-the-art methods.


Subject(s)
Deep Learning , Learning , Benchmarking , Attention , Image Processing, Computer-Assisted
4.
IEEE Trans Neural Netw Learn Syst ; 34(7): 3594-3608, 2023 Jul.
Article in English | MEDLINE | ID: mdl-34559666

ABSTRACT

Deep learning models have been able to generate rain-free images effectively, but extending these methods to complex rain conditions, where rain streaks show various blurring degrees, shapes, and densities, has remained an open problem. Among the major challenges are the capacity to encode the rain streaks and the sheer difficulty of learning multi-scale context features that preserve both global color coherence and exactness of detail. To address the first problem, we design a non-local fusion module (NFM) and an attention fusion module (AFM), and construct a multi-level pyramid architecture to explore the local and global correlations of rain information from the rain image pyramid. More specifically, we apply the non-local operation to fully exploit the self-similarity of rain streaks and perform the fusion of multi-scale features along the image pyramid. To address the latter challenge, we additionally design a residual learning branch that adaptively bridges the gaps (e.g., texture and color information) between the predicted rain-free image and the clean background via a hybrid embedding representation. Extensive results demonstrate that our proposed method generates much better rain-free images on several benchmark datasets than state-of-the-art algorithms. Moreover, we conduct joint evaluation experiments with respect to deraining performance and detection/segmentation accuracy to further verify the effectiveness of our deraining method for downstream vision tasks/applications. The source code is available at https://github.com/kuihua/MSHFN.
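As a rough illustration of the non-local operation mentioned above, the following is a minimal embedded-Gaussian non-local block in PyTorch; it shows how every spatial position attends to all others (the kind of long-range self-similarity exploited for rain streaks), but the module name, channel sizes, and placement are assumptions rather than the paper's NFM/AFM design.

```python
import torch
import torch.nn as nn

class NonLocalBlock(nn.Module):
    """Minimal embedded-Gaussian non-local operation with a residual connection."""

    def __init__(self, channels, inter=None):
        super().__init__()
        inter = inter or channels // 2
        self.theta = nn.Conv2d(channels, inter, 1)   # query embedding
        self.phi = nn.Conv2d(channels, inter, 1)     # key embedding
        self.g = nn.Conv2d(channels, inter, 1)       # value embedding
        self.out = nn.Conv2d(inter, channels, 1)

    def forward(self, x):
        n, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # (N, HW, C')
        k = self.phi(x).flatten(2)                     # (N, C', HW)
        v = self.g(x).flatten(2).transpose(1, 2)       # (N, HW, C')
        attn = torch.softmax(q @ k, dim=-1)            # pairwise affinities over all positions
        y = (attn @ v).transpose(1, 2).reshape(n, -1, h, w)
        return x + self.out(y)

# Toy usage on a small feature map.
feats = torch.randn(1, 64, 16, 16)
print(NonLocalBlock(64)(feats).shape)   # torch.Size([1, 64, 16, 16])
```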


Subject(s)
Algorithms , Neural Networks, Computer , Benchmarking , Software , Image Processing, Computer-Assisted
5.
Entropy (Basel) ; 24(11)2022 Nov 13.
Article in English | MEDLINE | ID: mdl-36421502

ABSTRACT

This paper proposes an improved human-body-segmentation algorithm with attention-based feature fusion and a refined corner-based feature-point design with sub-pixel stereo matching for the anthropometric system. In the human-body-segmentation algorithm, four CBAMs are embedded in the four middle convolution layers of the backbone network (ResNet101) of PSPNet to achieve better feature fusion in space and channels, so as to improve accuracy. The common convolution in the residual blocks of ResNet101 is substituted by group convolution to reduce model parameters and computational cost, thereby optimizing efficiency. For the stereo-matching scheme, a corner-based feature point is designed to obtain the feature-point coordinates at sub-pixel level, so that precision is refined. A regional constraint is applied according to the characteristic of the checkerboard corner points, thereby reducing complexity. Experimental results demonstrated that the anthropometric system with the proposed CBAM-based human-body-segmentation algorithm and corner-based stereo-matching scheme can significantly outperform the state-of-the-art system in accuracy. It can also meet the national standards GB/T 2664-2017, GA 258-2009 and GB/T 2665-2017; and the textile industry standards FZ/T 73029-2019, FZ/T 73017-2014, FZ/T 73059-2017 and FZ/T 73022-2019.
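For context, the sketch below shows a generic CBAM block (channel attention followed by spatial attention) of the kind embedded in ResNet101 stages; the reduction ratio, kernel size, and insertion points are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Minimal Convolutional Block Attention Module: channel attention
    followed by spatial attention, typically inserted after a conv stage."""

    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        # Channel attention: shared MLP over globally average- and max-pooled features.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        # Spatial attention: convolution over channel-wise average and max maps.
        self.spatial = nn.Conv2d(2, 1, spatial_kernel,
                                 padding=spatial_kernel // 2, bias=False)

    def forward(self, x):
        avg = torch.mean(x, dim=(2, 3), keepdim=True)
        mx = torch.amax(x, dim=(2, 3), keepdim=True)
        x = x * torch.sigmoid(self.mlp(avg) + self.mlp(mx))        # channel gate
        avg_map = torch.mean(x, dim=1, keepdim=True)
        max_map = torch.amax(x, dim=1, keepdim=True)
        x = x * torch.sigmoid(self.spatial(torch.cat([avg_map, max_map], dim=1)))  # spatial gate
        return x

# Toy usage on a feature map shaped like a ResNet stage output.
feats = torch.randn(1, 256, 32, 32)
print(CBAM(256)(feats).shape)   # torch.Size([1, 256, 32, 32])
```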

6.
Article in English | MEDLINE | ID: mdl-36054383

ABSTRACT

Recently, facial priors (e.g., facial parsing maps and facial landmarks) have been widely employed in prior-guided face super-resolution (FSR) because they provide the locations of facial components and facial structure information, and help predict the missing high-frequency (HF) information. However, most existing approaches suffer from two shortcomings: 1) the extracted facial priors are inaccurate since they are extracted from low-resolution (LR) or low-quality super-resolved (SR) face images and 2) they only consider embedding facial priors into the reconstruction process from LR to SR face images, thus failing to exploit facial priors when generating LR face images. In this article, we propose a novel pre-prior guided approach that extracts facial prior information from original high-resolution (HR) face images and embeds it into LR ones to obtain HF-information-rich LR face images, thereby improving the performance of face reconstruction. Specifically, a novel component hybrid method is proposed, which fuses HR facial components and the LR facial background to generate new LR face images (namely, LRmix) via facial parsing maps extracted from HR face images. Furthermore, we design a component hybrid network (CHNet) that learns the LR-to-LRmix mapping function to ensure that LRmix can be obtained from LR face images on testing and real-world datasets. Experimental results show that our proposed scheme significantly improves the reconstruction performance for FSR.
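A toy NumPy sketch of the component-hybrid idea: down-sample HR facial components selected by a parsing-map mask and paste them onto the LR background to form an HF-information-rich LR image (LRmix). The function name, the naive box down-sampling, and the random stand-in data are assumptions for illustration only.

```python
import numpy as np

def make_lrmix(hr_img, lr_img, component_mask, scale=4):
    """Paste down-sampled HR facial components (selected by a binary parsing
    mask) onto the LR background to form an LRmix-style image.

    hr_img: (H, W, 3); lr_img: (H/scale, W/scale, 3);
    component_mask: (H, W) binary map of facial components (eyes, nose, mouth)."""
    h, w = lr_img.shape[:2]
    # Naive box down-sampling of the HR image and of the mask to LR size.
    hr_small = hr_img.reshape(h, scale, w, scale, 3).mean(axis=(1, 3))
    mask_small = component_mask.reshape(h, scale, w, scale).mean(axis=(1, 3))
    mask_small = (mask_small > 0.5)[..., None]
    return np.where(mask_small, hr_small, lr_img)

# Toy usage with random arrays standing in for a real face and parsing map.
hr = np.random.rand(128, 128, 3)
lr = np.random.rand(32, 32, 3)
mask = np.zeros((128, 128)); mask[40:90, 30:100] = 1.0
print(make_lrmix(hr, lr, mask).shape)   # (32, 32, 3)
```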

7.
Entropy (Basel) ; 24(8)2022 Aug 08.
Article in English | MEDLINE | ID: mdl-36010755

ABSTRACT

With the development of convolutional neural networks, the performance of pedestrian detection has been greatly improved by deep learning models. However, the presence of pseudo pedestrians leads to reduced accuracy in pedestrian detection. To solve the problem that existing pedestrian detection algorithms cannot distinguish pseudo pedestrians from real pedestrians, a real and pseudo pedestrian detection method with CA-YOLOv5s based on stereo image fusion is proposed in this paper. Firstly, two-view images of the pedestrian are captured by a binocular stereo camera. Then, the proposed CA-YOLOv5s pedestrian detection algorithm is applied to the left-view and right-view images to detect the respective pedestrian regions. Afterwards, the detected left-view and right-view pedestrian regions are matched to obtain the feature point set, and the 3D spatial coordinates of the feature point set are calculated with Zhengyou Zhang's calibration method. Finally, the RANSAC plane-fitting algorithm is adopted to extract the 3D features of the feature point set, and real and pseudo pedestrian detection is achieved by a trained SVM. The proposed method effectively solves the pseudo pedestrian detection problem and efficiently improves accuracy. Experimental results also show that, on a dataset with real and pseudo pedestrians, the proposed method significantly outperforms other existing pedestrian detection algorithms in terms of accuracy and precision.
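The RANSAC plane-fitting step can be sketched as follows in NumPy: a pseudo pedestrian (e.g., a flat poster or screen) yields a high inlier ratio to a single plane, whereas a real pedestrian's 3D points do not. The threshold, iteration count, and toy point cloud are illustrative assumptions.

```python
import numpy as np

def ransac_plane(points, n_iters=200, threshold=0.05, rng=None):
    """Fit a plane n·p + d = 0 to 3D points with RANSAC.

    points: (N, 3) array of 3D coordinates from stereo matching.
    Returns (normal, d, inlier_mask)."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = (np.array([0.0, 0.0, 1.0]), 0.0)
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        dist = np.abs(points @ normal + d)   # point-to-plane distances
        inliers = dist < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model[0], best_model[1], best_inliers

# Toy usage: a nearly planar patch gives a high inlier ratio.
pts = np.column_stack([np.random.rand(500), np.random.rand(500),
                       np.full(500, 2.0) + 0.01 * np.random.randn(500)])
_, _, mask = ransac_plane(pts)
print(mask.mean())   # close to 1.0 for a planar (pseudo-pedestrian-like) surface
```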

8.
IEEE Trans Neural Netw Learn Syst ; 33(1): 378-391, 2022 01.
Article in English | MEDLINE | ID: mdl-33074829

ABSTRACT

Along with the performance improvement of deep-learning-based face hallucination methods, various face priors (facial shape, facial landmark heatmaps, or parsing maps) have been used to describe holistic and partial facial features, making the generation of super-resolved face images expensive and laborious. To deal with this problem, we present a simple yet effective dual-path deep fusion network (DPDFN) for face image super-resolution (SR) that requires no additional face prior and learns the global facial shape and local facial components through two individual branches. The proposed DPDFN is composed of three components: a global memory subnetwork (GMN), a local reinforcement subnetwork (LRN), and a fusion and reconstruction module (FRM). In particular, GMN characterizes the holistic facial shape by employing recurrent dense residual learning to excavate wide-range context across spatial series. Meanwhile, LRN is committed to learning local facial components, focusing on the patch-wise mapping relations between low-resolution (LR) and high-resolution (HR) space in local regions rather than the entire image. Furthermore, by aggregating the global and local facial information from the preceding dual-path subnetworks, FRM can generate the corresponding high-quality face image. Experimental results of face hallucination on public face datasets and face recognition on real-world datasets (VGGface and SCFace) show superiority over previous state-of-the-art methods in both visual quality and objective metrics.


Subject(s)
Facial Recognition , Neural Networks, Computer , Algorithms , Face , Hallucinations , Humans , Image Processing, Computer-Assisted/methods
9.
Entropy (Basel) ; 23(7)2021 Jul 07.
Article in English | MEDLINE | ID: mdl-34356407

ABSTRACT

This paper proposes an improved stereo matching algorithm for a vehicle speed measurement system based on spatial and temporal image fusion (STIF). Firstly, matching point pairs in the license plate area whose distance to the camera is obviously abnormal are roughly removed according to the license plate specification. Secondly, more mismatching point pairs are finely removed according to a local neighborhood consistency constraint (LNCC). Thirdly, the optimum speed measurement point pairs are selected for successive stereo frame pairs by STIF of the binocular stereo video, so that the 3D points corresponding to the matching point pairs for speed measurement in successive stereo frame pairs lie at the same position on the real vehicle, which significantly improves the vehicle speed measurement accuracy. LNCC and STIF can be applied not only to the license plate but also to the vehicle logo, lights, mirrors, etc. Experimental results demonstrate that the vehicle speed measurement system with the proposed LNCC+STIF stereo matching algorithm significantly outperforms the state-of-the-art system in accuracy.
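To make the 3D speed computation concrete, here is a hedged NumPy sketch: triangulate a matched point from a rectified stereo pair in two successive stereo frame pairs and divide the displacement by the frame interval. The camera parameters and pixel coordinates below are made-up numbers, not values from the paper.

```python
import numpy as np

def triangulate(xl, xr, y, f, baseline, cx, cy):
    """Recover a 3D point from a rectified stereo correspondence.
    xl, xr: x-coordinates of the matched point in left/right images (pixels);
    y: common y-coordinate; f: focal length (pixels); baseline: metres."""
    disparity = xl - xr
    Z = f * baseline / disparity
    X = (xl - cx) * Z / f
    Y = (y - cy) * Z / f
    return np.array([X, Y, Z])

# Hypothetical numbers: the same license-plate corner matched in two stereo
# frame pairs captured dt seconds apart.
f, B, cx, cy, dt = 1400.0, 0.5, 960.0, 540.0, 0.04
p1 = triangulate(1020.0, 980.0, 600.0, f, B, cx, cy)
p2 = triangulate(1040.0, 998.0, 604.0, f, B, cx, cy)
speed_kmh = np.linalg.norm(p2 - p1) / dt * 3.6   # displacement / time, in km/h
print(round(speed_kmh, 1))
```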

10.
Entropy (Basel) ; 23(7)2021 Jul 17.
Article in English | MEDLINE | ID: mdl-34356451

ABSTRACT

A robust vehicle speed measurement system based on feature information fusion for vehicle multi-characteristic detection is proposed in this paper. A vehicle multi-characteristic dataset is constructed. With this dataset, seven CNN-based modern object detection algorithms are trained for vehicle multi-characteristic detection. The FPN-based YOLOv4 is selected as the best vehicle multi-characteristic detection algorithm, as it fuses feature information of different scales with both rich high-level semantic information and detailed low-level location information. The YOLOv4 algorithm is then improved by combining it with an attention mechanism, in which the residual module in YOLOv4 is replaced by the ECA channel attention module with cross-channel interaction. An improved ECA-YOLOv4 object detection algorithm based on both feature information fusion and cross-channel interaction is proposed, which improves the performance of YOLOv4 for vehicle multi-characteristic detection and reduces the model parameter size and FLOPs as well. A multi-characteristic fused speed measurement system based on the license plate, logo, and lights is designed accordingly, and its performance is verified by experiments. The experimental results show that the speed measurement error rate of the proposed system meets the requirement of the China national standard GB/T 21555-2007, which stipulates that the speed measurement error rate should be less than 6%. The proposed system efficiently enhances vehicle speed measurement accuracy and effectively improves robustness.
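A minimal PyTorch sketch of an ECA-style channel attention module of the kind described above: global average pooling followed by a 1-D convolution across channels for local cross-channel interaction, avoiding the dimensionality reduction of an MLP. The kernel size and placement are assumptions, not the paper's exact settings.

```python
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient Channel Attention: pool each channel to a scalar, then run a
    1-D convolution across the channel axis to produce per-channel gates."""

    def __init__(self, k_size=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size,
                              padding=k_size // 2, bias=False)

    def forward(self, x):                         # x: (N, C, H, W)
        y = x.mean(dim=(2, 3))                    # (N, C) global descriptor
        y = self.conv(y.unsqueeze(1)).squeeze(1)  # conv across channels
        return x * torch.sigmoid(y)[:, :, None, None]

# Toy usage on a YOLO-style feature map.
feats = torch.randn(2, 128, 20, 20)
print(ECA()(feats).shape)    # torch.Size([2, 128, 20, 20])
```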

11.
Sensors (Basel) ; 19(11)2019 Jun 05.
Article in English | MEDLINE | ID: mdl-31195685

ABSTRACT

Cooperative communication improves the link throughput of wireless networks through spatial diversity. However, it reduces the frequency reuse of the entire network due to the enlarged link interference range introduced by each helper. In this paper, we propose a cooperative medium access control (MAC) protocol with optimal relay selection (ORS-CMAC) for multihop, multirate large scale networks, which can reduce the interference range and improve the network throughput. Then, we investigate the performance gain achieved by these two competitive factors, i.e., the spatial frequency reuse gain and spatial diversity gain, in large scale wireless networks. The expressions of maximum network throughput for direct transmissions and cooperative transmissions in the whole network are derived as a function of the number of concurrent transmission links, data packet length, and average packet transmission time. Simulation results validate the effectiveness of the theoretical results. The theoretical and simulation results show that the helper can reduce the spatial frequency reuse slightly, and spatial diversity gain can compensate for the decrease of the spatial frequency reuse, thereby improving the network throughput from the viewpoint of the whole network.
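Purely as an illustrative form (not the paper's derived expression), a network-wide throughput of the kind described, written in terms of the variables named in the abstract, might look like

```latex
S_{\max} \;\approx\; \frac{N_{\text{conc}} \cdot L}{\bar{T}_{\text{pkt}}},
```

where N_conc is the number of concurrent transmission links, L the data packet length, and T̄_pkt the average packet transmission time; cooperation reduces N_conc (spatial frequency reuse) but also reduces T̄_pkt (spatial diversity gain), which is exactly the trade-off the paper quantifies.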

12.
IEEE Trans Neural Netw Learn Syst ; 30(11): 3275-3286, 2019 11.
Article in English | MEDLINE | ID: mdl-30703043

ABSTRACT

Convolutional neural networks (CNNs) have wide applications in pattern recognition and image processing. Despite recent advances, much remains to be done for CNNs to learn better representations of image samples, so further optimization of CNNs is needed. To achieve good classification performance, intuitively, samples' interclass separability and intraclass compactness should be simultaneously maximized. Accordingly, in this paper, we propose a new network, named separability and compactness network (SCNet), to address this problem. SCNet minimizes the softmax loss and the distance between features of samples from the same class under a jointly supervised framework, resulting in simultaneous maximization of interclass separability and intraclass compactness of samples. Furthermore, considering the convenience and efficiency of the cosine similarity in face recognition tasks, we incorporate it into SCNet's distance metric to enable sample features from the same class to line up in the same direction and those from different classes to have a large angle of separation. We apply SCNet to three different tasks: visual classification, face recognition, and image super-resolution. Experiments on both public datasets and real-world satellite images validate the effectiveness of our SCNet.
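A hedged PyTorch sketch of a joint loss in the spirit of the description above (softmax cross-entropy plus a cosine-similarity compactness term toward learnable class centers); the exact formulation, center update rule, and weighting in SCNet may well differ.

```python
import torch
import torch.nn.functional as F

def joint_separability_compactness_loss(features, logits, labels, centers, lam=0.1):
    """Softmax cross-entropy plus a cosine term pulling each sample's feature
    toward its class center, so same-class features align in direction.

    features: (N, D); logits: (N, C); labels: (N,) long;
    centers: (C, D) learnable per-class centers; lam weights the two terms."""
    ce = F.cross_entropy(logits, labels)
    cos = F.cosine_similarity(features, centers[labels], dim=1)   # (N,)
    compactness = (1.0 - cos).mean()          # zero when perfectly aligned
    return ce + lam * compactness

# Toy usage with random tensors.
N, D, C = 8, 64, 10
feats, logits = torch.randn(N, D), torch.randn(N, C)
labels = torch.randint(0, C, (N,))
centers = torch.randn(C, D, requires_grad=True)
print(joint_separability_compactness_loss(feats, logits, labels, centers))
```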

13.
Int J Comput Assist Radiol Surg ; 12(12): 2129-2143, 2017 Dec.
Article in English | MEDLINE | ID: mdl-28432489

ABSTRACT

PURPOSE: There are many proven problems associated with traditional surgical planning methods for orthognathic surgery. To address these problems, we developed a computer-aided surgical simulation (CASS) system, the AnatomicAligner, to plan orthognathic surgery following our streamlined clinical protocol. METHODS: The system includes six modules: image segmentation and three-dimensional (3D) reconstruction, registration and reorientation of models to neutral head posture, 3D cephalometric analysis, virtual osteotomy, surgical simulation, and surgical splint generation. The accuracy of the system was validated in a stepwise fashion: first to evaluate the accuracy of AnatomicAligner using 30 sets of patient data, then to evaluate the fitting of splints generated by AnatomicAligner using 10 sets of patient data. The industry gold-standard system, Mimics, was used as the reference. RESULTS: When comparing the results of segmentation, virtual osteotomy, and transformation achieved with AnatomicAligner to those achieved with Mimics, the absolute deviation between the two systems was clinically insignificant. The average surface deviation between the two models after 3D model reconstruction in AnatomicAligner and Mimics was 0.3 mm with a standard deviation (SD) of 0.03 mm. All the average surface deviations between the two models after virtual osteotomy and transformation were smaller than 0.01 mm with an SD of 0.01 mm. In addition, the fitting of splints generated by AnatomicAligner was at least as good as that of splints generated by Mimics. CONCLUSION: We successfully developed a CASS system, the AnatomicAligner, for planning orthognathic surgery following the streamlined planning protocol. The system has been proven accurate. AnatomicAligner will soon be freely available to the broader clinical and research communities.


Subject(s)
Cephalometry/methods , Computer Simulation , Computer-Aided Design , Imaging, Three-Dimensional , Orthognathic Surgical Procedures/methods , Surgery, Computer-Assisted/instrumentation , User-Computer Interface , Humans
14.
Biomed Res Int ; 2014: 351095, 2014.
Article in English | MEDLINE | ID: mdl-24949437

ABSTRACT

Ovarian carcinoma immunoreactive antigen-like protein 2 (OCIAD2) is a protein of unknown function. Frequent methylation or downregulation of OCIAD2 has been observed in various tumors, and TGFβ signaling has been shown to induce OCIAD2 expression. However, current pathway analysis tools do not cover genes without reported interactions, such as OCIAD2, and also miss some significant genes with relatively low expression. To investigate the potential biological milieu of OCIAD2, especially in the cancer microenvironment, a novel approach, pbMOO, was created to find potential pathways from TGFβ to OCIAD2 by searching on a pathway bridge consisting of cancer-enriched looping patterns extracted from the complete protein interaction network. The pbMOO approach was further applied to study the ligand TGFβ1, the receptor TGFβR1, intermediate transfer proteins, the transcription factor, and the signature gene OCIAD2. Verified against the literature and public databases, the pathway TGFβ1-TGFβR1-SMAD2/3-SMAD4/AR-OCIAD2 was detected, which implicates the androgen receptor (AR) as a possible transcription factor of OCIAD2 in TGFβ signaling. This explains the mechanism of TGFβ-induced OCIAD2 expression in the cancer microenvironment and provides an important clue for future functional analysis of OCIAD2 in tumor pathogenesis.


Subject(s)
Carcinoma/genetics , Neoplasm Proteins/genetics , Ovarian Neoplasms/genetics , Transforming Growth Factor beta/genetics , Carcinoma/pathology , Female , Gene Expression Regulation, Neoplastic , Humans , Neoplasm Proteins/biosynthesis , Ovarian Neoplasms/pathology , Signal Transduction/genetics , Transforming Growth Factor beta/biosynthesis , Tumor Microenvironment
15.
J Xray Sci Technol ; 21(2): 251-82, 2013.
Article in English | MEDLINE | ID: mdl-23694914

ABSTRACT

Recent advances in cone-beam computed tomography (CBCT) have enabled widespread applications in dentomaxillofacial imaging and orthodontic practice over the past decades due to its low radiation dose, high spatial resolution, and accessibility. However, the low contrast resolution of CBCT images has become a major limitation in building skull models, and intensive hand-segmentation is usually required to reconstruct them. One of the regions most affected by this limitation is thin bone. This paper presents a novel segmentation approach based on a wavelet density model (WDM), with particular interest in the outer surface of the anterior wall of the maxilla. Nineteen CBCT datasets are used to conduct two experiments. This model-based segmentation approach is validated and compared with three different segmentation approaches. The results show that the performance of this model-based approach is better than those of the other approaches, achieving a surface error of 0.25 ± 0.2 mm from the ground-truth bone surface.


Subject(s)
Cone-Beam Computed Tomography/methods , Imaging, Three-Dimensional/methods , Maxilla/diagnostic imaging , Wavelet Analysis , Algorithms , Artificial Intelligence , Databases, Factual , Humans , Models, Statistical , Reproducibility of Results
16.
J Oral Maxillofac Surg ; 70(4): 952-62, 2012 Apr.
Article in English | MEDLINE | ID: mdl-21764490

ABSTRACT

PURPOSE: The purpose of the present study was to evaluate the accuracy of our newly developed approach to digital dental model articulation. MATERIALS AND METHODS: Twelve sets of stone dental models from patients with craniomaxillofacial deformities were used for validation. All the models had stable occlusion and no evidence of early contact. The stone models were hand articulated to the maximal intercuspation (MI) position and scanned using a 3-dimensional surface laser scanner. These digital dental models at the MI position served as the control group. To establish an experimental group, each mandibular dental model was disarticulated from its original MI position to 80 initial positions. Using a regular office personal computer, they were digitally articulated to the MI position using our newly developed approach. These rearticulated mandibular models served as the experimental group. Finally, the translational, rotational, and surface deviations in the mandibular position were calculated between the experimental and control groups, and statistical analyses were performed. RESULTS: All the digital dental models were successfully articulated. Between the control and experimental groups, the largest translational difference in mandibular position was within 0.2 mm ± 0.6 mm. The largest rotational difference was within 0.1° ± 1.1°. The averaged surface deviation was 0.08 ± 0.07. The results of the Bland and Altman method of assessing measurement agreement showed tight limits for the translational, rotational, and surface deviations. In addition, the final positions of the mandibular models articulated from the 80 initial positions were in complete agreement. CONCLUSION: The results of our study demonstrated that, using our approach, digital dental models can be accurately and effectively articulated to the MI position. In addition, the 3-dimensional surface geometry of the mandibular teeth played a more important role in digital dental articulation than the initial position of the mandibular teeth.


Subject(s)
Algorithms , Dental Occlusion , Models, Dental , Orthognathic Surgical Procedures/standards , Patient Care Planning/standards , Anatomic Landmarks/anatomy & histology , Computer Simulation , Dental Arch/anatomy & histology , Humans , Imaging, Three-Dimensional/methods , Incisor/anatomy & histology , Lasers , Mandible/anatomy & histology , Molar/anatomy & histology , Rotation
17.
Med Image Comput Comput Assist Interv ; 13(Pt 3): 278-86, 2010.
Article in English | MEDLINE | ID: mdl-20879410

ABSTRACT

Articulating digital dental models is often inaccurate and very time-consuming. This paper presents an automated approach to efficiently articulate digital dental models to maximum intercuspation (MI). There are two steps in our method. The first step is to position the models to an initial position based on dental curves and a point matching algorithm. The second step is to finally position the models to the MI position based on our novel approach of using iterative surface-based minimum distance mapping with collision constraints. Finally, our method was validated using 12 sets of digital dental models. The results showed that using our method the digital dental models can be accurately and effectively articulated to MI position.


Subject(s)
Algorithms , Dental Casting Technique , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Microscopy, Confocal/methods , Models, Dental , Pattern Recognition, Automated/methods , Image Enhancement/methods , Reproducibility of Results , Sensitivity and Specificity
18.
IEEE Trans Med Imaging ; 29(9): 1652-63, 2010 Sep.
Article in English | MEDLINE | ID: mdl-20529735

ABSTRACT

In the field of craniomaxillofacial (CMF) surgery, surgical planning can be performed on composite 3-D models that are generated by merging a computerized tomography scan with digital dental models. Digital dental models can be generated by scanning the surfaces of plaster dental models or dental impressions with a high-resolution laser scanner. During the planning process, one of the essential steps is to reestablish the dental occlusion. Unfortunately, this task is time-consuming and often inaccurate. This paper presents a new approach to automatically and efficiently reestablish dental occlusion. It includes two steps. The first step is to initially position the models based on dental curves and a point matching technique. The second step is to reposition the models to the final desired occlusion based on iterative surface-based minimum distance mapping with collision constraints. With linearization of the rotation matrix, the alignment is formulated as a quadratic programming problem. The simulation was completed on 12 sets of digital dental models. Two sets of dental models were partially edentulous, and another two sets had first premolar extractions for orthodontic treatment. Two validation methods were applied to the articulated models. The results show that, using our method, the dental models can be successfully articulated with a small degree of deviation from the occlusion achieved with the gold-standard method.
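As a hedged illustration of how minimum distance mapping with a linearized rotation becomes a quadratic program (the symbols below are generic, not the paper's notation): with R ≈ I + [ω]×, each iteration solves

```latex
\min_{\boldsymbol{\omega},\,\mathbf{t}} \;\;
  \sum_{i}\bigl\|(\mathbf{I} + [\boldsymbol{\omega}]_{\times})\,\mathbf{p}_i
  + \mathbf{t} - \mathbf{q}_i\bigr\|^{2}
\qquad \text{s.t.} \qquad
  \mathbf{n}_j^{\top}\bigl((\mathbf{I} + [\boldsymbol{\omega}]_{\times})\,\mathbf{p}_j
  + \mathbf{t} - \mathbf{q}_j\bigr) \ge 0 \;\; \forall j,
```

where p_i are points on the moving (mandibular) surface, q_i their current closest points on the fixed (maxillary) surface, and the constraints keep contact-candidate points p_j on the non-penetrating side of the opposing surface with outward normals n_j; closest-point correspondences are then re-estimated and the step repeated until convergence.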


Subject(s)
Algorithms , Dental Occlusion , Imaging, Three-Dimensional/methods , Models, Dental , Tooth/anatomy & histology , Computer Simulation , Humans , Mandible/anatomy & histology , Maxilla/anatomy & histology , Reproducibility of Results , Tomography, X-Ray Computed
19.
IEEE Trans Image Process ; 18(3): 534-51, 2009 Mar.
Article in English | MEDLINE | ID: mdl-19211330

ABSTRACT

Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both the motion vectors and the motion-compensated residual frames of the right sequence is generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.


Subject(s)
Algorithms , Computer Communication Networks , Data Compression/methods , Decision Support Techniques , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Video Recording/methods , Data Compression/standards , Internationality , Reproducibility of Results , Sensitivity and Specificity , Signal Processing, Computer-Assisted , Video Recording/standards
20.
IEEE Trans Image Process ; 15(12): 3791-803, 2006 Dec.
Article in English | MEDLINE | ID: mdl-17153952

ABSTRACT

Following recent theoretical works on successive Wyner-Ziv coding (WZC), we propose a practical layered Wyner-Ziv video coder using the DCT, nested scalar quantization, and irregular LDPC code based Slepian-Wolf coding (or lossless source coding with side information at the decoder). Our main novelty is to use the base layer of a standard scalable video coder (e.g., MPEG-4/H.26L FGS or H.263+) as the decoder side information and perform layered WZC for quality enhancement. Similar to FGS coding, there is no performance difference between layered and monolithic WZC when the enhancement bitstream is generated in our proposed coder. Using an H.26L coded version as the base layer, experiments indicate that WZC gives slightly worse performance than FGS coding when the channel (for both the base and enhancement layers) is noiseless. However, when the channel is noisy, extensive simulations of video transmission over wireless networks conforming to the CDMA2000 1X standard show that H.26L base layer coding plus Wyner-Ziv enhancement layer coding are more robust against channel errors than H.26L FGS coding. These results demonstrate that layered Wyner-Ziv video coding is a promising new technique for video streaming over wireless networks.


Subject(s)
Algorithms , Computer Communication Networks , Computer Graphics , Data Compression/methods , Image Enhancement/methods , Information Storage and Retrieval/methods , Signal Processing, Computer-Assisted , Video Recording/methods , Reproducibility of Results , Sensitivity and Specificity