Results 1 - 20 of 25
1.
BMC Oral Health ; 24(1): 500, 2024 May 09.
Article in English | MEDLINE | ID: mdl-38724912

ABSTRACT

BACKGROUND: Teeth identification has a pivotal role in the dental curriculum and provides one of the important foundations of clinical practice. Accurately identifying teeth is a vital aspect of dental education and clinical practice, but can be challenging due to the anatomical similarities between categories. In this study, we aim to explore the possibility of using a deep learning model to classify isolated teeth from a set of photographs. METHODS: A collection of 5,100 photographs from 850 isolated human tooth specimens was assembled to serve as the dataset for this study. Each tooth was carefully labeled during the data collection phase through direct observation. We developed a deep learning model that incorporates a state-of-the-art feature extractor and attention mechanism to classify each tooth based on a set of 6 photographs captured from multiple angles. To increase the validity of model evaluation, a voting-based strategy was applied to refine the test set and generate more reliable labels, and the model was evaluated under different classification granularities. RESULTS: The deep learning model achieved top-3 accuracies of over 90% in all classification types, with an average AUC of 0.95. Cohen's kappa demonstrated good agreement between model predictions and the test set. CONCLUSIONS: This deep learning model can achieve performance comparable to that of human experts and has the potential to become a valuable tool for dental education and for applications that require accurate identification of isolated teeth.
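The paper's architecture is not given in the abstract, so the following is only a minimal PyTorch sketch of the general idea it describes: a shared backbone extracts a feature vector from each of the 6 photographs of a specimen, and an attention layer pools the per-view features before classification. The ResNet-18 backbone, layer sizes, and class count are assumptions for illustration, not the authors' model.

```python
# Minimal sketch of multi-view classification with attention pooling.
# Assumptions: ResNet-18 backbone, 6 views per specimen, 32 tooth classes.
import torch
import torch.nn as nn
import torchvision.models as models

class MultiViewToothClassifier(nn.Module):
    def __init__(self, num_classes=32, num_views=6):
        super().__init__()
        backbone = models.resnet18(weights=None)
        feat_dim = backbone.fc.in_features          # 512 for ResNet-18
        backbone.fc = nn.Identity()                 # keep the 512-d features
        self.backbone = backbone
        self.attn = nn.Sequential(                  # scores one weight per view
            nn.Linear(feat_dim, 128), nn.Tanh(), nn.Linear(128, 1))
        self.head = nn.Linear(feat_dim, num_classes)
        self.num_views = num_views

    def forward(self, x):                           # x: (B, V, 3, H, W)
        b, v, c, h, w = x.shape
        feats = self.backbone(x.view(b * v, c, h, w)).view(b, v, -1)
        weights = torch.softmax(self.attn(feats), dim=1)   # (B, V, 1)
        pooled = (weights * feats).sum(dim=1)               # attention-pooled
        return self.head(pooled)

model = MultiViewToothClassifier()
logits = model(torch.randn(2, 6, 3, 224, 224))      # two specimens, 6 views each
print(logits.shape)                                  # torch.Size([2, 32])
```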


Subjects
Deep Learning, Tooth, Humans, Tooth/anatomy & histology, Tooth/diagnostic imaging, Photography, Dental/methods
2.
Article in English | MEDLINE | ID: mdl-38767999

ABSTRACT

Even though the collaboration between traditional and neuromorphic event cameras brings prosperity to frame-event based vision applications, performance is still confined by the resolution gap between the two modalities in both the spatial and temporal domains. This paper is devoted to bridging that gap by increasing the temporal resolution of images, i.e., motion deblurring, and the spatial resolution of events, i.e., event super-resolving, respectively. To this end, we introduce CrossZoom, a novel unified neural network (CZ-Net) that jointly recovers sharp latent sequences within the exposure period of a blurry input and the corresponding High-Resolution (HR) events. Specifically, we present a multi-scale blur-event fusion architecture that leverages the scale-variant properties and effectively fuses cross-modal information to achieve cross-enhancement. Attention-based adaptive enhancement and cross-interaction prediction modules are devised to alleviate the distortions inherent in Low-Resolution (LR) events and enhance the final results through the prior blur-event complementary information. Furthermore, we propose a new dataset containing HR sharp-blurry images and the corresponding HR-LR event streams to facilitate future research. Extensive qualitative and quantitative experiments on synthetic and real-world datasets demonstrate the effectiveness and robustness of the proposed method. Codes and datasets are released at https://bestrivenzc.github.io/CZ-Net/.

3.
IEEE Trans Pattern Anal Mach Intell ; 46(5): 2866-2881, 2024 May.
Article in English | MEDLINE | ID: mdl-37983154

ABSTRACT

Making line segment detectors more reliable under motion blur is one of the most important challenges for practical applications, such as visual SLAM and 3D line mapping. Existing line segment detection methods suffer severe performance degradation in accurately detecting and locating line segments when motion blur occurs. Event data, by contrast, exhibits minimal blur and strong edge awareness at high temporal resolution, making it complementary to images and potentially beneficial for reliable line segment detection. To robustly detect line segments under motion blur, we propose to leverage the complementary information of images and events. Specifically, we first design a general frame-event feature fusion network to extract and fuse detailed image textures and low-latency event edges, which consists of a channel-attention-based shallow fusion module and a self-attention-based dual hourglass module. We then utilize state-of-the-art wireframe parsing networks to detect line segments on the fused feature map. Moreover, due to the lack of line segment detection datasets with pairwise motion-blurred images and events, we contribute two datasets, i.e., the synthetic FE-Wireframe and the realistic FE-Blurframe, for network training and evaluation. Extensive analyses of the component configurations demonstrate the design effectiveness of our fusion network. Compared to the state of the art, the proposed approach achieves the highest detection accuracy while maintaining comparable real-time performance. In addition to being robust to motion blur, our method also exhibits superior performance for line detection in high dynamic range scenes.
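The exact fusion network is not specified here; the block below is only a sketch of one plausible channel-attention (squeeze-and-excitation style) fusion of an image feature map and an event feature map, which is the role the abstract assigns to its shallow fusion module. All channel counts and dimensions are assumptions for illustration.

```python
# Sketch of a channel-attention fusion block for image and event feature maps.
import torch
import torch.nn as nn

class ChannelAttentionFusion(nn.Module):
    def __init__(self, channels=64, reduction=4):
        super().__init__()
        self.proj = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.se = nn.Sequential(                     # squeeze-and-excitation gate
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())

    def forward(self, img_feat, evt_feat):
        fused = self.proj(torch.cat([img_feat, evt_feat], dim=1))
        return fused * self.se(fused)                # re-weight channels

block = ChannelAttentionFusion()
out = block(torch.randn(1, 64, 128, 128), torch.randn(1, 64, 128, 128))
print(out.shape)                                     # torch.Size([1, 64, 128, 128])
```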

4.
IEEE Trans Pattern Anal Mach Intell ; 45(12): 14727-14744, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37676811

ABSTRACT

This article presents Holistically-Attracted Wireframe Parsing (HAWP), a method for the geometric analysis of 2D images containing wireframes formed by line segments and junctions. HAWP utilizes a parsimonious Holistic Attraction (HAT) field representation that encodes line segments using a closed-form 4D geometric vector field. The proposed HAWP consists of three sequential components empowered by end-to-end and HAT-driven designs: 1) generating a dense set of line segments from HAT fields and endpoint proposals from heatmaps, 2) binding the dense line segments to sparse endpoint proposals to produce initial wireframes, and 3) filtering false positive proposals through a novel endpoint-decoupled line-of-interest aligning (EPD LOIAlign) module that captures the co-occurrence between endpoint proposals and HAT fields for better verification. Thanks to our novel designs, HAWPv2 shows strong performance in fully supervised learning, while HAWPv3 excels in self-supervised learning, achieving superior repeatability scores and efficient training (24 GPU hours on a single GPU). Furthermore, HAWPv3 shows promising potential for wireframe parsing on out-of-distribution images without requiring ground-truth wireframe labels.

5.
IEEE Trans Pattern Anal Mach Intell ; 45(12): 15233-15248, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37698973

ABSTRACT

This article studies the challenging two-view 3D reconstruction problem in a rigorous sparse-view configuration, which suffers from insufficient correspondences in the input image pairs for camera pose estimation. We present a novel Neural One-PlanE RANSAC framework (termed NOPE-SAC for short) that exploits the excellent capability of neural networks to learn one-plane pose hypotheses from 3D plane correspondences. Building on top of a Siamese network for plane detection, our NOPE-SAC first generates putative plane correspondences with a coarse initial pose. It then feeds the learned 3D plane correspondences into shared MLPs to estimate the one-plane camera pose hypotheses, which are subsequently reweighted in a RANSAC manner to obtain the final camera pose. Because the neural one-plane pose minimizes the number of plane correspondences needed for adaptive pose hypothesis generation, it enables stable pose voting and reliable pose refinement with only a few plane correspondences for the sparse-view inputs. In the experiments, we demonstrate that our NOPE-SAC significantly improves camera pose estimation for two-view inputs with severe viewpoint changes, setting several new state-of-the-art results on two challenging benchmarks, i.e., MatterPort3D and ScanNet, for sparse-view 3D reconstruction.

6.
IEEE Trans Pattern Anal Mach Intell ; 45(7): 8660-8678, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37015491

ABSTRACT

Although synthetic aperture imaging (SAI) can achieve the seeing-through effect by blurring out off-focus foreground occlusions while recovering in-focus occluded scenes from multi-view images, its performance often deteriorates under dense occlusions and extreme lighting conditions. To address this problem, this paper presents an Event-based SAI (E-SAI) method that relies on the asynchronous events, with extremely low latency and high dynamic range, acquired by an event camera. Specifically, the collected events are first refocused by a Refocus-Net module to align in-focus events while scattering out off-focus ones. Following that, a hybrid network composed of spiking neural networks (SNNs) and convolutional neural networks (CNNs) is proposed to encode the spatio-temporal information from the refocused events and reconstruct a visual image of the occluded targets. Extensive experiments demonstrate that our proposed E-SAI method achieves remarkable performance in dealing with very dense occlusions and extreme lighting conditions and produces high-quality images from pure events. Codes and datasets are available at https://dvs-whu.cn/projects/esai/.
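The Refocus-Net itself is learned, but the underlying synthetic-aperture idea of aligning events to a chosen focal depth by shifting them according to the camera baseline can be sketched directly. The NumPy example below shifts each event's x coordinate by the disparity induced by the camera offset at its timestamp and accumulates the result into a refocused event frame; the linear-motion geometry, sensor size, and all constants are illustrative assumptions, not the paper's pipeline.

```python
# Sketch of synthetic-aperture refocusing of events by shift-and-accumulate.
# Assumes a camera translating along x at constant speed; constants are illustrative.
import numpy as np

def refocus_events(events, depth, focal_px=320.0, speed_m_s=0.5, width=346, height=260):
    """events: array of (t[s], x, y, polarity). Shift x by the disparity the camera
    motion induces for a target at `depth`, so in-focus events align."""
    frame = np.zeros((height, width), dtype=np.float32)
    for t, x, y, p in events:
        baseline = speed_m_s * t                     # camera displacement at time t
        disparity = focal_px * baseline / depth      # pixel shift for this depth
        xs = int(round(x - disparity))               # shift event back to reference view
        if 0 <= xs < width:
            frame[int(y), xs] += 1 if p > 0 else -1
    return frame

rng = np.random.default_rng(0)
events = np.stack([rng.uniform(0, 0.1, 1000),        # timestamps
                   rng.integers(0, 346, 1000),       # x
                   rng.integers(0, 260, 1000),       # y
                   rng.choice([-1, 1], 1000)], axis=1)
print(refocus_events(events, depth=1.5).shape)       # (260, 346)
```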

7.
IEEE Trans Med Imaging ; 42(6): 1809-1821, 2023 06.
Article in English | MEDLINE | ID: mdl-37022247

ABSTRACT

Whole-slide image (WSI) classification is fundamental to computational pathology, where it is challenged by the extremely high resolution, expensive manual annotation, data heterogeneity, etc. Multiple instance learning (MIL) provides a promising way towards WSI classification, but it inherently suffers from a memory bottleneck due to the gigapixel resolution. To avoid this issue, the overwhelming majority of existing approaches have to decouple the feature encoder from the MIL aggregator in MIL networks, which may largely degrade performance. To this end, this paper presents a Bayesian Collaborative Learning (BCL) framework to address the memory bottleneck issue in WSI classification. Our basic idea is to introduce an auxiliary patch classifier that interacts with the target MIL classifier to be learned, so that the feature encoder and the MIL aggregator in the MIL classifier can be learned collaboratively while avoiding the memory bottleneck. This collaborative learning procedure is formulated under a unified Bayesian probabilistic framework, and a principled Expectation-Maximization algorithm is developed to infer the optimal model parameters iteratively. As an implementation of the E-step, an effective quality-aware pseudo-labeling strategy is also proposed. The proposed BCL is extensively evaluated on three publicly available WSI datasets, i.e., CAMELYON16, TCGA-NSCLC and TCGA-RCC, achieving AUCs of 95.6%, 96.0% and 97.5% respectively, consistently outperforming all compared methods. Comprehensive analysis and discussion are also presented for an in-depth understanding of the method. To promote future work, our source code is released at: https://github.com/Zero-We/BCL.
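The paper's EM formulation is not reproduced in the abstract, but the flavor of a quality-aware pseudo-labeling E-step can be sketched: patches in a positive slide receive positive pseudo labels only when the MIL attention score is both top-ranked and confident, while patches in negative slides are labeled negative. The top ratio, threshold, and scoring rule below are assumptions for illustration, not the BCL algorithm.

```python
# Sketch of quality-aware pseudo labeling of patches from MIL attention scores.
import torch

def pseudo_label_patches(attn_scores, bag_label, top_ratio=0.1, min_score=0.7):
    """attn_scores: (N,) attention per patch in one slide, in [0, 1].
    Returns (indices, labels) of patches that receive a pseudo label."""
    if bag_label == 0:                               # negative slide: all patches negative
        return torch.arange(attn_scores.numel()), torch.zeros_like(attn_scores, dtype=torch.long)
    k = max(1, int(top_ratio * attn_scores.numel()))
    scores, idx = attn_scores.topk(k)                # most-attended patches
    keep = scores >= min_score                       # quality gate: keep only confident ones
    return idx[keep], torch.ones(int(keep.sum()), dtype=torch.long)

attn = torch.rand(500)                               # toy attention scores for one slide
idx, labels = pseudo_label_patches(attn, bag_label=1)
print(idx.shape, labels.shape)
```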


Subjects
Carcinoma, Non-Small-Cell Lung, Interdisciplinary Practices, Lung Neoplasms, Humans, Bayes Theorem, Algorithms
8.
IEEE Trans Pattern Anal Mach Intell ; 45(8): 10027-10043, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37022275

ABSTRACT

Super-Resolution from a single motion-Blurred image (SRB) is a severely ill-posed problem due to the joint degradation of motion blur and low spatial resolution. In this article, we employ events to alleviate the burden of SRB and propose an Event-enhanced SRB (E-SRB) algorithm, which can generate a sequence of sharp and clear High-Resolution (HR) images from a single blurry Low-Resolution (LR) image. To this end, we formulate an event-enhanced degeneration model that considers low spatial resolution, motion blur, and event noise simultaneously. We then build an event-enhanced Sparse Learning Network (eSL-Net++) upon a dual sparse learning scheme where both events and intensity frames are modeled with sparse representations. Furthermore, we propose an event shuffle-and-merge scheme that extends single-frame SRB to sequence-frame SRB without any additional training. Experimental results on synthetic and real-world datasets show that the proposed eSL-Net++ outperforms state-of-the-art methods by a large margin. Datasets, codes, and more results are available at https://github.com/ShinyWang33/eSL-Net-Plusplus.
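The eSL-Net++ architecture is not spelled out in the abstract; the block below is only a generic learned-ISTA (LISTA) layer, the kind of unrolled sparse-coding building block that sparse learning networks of this family are built from: a few soft-thresholding iterations whose weights and thresholds are learned. All sizes and the iteration count are illustrative assumptions.

```python
# Sketch of a learned-ISTA (LISTA) block: unrolled soft-thresholding iterations
# that infer a sparse code from an input feature vector.
import torch
import torch.nn as nn

class LISTABlock(nn.Module):
    def __init__(self, in_dim=256, code_dim=512, num_iters=3):
        super().__init__()
        self.W = nn.Linear(in_dim, code_dim, bias=False)    # input -> code space
        self.S = nn.Linear(code_dim, code_dim, bias=False)  # code -> code refinement
        self.theta = nn.Parameter(torch.full((code_dim,), 0.1))  # learned thresholds
        self.num_iters = num_iters

    @staticmethod
    def soft_threshold(z, theta):
        return torch.sign(z) * torch.clamp(z.abs() - theta, min=0.0)

    def forward(self, x):                                    # x: (B, in_dim)
        b = self.W(x)
        z = self.soft_threshold(b, self.theta)
        for _ in range(self.num_iters):                      # unrolled ISTA iterations
            z = self.soft_threshold(b + self.S(z), self.theta)
        return z

codes = LISTABlock()(torch.randn(4, 256))
print(codes.shape)                                           # torch.Size([4, 512])
```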

9.
ISPRS J Photogramm Remote Sens ; 196: 178-196, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36824311

ABSTRACT

High-resolution satellite images can provide abundant, detailed spatial information for land cover classification, which is particularly important for studying the complicated built environment. However, due to the complex land cover patterns, the costly collection of training samples, and the severe distribution shifts of satellite imagery caused by, e.g., geographical differences or acquisition conditions, few studies have applied high-resolution images to land cover mapping in detailed categories at large scale. To fill this gap, we present a large-scale land cover dataset, Five-Billion-Pixels. It contains more than 5 billion labeled pixels from 150 high-resolution Gaofen-2 (4 m) satellite images, annotated in a 24-category system covering artificially constructed, agricultural, and natural classes. In addition, we propose a deep-learning-based unsupervised domain adaptation approach that can transfer classification models trained on a labeled dataset (the source domain) to unlabeled data (the target domain) for large-scale land cover mapping. Specifically, we introduce an end-to-end Siamese network employing dynamic pseudo-label assignment and a class balancing strategy to perform adaptive domain joint learning. To validate the generalizability of our dataset and the proposed approach across different sensors and geographical regions, we carry out land cover mapping on five megacities in China and six cities in five other Asian countries, using PlanetScope (3 m), Gaofen-1 (8 m), and Sentinel-2 (10 m) satellite images. Over a total study area of 60,000 km2, the experiments show promising results even though the input images are entirely unlabeled. The proposed approach, trained on the Five-Billion-Pixels dataset, enables high-quality and detailed land cover mapping across the whole of China and several other Asian countries at meter resolution.
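The exact dynamic pseudo-label assignment is not detailed here; the snippet below sketches one common class-balancing recipe in this spirit: assign a pseudo label only when the softmax confidence exceeds a per-class threshold drawn from that class's own confidence distribution, so frequent classes do not crowd out rare ones. The quantile rule and class count are assumptions, not the paper's procedure.

```python
# Sketch of class-balanced pseudo labeling with per-class confidence thresholds.
import numpy as np

def class_balanced_pseudo_labels(probs, quantile=0.5, ignore_index=255):
    """probs: (N, C) softmax outputs on unlabeled target-domain pixels/patches.
    Each class keeps only predictions above that class's own confidence quantile."""
    conf = probs.max(axis=1)
    pred = probs.argmax(axis=1)
    labels = np.full(pred.shape, ignore_index, dtype=np.int64)
    for c in np.unique(pred):
        mask = pred == c
        thr = np.quantile(conf[mask], quantile)      # per-class dynamic threshold
        labels[mask & (conf >= thr)] = c
    return labels

rng = np.random.default_rng(0)
logits = rng.normal(size=(10000, 24))                # 24 land-cover classes
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
labels = class_balanced_pseudo_labels(probs)
print((labels != 255).mean())                        # fraction of pixels pseudo-labeled
```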

10.
IEEE Trans Pattern Anal Mach Intell ; 45(1): 1294-1301, 2023 Jan.
Article in English | MEDLINE | ID: mdl-35344484

ABSTRACT

Extracting building footprints from aerial images is essential for precise urban mapping with photogrammetric computer vision technologies. Existing approaches mainly assume that the roof and footprint of a building are well overlapped, which may not hold in off-nadir aerial images, where there is often a large offset between them. In this paper, we propose an offset vector learning scheme, which turns the building footprint extraction problem in off-nadir images into an instance-level joint prediction problem of the building roof and its corresponding "roof to footprint" offset vector. The footprint can thus be estimated by translating the predicted roof mask according to the predicted offset vector. We further propose a simple but effective feature-level offset augmentation module, which can significantly refine the offset vector prediction at little extra cost. Moreover, a new dataset, Buildings in Off-Nadir Aerial Images (BONAI), is created and released in this paper. It contains 268,958 building instances across 3,300 aerial images, with fully annotated instance-level roof, footprint, and corresponding offset vector for each building. Experiments on the BONAI dataset demonstrate that our method achieves the state of the art, outperforming other competitors by 3.37 to 7.39 points in F1-score. The codes, datasets, and trained models are available at https://github.com/jwwangchn/BONAI.git.
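The decoding step described here (footprint = roof mask translated by the predicted roof-to-footprint offset) is simple enough to show directly; the interpolation choice and the toy mask below are only illustrative.

```python
# Sketch of decoding a building footprint: translate the predicted roof mask
# by the predicted "roof to footprint" offset vector.
import numpy as np
from scipy.ndimage import shift

def roof_to_footprint(roof_mask, offset_xy):
    """roof_mask: (H, W) binary mask; offset_xy: (dx, dy) in pixels."""
    dx, dy = offset_xy
    moved = shift(roof_mask.astype(float), shift=(dy, dx), order=0)  # nearest-neighbor
    return moved > 0.5

roof = np.zeros((64, 64), dtype=np.uint8)
roof[10:30, 20:40] = 1                               # toy roof instance
footprint = roof_to_footprint(roof, offset_xy=(-5, 8))
print(footprint.sum(), roof.sum())                   # same area, shifted location
```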

11.
Article in English | MEDLINE | ID: mdl-35380956

ABSTRACT

Unsupervised pre-training aims at learning transferable features that are beneficial for downstream tasks. However, most state-of-the-art unsupervised methods concentrate on learning global representations for image-level classification tasks rather than discriminative local region representations, which limits their transferability to region-level downstream tasks such as object detection. To improve the transferability of pre-trained features to object detection, we present Deeply Unsupervised Patch Re-ID (DUPR), a simple yet effective method for unsupervised visual representation learning. The patch Re-ID task treats each patch as a pseudo-identity and contrastively learns its correspondence across two views, enabling us to obtain discriminative local features for object detection. The proposed patch Re-ID is then performed in a deeply unsupervised manner, which suits object detection, a task that usually requires multi-level feature maps. Extensive experiments demonstrate that DUPR outperforms state-of-the-art unsupervised pre-training methods and even supervised ImageNet pre-training on various downstream tasks related to object detection.
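The precise DUPR objective is not given in the abstract; the function below is a standard InfoNCE contrastive loss applied at patch level, where corresponding patches from two augmented views are positives and all other patches in the batch are negatives, which matches the patch Re-ID intuition described. The temperature and feature sizes are assumptions.

```python
# Sketch of a patch-level InfoNCE loss: matching patches across two views are
# positives; every other patch in the batch is a negative.
import torch
import torch.nn.functional as F

def patch_info_nce(feat_a, feat_b, temperature=0.2):
    """feat_a, feat_b: (N, D) features of the same N patches seen in two views."""
    a = F.normalize(feat_a, dim=1)
    b = F.normalize(feat_b, dim=1)
    logits = a @ b.t() / temperature                 # (N, N) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)          # diagonal entries are positives

loss = patch_info_nce(torch.randn(128, 256), torch.randn(128, 256))
print(loss.item())
```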

12.
IEEE Trans Pattern Anal Mach Intell ; 44(10): 6602-6609, 2022 10.
Article in English | MEDLINE | ID: mdl-34043504

ABSTRACT

This article presents a context-aware tracing strategy (CATS) for crisp edge detection with deep edge detectors, based on the observation that the localization ambiguity of deep edge detectors is mainly caused by the mixing phenomenon of convolutional neural networks: feature mixing in edge classification and side mixing when fusing side predictions. CATS consists of two modules: a novel tracing loss that performs feature unmixing by tracing boundaries for better side edge learning, and a context-aware fusion block that tackles side mixing by aggregating the complementary merits of the learned side edges. Experiments demonstrate that the proposed CATS can be integrated into modern deep edge detectors to improve localization accuracy. With the vanilla VGG16 backbone, on the BSDS500 dataset, our CATS improves the F-measure (ODS) of the RCF and BDCN deep edge detectors by 12 and 6 percent, respectively, when evaluated without the morphological non-maximum suppression scheme for edge detection.
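The paper's fusion block is not specified here; the module below sketches the general idea of fusing several side edge predictions with learned, spatially varying weights instead of a fixed 1x1 average. The number of side outputs and the small weight network are assumptions for illustration.

```python
# Sketch of fusing multiple side edge predictions with learned per-pixel weights.
import torch
import torch.nn as nn

class LearnedSideFusion(nn.Module):
    def __init__(self, num_sides=5):
        super().__init__()
        self.weight_net = nn.Sequential(              # predicts one weight map per side
            nn.Conv2d(num_sides, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, num_sides, 3, padding=1))

    def forward(self, side_edges):                    # (B, num_sides, H, W) side predictions
        weights = torch.softmax(self.weight_net(side_edges), dim=1)
        return (weights * side_edges).sum(dim=1, keepdim=True)   # fused edge map

fuse = LearnedSideFusion()
print(fuse(torch.rand(1, 5, 321, 481)).shape)         # torch.Size([1, 1, 321, 481])
```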


Subjects
Deep Learning, Algorithms, Neural Networks, Computer
13.
IEEE Trans Pattern Anal Mach Intell ; 44(11): 7778-7796, 2022 11.
Article in English | MEDLINE | ID: mdl-34613910

ABSTRACT

In the past decade, object detection has achieved significant progress in natural images but not in aerial images, due to the massive variations in the scale and orientation of objects caused by the bird's-eye view of aerial images. More importantly, the lack of large-scale benchmarks has become a major obstacle to the development of object detection in aerial images (ODAI). In this paper, we present a large-scale Dataset of Object deTection in Aerial images (DOTA) and comprehensive baselines for ODAI. The proposed DOTA dataset contains 1,793,658 object instances of 18 categories with oriented-bounding-box annotations, collected from 11,268 aerial images. Based on this large-scale and well-annotated dataset, we build baselines covering 10 state-of-the-art algorithms with over 70 configurations, and the speed and accuracy of each model have been evaluated. Furthermore, we provide a code library for ODAI and build a website for evaluating different algorithms. Previous challenges run on DOTA have attracted more than 1,300 teams worldwide. We believe that the expanded large-scale DOTA dataset, the extensive baselines, the code library, and the challenges can facilitate the design of robust algorithms and reproducible research on the problem of object detection in aerial images.
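DOTA annotates each instance as an oriented quadrilateral of four corner points; a common preprocessing step for such annotations is converting the corners into a (center, size, angle) oriented box. The sketch below uses OpenCV's minAreaRect for that conversion; the sample coordinates are made up, and this is generic preprocessing, not code from the DOTA toolkit.

```python
# Sketch of converting an oriented quadrilateral (4 corner points) into a
# (center, size, angle) oriented bounding box with OpenCV.
import numpy as np
import cv2

quad = np.array([[120.0, 40.0], [200.0, 60.0], [190.0, 100.0], [110.0, 80.0]],
                dtype=np.float32)                    # x1 y1 ... x4 y4 of one instance
(cx, cy), (w, h), angle = cv2.minAreaRect(quad)      # minimum-area rotated rectangle
print(f"center=({cx:.1f},{cy:.1f}) size=({w:.1f},{h:.1f}) angle={angle:.1f}")

corners = cv2.boxPoints(((cx, cy), (w, h), angle))   # back to 4 corner points
print(np.round(corners, 1))
```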


Subjects
Algorithms, Benchmarking
14.
IEEE Trans Image Process ; 30: 6498-6511, 2021.
Article in English | MEDLINE | ID: mdl-34236963

ABSTRACT

Aerial scene recognition is challenging due to the complicated object distribution and spatial arrangement in large-scale aerial images. Recent studies attempt to explore the local semantic representation capability of deep learning models, but how to exactly perceive the key local regions remains an open problem. In this paper, we present a local semantic enhanced ConvNet (LSE-Net) for aerial scene recognition, which mimics human visual perception of key local regions in aerial scenes, in the hope of building a discriminative local semantic representation. Our LSE-Net consists of a context-enhanced convolutional feature extractor, a local semantic perception module, and a classification layer. First, we design multi-scale dilated convolution operators to fuse multi-level and multi-scale convolutional features in a trainable manner, so as to fully capture the local feature responses in an aerial scene. These features are then fed into our two-branch local semantic perception module. In this module, we design a context-aware class peak response (CACPR) measurement to precisely depict the visual impulse of key local regions and the corresponding context information. A spatial attention weight matrix is also extracted to describe the importance of each key local region for the aerial scene. Finally, the refined class confidence maps are fed into the classification layer. Exhaustive experiments on three aerial scene classification benchmarks indicate that our LSE-Net achieves state-of-the-art performance, which validates the effectiveness of our local semantic perception module and CACPR measurement.
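The LSE-Net extractor is only outlined in the abstract; the block below sketches the generic ingredient it mentions, parallel dilated convolutions with different rates fused by a trainable 1x1 convolution. The rates and channel counts are assumptions for illustration.

```python
# Sketch of a multi-scale dilated convolution block: parallel branches with
# different dilation rates, fused by a learnable 1x1 convolution.
import torch
import torch.nn as nn

class MultiScaleDilatedBlock(nn.Module):
    def __init__(self, channels=64, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=r, dilation=r) for r in rates])
        self.fuse = nn.Conv2d(channels * len(rates), channels, kernel_size=1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

block = MultiScaleDilatedBlock()
print(block(torch.randn(1, 64, 56, 56)).shape)       # torch.Size([1, 64, 56, 56])
```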

15.
IEEE Trans Image Process ; 30: 2461-2475, 2021.
Article in English | MEDLINE | ID: mdl-33481712

ABSTRACT

The goal of exemplar-based texture synthesis is to generate texture images that are visually similar to a given exemplar. Recently, promising results have been reported by methods relying on convolutional neural networks (ConvNets) pretrained on large-scale image datasets. However, these methods have difficulty synthesizing image textures with non-local structures and extending to dynamic or sound textures. In this article, we present a conditional generative ConvNet (cgCNN) model which combines deep statistics and the probabilistic framework of the generative ConvNet (gCNN) model. Given a texture exemplar, cgCNN defines a conditional distribution using deep statistics of a ConvNet, and synthesizes new textures by sampling from this conditional distribution. In contrast to previous deep texture models, the proposed cgCNN does not rely on pre-trained ConvNets but instead learns the weights of the ConvNet for each input exemplar. As a result, cgCNN can synthesize high-quality dynamic, sound, and image textures in a unified manner. We also explore the theoretical connections between our model and other texture models. Further investigations show that the cgCNN model can be easily generalized to texture expansion and inpainting. Extensive experiments demonstrate that our model achieves results better than, or at least comparable to, state-of-the-art methods.
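The cgCNN model learns its ConvNet weights per exemplar, which is beyond a short snippet; as a reference point, the classic "deep statistics" that deep texture models build on are Gram matrices of feature maps, computed as below. This is the Gatys-style statistic, shown only to illustrate what matching deep statistics means, not the cgCNN objective itself.

```python
# Sketch of the Gram-matrix "deep statistics" of a feature map, the texture
# descriptor matched between exemplar and synthesized sample in deep texture models.
import torch

def gram_matrix(features):
    """features: (B, C, H, W) activations from some ConvNet layer."""
    b, c, h, w = features.shape
    f = features.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)       # (B, C, C), normalized

def texture_loss(feat_synth, feat_exemplar):
    return ((gram_matrix(feat_synth) - gram_matrix(feat_exemplar)) ** 2).mean()

loss = texture_loss(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32))
print(loss.item())
```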

16.
IEEE Trans Pattern Anal Mach Intell ; 43(6): 1998-2013, 2021 Jun.
Article in English | MEDLINE | ID: mdl-31831408

ABSTRACT

This paper presents regional attraction of line segment maps, and thereby poses the problem of line segment detection (LSD) as a problem of region coloring. Given a line segment map, the proposed regional attraction first establishes the relationship between line segments and regions in the image lattice. Based on this, the line segment map is equivalently transformed into an attraction field map (AFM), which can be remapped to a set of line segments without loss of information. Accordingly, we develop an end-to-end framework to learn attraction field maps for raw input images, followed by a squeeze module to detect line segments. Unlike existing works, the proposed detector properly handles local ambiguity and does not rely on the accurate identification of edge pixels. Comprehensive experiments on the Wireframe dataset and the YorkUrban dataset demonstrate the superiority of our method. In particular, we achieve an F-measure of 0.831 on the Wireframe dataset, advancing the state-of-the-art performance by 10.3 percent.
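The attraction field map itself can be stated directly: for every pixel, store the 2D vector pointing to the closest point on the nearest line segment. The brute-force NumPy sketch below computes exactly that for a toy set of segments; the learned network and the squeeze module that maps the AFM back to segments are not reproduced.

```python
# Sketch of an attraction field map: for each pixel, the vector to the closest
# point on the nearest line segment (brute force, for illustration only).
import numpy as np

def attraction_field(segments, height, width):
    ys, xs = np.mgrid[0:height, 0:width]
    pixels = np.stack([xs, ys], axis=-1).astype(float)        # (H, W, 2)
    best = np.full((height, width), np.inf)
    afm = np.zeros((height, width, 2))
    for (x1, y1), (x2, y2) in segments:
        p1, d = np.array([x1, y1], float), np.array([x2 - x1, y2 - y1], float)
        t = np.clip(((pixels - p1) @ d) / (d @ d), 0.0, 1.0)   # projection onto segment
        closest = p1 + t[..., None] * d
        vec = closest - pixels
        dist = np.linalg.norm(vec, axis=-1)
        mask = dist < best
        best[mask] = dist[mask]
        afm[mask] = vec[mask]
    return afm                                                 # (H, W, 2) attraction vectors

segs = [((5, 5), (60, 10)), ((20, 50), (20, 5))]
print(attraction_field(segs, 64, 64).shape)                    # (64, 64, 2)
```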

17.
IEEE Trans Pattern Anal Mach Intell ; 43(4): 1452-1459, 2021 Apr.
Article in English | MEDLINE | ID: mdl-32086194

ABSTRACT

Object detection has recently experienced substantial progress. Yet, the widely adopted horizontal bounding box representation is not appropriate for ubiquitous oriented objects such as objects in aerial images and scene texts. In this paper, we propose a simple yet effective framework to detect multi-oriented objects. Instead of directly regressing the four vertices, we glide the vertex of the horizontal bounding box on each corresponding side to accurately describe a multi-oriented object. Specifically, we regress four length ratios characterizing the relative gliding offset on each corresponding side. This facilitates the offset learning and avoids the confusion issue of sequential label points for oriented objects. To further remedy the confusion issue for nearly horizontal objects, we also introduce an obliquity factor based on the area ratio between the object and its horizontal bounding box, guiding the selection of horizontal or oriented detection for each object. We add these five extra target variables to the regression head of Faster R-CNN, which incurs negligible extra computation time. Extensive experimental results demonstrate that, without bells and whistles, the proposed method achieves superior performance on multiple multi-oriented object detection benchmarks, including object detection in aerial images, scene text detection, and pedestrian detection in fisheye images.
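The encoding described here can be sketched: take the horizontal bounding box of the oriented quadrilateral, locate the quad vertex lying on each of its four sides, express it as a length ratio along that side, and compute the obliquity factor as the area ratio between the quad and its horizontal box. The corner-ordering and direction conventions below are one possible choice, assumed for illustration and not necessarily the paper's exact parameterization.

```python
# Sketch of a gliding-vertex style encoding: the horizontal bounding box plus
# four length ratios locating the oriented quad's extreme vertices on its sides,
# and an obliquity factor (quad area / horizontal-box area).
import numpy as np

def gliding_vertex_encode(quad):
    """quad: (4, 2) vertices of an oriented box, given in order around the polygon."""
    xs, ys = quad[:, 0], quad[:, 1]
    xmin, xmax, ymin, ymax = xs.min(), xs.max(), ys.min(), ys.max()
    w, h = xmax - xmin, ymax - ymin
    top = quad[np.argmin(ys)]                        # vertex touching each hbb side
    right = quad[np.argmax(xs)]
    bottom = quad[np.argmax(ys)]
    left = quad[np.argmin(xs)]
    ratios = np.array([(top[0] - xmin) / w, (right[1] - ymin) / h,
                       (xmax - bottom[0]) / w, (ymax - left[1]) / h])
    area = 0.5 * abs(np.dot(xs, np.roll(ys, -1)) - np.dot(ys, np.roll(xs, -1)))  # shoelace
    obliquity = area / (w * h)                       # close to 1 for nearly horizontal boxes
    return (xmin, ymin, xmax, ymax), ratios, obliquity

quad = np.array([[120.0, 40.0], [200.0, 60.0], [180.0, 110.0], [100.0, 90.0]])
print(gliding_vertex_encode(quad))
```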

18.
ISPRS J Photogramm Remote Sens ; 167: 12-23, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32904376

ABSTRACT

This paper addresses the problem of semi-supervised transfer learning with limited cross-modality data in remote sensing. A large amount of multi-modal earth observation imagery, such as multispectral imagery (MSI) or synthetic aperture radar (SAR) data, is openly available on a global scale, enabling the parsing of global urban scenes through remote sensing imagery. However, their ability to identify materials (pixel-wise classification) remains limited, due to the noisy collection environment and poor discriminative information, as well as the limited number of well-annotated training images. To this end, we propose a novel cross-modal deep-learning framework, called X-ModalNet, with three well-designed modules (a self-adversarial module, an interactive learning module, and a label propagation module), which learns to transfer more discriminative information from a small-scale hyperspectral image (HSI) into the classification task on large-scale MSI or SAR data. Significantly, X-ModalNet generalizes well, owing to propagating labels on an updatable graph constructed from high-level features at the top of the network, yielding semi-supervised cross-modality learning. We evaluate X-ModalNet on two multi-modal remote sensing datasets (HSI-MSI and HSI-SAR) and achieve a significant improvement over several state-of-the-art methods.
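The label propagation module is described only at a high level; the snippet below shows a textbook label-propagation iteration on a kNN graph built from feature vectors, which is the general mechanism the abstract invokes. The neighborhood size, damping factor, and toy data are assumptions, and the graph here is static rather than updatable as in X-ModalNet.

```python
# Sketch of label propagation over a kNN graph of feature vectors: labeled
# samples spread their class distribution to unlabeled ones iteratively.
import numpy as np

def label_propagation(feats, labels, num_classes, k=10, alpha=0.9, iters=20):
    """feats: (N, D); labels: (N,) with -1 for unlabeled samples."""
    n = feats.shape[0]
    d2 = ((feats[:, None] - feats[None]) ** 2).sum(-1)        # pairwise squared distances
    W = np.zeros((n, n))
    nn_idx = np.argsort(d2, axis=1)[:, 1:k + 1]                # k nearest neighbors (skip self)
    for i in range(n):
        W[i, nn_idx[i]] = np.exp(-d2[i, nn_idx[i]])
    W = np.maximum(W, W.T)                                     # symmetrize
    P = W / W.sum(axis=1, keepdims=True)                       # row-normalized transitions
    Y = np.zeros((n, num_classes))
    Y[labels >= 0, labels[labels >= 0]] = 1.0                  # clamp known labels
    F = Y.copy()
    for _ in range(iters):
        F = alpha * P @ F + (1 - alpha) * Y                    # propagate, keep seeds
    return F.argmax(axis=1)

rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(4, 1, (50, 8))])
labels = np.full(100, -1)
labels[0], labels[50] = 0, 1                                   # one labeled seed per class
print(label_propagation(feats, labels, num_classes=2)[:5], "...")
```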

19.
Article in English | MEDLINE | ID: mdl-32149687

ABSTRACT

In contrast with natural scenes, aerial scenes are often composed of many objects densely distributed on the ground surface in a bird's-eye view, whose description usually demands more discriminative features as well as local semantics. However, when applied to scene classification, most existing convolutional neural networks (ConvNets) tend to depict the global semantics of images, and the loss of low- and mid-level features can hardly be avoided, especially as the model goes deeper. To tackle these challenges, in this paper we propose a multiple-instance densely-connected ConvNet (MIDC-Net) for aerial scene classification. It regards aerial scene classification as a multiple-instance learning problem so that local semantics can be further investigated. Our classification model consists of an instance-level classifier, a multiple-instance pooling layer, and a bag-level classification layer. In the instance-level classifier, we propose a simplified dense connection structure to effectively preserve features from different levels. The extracted convolutional features are further converted into instance feature vectors. We then propose a trainable attention-based multiple-instance pooling, which highlights the local semantics relevant to the scene label and outputs the bag-level probability directly. Finally, with our bag-level classification layer, this multiple-instance learning framework is under the direct supervision of bag labels. Experiments on three widely used aerial scene benchmarks demonstrate that our proposed method outperforms many state-of-the-art methods by a large margin with far fewer parameters.
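The abstract describes trainable attention-based multiple-instance pooling; the module below is a standard gated-attention MIL pooling layer (in the spirit of Ilse et al.), shown as one plausible instantiation rather than the paper's exact design. Feature and hidden sizes are assumptions.

```python
# Sketch of trainable attention-based multiple-instance pooling: instance
# features are combined with learned attention weights into one bag feature.
import torch
import torch.nn as nn

class AttentionMILPooling(nn.Module):
    def __init__(self, feat_dim=512, hidden=128):
        super().__init__()
        self.V = nn.Linear(feat_dim, hidden)
        self.U = nn.Linear(feat_dim, hidden)
        self.w = nn.Linear(hidden, 1)

    def forward(self, instances):                     # (N, feat_dim) for one bag
        gate = torch.tanh(self.V(instances)) * torch.sigmoid(self.U(instances))
        attn = torch.softmax(self.w(gate), dim=0)     # (N, 1) instance weights
        return (attn * instances).sum(dim=0), attn    # bag feature and weights

pool = AttentionMILPooling()
bag_feat, attn = pool(torch.randn(36, 512))           # 36 instances in one aerial scene
print(bag_feat.shape, attn.shape)                     # torch.Size([512]) torch.Size([36, 1])
```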

20.
IEEE Trans Med Imaging ; 39(6): 2110-2120, 2020 06.
Article in English | MEDLINE | ID: mdl-31944947

ABSTRACT

Rapid development of ultrafast ultrasound imaging has led to novel medical ultrasound applications, including shear wave elastography and super-resolution vascular imaging. However, these have yet to incorporate endoscopic ultrasonography (EUS) with a circular array, which provides a wider view in the alimentary canal than traditional linear and convex arrays. A coherent diverging wave compounding (CDWC) imaging method was proposed for ultrafast EUS imaging and implemented on a custom circular array. In CDWC, virtual acoustic point sources are allocated, and diverging waves that appear to be emitted from each virtual source are achieved by adjusting the emission time delays of all circular array elements. Diverging waves emitted from different virtual sources are coherently compounded, generating synthetic transmit focusing at every location in the image plane. As the field of view of the circular array is centrally symmetric, all virtual sources are distributed equidistantly on a concentric circle of radius r. To achieve the highest frame rate possible with image quality comparable to that obtained with the traditional multi-focus imaging method, the effects of various radii r and numbers of virtual sources on the compounded image quality were theoretically analyzed and experimentally verified. Simulation, phantom, and ex-vivo experiments were conducted with an 8 MHz, 124-element circular array with a 5.35 mm radius. When 16 virtual sources were used with r = 1.605 mm, image quality comparable to that obtained with the multi-focus approach was achieved at a frame rate of 1000 frames/s. This demonstrates the feasibility of the proposed ultrafast EUS imaging method and promotes further development of multi-functional EUS devices.
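The delay law for a diverging wave emitted from a virtual point source can be written down directly: each element fires with a delay proportional to its distance from the virtual source, offset so that the closest element fires first. The NumPy sketch below computes such delays for elements on a circle of the stated 5.35 mm radius with a virtual source on a concentric circle of radius r; the geometry conventions and sound speed are simplifying assumptions, not the authors' implementation.

```python
# Sketch of transmit delays for a diverging wave emitted by a circular array so
# that the wavefront appears to emanate from a virtual point source.
import numpy as np

def diverging_wave_delays(n_elements=124, array_radius=5.35e-3,
                          source_radius=1.605e-3, source_angle=0.0, c=1540.0):
    elem_angles = 2 * np.pi * np.arange(n_elements) / n_elements
    elems = array_radius * np.stack([np.cos(elem_angles), np.sin(elem_angles)], axis=1)
    source = source_radius * np.array([np.cos(source_angle), np.sin(source_angle)])
    dist = np.linalg.norm(elems - source, axis=1)     # element-to-virtual-source distances
    delays = (dist - dist.min()) / c                  # closest element fires first
    return delays                                     # seconds, one per element

delays = diverging_wave_delays()
print(delays.min(), delays.max())                     # 0.0 up to roughly 2 microseconds
```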


Subjects
Elasticity Imaging Techniques, Endosonography, Phantoms, Imaging, Tomography, X-Ray Computed, Ultrasonography