Results 1 - 9 of 9
1.
PLoS One ; 13(2): e0191447, 2018.
Article in English | MEDLINE | ID: mdl-29420568

ABSTRACT

In this paper, we present a new method to recognise leaf type and identify plant species from the phenetic parts of the leaf: the lobes, apex, and base. Most research in this area focuses on popular features such as shape, colour, vein, and texture, which consume large amounts of computational processing and are inefficient, especially on the Acer database, whose leaves have highly complex structures. This paper instead focuses on the phenetic parts of the leaf, which increases accuracy. Local maxima and local minima are detected using the Centroid Contour Distance of every boundary point, with north and south regions used to recognise the apex and base. Digital morphology is used to measure the leaf shape and leaf margin, and the Centroid Contour Gradient is presented to extract the curvature of the leaf apex and base. We analyse 32 leaf images of tropical plants and evaluate the method on two different datasets, Flavia and Acer, obtaining best accuracies of 94.76% and 82.6%, respectively. Experimental results show the effectiveness of the proposed technique without resorting to the commonly used, computationally costly features.


Subject(s)
Plant Leaves/anatomy & histology , Plants/classification
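The Centroid Contour Distance step described in the abstract above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names are placeholders, and the north/south-region logic for separating apex from base is omitted.

```python
import numpy as np
from scipy.signal import argrelextrema

def centroid_contour_distance(contour):
    """Distance from the shape centroid to every boundary point.

    contour: (N, 2) array of (x, y) coordinates ordered along the
    leaf outline.
    """
    centroid = contour.mean(axis=0)
    return np.linalg.norm(contour - centroid, axis=1)

def contour_extrema(distances, order=5):
    """Indices of local maxima (lobe/apex candidates) and local
    minima (sinus/base candidates) of the CCD signal."""
    maxima = argrelextrema(distances, np.greater, order=order)[0]
    minima = argrelextrema(distances, np.less, order=order)[0]
    return maxima, minima
```

On a synthetic five-lobed outline, the maxima of this signal land on the lobe tips and the minima on the sinuses between them.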
2.
Forensic Sci Int ; 279: 41-52, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28843097

ABSTRACT

This paper presents a review of the state of the art in offline, text-independent writer identification methods for three major languages, namely English, Chinese, and Arabic, published in the literature from 2011 to 2016. For ease of discussion, we group the techniques into three categories: texture-, structure-, and allograph-based. Results are analysed, compared, and tabulated along with the datasets used, to allow fair comparison. We observe that significant progress was made on English and Arabic during this period, whereas progress on Chinese, owing to its complex writing structure, has been slow and remains unsatisfactory relative to the language's wide usage. Issues with the datasets used in previous studies are also highlighted, because size matters: writer identification accuracy deteriorates as database size increases.
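The texture-based category surveyed above typically reduces a handwriting sample to a texture descriptor and matches it against reference writers by histogram distance. The sketch below uses a plain grayscale histogram purely for brevity; real systems in this category use richer descriptors (LBP, Gabor, co-occurrence), and all names here are placeholders.

```python
import numpy as np

def texture_histogram(image, bins=64):
    """Very simple global texture descriptor: a normalized
    grayscale histogram of the handwriting image."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

def chi_square(h1, h2, eps=1e-10):
    """Chi-square distance, a common histogram metric in
    texture-based writer identification."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def identify(query, references):
    """Return the key of the reference writer whose descriptor
    is closest to the query sample."""
    q = texture_histogram(query)
    return min(references,
               key=lambda k: chi_square(q, texture_histogram(references[k])))
```

Nearest-neighbour matching over such descriptors is also where the database-size effect noted above bites: more reference writers means more near-collisions in descriptor space.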

3.
Forensic Sci Int ; 266: 565-572, 2016 Sep.
Article in English | MEDLINE | ID: mdl-27574113

ABSTRACT

Forgery is an act of modifying a document, product, image, or video, among other media. Video tampering detection research requires an inclusive database of modified videos. This paper presents a comprehensive proposal for a dataset of modified videos for forensic investigation, intended to standardise the evaluation of video tampering detection techniques. The primary purpose of the new video library is video forensics, where it supports reliable verification using dynamic and static camera recognition. To the best of the authors' knowledge, no similar library exists in the research community. Videos were sourced from YouTube and from social networking sites, by extensively observing posted videos and rating their feedback. The Video Tampering Dataset (VTD) comprises 33 videos divided among three tampering categories: (1) copy-move, (2) splicing, and (3) frame swapping. Compared with existing datasets, it offers a higher number of tampered videos with longer durations: every video lasts 16 s at 1280×720 resolution and 30 frames per second, and all videos share the same format (720p HD .avi). Both temporal and spatial video features were considered carefully during selection, and complete ground-truth information about the doctored regions is provided for every modified video in the VTD dataset. The database has been made publicly available for research on splicing, frame swapping, and copy-move tampering, and has already been utilised by many international researchers and research groups.
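One of the tampering categories the dataset targets, copy-move (frame duplication), can be detected with a very simple baseline: compare coarse signatures of non-adjacent frames and flag near-identical pairs. This is a hedged sketch of that idea only, not a method from the paper; the function names and threshold are illustrative.

```python
import numpy as np

def frame_signature(frame, grid=8):
    """Coarse grid-of-block-means signature of a grayscale frame;
    cheap to compare and robust to mild noise."""
    h, w = frame.shape
    cropped = frame[:h - h % grid, :w - w % grid]
    return cropped.reshape(grid, h // grid, grid, w // grid).mean(axis=(1, 3))

def find_duplicate_frames(frames, threshold=1.0):
    """Flag non-adjacent frame pairs whose signatures nearly match,
    a symptom of copy-move (frame duplication) tampering."""
    sigs = [frame_signature(f) for f in frames]
    pairs = []
    for i in range(len(sigs)):
        for j in range(i + 2, len(sigs)):  # skip adjacent frames
            if np.abs(sigs[i] - sigs[j]).mean() < threshold:
                pairs.append((i, j))
    return pairs
```

Ground-truth annotations like those in the VTD make it possible to score such detectors frame by frame.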

4.
Interdiscip Sci ; 7(3): 319-25, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26199211

ABSTRACT

In computed tomography (CT), blurring caused by various hardware or software errors hides medical details present in an image. Image blur is difficult to avoid in many circumstances and can frequently ruin an image, so many methods have been developed to reduce the blurring artifact in CT images. The problems with these methods are long implementation times, noise amplification, and boundary artifacts. Hence, this article presents an amended version of the iterative Landweber algorithm that attains artifact-free boundaries and less noise amplification in a shorter application time. In this study, both synthetic and real blurred CT images are used to validate the proposed method. The quality of the processed synthetic images is measured using the feature similarity index, structural similarity, and visual information fidelity in pixel domain metrics. Finally, the results obtained from intensive experiments and performance evaluations show the efficiency of the proposed algorithm, which has potential as a new approach in medical image processing.


Subject(s)
Algorithms , Radiographic Image Interpretation, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Humans , Time Factors
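The plain Landweber iteration that the article above amends can be sketched as follows, assuming a known, symmetric Gaussian blur kernel (for which the adjoint operator is another Gaussian filtering pass). The authors' amendments for boundary artifacts and noise suppression are not reproduced here; this is only the textbook baseline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def landweber_deblur(blurred, sigma, n_iter=50, relax=1.0):
    """Plain (unamended) Landweber iteration for Gaussian blur:

        x_{k+1} = x_k + relax * H^T (y - H x_k)

    relax must stay below 2 / ||H||^2; a normalized Gaussian
    kernel has spectral norm at most 1, so relax=1.0 converges.
    """
    x = blurred.astype(float).copy()
    for _ in range(n_iter):
        residual = blurred - gaussian_filter(x, sigma)  # y - H x_k
        x += relax * gaussian_filter(residual, sigma)   # apply H^T
    return x
```

Running many iterations sharpens edges but eventually amplifies noise, which is exactly the trade-off the amended algorithm targets.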
5.
Magn Reson Imaging ; 33(6): 787-803, 2015 Jul.
Article in English | MEDLINE | ID: mdl-25865822

ABSTRACT

Resection of brain tumors is a delicate surgical task because of its direct influence on the patient's survival rate. Determining the extent of tumor resection requires accurate estimation and comparison of the tumor's volume and dimensions in pre- and post-operative Magnetic Resonance Images (MRI). An active contour segmentation technique, implemented in self-developed software, is used to segment brain tumors in pre-operative MR images, and tumor volume is computed from the resulting contours via alpha-shape theory. A graphical user interface is developed for rendering, visualizing, and estimating the volume of a brain tumor. The Internet Brain Segmentation Repository (IBSR) dataset is employed to analyze the repeatability and reproducibility of the volume estimates, and the accuracy of the method is validated by comparing the estimated volume against a gold standard. Segmentation by the active contour technique is found to be capable of detecting brain tumor boundaries. Furthermore, the volume description and visualization enable interactive examination of tumor tissue and its surroundings. The results demonstrate that alpha-shape theory is superior to other existing standard methods for precise volumetric measurement of tumors.


Subject(s)
Brain Neoplasms/pathology , Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Magnetic Resonance Imaging/methods , Algorithms , Brain/pathology , Brain/surgery , Brain Neoplasms/surgery , Humans , Postoperative Care/methods , Preoperative Care/methods , Reproducibility of Results , Tumor Burden
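The volume-from-boundary-points step above can be illustrated with the convex hull, which is the alpha-to-infinity special case of the alpha shape used in the paper. A true alpha shape tightens the hull to follow concavities in the tumor surface; this sketch (with placeholder names) only shows the hull baseline.

```python
import numpy as np
from scipy.spatial import ConvexHull

def hull_volume(points, voxel_volume=1.0):
    """Volume enclosed by the convex hull of segmented boundary
    points, scaled by the physical volume of one voxel.

    points: (N, 3) array of boundary coordinates in voxel units.
    """
    return ConvexHull(points).volume * voxel_volume
```

For a convex tumor the two estimates coincide; for lobulated tumors the alpha shape is smaller and more faithful, which is the paper's motivation for using it.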
6.
Interdiscip Sci ; 2015 Feb 06.
Article in English | MEDLINE | ID: mdl-25663110

ABSTRACT

In computed tomography (CT), blurring caused by various hardware or software errors hides medical details present in an image. Image blur is difficult to avoid in many circumstances and can frequently ruin an image, so many methods have been developed to reduce the blurring artifact in CT images. The problems with these methods are long implementation times, noise amplification, and boundary artifacts. Hence, this article presents an amended version of the iterative Landweber algorithm that attains artifact-free boundaries and less noise amplification in a shorter application time. In this study, both synthetic and real blurred CT images are used to validate the proposed method. The quality of the processed synthetic images is measured using the Feature Similarity Index (FSIM), Structural Similarity (SSIM) and Visual Information Fidelity in Pixel Domain (VIFP) metrics. Finally, the results obtained from intensive experiments and performance evaluations show the efficiency of the proposed algorithm, which has potential as a new approach in medical image processing.

7.
Scanning ; 37(2): 116-25, 2015.
Article in English | MEDLINE | ID: mdl-25663630

ABSTRACT

Contrast is a distinctive visual attribute that indicates the quality of an image. Computed Tomography (CT) images are often characterized as poor quality due to their low-contrast nature. Although many innovative ideas have been proposed to overcome this problem, the outcomes, especially in terms of accuracy, visual quality, and speed, fall short, and there remains considerable room for improvement. Therefore, an improved version of the single-scale Retinex algorithm is proposed to enhance the contrast of CT images while preserving standard brightness and natural appearance, with low implementation time and without accentuating noise. The novelties of the proposed algorithm consist of tuning the standard single-scale Retinex, adding a normalized-ameliorated Sigmoid function, and adapting several parameters to improve its enhancement ability. The proposed algorithm is tested on synthetically and naturally degraded low-contrast CT images, and its performance is verified against contemporary enhancement techniques using two prevalent quality evaluation metrics: SSIM and UIQI. The results obtained from intensive experiments exhibit significant improvement not only in enhancing the contrast but also in increasing the visual quality of the processed images. Finally, the proposed low-complexity algorithm provides satisfactory results with no apparent errors and outperforms all the comparative methods.
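The standard single-scale Retinex that the abstract above tunes, followed by a plain sigmoid mapping, can be sketched as below. The paper's "normalized-ameliorated Sigmoid" and its adapted parameters are not reproduced; this shows only the textbook baseline, with illustrative parameter values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(image, sigma=30.0, eps=1.0):
    """Standard single-scale Retinex: log of the image minus the
    log of a Gaussian-smoothed surround (illumination estimate)."""
    img = image.astype(float) + eps          # avoid log(0)
    surround = gaussian_filter(img, sigma)
    return np.log(img) - np.log(surround)

def sigmoid_stretch(retinex, gain=1.0):
    """Map the zero-centred Retinex output into (0, 1) with a
    sigmoid; the paper's amended Sigmoid is a tuned variant of
    this step."""
    return 1.0 / (1.0 + np.exp(-gain * retinex))
```

The log-ratio cancels slowly varying illumination while amplifying local contrast at edges, which is why Retinex-style methods suit low-contrast CT slices.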

8.
Malays J Med Sci ; 22(Spec Issue): 9-19, 2015 Dec.
Article in English | MEDLINE | ID: mdl-27006633

ABSTRACT

Neuroimaging comprises techniques for creating images of the structure and function of the nervous system in the human brain, and it is now crucial across scientific fields. As neuroimaging data attract growing interest among experts, a large number of neuroimaging tools have become necessary. This paper gives an overview of the tools that have been used to image the structure and function of the nervous system. This information can help developers, experts, and users gain insight into and a better understanding of the available neuroimaging tools, enabling better-informed choices of tools for particular research interests. Sources, links, and descriptions of each tool's applications are provided as well. Lastly, the paper presents the implementation language, system requirements, strengths, and weaknesses of the most widely used tools.

9.
Microsc Res Tech ; 75(12): 1609-12, 2012 Dec.
Article in English | MEDLINE | ID: mdl-23034955

ABSTRACT

Because of the limitations of the X-ray hardware in mammography machines, breast mammogram images may suffer from poor resolution or low contrast, and quantum noise arises during acquisition due to low-count X-ray photons. In this work, an adaptive Frost filter is used to remove quantum noise, and local binary patterns are extracted to classify breast mammograms as benign or malignant using different classifiers. The Mammographic Image Analysis Society (MIAS) database is used for experimentation. Peak signal-to-noise ratio and the structural similarity index measure are used to validate the adaptive Frost filter. Experimental results show that the proposed technique produces superior results in terms of sensitivity, specificity, and accuracy.


Subject(s)
Breast/pathology , Image Processing, Computer-Assisted/methods , Mammography/methods , Algorithms , Humans
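The local binary pattern features mentioned in the last abstract can be sketched in their basic 3×3 form: each pixel's eight neighbours are thresholded against the centre and packed into an 8-bit code, and the histogram of codes becomes the classifier's feature vector. This is the generic LBP operator, not the paper's exact variant, and the names are placeholders.

```python
import numpy as np

def lbp_image(gray):
    """Basic 3x3 local binary pattern: threshold each pixel's 8
    neighbours against the centre and pack the bits into a code
    in [0, 255]. Returns codes for the interior pixels."""
    g = gray.astype(float)
    c = g[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.int32)
    for bit, (dy, dx) in enumerate(shifts):
        neighbour = g[1 + dy:g.shape[0] - 1 + dy,
                      1 + dx:g.shape[1] - 1 + dx]
        codes |= (neighbour >= c).astype(np.int32) << bit
    return codes

def lbp_histogram(gray, bins=256):
    """Normalized LBP code histogram: the texture feature vector
    fed to a benign/malignant classifier."""
    hist, _ = np.histogram(lbp_image(gray), bins=bins, range=(0, 256))
    return hist / hist.sum()
```

Any standard classifier (k-NN, SVM, decision trees) can then be trained on these histograms, which matches the abstract's "different classifiers" setup.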