Results 1 - 6 of 6
1.
Lang Speech ; 52(Pt 1): 1-27, 2009.
Article in English | MEDLINE | ID: mdl-19334414

ABSTRACT

Automatic syllabification of words is challenging, not least because the syllable is not easy to define precisely. Consequently, no accepted standard algorithm for automatic syllabification exists. There are two broad approaches: rule-based and data-driven. The rule-based method effectively embodies some theoretical position regarding the syllable, whereas the data-driven paradigm tries to infer "new" syllabifications from examples assumed to be correctly syllabified already. This article compares the performance of several variants of the two basic approaches. Given the problems of definition, it is difficult to determine a correct syllabification in all cases and so to establish the quality of the "gold standard" corpus used either to evaluate quantitatively the output of an automatic algorithm or as the example-set on which data-driven methods crucially depend. Thus, we look for consensus in the entries in multiple lexical databases of pre-syllabified words. In this work, we have used two independent lexicons, and extracted from them the same 18,016 words with their corresponding (possibly different) syllabifications. We have also created a third lexicon corresponding to the 13,594 words that share the same syllabifications in these two sources. As well as two rule-based approaches (Hammond's and Fisher's implementation of Kahn's), three data-driven techniques are evaluated: a look-up procedure, an exemplar-based generalization technique, and syllabification by analogy (SbA). The results on the three databases show consistent and robust patterns. First, the data-driven techniques outperform the rule-based systems in word and juncture accuracies by a very significant margin but require training data and are slower. Second, syllabification in the pronunciation domain is easier than in the spelling domain. Finally, best results are consistently obtained with SbA.
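The consensus step described above (keeping only words whose syllabifications agree across the two source lexicons) can be sketched as a simple dictionary intersection; the function name and sample entries below are illustrative, not taken from the paper's data.

```python
def consensus_lexicon(lex_a, lex_b):
    """Keep only the words whose syllabifications agree in both sources."""
    return {word: syl for word, syl in lex_a.items() if lex_b.get(word) == syl}

# Toy entries (hypothetical): the two sources disagree on "table".
lex_a = {"window": "win-dow", "table": "ta-ble", "syllable": "syl-la-ble"}
lex_b = {"window": "win-dow", "table": "tab-le", "syllable": "syl-la-ble"}
shared = consensus_lexicon(lex_a, lex_b)  # agreement on 2 of the 3 words
```

Applied to the two 18,016-word lexicons, an intersection of this kind yields the 13,594-word consensus lexicon mentioned above.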


Subjects
Algorithms; Artificial Intelligence; Language; Linguistics; Pattern Recognition, Automated; Speech Recognition Software; Humans; Phonetics; Vocabulary
2.
Neural Netw ; 22(1): 49-57, 2009 Jan.
Article in English | MEDLINE | ID: mdl-19118976

ABSTRACT

Fourier-based regularisation is considered for the support vector machine classification problem over absolutely integrable loss functions. By invoking the modest assumption that the decision function belongs to a Paley-Wiener space, it is shown that the classification problem can be developed in the context of signal theory. Furthermore, by employing the Paley-Wiener reproducing kernel, namely the sinc function, it is shown that a principled and finite kernel hyper-parameter search space can be discerned, a priori. Subsequent simulations performed on a commonly available hyperspectral image data set reveal that the approach yields results that surpass state-of-the-art benchmarks.
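A minimal sketch of the Paley-Wiener (sinc) reproducing kernel as a Gram-matrix computation, assuming a product of one-dimensional sinc factors with a single band-limit parameter; the interface and the `band` parameter name are illustrative, not the paper's.

```python
import numpy as np

def sinc_kernel(X, Y, band=1.0):
    """Gram matrix K[i, j] = prod_d sinc(band * (X[i, d] - Y[j, d])).

    np.sinc(t) is sin(pi*t)/(pi*t), so `band` plays the role of the
    band limit of the Paley-Wiener space (illustrative parametrisation).
    """
    diff = X[:, None, :] - Y[None, :, :]          # pairwise differences
    return np.prod(np.sinc(band * diff), axis=2)  # product over dimensions

# A finite grid of `band` values then forms the a priori hyper-parameter
# search space, e.g. for an SVM trained on a precomputed Gram matrix.
K = sinc_kernel(np.array([[0.0], [1.0]]), np.array([[0.0], [1.0]]))
```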


Subjects
Algorithms; Artificial Intelligence; Computer Simulation; Neural Networks, Computer; Signal Processing, Computer-Assisted
3.
IEEE Trans Pattern Anal Mach Intell ; 28(11): 1738-52, 2006 Nov.
Article in English | MEDLINE | ID: mdl-17063680

ABSTRACT

Extracting full-body motion of walking people from monocular video sequences in complex, real-world environments is an important and difficult problem, going beyond simple tracking, whose satisfactory solution demands an appropriate balance between use of prior knowledge and learning from data. We propose a consistent Bayesian framework for introducing strong prior knowledge into a system for extracting human gait. In this work, the strong prior is built from a simple articulated model having both time-invariant (static) and time-variant (dynamic) parameters. The model is easily modified to cater to situations such as walkers wearing clothing that obscures the limbs. The statistics of the parameters are learned from high-quality (indoor laboratory) data and the Bayesian framework then allows us to "bootstrap" to accurate gait extraction on the noisy images typical of cluttered, outdoor scenes. To achieve automatic fitting, we use a hidden Markov model to detect the phases of images in a walking cycle. We demonstrate our approach on silhouettes extracted from fronto-parallel ("sideways on") sequences of walkers under both high-quality indoor and noisy outdoor conditions. As well as high-quality data with synthetic noise and occlusions added, we also test walkers with rucksacks, skirts, and trench coats. Results are quantified in terms of chamfer distance and average pixel error between automatically extracted body points and corresponding hand-labeled points. No one part of the system is novel in itself, but the overall framework makes it feasible to extract gait from very much poorer quality image sequences than hitherto. This is confirmed by comparing person identification by gait using our method and a well-established baseline recognition algorithm.
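The phase-detection step above uses a hidden Markov model over the walking cycle; the decoding itself is standard Viterbi, sketched here on log-probabilities. The state space and emission model in the example are toy placeholders, not the paper's articulated model.

```python
import numpy as np

def viterbi(log_start, log_trans, log_emit):
    """Most likely state (phase) sequence for a discrete HMM.

    log_start: (N,) initial log-probabilities over phases
    log_trans: (N, N) log-transition matrix, row = previous phase
    log_emit:  (T, N) per-frame log-likelihood of each phase
    """
    T, N = log_emit.shape
    dp = log_start + log_emit[0]
    back = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        scores = dp[:, None] + log_trans   # scores[i, j]: phase i -> phase j
        back[t] = scores.argmax(axis=0)
        dp = scores.max(axis=0) + log_emit[t]
    path = [int(dp.argmax())]
    for t in range(T - 1, 0, -1):          # trace the best path backwards
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```

In the gait setting, each frame's silhouette would supply the per-phase emission likelihoods, and the transition matrix would encode the cyclic ordering of walking phases.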


Subjects
Algorithms; Artificial Intelligence; Biomechanical Phenomena/methods; Gait/physiology; Image Interpretation, Computer-Assisted/methods; Joints/physiology; Pattern Recognition, Automated/methods; Bayes Theorem; Cluster Analysis; Computer Simulation; Diagnosis, Computer-Assisted/methods; Humans; Image Enhancement/methods; Imaging, Three-Dimensional/methods; Information Storage and Retrieval/methods; Models, Biological; Models, Statistical; Reproducibility of Results; Sensitivity and Specificity
4.
Med Eng Phys ; 26(1): 71-86, 2004 Jan.
Article in English | MEDLINE | ID: mdl-14644600

ABSTRACT

Segmentation of medical images is very important for clinical research and diagnosis, leading to a requirement for robust automatic methods. This paper reports on the combined use of a neural network (a multilayer perceptron, MLP) and active contour model ('snake') to segment structures in magnetic resonance (MR) images. The perceptron is trained to produce a binary classification of each pixel as either a boundary or a non-boundary point. Subsequently, the resulting binary (edge-point) image forms the external energy function for a snake, used to link the candidate boundary points into a continuous, closed contour. We report here on the segmentation of the lungs from multiple MR slices of the torso; lung-specific constraints have been avoided to keep the technique as general as possible. In initial investigations, the inputs to the MLP were limited to normalised intensity values of the pixels from a (7 x 7) window scanned across the image. The use of spatial coordinates as additional inputs to the MLP is then shown to provide an improvement in segmentation performance as quantified using the effectiveness measure (a weighted product of precision and recall). Training sets were first developed using a lengthy iterative process. Thereafter, a novel cost function based on effectiveness is proposed for training that allows us to achieve dramatic improvements in segmentation performance, as well as faster, non-iterative selection of training examples. The classifications produced using this cost function were sufficiently good that the binary image produced by the MLP could be post-processed using an active contour model to provide an accurate segmentation of the lungs from the multiple slices in almost all cases, including unseen slices and subjects.
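The exact form of the effectiveness measure is not given in the abstract; one standard choice consistent with its description (a precision-recall combination that is zero for perfect agreement, as entry 5 below also notes) is van Rijsbergen's E, sketched here under that assumption.

```python
def effectiveness(precision, recall, alpha=0.5):
    """van Rijsbergen's effectiveness E = 1 - 1/(alpha/P + (1-alpha)/R).

    Zero for perfect precision and recall, one in the worst case;
    alpha weights precision against recall. This is an assumed form --
    the paper may define the measure differently.
    """
    if precision == 0.0 or recall == 0.0:
        return 1.0
    return 1.0 - 1.0 / (alpha / precision + (1.0 - alpha) / recall)
```

Using 1 - E (i.e. the F-measure) directly as a training cost, rather than per-pixel error, is one way to bias the classifier toward balanced boundary detection.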


Subjects
Algorithms; Image Interpretation, Computer-Assisted/methods; Lung/anatomy & histology; Magnetic Resonance Imaging/methods; Neural Networks, Computer; Pattern Recognition, Automated; Signal Processing, Computer-Assisted; Artificial Intelligence; Cross-Sectional Studies; Reproducibility of Results; Sensitivity and Specificity
5.
Int J Neural Syst ; 12(2): 95-108, 2002 Apr.
Article in English | MEDLINE | ID: mdl-12035124

ABSTRACT

The traveling salesman problem (TSP) is a prototypical problem of combinatorial optimization and, as such, it has received considerable attention from neural-network researchers seeking quick, heuristic solutions. An early stage in many computer vision tasks is the extraction of object shape from an image consisting of noisy candidate edge points. Since the desired shape will often be a closed contour, this problem can be viewed as a version of the TSP in which we wish to link only a subset of the points/cities (i.e. the "noise-free" ones). None of the extant neural techniques for solving the TSP can deal directly with this case. In this paper, we present a simple but effective modification to the (analog) elastic net of Durbin and Willshaw which shifts emphasis from global to local behavior during convergence, so allowing the net to ignore some image points. Unlike the original elastic net, this semi-localized version is shown to tolerate considerable amounts of noise. As an example practical application, we describe the extraction of "pseudo-3D" human lung outlines from multiple preprocessed magnetic resonance images of the torso. An effectiveness measure (ideally zero) quantifies the difference between the extracted shape and some idealized shape exemplar. Our method produces average effectiveness scores of 0.06 for lung shapes extracted from initial semi-automatic segmentations which define the noise-free case. This deteriorates to 0.1 when extraction is from a noisy edge-point image obtained fully automatically using a feedforward neural network.
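For reference, one iteration of the original (global) Durbin and Willshaw elastic net can be sketched as below; the semi-localized modification described in the abstract alters this weighting scheme, and the parameter names here are illustrative.

```python
import numpy as np

def elastic_net_step(points, ring, K, alpha=0.2, beta=2.0):
    """One update of a Durbin-Willshaw-style elastic net (toy sketch).

    points: (M, 2) candidate edge points (the "cities")
    ring:   (P, 2) current closed contour of net nodes
    K:      scale of the Gaussian attraction, annealed over iterations
    """
    d2 = ((points[:, None, :] - ring[None, :, :]) ** 2).sum(axis=2)
    phi = np.exp(-d2 / (2.0 * K * K))
    w = phi / phi.sum(axis=1, keepdims=True)  # each point spreads its pull over nodes
    attract = (w[:, :, None] * (points[:, None, :] - ring[None, :, :])).sum(axis=0)
    # Second-difference "tension" keeps the closed contour short and smooth.
    tension = np.roll(ring, 1, axis=0) - 2.0 * ring + np.roll(ring, -1, axis=0)
    return ring + alpha * attract + beta * K * tension
```

Annealing K from large to small shifts the net's behavior from global to local, which is the dimension the semi-localized modification manipulates in order to let the net ignore noisy points.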


Subjects
Image Processing, Computer-Assisted/statistics & numerical data; Neural Networks, Computer; Algorithms; Form Perception; Humans; Lung/anatomy & histology; Magnetic Resonance Imaging; Normal Distribution
6.
Behav Brain Sci ; 24(6): 1055-1056, 2001 Dec.
Article in English | MEDLINE | ID: mdl-18241365

ABSTRACT

The question "Are 'biorobots' good models of biological behaviour?" can be seen as a specific instance of a more general question about the relation between computer programs and models, between models and theories, and between theories and reality. This commentary develops a personal view of these relations, from an antirealist perspective. Programs, models, theories and reality are separate and distinct entities which may converge in particular cases but should never be confused.
