Results 1 - 7 of 7
1.
Heliyon; 10(1): e23142, 2024 Jan 15.
Article in English | MEDLINE | ID: mdl-38163154

ABSTRACT

Among the 17 Sustainable Development Goals (SDGs) proposed within the 2030 Agenda and adopted by all the United Nations member states, the 13th SDG is a call for action to combat climate change. Moreover, SDGs 14 and 15 call for the protection and conservation of life below water and life on land, respectively. In this work, we provide a literature-based overview of application areas in which computer audition - a powerful technology combining audio signal processing and machine intelligence that has so far received little attention in this context - is employed to monitor our ecosystem, with the potential to identify ecologically critical processes or states. We distinguish between applications related to organisms, such as species richness analysis and plant health monitoring, and applications related to the environment, such as melting ice monitoring or wildfire detection. This work positions computer audition in relation to alternative approaches by discussing methodological strengths and limitations, as well as ethical aspects. We conclude with an urgent call to the research community for greater involvement of audio intelligence methodology in future ecosystem monitoring approaches.

2.
Article in English | MEDLINE | ID: mdl-38082880

ABSTRACT

The manipulation and stimulation of cell growth is invaluable for neuroscience research, such as brain-machine interfaces or neural tissue engineering. Such research, in particular the analysis of cell migration behaviour, depends on determining cell positions in microscope images, which currently requires labour-intensive manual annotation. Towards automating this annotation effort, we i) introduce NeuroCellCentreDB, a novel dataset of neuron-like cells on microscope images with annotated cell centres, ii) evaluate a common bounding-box-based object detector, the faster region-based convolutional neural network (FRCNN), for the task at hand, and iii) design and test a fully convolutional neural network with the specific goal of cell centre detection. We achieve an F1 score of up to 0.766 on the test data with a tolerance radius of 16 pixels. Our code and dataset are publicly available.
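A minimal sketch of how an F1 score under a 16-pixel tolerance radius could be computed for such a cell-centre detector: a predicted centre counts as a true positive if it lies within the tolerance radius of an unmatched ground-truth centre. The function name and the greedy nearest-neighbour matching strategy are assumptions for illustration, not the authors' exact evaluation protocol.

```python
# Hypothetical scoring sketch: match predicted cell centres to ground-truth
# centres within a tolerance radius, then compute precision, recall, and F1.
import numpy as np

def f1_at_tolerance(pred, gt, radius=16.0):
    """pred, gt: arrays of shape (N, 2) with (x, y) cell-centre coordinates."""
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    matched_gt = set()
    tp = 0
    for p in pred:
        if len(gt) == 0:
            break
        d = np.linalg.norm(gt - p, axis=1)   # distances to all ground-truth centres
        for idx in np.argsort(d):
            if d[idx] > radius:
                break                         # nearest unmatched centre is too far away
            if idx not in matched_gt:
                matched_gt.add(idx)
                tp += 1
                break
    fp = len(pred) - tp
    fn = len(gt) - tp
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Example: one prediction within 16 px of a true centre, one spurious detection.
print(f1_at_tolerance([[10, 12], [200, 200]], [[14, 15], [80, 90]]))  # -> 0.5
```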


Subjects
Microscopy, Neural Networks (Computer), Automation, Cell Proliferation, Neurons
3.
Front Digit Health; 5: 1196079, 2023.
Article in English | MEDLINE | ID: mdl-37767523

ABSTRACT

Recent years have seen a rapid increase in digital medicine research in an attempt to transform traditional healthcare systems into their modern, intelligent, and versatile equivalents that are adequately equipped to tackle contemporary challenges. This has led to a wave of applications that utilise AI technologies, first and foremost in the field of medical imaging, but also in the use of wearables and other intelligent sensors. In comparison, computer audition is lagging behind, at least in terms of commercial interest. Yet, audition has long been a staple assistant for medical practitioners, with the stethoscope being the quintessential symbol of doctors around the world. Transforming this traditional technology with the use of AI entails a set of unique challenges. We categorise the advances needed in four key pillars: Hear, corresponding to the cornerstone technologies needed to analyse auditory signals in real-life conditions; Earlier, for the advances needed in computational and data efficiency; Attentively, for accounting for individual differences and handling the longitudinal nature of medical data; and, finally, Responsibly, for ensuring compliance with the ethical standards of the field of medicine. Thus, we provide an overview and perspective of HEAR4Health: the sketch of a modern, ubiquitous sensing system that can bring computer audition on par with other AI technologies in the pursuit of improved healthcare systems.

4.
Annu Int Conf IEEE Eng Med Biol Soc; 2022: 2623-2626, 2022 07.
Article in English | MEDLINE | ID: mdl-36086314

ABSTRACT

Although running is a common leisure activity and a core training regimen for many athletes, between 29% and 79% of runners sustain an overuse injury each year. These injuries are linked to excessive fatigue, which alters how someone runs. In this work, we explore the feasibility of modelling the Borg rating of perceived exertion (RPE) scale (range: 6-20), a well-validated subjective measure of fatigue, using audio data captured in realistic outdoor environments via smartphones attached to the runners' arms. Using convolutional neural networks (CNNs) on log-Mel spectrograms, we obtain a mean absolute error (MAE) of 2.35 in subject-dependent experiments, demonstrating that audio can be effectively used to model fatigue while being more easily and non-invasively acquired than signals from other sensors.
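A rough illustration of the modelling setup described in this abstract: extract a log-Mel spectrogram from a smartphone recording and regress a scalar RPE value with a small CNN trained under an L1 loss, so that validation error is directly an MAE. The layer sizes, sampling rate, mel-band count, and file name are illustrative assumptions; the paper's actual architecture and training details may differ.

```python
# Hypothetical sketch: log-Mel spectrogram -> small CNN -> scalar RPE estimate.
import librosa
import numpy as np
import torch
import torch.nn as nn

def log_mel(path, sr=16000, n_mels=64):
    y, sr = librosa.load(path, sr=sr, mono=True)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)           # shape: (n_mels, frames)

class RPERegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)                       # single continuous RPE output

    def forward(self, x):                                  # x: (batch, 1, n_mels, frames)
        return self.head(self.conv(x).flatten(1)).squeeze(1)

# Usage sketch with a hypothetical file name; the model here is untrained.
model = RPERegressor()
spec = log_mel("run_segment.wav")
x = torch.tensor(spec, dtype=torch.float32)[None, None]   # add batch and channel dims
pred_rpe = model(x)
loss_fn = nn.L1Loss()                                      # |prediction - Borg RPE label| = MAE
```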


Subjects
Fatigue, Muscle Fatigue, Fatigue/diagnosis, Humans, Neural Networks (Computer)
5.
Sci Rep; 11(1): 23480, 2021 12 06.
Article in English | MEDLINE | ID: mdl-34873193

ABSTRACT

Biometric identification techniques such as photo-identification require an array of unique natural markings to identify individuals. From 1975 to the present, Bigg's killer whales have been photo-identified along the west coast of North America, resulting in one of the largest and longest-running cetacean photo-identification datasets. However, data maintenance and analysis are extremely time- and resource-consuming. This study transfers the procedure of killer whale image identification into a fully automated, multi-stage, deep learning framework, entitled FIN-PRINT. It is composed of multiple sequentially ordered sub-components. FIN-PRINT is trained and evaluated on a dataset collected over an 8-year period (2011-2018) in the coastal waters off western North America, including 121,000 human-annotated identification images of Bigg's killer whales. At first, object detection is performed to identify unique killer whale markings, resulting in 94.4% recall, 94.1% precision, and 93.4% mean-average-precision (mAP). Second, all previously identified natural killer whale markings are extracted. The third step introduces a data enhancement mechanism by filtering between valid and invalid markings from previous processing levels, achieving 92.8% recall, 97.5% precision, and 95.2% accuracy. The fourth and final step involves multi-class individual recognition. When evaluated on the network test set, it achieved an accuracy of 92.5% with 97.2% top-3 unweighted accuracy (TUA) for the 100 most commonly photo-identified killer whales. Additionally, the method achieved an accuracy of 84.5% and a TUA of 92.9% when applied to the entire 2018 image collection of the 100 most common killer whales. The source code of FIN-PRINT can be adapted to other species and will be publicly available.
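A small sketch of one way the top-3 unweighted accuracy (TUA) reported above could be computed: count an image as correct if its true individual is among the three highest-scoring classes, then average per individual without weighting by class frequency. This reading of "unweighted" as a macro average is an assumption about the paper's definition, and the function name is hypothetical.

```python
# Hypothetical top-3 unweighted accuracy: per-individual top-3 hit rate,
# macro-averaged so that frequently photographed whales do not dominate.
import numpy as np

def top3_unweighted_accuracy(scores, labels):
    """scores: (n_images, n_classes) class scores; labels: (n_images,) true class ids."""
    scores = np.asarray(scores)
    labels = np.asarray(labels)
    top3 = np.argsort(scores, axis=1)[:, -3:]               # indices of the 3 best classes
    hit = np.array([labels[i] in top3[i] for i in range(len(labels))])
    per_class = [hit[labels == c].mean() for c in np.unique(labels)]
    return float(np.mean(per_class))                         # macro average over individuals
```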

6.
Trends Hear; 25: 23312165211046135, 2021.
Article in English | MEDLINE | ID: mdl-34751066

ABSTRACT

Computer audition (i.e., intelligent audio) has made great strides in recent years; however, it is still far from achieving holistic hearing abilities that more closely mimic human-like understanding. Within an audio scene, a human listener can quickly interpret layers of sound at a single time-point, with each layer varying in characteristics such as location, state, and trait. Current integrated machine listening approaches, in contrast, mainly recognise single events. In this context, this contribution provides key insights and approaches that can be applied in computer audition to achieve the goal of a more holistic intelligent understanding system, and identifies the challenges in reaching this goal. We first summarise the state of the art in traditional signal-processing-based audio pre-processing and feature representation, as well as automated learning such as by deep neural networks. This concerns, in particular, audio interpretation, decomposition, understanding, and ontologisation. We then present an agent-based approach for integrating these concepts into a holistic audio understanding system. We conclude with avenues towards reaching the ambitious goal of 'holistic human-parity' machine listening abilities.


Subjects
Neural Networks (Computer), Signal Processing (Computer-Assisted), Humans, Intelligence, Learning, Sound
7.
Clin Orthop Relat Res; 471(3): 956-64, 2013 Mar.
Article in English | MEDLINE | ID: mdl-22806261

ABSTRACT

BACKGROUND: The role of the synovial biopsy in the preoperative diagnosis of a periprosthetic joint infection (PJI) of the hip has not been clearly defined. QUESTIONS/PURPOSES: We asked whether the value of a biopsy for a PJI is greater than that of aspiration and C-reactive protein (CRP). METHODS: Before revision of 100 hip endoprostheses, we obtained CRP values, aspirated the joint, and obtained five synovial biopsy samples for bacteriologic analysis and five for histologic analysis. Microbiologic and histologic analyses of the periprosthetic tissue during revision surgery were used to verify the results of the preoperative diagnostic methods. The minimum follow-up was 24 months (median 32; range, 24-47 months). RESULTS: Forty-five of the 100 prostheses were identified as infected. The biopsy, with a combination of the bacteriologic and histologic examinations, showed the greatest diagnostic value of all the diagnostic procedures and led to a sensitivity of 82% (95% CI, ± 11%), specificity of 98% (95% CI, ± 4%), positive predictive value of 97% (95% CI, ± 5%), negative predictive value of 87% (95% CI, ± 8.3%), and accuracy of 91%. CONCLUSIONS: The biopsy technique has a greater value than aspiration and CRP in the diagnosis of PJI of the hip (Masri et al. J Arthroplasty 22:72-78, 2007). In patients with a negative aspirate but increased CRP or clinical signs of infection, we regard biopsy as preferable to simply repeating the aspiration. LEVEL OF EVIDENCE: Level II prognostic study. See Guidelines for Authors for a complete description of levels of evidence.
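For reference, the sketch below shows how the reported diagnostic measures relate to a 2x2 table of biopsy result versus confirmed infection. The counts are illustrative placeholders chosen only to roughly reproduce the reported percentages (45 infected of 100 hips); the abstract does not state the underlying table, and the ± half-widths are assumed to come from a normal-approximation 95% confidence interval.

```python
# Hypothetical sketch: sensitivity, specificity, PPV, NPV, and accuracy from a
# 2x2 table, with normal-approximation 95% CI half-widths (an assumption).
import math

def ci_half_width(p, n, z=1.96):
    return z * math.sqrt(p * (1 - p) / n) if n else float("nan")

def diagnostic_measures(tp, fp, fn, tn):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    acc = (tp + tn) / (tp + fp + fn + tn)
    return {
        "sensitivity": (sens, ci_half_width(sens, tp + fn)),
        "specificity": (spec, ci_half_width(spec, tn + fp)),
        "PPV": (ppv, ci_half_width(ppv, tp + fp)),
        "NPV": (npv, ci_half_width(npv, tn + fn)),
        "accuracy": (acc, None),
    }

# Placeholder counts for illustration only, not taken from the paper.
print(diagnostic_measures(tp=37, fp=1, fn=8, tn=54))
```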


Subjects
Arthroplasty, Replacement, Hip/adverse effects, Biopsy, Hip Joint/surgery, Hip Prosthesis/adverse effects, Prosthesis-Related Infections/diagnosis, Synovectomy, Adult, Aged, Aged, 80 and over, Arthroplasty, Replacement, Hip/instrumentation, Bacteriological Techniques, Biomarkers/blood, Biopsy/methods, Biopsy, Needle, C-Reactive Protein/analysis, Female, Hip Joint/microbiology, Hip Joint/pathology, Humans, Male, Middle Aged, Predictive Value of Tests, Prospective Studies, Prosthesis-Related Infections/microbiology, Prosthesis-Related Infections/pathology, Prosthesis-Related Infections/surgery, Reoperation, Sensitivity and Specificity, Synovial Membrane/microbiology, Synovial Membrane/pathology