1.
Heliyon ; 10(9): e29358, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38694054

ABSTRACT

Chemosensation is important for the survival and reproduction of animals. Odorant binding proteins (OBPs) are thought to be involved in chemosensation together with chemosensory receptors. While OBPs were initially considered to deliver hydrophobic odorants to olfactory receptors through the aqueous lymph solution, recent studies suggest more complex roles in various organs. Here, we use GAL4 transgenes to systematically analyze the expression patterns of all 52 members of the Obp gene family and 3 related chemosensory protein genes in adult Drosophila, focusing on chemosensory organs such as the antenna, maxillary palp, pharynx, and labellum, as well as other organs such as the brain, ventral nerve cord, leg, wing, and intestine. The OBPs were observed to be expressed in diverse organs and in multiple cell types, suggesting that these proteins can indeed carry out diverse functional roles. In addition, we constructed 10 labellar-expressing Obp mutants and obtained behavioral evidence that these OBPs may be involved in bitter sensing. The resources we constructed should be useful for future research on the Drosophila OBP gene family.

2.
Comput Biol Med ; 166: 107453, 2023 Sep 09.
Article in English | MEDLINE | ID: mdl-37774560

ABSTRACT

Surgical workflow analysis is essential to optimizing surgery, as it encourages efficient communication and use of resources. However, the performance of phase recognition is limited when it relies only on information about the presence of surgical instruments. To address this problem, we propose visual modality-based multimodal fusion for surgical phase recognition, overcoming the limited diversity of instrument-presence information. Using the proposed methods, we extracted a visual kinematics-based index (VKI) describing instrument use, such as instrument movement and the interrelations between instruments during surgery. In addition, we improved recognition performance with an effective convolutional neural network (CNN)-based method for fusing visual features with the visual kinematics-based index. The index improves the understanding of a surgical procedure because it captures information about instrument interaction. Furthermore, these indices can be extracted in any environment, including laparoscopic surgery, and provide complementary information when system kinematics logs contain errors. The proposed methodology was applied to two multimodal datasets, a virtual reality (VR) simulator-based dataset (PETRAW) and a private distal gastrectomy surgery dataset, to verify that it can improve recognition performance in clinical environments. We also explored the influence of the visual kinematics-based index on recognizing each surgical workflow through the instruments' presence and trajectories. Through experiments on a distal gastrectomy video dataset, we validated the effectiveness of the proposed fusion approach for surgical phase recognition. The relatively simple, index-incorporating fusion we propose yields significant performance improvements over CNN-only training and achieves effective training results compared to Transformer-based fusion, which requires a large amount of pre-training data.
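
A minimal sketch of the kind of late fusion this abstract describes, not the authors' actual architecture: the backbone, the layer sizes, the 16-dimensional index, and the 7-phase output below are all hypothetical stand-ins. A CNN branch over the frame and a small MLP over the kinematics-based index are concatenated before classification:

    import torch
    import torch.nn as nn

    class FusionPhaseNet(nn.Module):
        """Toy late-fusion model: CNN visual features + kinematics-based index."""
        def __init__(self, index_dim=16, num_phases=7):  # both dims hypothetical
            super().__init__()
            # Visual branch: a tiny CNN standing in for any image backbone.
            self.cnn = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())  # -> (batch, 64)
            # Index branch: an MLP over the per-frame kinematics-based index.
            self.index_mlp = nn.Sequential(nn.Linear(index_dim, 32), nn.ReLU())
            # Fusion by concatenation, then phase classification.
            self.classifier = nn.Linear(64 + 32, num_phases)

        def forward(self, frame, index):
            fused = torch.cat([self.cnn(frame), self.index_mlp(index)], dim=1)
            return self.classifier(fused)

    model = FusionPhaseNet()
    logits = model(torch.randn(4, 3, 224, 224), torch.randn(4, 16))
    print(logits.shape)  # torch.Size([4, 7])

Concatenation is only the simplest fusion choice; the paper compares its CNN-based fusion against Transformer-based fusion, neither of which this sketch attempts to reproduce.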

3.
Comput Methods Programs Biomed ; 236: 107561, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37119774

ABSTRACT

BACKGROUND AND OBJECTIVE: To be context-aware, computer-assisted surgical systems require accurate, real-time automatic surgical workflow recognition. In the past several years, surgical video has been the most commonly used modality for surgical workflow recognition. But with the democratization of robot-assisted surgery, new modalities, such as kinematics, are now accessible. Some previous methods use these new modalities as input for their models, but their added value has rarely been studied. This paper presents the design and results of the "PEg TRAnsfer Workflow recognition" (PETRAW) challenge, whose objective was to develop surgical workflow recognition methods based on one or more modalities and to study their added value. METHODS: The PETRAW challenge included a data set of 150 peg transfer sequences performed on a virtual simulator. This data set included videos, kinematic data, semantic segmentation data, and annotations describing the workflow at three levels of granularity: phase, step, and activity. Five tasks were proposed to the participants: three concerned recognition at all granularities simultaneously using a single modality, and two addressed recognition using multiple modalities. The mean application-dependent balanced accuracy (AD-Accuracy) was used as the evaluation metric to take class imbalance into account; it is more clinically relevant than a frame-by-frame score. RESULTS: Seven teams participated in at least one task, with four participating in every task. The best results were obtained by combining video and kinematic data (AD-Accuracy between 90% and 93% for the four teams that participated in all tasks). CONCLUSION: For all teams, the improvement of multimodal surgical workflow recognition methods over unimodal methods was significant. However, the longer execution time required for video/kinematic-based methods (compared with kinematic-only methods) must be considered. Indeed, one must ask whether it is wise to increase computing time by 2,000 to 20,000% only to increase accuracy by 3%. The PETRAW data set is publicly available at www.synapse.org/PETRAW to encourage further research in surgical workflow recognition.
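
A rough sketch of the balanced-accuracy idea behind the metric: balanced accuracy averages per-class recall so that brief, rare phases count as much as long ones, here averaged over the three granularities. The challenge's exact application-dependent weighting is not reproduced, and all data below are synthetic:

    import numpy as np

    def balanced_accuracy(y_true, y_pred):
        # Mean of per-class recalls, so rare classes count as much as common ones.
        classes = np.unique(y_true)
        return float(np.mean([np.mean(y_pred[y_true == c] == c) for c in classes]))

    rng = np.random.default_rng(0)
    scores = []
    for granularity, n_classes in [("phase", 3), ("step", 6), ("activity", 9)]:
        y_true = rng.integers(0, n_classes, size=1000)   # synthetic ground truth
        noise = rng.integers(0, n_classes, size=1000)
        y_pred = np.where(rng.random(1000) < 0.85, y_true, noise)  # ~85%-correct recognizer
        scores.append(balanced_accuracy(y_true, y_pred))
        print(granularity, round(scores[-1], 3))
    print("mean over granularities:", round(float(np.mean(scores)), 3))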


Subject(s)
Algorithms; Robotic Surgical Procedures; Robotic Surgical Procedures/methods; Humans; Workflow
4.
Comput Biol Med ; 43(6): 670-82, 2013 Jul.
Article in English | MEDLINE | ID: mdl-23668342

ABSTRACT

We eliminate similar frames from a wireless capsule endoscopy video of the human intestines to maximize spatial coverage and minimize redundancy among images. We combine an intensity correction method with a method based on optical flow and image features to detect and remove near-duplicate images acquired during the repetitive backward and forward egomotion caused by peristalsis. In experiments, this technique reduced the number of near-duplicate images of the small intestine by 52.3%.
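
As a generic illustration of the optical-flow component, not the authors' pipeline: Farnebäck flow, the 1.5-pixel threshold, and histogram equalization as the intensity-correction step are all stand-in choices. Consecutive frames whose median flow relative to the last kept frame is near zero are treated as near-duplicates:

    import cv2
    import numpy as np

    def keep_distinct_frames(frames, motion_thresh=1.5):  # threshold is a stand-in
        """Keep only frames showing real motion relative to the last kept frame."""
        kept = [frames[0]]
        # Histogram equalization as a crude stand-in for intensity correction.
        prev = cv2.equalizeHist(cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY))
        for frame in frames[1:]:
            gray = cv2.equalizeHist(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
            flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            if np.median(np.linalg.norm(flow, axis=2)) > motion_thresh:
                kept.append(frame)   # enough egomotion: a genuinely new view
                prev = gray
        return kept

Raising motion_thresh removes more frames at the risk of losing spatial coverage; the 52.3% reduction reported above suggests roughly half of small-intestine frames carry little new information under peristaltic back-and-forth motion.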


Subject(s)
Capsule Endoscopes; Capsule Endoscopy/instrumentation; Capsule Endoscopy/methods; Image Processing, Computer-Assisted/methods; Intestine, Small/pathology; Intestine, Small/physiopathology; Female; Male; Humans