Results 1 - 20 of 27
1.
IEEE Trans Pattern Anal Mach Intell ; 45(11): 12922-12943, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37022830

ABSTRACT

Transformer models have shown great success in handling long-range interactions, making them a promising tool for modeling video. However, they lack inductive biases and scale quadratically with input length, limitations that are further exacerbated by the high dimensionality introduced by the temporal dimension. While there are surveys analyzing the advances of Transformers for vision, none focus on an in-depth analysis of video-specific designs. In this survey, we analyze the main contributions and trends of works leveraging Transformers to model video. Specifically, we first delve into how videos are handled at the input level. Then, we study the architectural changes made to deal with video more efficiently, reduce redundancy, re-introduce useful inductive biases, and capture long-term temporal dynamics. In addition, we provide an overview of different training regimes and explore effective self-supervised learning strategies for video. Finally, we conduct a performance comparison on the most common benchmark for Video Transformers (i.e., action classification), finding them to outperform 3D ConvNets even at lower computational complexity.
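As a hedged illustration of the input-level handling this survey covers, the sketch below shows tubelet embedding, one common way of turning a clip into a token sequence before attention; every shape and size here is an illustrative assumption, not a value from the survey.

```python
import torch
import torch.nn as nn

# Tubelet embedding sketch: a 3D convolution cuts a clip into non-overlapping
# spatio-temporal patches and projects each to a token. All shapes below are
# illustrative assumptions, not values taken from the survey.
B, C, T, H, W = 2, 3, 16, 224, 224           # batch, channels, frames, height, width
embed_dim, tubelet = 768, (2, 16, 16)        # 2 frames x 16 x 16 pixels per token

proj = nn.Conv3d(C, embed_dim, kernel_size=tubelet, stride=tubelet)

clip = torch.randn(B, C, T, H, W)
tokens = proj(clip)                          # (B, 768, 8, 14, 14)
tokens = tokens.flatten(2).transpose(1, 2)   # (B, 1568, 768): 8*14*14 tokens

# Self-attention over N tokens costs O(N^2); video makes N large, which is
# what motivates the efficiency-oriented designs analyzed in the survey.
print(tokens.shape)  # torch.Size([2, 1568, 768])
```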

2.
BMC Prim Care ; 24(1): 14, 2023 01 14.
Article in English | MEDLINE | ID: mdl-36641467

ABSTRACT

BACKGROUND: Artificial intelligence (AI) is increasingly used to support general practice in the early detection of disease and treatment recommendations. However, AI systems aimed at alleviating time-consuming administrative tasks currently appear limited. This scoping review therefore aims to summarize the research that has been carried out on methods of machine learning applied to the support and automation of administrative tasks in general practice. METHODS: Databases covering the fields of health care and engineering sciences (PubMed, Embase, CINAHL with full text, Cochrane Library, Scopus, and IEEE Xplore) were searched. Screening for eligible studies was completed using Covidence, and data were extracted along nine research-based attributes concerning general practice, administrative tasks, and machine learning. The search and screening processes were completed from April to June 2022. RESULTS: 1439 records were identified and 1158 were screened against the eligibility criteria. A total of 12 studies were included. The extracted attributes indicate that most studies concern various scheduling tasks using supervised machine learning methods with relatively low general practitioner (GP) involvement. Importantly, four studies employed the latest available machine learning methods, and the data used frequently varied in terms of setting, type, and availability. CONCLUSION: The limited field of research developing in the application of machine learning to administrative tasks in general practice indicates that there is a great need and high potential for such methods. However, there is currently a lack of research, likely due to the unavailability of open-source data and a prioritization of diagnostic-based tasks. Future research would benefit from open-source data, cutting-edge methods of machine learning, and clearly stated GP involvement, so that improved and replicable scientific research can be conducted.


Subject(s)
Artificial Intelligence , General Practice , Family Practice , Automation , Machine Learning
3.
IEEE Int Conf Rehabil Robot ; 2022: 1-5, 2022 07.
Article in English | MEDLINE | ID: mdl-36176141

ABSTRACT

This study describes an interdisciplinary approach to developing a five-degree-of-freedom assistive upper limb exoskeleton (ULE) for users with severe to complete functional tetraplegia. Four different application levels were identified for the ULE, ranging from basic technical application to interaction with users, interaction with caregivers, and interaction with society, each level posing requirements for the design and functionality of the ULE. These requirements were addressed through an interdisciplinary collaboration involving users, clinicians, and researchers within social sciences and humanities, mechanical engineering, control engineering, media technology, and biomedical engineering. The results showed that the developed ULE, the EXOTIC, had a high level of usability, safety, and adoptability. Further, the results showed that several topics are important to address explicitly when facilitating interdisciplinary collaboration, including defining a common language, a joint visualization of the end goal, and a physical frame for the collaboration, such as a shared laboratory. The study underlined the importance of interdisciplinarity, and we believe that future collaboration amongst interdisciplinary researchers and centres, also at an international level, can strongly facilitate the usefulness and adoption of assistive exoskeletons and similar technologies.


Subject(s)
Disabled Persons , Exoskeleton Device , Humans , Motivation , Upper Extremity
4.
Sensors (Basel) ; 22(18)2022 Sep 13.
Article in English | MEDLINE | ID: mdl-36146260

ABSTRACT

This paper presents the EXOTIC, a novel assistive upper limb exoskeleton for individuals with complete functional tetraplegia that provides an unprecedented level of versatility and control. The current literature on exoskeletons mainly focuses on the basic technical aspects of exoskeleton design and control, while the context in which these exoskeletons should function is given little or no priority, even though it poses important technical requirements. We considered all sources of design requirements, from the basic technical functions to the real-world practical application. The EXOTIC features: (1) a compact, safe, wheelchair-mountable, easy-to-don-and-doff exoskeleton capable of facilitating multiple highly desired activities of daily living for individuals with tetraplegia; (2) a semi-automated computer vision guidance system that can be enabled by the user when relevant; (3) a tongue control interface allowing for full, volitional, and continuous control over all possible motions of the exoskeleton. The EXOTIC was tested on ten able-bodied individuals and three users with tetraplegia caused by spinal cord injury. During the tests the EXOTIC succeeded in fully assisting tasks such as drinking and picking up snacks, even for users with complete functional tetraplegia who needed a ventilator. The users confirmed the usability of the EXOTIC.


Subject(s)
Exoskeleton Device , Activities of Daily Living , Humans , Power, Psychological , Quadriplegia , Tongue , Upper Extremity
5.
Sensors (Basel) ; 22(10)2022 May 10.
Article in English | MEDLINE | ID: mdl-35632017

ABSTRACT

The safe in-field operation of autonomous agricultural vehicles requires detecting all objects that pose a risk of collision. Current vision-based algorithms for object detection and classification are unable to detect unknown classes of objects. In this paper, the problem is instead posed as anomaly detection, where convolutional autoencoders are applied to identify any objects deviating from the normal pattern. Training an autoencoder network to reconstruct normal patterns in agricultural fields makes it possible to detect unknown objects by their high reconstruction error. A basic autoencoder (AE), a vector-quantized variational autoencoder (VQ-VAE), a denoising autoencoder (DAE), and a semi-supervised autoencoder (SSAE) with a max-margin-inspired loss function are investigated and compared with a baseline object detector based on YOLOv5. Results indicate that the SSAE, with an area under the precision/recall curve (PR AUC) of 0.9353, outperforms the other autoencoder models and is comparable to an object detector with a PR AUC of 0.9794. Qualitative results show that the SSAE is capable of detecting unknown objects, whereas the object detector is unable to do so and, in specific cases, fails to identify known classes of objects.
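A minimal sketch of the core mechanism described above, assuming a toy architecture and threshold of our own choosing rather than the paper's models: an autoencoder trained only on normal imagery scores each image by its reconstruction error.

```python
import torch
import torch.nn as nn

# Minimal convolutional autoencoder: trained only on "normal" field imagery,
# it reconstructs normal patterns well, so a high per-image reconstruction
# error flags an unknown object. Architecture and threshold are illustrative.
class ConvAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

model = ConvAE()
batch = torch.rand(8, 3, 64, 64)                       # stand-in for field patches
recon = model(batch)
# Per-image anomaly score: mean squared reconstruction error.
score = ((batch - recon) ** 2).flatten(1).mean(dim=1)  # shape (8,)
anomalous = score > 0.01                               # threshold is an assumption
```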


Subject(s)
Algorithms
6.
Sensors (Basel) ; 22(4)2022 Feb 18.
Article in English | MEDLINE | ID: mdl-35214497

ABSTRACT

Recent advances in computer vision are primarily driven by the use of deep learning, which is known to require large amounts of data, and creating datasets for this purpose is not a trivial task. Larger benchmark datasets often have detailed processes with multiple stages and users with different roles during annotation. However, this can be difficult to implement in smaller projects where resources can be limited. Therefore, in this work we present our processes for creating an image dataset for kernel fragmentation and stover overlengths in Whole Plant Corn Silage. This includes the guidelines for annotating object instances in the respective classes and statistics of the gathered annotations. Given the challenging image conditions, where objects appear under heavy occlusion and clutter, the datasets appear appropriate for training models. However, we experience annotator inconsistency, which can hamper evaluation. Based on this, we argue for the importance of having a form of evaluation independent of the manual annotation, and we evaluate our models with physically based sieving metrics. Additionally, instead of the traditional time-consuming manual annotation approach, we evaluate Semi-Supervised Learning as an alternative, showing competitive results while requiring fewer annotations. Specifically, given a relatively large supervised set of around 1400 images, we can improve the Average Precision by a number of percentage points. Additionally, we show a significantly larger improvement when using an extremely small set of just over 100 images, with an over 3× gain in Average Precision and up to 20 percentage points when estimating the quality.


Subject(s)
Deep Learning , Data Curation , Silage , Supervised Machine Learning , Zea mays
7.
Sensors (Basel) ; 22(3)2022 Jan 22.
Article in English | MEDLINE | ID: mdl-35161571

ABSTRACT

Outdoor fall detection, in the context of accidents such as falling from heights or into water, is a research area that has not received as much attention as other automated surveillance areas. Gathering sufficient data for developing deep-learning models for such applications has also proven not to be a straightforward task. Normally, footage of volunteers falling is used to provide data, but that can be a complicated and dangerous process. In this paper, we propose using thermal imaging of a low-cost rubber doll falling in a harbor to simulate real emergencies. We achieve thermal signatures similar to those of a human on different parts of the doll's body. The change of these thermal signatures over time is measured, and their stability is verified. We demonstrate that, even with the size and weight differences of the doll, the produced videos of falls have a motion and appearance similar to what is expected from real people. We show that the captured thermal doll data can be used for the real-world application of pedestrian detection by running the captured data through a state-of-the-art object detector trained on real people. An average confidence score of 0.730 is achieved, compared to a confidence score of 0.761 when using footage of real people falling. The captured fall sequences using the doll can thus be used as a substitute for sequences of real people.


Subject(s)
Accidental Falls , Emergencies , Humans
8.
Sensors (Basel) ; 22(2)2022 Jan 14.
Article in English | MEDLINE | ID: mdl-35062580

ABSTRACT

Satisfactory indoor thermal environments can improve the working efficiency of office staff. To build such satisfactory indoor microclimates, individual thermal comfort assessment is important, for which the personal clothing insulation rate (Icl) and metabolic rate (M) need to be estimated dynamically. Therefore, this paper proposes a vision-based method. Specifically, a human tracking-by-detection framework is implemented to acquire each person's clothing status (short-sleeved, long-sleeved), key posture (sitting, standing), and bounding box information simultaneously. The clothing status, together with a key body points detector, locates the person's skin region and clothes region, allowing the measurement of skin temperature (Ts) and clothes temperature (Tc) and enabling the calculation of Icl from Ts and Tc. The key posture and the change of the bounding box across time categorize the person's activity intensity into a corresponding level, from which the M value is estimated. Moreover, we have collected a multi-person thermal dataset to evaluate the method. The tracking-by-detection framework achieves a mAP50 (Mean Average Precision) rate of 89.1% and a MOTA (Multiple Object Tracking Accuracy) rate of 99.5%. The Icl estimation module reaches an accuracy of 96.2% in locating skin and clothes. The M estimation module obtains a classification rate of 95.6% in categorizing activity level. All of these results demonstrate the usefulness of the proposed method in multi-person scenarios of real-life applications.
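The abstract does not give the exact Icl formula, so the sketch below leans on the general definition of clothing insulation, the temperature drop across the clothing divided by the heat flux through it (1 clo = 0.155 m²·K/W), with an assumed nominal heat flux; treat it as a flavor of how Ts and Tc feed the estimate, not as the paper's method.

```python
# Hedged sketch: clothing insulation from skin and clothes temperature.
# By definition, thermal insulation is the temperature drop across the
# clothing divided by the heat flux through it; 1 clo = 0.155 m^2*K/W.
# The heat flux value below is an assumed nominal figure, not from the paper.
CLO = 0.155  # m^2*K/W per clo

def estimate_icl(t_skin_c: float, t_clothes_c: float,
                 heat_flux_w_m2: float = 50.0) -> float:
    """Return clothing insulation in clo from measured temperatures."""
    resistance = (t_skin_c - t_clothes_c) / heat_flux_w_m2  # m^2*K/W
    return resistance / CLO

# e.g. skin at 33.0 C, clothes surface at 29.5 C under the assumed flux:
print(round(estimate_icl(33.0, 29.5), 2))  # ~0.45 clo
```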


Subject(s)
Body Temperature Regulation , Skin Temperature , Clothing , Humans , Microclimate , Temperature
9.
IEEE Trans Cybern ; 52(5): 3314-3324, 2022 May.
Article in English | MEDLINE | ID: mdl-28207407

ABSTRACT

Pain is an unpleasant feeling that has been shown to be an important factor in patients' recovery. Since assessing pain is costly in human resources and difficult to do objectively, there is a need for automatic systems to measure it. In this paper, contrary to current state-of-the-art techniques in pain assessment, which are based on facial features only, we suggest that performance can be enhanced by feeding the raw frames to deep learning models, outperforming the latest state-of-the-art results while also directly facing the problem of imbalanced data. As a baseline, our approach first uses convolutional neural networks (CNNs) to learn facial features from VGG_Faces, which are then linked to a long short-term memory network to exploit the temporal relation between video frames. We further compare the performance of the popular schema based on the canonically normalized appearance against taking the whole image into account. As a result, we outperform the current state-of-the-art area under the curve performance on the UNBC-McMaster Shoulder Pain Expression Archive Database. In addition, to evaluate the generalization properties of our proposed methodology on facial motion recognition, we also report competitive results on the Cohn-Kanade+ facial expression database.
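A hedged sketch of the CNN-to-LSTM pipeline the abstract describes: per-frame features feed an LSTM that models the temporal relation between frames. The tiny backbone and all sizes are stand-ins, not the paper's VGG_Faces-based model.

```python
import torch
import torch.nn as nn

# Sketch: per-frame CNN features -> LSTM over time -> pain score per clip.
# The tiny backbone and all sizes are stand-ins for illustration only.
class FramePainModel(nn.Module):
    def __init__(self, feat_dim=128, hidden=64):
        super().__init__()
        self.backbone = nn.Sequential(             # stand-in for a face CNN
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, feat_dim),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)           # pain intensity

    def forward(self, clips):                      # (B, T, 3, H, W)
        B, T = clips.shape[:2]
        feats = self.backbone(clips.flatten(0, 1)) # (B*T, feat_dim)
        feats = feats.view(B, T, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])               # score from last time step

scores = FramePainModel()(torch.rand(2, 8, 3, 64, 64))  # shape (2, 1)
```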


Subject(s)
Facial Expression , Memory, Short-Term , Emotions , Humans , Neural Networks, Computer , Pain
10.
J Med Internet Res ; 23(12): e26611, 2021 12 13.
Article in English | MEDLINE | ID: mdl-34898454

ABSTRACT

BACKGROUND: Certain types of artificial intelligence (AI), that is, deep learning models, can outperform health care professionals in particular domains. Such models hold considerable promise for improved diagnostics, treatment, and prevention, as well as more cost-efficient health care. They are, however, opaque in the sense that their exact reasoning cannot be fully explicated. Different stakeholders have emphasized the importance of the transparency/explainability of AI decision making. Transparency/explainability may come at the cost of performance. There is a need for a public policy regulating the use of AI in health care that balances the societal interests in high performance as well as in transparency/explainability. A public policy should consider the wider public's interests in such features of AI. OBJECTIVE: This study elicited the public's preferences for the performance and explainability of AI decision making in health care and determined whether these preferences depend on respondent characteristics, including trust in health and technology and fears and hopes regarding AI. METHODS: We conducted a choice-based conjoint survey of public preferences for attributes of AI decision making in health care in a representative sample of the adult Danish population. Initial focus group interviews yielded 6 attributes playing a role in the respondents' views on the use of AI decision support in health care: (1) type of AI decision, (2) level of explanation, (3) performance/accuracy, (4) responsibility for the final decision, (5) possibility of discrimination, and (6) severity of the disease to which the AI is applied. In total, 100 unique choice sets were developed using fractional factorial design. In a 12-task survey, respondents were asked about their preference for AI system use in hospitals in relation to 3 different scenarios. RESULTS: Of the 1678 potential respondents, 1027 (61.2%) participated. The respondents consider the physician having final responsibility for treatment decisions to be the most important attribute, with 46.8% of the total weight of attributes, followed by explainability of the decision (27.3%) and whether the system has been tested for discrimination (14.8%). Other factors, such as gender, age, level of education, whether respondents live rurally or in towns, respondents' trust in health and technology, and respondents' fears and hopes regarding AI, do not play a significant role in the majority of cases. CONCLUSIONS: The 3 factors that are most important to the public are, in descending order of importance, (1) that physicians are ultimately responsible for diagnostics and treatment planning, (2) that the AI decision support is explainable, and (3) that the AI system has been tested for discrimination. Public policy on AI system use in health care should give priority to AI systems with these features and ensure that patients are provided with information.


Subject(s)
Artificial Intelligence , Delivery of Health Care , Humans , Surveys and Questionnaires , Technology , Trust
11.
Sensors (Basel) ; 21(12)2021 Jun 08.
Article in English | MEDLINE | ID: mdl-34201036

ABSTRACT

Effective 3D perception of an observed scene greatly enriches the knowledge about the surrounding environment and is crucial to effectively develop high-level applications for various purposes [...].


Subject(s)
Computers , Perception
12.
Sensors (Basel) ; 21(7)2021 Apr 06.
Article in English | MEDLINE | ID: mdl-33917392

ABSTRACT

Automating the inspection of critical infrastructure such as sewer systems will help utilities optimize maintenance and replacement schedules. The current inspection process consists of manual review of video as an operator controls a sewer inspection vehicle remotely. The process is slow, labor-intensive, and expensive, and presents a huge potential for automation. With this work, we address a central component of the next generation of robotic sewer inspection, namely the choice of 3D sensing technology. We investigate three prominent techniques for 3D vision: passive stereo, active stereo, and time-of-flight (ToF). The Intel RealSense D435 camera is chosen as the representative of the first two techniques, whereas the PMD CamBoard pico flexx represents ToF. The 3D reconstruction performance of the sensors is assessed in both a laboratory setup and an outdoor above-ground setup. The acquired point clouds from the sensors are compared with reference 3D models using the cloud-to-mesh metric. The reconstruction performance of the sensors is tested with respect to different illuminance levels and different levels of water in the pipes. The results of the tests show that the ToF-based point cloud from the pico flexx is superior to the output of the active and passive stereo cameras.
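A hedged sketch of the cloud-to-mesh comparison: sample the reference mesh surface densely and, for each captured point, take the distance to the nearest sample. A true point-to-triangle distance is tighter, but this nearest-neighbor approximation conveys the metric; all data here are stand-ins.

```python
import numpy as np
from scipy.spatial import cKDTree

# Approximate cloud-to-mesh distance: for each sensor point, distance to
# the nearest of many points sampled on the reference mesh surface. A true
# point-to-triangle distance is tighter; this NN version shows the idea.
def cloud_to_mesh_distances(cloud_pts: np.ndarray,
                            mesh_surface_samples: np.ndarray) -> np.ndarray:
    tree = cKDTree(mesh_surface_samples)
    dists, _ = tree.query(cloud_pts)       # nearest surface sample per point
    return dists

rng = np.random.default_rng(0)
mesh_samples = rng.uniform(size=(50_000, 3))   # stand-in for a sampled mesh
sensor_cloud = rng.uniform(size=(5_000, 3))    # stand-in for a ToF point cloud

d = cloud_to_mesh_distances(sensor_cloud, mesh_samples)
print(f"mean error {d.mean():.4f}, RMS {np.sqrt((d ** 2).mean()):.4f}")
```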

13.
Entropy (Basel) ; 22(5)2020 May 07.
Article in English | MEDLINE | ID: mdl-33286302

ABSTRACT

Human behaviour analysis has introduced several challenges in various fields, such as applied information theory, affective computing, robotics, biometrics and pattern recognition [...].

14.
Sensors (Basel) ; 20(7)2020 Apr 02.
Article in English | MEDLINE | ID: mdl-32252230

ABSTRACT

Thermal cameras are popular for detection because of their precision in surveillance in the dark and their preservation of privacy. In the era of data-driven problem-solving approaches, manually finding and annotating a large amount of data is inefficient in terms of cost and effort. With the introduction of transfer learning, a dataset covering all characteristics and aspects of the target place is more important than simply having a large dataset. In this work, we studied a large thermal dataset recorded over 20 weeks and identified nine phenomena in it. Moreover, we investigated the impact of each phenomenon on model adaptation in transfer learning. Each phenomenon was investigated separately and in combination, and the performance was analyzed by computing the F1 score, precision, recall, true negative rate, and false negative rate. Furthermore, to underline our investigation, the model trained on our dataset was further tested on publicly available datasets, and encouraging results were obtained. Finally, our dataset was also made publicly available.
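For reference, the reported detection metrics all follow from confusion-matrix counts; a small sketch with made-up counts:

```python
# Detection metrics from confusion-matrix counts (counts are made up).
tp, fp, tn, fn = 420, 35, 910, 60

precision = tp / (tp + fp)          # of predicted positives, how many are real
recall    = tp / (tp + fn)          # of real positives, how many are found
f1        = 2 * precision * recall / (precision + recall)
tnr       = tn / (tn + fp)          # true negative rate
fnr       = fn / (fn + tp)          # false negative rate

print(f"P={precision:.3f} R={recall:.3f} F1={f1:.3f} TNR={tnr:.3f} FNR={fnr:.3f}")
```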

15.
Sensors (Basel) ; 20(6)2020 Mar 11.
Article in English | MEDLINE | ID: mdl-32168888

ABSTRACT

The challenge of getting machines to understand and interact with natural objects is encountered in important areas such as medicine, agriculture, and, in our case, slaughterhouse automation. Recent breakthroughs have enabled the application of Deep Neural Networks (DNN) directly to point clouds, an efficient and natural representation of 3D objects. The potential of these methods has mostly been demonstrated for classification and segmentation tasks involving rigid man-made objects. We present a method, based on the successful PointNet architecture, for learning to regress correct tool placement from human demonstrations, using virtual reality. Our method is applied to a challenging slaughterhouse cutting task, which requires an understanding of the local geometry, including the shape, size, and orientation. We propose an intermediate five-Degree-of-Freedom (DoF) cutting plane representation, a point and a normal vector, which eases the demonstration and learning process. A live experiment is conducted in order to unveil issues and begin to understand the required accuracy. Eleven cuts are rated by an expert, with 8/11 rated as acceptable. The error on the test set is subsequently reduced through the addition of more training data and improvements to the DNN. The result is a reduction in the average translation error from 1.5 cm to 0.8 cm and in the orientation error from 4.59° to 4.48°. The method's generalization capacity is assessed on a similar task from the slaughterhouse and on the very different public LINEMOD dataset for object pose estimation across viewpoints. In both cases, the method shows promising results. Code, datasets, and supplementary materials are available at https://github.com/markpp/PoseFromPointClouds.
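A hedged sketch of the five-DoF cutting-plane representation and the two error measures quoted above: translation error as the distance between predicted and reference points, and orientation error as the angle between the unit normals. The numbers are illustrative, not the paper's data.

```python
import numpy as np

# 5-DoF cutting plane = a 3D point plus a unit normal (a normal has only
# 2 DoF since its length is fixed). The errors below mirror the two measures
# quoted in the abstract; the example numbers are illustrative.
def plane_errors(p_pred, n_pred, p_true, n_true):
    t_err = np.linalg.norm(p_pred - p_true)    # translation, in point units
    n1 = n_pred / np.linalg.norm(n_pred)
    n2 = n_true / np.linalg.norm(n_true)
    cos = np.clip(np.dot(n1, n2), -1.0, 1.0)
    o_err = np.degrees(np.arccos(cos))         # orientation, in degrees
    return t_err, o_err

p_pred, n_pred = np.array([0.102, 0.04, 0.31]), np.array([0.0, 0.08, 1.0])
p_true, n_true = np.array([0.110, 0.04, 0.30]), np.array([0.0, 0.00, 1.0])
t, o = plane_errors(p_pred, n_pred, p_true, n_true)
print(f"translation {t * 100:.1f} cm, orientation {o:.2f} deg")
```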

16.
Disabil Rehabil Assist Technol ; 15(7): 731-745, 2020 10.
Article in English | MEDLINE | ID: mdl-31268368

ABSTRACT

Purpose: The advances in artificial intelligence have started to reach a level where autonomous systems are becoming increasingly popular as a way to aid people in their everyday life. Such intelligent systems may be especially beneficial for people struggling to complete common everyday tasks, such as individuals with movement-related disabilities. The focus of this paper is hence to review recent work in using computer vision for semi-autonomous control of assistive robotic manipulators (ARMs). Methods: Four databases were searched using a block search, yielding 257 papers, which were reduced to 14 after applying various filtering criteria. Each paper was reviewed with focus on the hardware used, the autonomous behaviour achieved using computer vision, and the scheme for semi-autonomous control of the system. Each of the reviewed systems was also characterized by grading its level of autonomy on a pre-defined scale. Conclusions: A recurring issue in the reviewed systems was the inability to handle arbitrary objects. This makes the systems unlikely to perform well outside a controlled environment, such as a lab. This issue could be addressed by having the systems recognize good grasping points or primitive shapes instead of specific pre-defined objects. Most of the reviewed systems also used a rather simple strategy for the semi-autonomous control, where they switch between full manual control and full automatic control. An alternative could be a control scheme relying on adaptive blending, which could provide a more seamless experience for the user (see the sketch below). Implications for rehabilitation: Assistive robotic manipulators (ARMs) have the potential to empower individuals with disabilities by enabling them to complete common everyday tasks. This potential can be further enhanced by making the ARM semi-autonomous in order to actively aid the user. The scheme used for the semi-autonomous control of the ARM is crucial, as it may be a hindrance if done incorrectly. Especially the ability to customize the semi-autonomous behaviour of the ARM is found to be important. Further research is needed to make the final move from the lab to the homes of the users. Most of the reviewed systems suffer from a rather fixed scheme for the semi-autonomous control and an inability to handle arbitrary objects.
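A hedged sketch of the adaptive-blending alternative suggested in the conclusions: a convex combination of user and autonomous velocity commands whose weight shifts toward autonomy as the end-effector nears the target. The ramp schedule is an illustrative assumption, not a scheme from any reviewed system.

```python
import numpy as np

# Adaptive blending of manual and autonomous commands: instead of a hard
# switch, weight the autonomous velocity more as the gripper nears the
# target. The ramp schedule and distances here are illustrative assumptions.
def blended_command(v_user, v_auto, dist_to_target, near=0.05, far=0.30):
    """Convex combination: alpha ramps 0 -> 1 as distance falls far -> near."""
    alpha = np.clip((far - dist_to_target) / (far - near), 0.0, 1.0)
    return alpha * np.asarray(v_auto) + (1 - alpha) * np.asarray(v_user)

v = blended_command(v_user=[0.10, 0.0, 0.0],       # joystick/tongue command
                    v_auto=[0.06, 0.02, -0.01],    # vision-derived command
                    dist_to_target=0.12)           # metres; mostly autonomous
print(v)
```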


Subject(s)
Artificial Intelligence , Automation , Disabled Persons/rehabilitation , Exoskeleton Device , Robotics , Self-Help Devices , Activities of Daily Living , Humans
17.
Sensors (Basel) ; 19(16)2019 Aug 10.
Article in English | MEDLINE | ID: mdl-31405164

ABSTRACT

Efficient and robust evaluation of kernel processing in corn silage is an important indicator for a farmer to determine the quality of their harvested crop. Current methods are cumbersome to conduct and take from hours to days. We present the adoption of two deep learning-based methods for kernel processing prediction without the cumbersome step of separating kernels and stover before capturing images. The methods show that kernels can be detected both with bounding boxes and with pixel-level instance segmentation. Networks were trained on up to 1393 images containing just over 6907 manually annotated kernel instances. Both methods showed promising results despite the challenging setting, with an average precision at an intersection-over-union threshold of 0.5 of 34.0% and 36.1% for the bounding-box and instance segmentation networks, respectively, on a test set consisting of images from three different harvest seasons. Additionally, analysis of the correlation between the Kernel Processing Score (KPS) of annotations and the KPS of model predictions showed a strong correlation, with the best performing at r(15) = 0.88, p = 0.00003. The adoption of deep learning-based object recognition approaches for kernel processing measurement has the potential to shorten the quality assessment process to minutes, greatly aiding a farmer in the strenuous harvesting season.
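The r(15) = 0.88 agreement statistic is a plain Pearson correlation with 15 degrees of freedom, i.e., 17 samples; a hedged sketch with made-up scores:

```python
import numpy as np
from scipy.stats import pearsonr

# Pearson correlation between annotation-derived and model-predicted Kernel
# Processing Scores. r(15) means 15 degrees of freedom, i.e. n = 17 samples.
# The scores below are made up purely to show the computation.
rng = np.random.default_rng(1)
kps_annotated = rng.uniform(50, 95, size=17)
kps_predicted = kps_annotated + rng.normal(0, 5, size=17)  # noisy predictions

r, p = pearsonr(kps_annotated, kps_predicted)
print(f"r(15) = {r:.2f}, p = {p:.6f}")
```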

18.
Sensors (Basel) ; 18(1)2018 Jan 03.
Article in English | MEDLINE | ID: mdl-29301337

ABSTRACT

We present a pattern recognition framework for semantic segmentation of visual structures, that is, multi-class labelling at pixel level, and apply it to the task of segmenting organs in the eviscerated viscera from slaughtered poultry in RGB-D images. This is a step towards replacing the current strenuous manual inspection at poultry processing plants. Features are extracted from feature maps, such as activation maps from a convolutional neural network (CNN). A random forest classifier assigns class probabilities, which are further refined by utilizing context in a conditional random field. The presented method is compatible with both 2D and 3D features, which allows us to explore the value of adding 3D and CNN-derived features. The dataset consists of 604 RGB-D images showing 151 unique sets of eviscerated viscera from four different perspectives. A mean Jaccard index of 78.11% is achieved across the four classes of organs by using features derived from 2D, 3D, and CNN sources, compared to 74.28% using only basic 2D image features.
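A hedged sketch of the per-pixel classification stage: stack the feature maps into one feature vector per pixel and let a random forest assign class probabilities (the conditional-random-field refinement is omitted). Shapes and class count are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Per-pixel organ classification from stacked feature maps; the CRF
# refinement step is omitted. All shapes and labels here are illustrative.
H, W, F, n_classes = 60, 80, 12, 4
feature_maps = np.random.rand(H, W, F)            # 2D/3D/CNN-derived features
labels = np.random.randint(0, n_classes, (H, W))  # stand-in annotations

X = feature_maps.reshape(-1, F)                   # one row per pixel
y = labels.ravel()

rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
proba = rf.predict_proba(X).reshape(H, W, n_classes)  # per-pixel class probs
print(proba.shape)  # (60, 80, 4)
```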

19.
Sensors (Basel) ; 16(11)2016 Nov 18.
Article in English | MEDLINE | ID: mdl-27869730

ABSTRACT

In order to enable robust 24-h monitoring of traffic under changing environmental conditions, it is beneficial to observe the traffic scene using several sensors, preferably from different modalities. To fully benefit from multi-modal sensor output, however, one must fuse the data. This paper introduces a new approach for fusing color RGB and thermal video streams by using not only the information from the videos themselves, but also the available contextual information about the scene. The contextual information is used to judge the quality of a particular modality and guides the fusion of two parallel segmentation pipelines for the RGB and thermal video streams. The potential of the proposed context-aware fusion is demonstrated by extensive tests of quantitative and qualitative characteristics on existing and novel video datasets, benchmarked against competing approaches to multi-modal fusion.
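A hedged sketch of the quality-guided idea: per-modality foreground probability maps are combined with weights derived from contextual quality judgements (e.g., darkness favours thermal). The weighting rule is an illustrative assumption, not the paper's exact scheme.

```python
import numpy as np

# Quality-guided fusion of two segmentation pipelines. Each modality yields
# a foreground probability map; contextual quality scores in [0, 1] weight
# the maps before thresholding. The weighting rule is an illustration only.
def fuse_segmentations(p_rgb, p_thermal, q_rgb, q_thermal, thr=0.5):
    w_rgb = q_rgb / (q_rgb + q_thermal)
    fused = w_rgb * p_rgb + (1.0 - w_rgb) * p_thermal
    return fused > thr

p_rgb = np.random.rand(120, 160)   # stand-in RGB foreground probabilities
p_th = np.random.rand(120, 160)    # stand-in thermal probabilities
# Night-time context: judge RGB unreliable, thermal reliable.
mask = fuse_segmentations(p_rgb, p_th, q_rgb=0.2, q_thermal=0.9)
print(mask.mean())                 # fraction of pixels marked foreground
```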

20.
Med Sci Sports Exerc ; 48(12): 2571-2579, 2016 12.
Article in English | MEDLINE | ID: mdl-27327026

ABSTRACT

PURPOSE: Noninvasive imaging of oxygen uptake may provide a useful tool for the quantification of energy expenditure during human locomotion. A novel thermal imaging method (optical flow) was validated against indirect calorimetry for the estimation of energy expenditure during human walking and running. METHODS: Fourteen endurance-trained subjects completed a discontinuous incremental exercise test on a treadmill. Subjects performed 4-min intervals at 3, 5, and 7 km·h⁻¹ (walking) and at 8, 10, 12, 14, 16, and 18 km·h⁻¹ (running) with 30 s of rest between intervals. Heart rate, gas exchange, and mean accelerations of ankle, thigh, wrist, and hip were measured throughout the exercise test. A thermal camera (30 frames per second) was used to quantify optical flow, calculated as the movements of the limbs relative to the trunk (internal mechanical work) and vertical movement of the trunk (external vertical mechanical work). RESULTS: Heart rate, gross oxygen uptake (mL·kg⁻¹·min⁻¹), and gross and net energy expenditure (J·kg⁻¹·min⁻¹) rose with increasing treadmill velocities, as did optical flow measurements and mean accelerations (g) of ankle, thigh, wrist, and hip. Oxygen uptake was linearly correlated with optical flow across all exercise intensities (R = 0.96, P < 0.0001; V̇O2 [mL·kg⁻¹·min⁻¹] = 7.35 + 9.85 × optical flow [arbitrary units]). Only 3-4 s of camera recording was required to estimate an optical flow value at each velocity. CONCLUSIONS: Optical flow measurements provide an accurate estimation of energy expenditure during horizontal walking and running. The technique offers a novel experimental method of estimating energy expenditure during human locomotion, without the use of interfering equipment attached to the subject.
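The reported regression can be applied directly; a small sketch using the published coefficients (the optical-flow reading is a made-up input):

```python
# Estimate oxygen uptake from the paper's reported linear fit:
# VO2 [mL*kg^-1*min^-1] = 7.35 + 9.85 * optical_flow [arbitrary units].
def vo2_from_optical_flow(optical_flow: float) -> float:
    return 7.35 + 9.85 * optical_flow

flow = 2.4                         # made-up optical-flow reading
vo2 = vo2_from_optical_flow(flow)  # ~31.0 mL per kg per min
print(f"estimated VO2: {vo2:.1f} mL/kg/min")
```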


Subject(s)
Energy Metabolism/physiology , Running/physiology , Thermography/methods , Walking/physiology , Adult , Exercise Test , Female , Heart Rate/physiology , Humans , Male , Optical Phenomena , Oxygen Consumption/physiology , Pulmonary Gas Exchange/physiology