Results 1 - 9 of 9
1.
J Imaging ; 10(3)2024 Mar 13.
Article in English | MEDLINE | ID: mdl-38535150

ABSTRACT

While Siamese object tracking has witnessed significant advancements, its hard real-time behaviour on embedded devices remains inadequately addressed. In many application cases, an embedded implementation should not only have a minimal execution latency, but this latency should ideally also have zero variance, i.e., be predictable. This study aims to address this issue by meticulously analysing real-time predictability across different components of a deep-learning-based video object tracking system. Our detailed experiments not only indicate the superiority of Field-Programmable Gate Array (FPGA) implementations in terms of hard real-time behaviour but also unveil important time predictability bottlenecks. We introduce dedicated hardware accelerators for key processes, focusing on depth-wise cross-correlation and padding operations, utilizing high-level synthesis (HLS). Implemented on a KV260 board, our enhanced tracker not only achieves a speedup by a factor of 6.6 in mean execution time but also significantly improves hard real-time predictability, yielding 11 times less latency variation than our baseline. A subsequent analysis of power consumption reveals our approach's contribution to enhanced power efficiency. These advancements underscore the crucial role of hardware acceleration in realizing time-predictable object tracking on embedded systems, setting new standards for future hardware-software co-design endeavours in this domain.
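
The key operation accelerated here, depth-wise cross-correlation, slides each channel of the exemplar (template) feature map over the corresponding channel of the search-region feature map. A minimal NumPy reference sketch of the operation follows; this is not the authors' HLS implementation, and shapes and names are illustrative:

```python
import numpy as np

def depthwise_xcorr(search: np.ndarray, template: np.ndarray) -> np.ndarray:
    """Depth-wise cross-correlation: each template channel is correlated
    only with the matching search channel (no summation over channels).
    search:   (C, Hs, Ws) feature map of the search region
    template: (C, Ht, Wt) feature map of the exemplar, Ht <= Hs, Wt <= Ws
    returns:  (C, Hs - Ht + 1, Ws - Wt + 1) response map
    """
    c, hs, ws = search.shape
    _, ht, wt = template.shape
    out = np.empty((c, hs - ht + 1, ws - wt + 1), dtype=search.dtype)
    for ch in range(c):                      # one independent 2-D correlation per channel
        for i in range(hs - ht + 1):
            for j in range(ws - wt + 1):
                window = search[ch, i:i + ht, j:j + wt]
                out[ch, i, j] = np.sum(window * template[ch])
    return out
```

Because the channel loop is fully independent, the operation maps naturally onto per-channel parallel hardware, which is presumably what makes it an attractive target for a dedicated FPGA accelerator.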

2.
J Gen Intern Med ; 37(6): 1408-1414, 2022 05.
Article in English | MEDLINE | ID: mdl-34031854

ABSTRACT

BACKGROUND: Physicians' gaze towards their patients may affect patients' trust in them. This is especially relevant considering recent developments, including the increasing use of Electronic Health Records, which affect physicians' gaze behavior. Moreover, socially anxious patients' trust in particular may be affected by the gaze of the physician. OBJECTIVE: We aimed to evaluate whether physicians' gaze towards the face of their patient influenced patient trust and to assess whether this relation was stronger for socially anxious patients. We furthermore explored the relation between physicians' gaze and patients' perception of physician empathy and patients' distress. DESIGN: This was an observational study using eye-tracking glasses and questionnaires. PARTICIPANTS: One hundred patients and 16 residents, who had not met before, participated at an internal medicine out-patient clinic. MEASURES: Physicians wore eye-tracking glasses during medical consultations to assess their gaze towards patients' faces. Questionnaires were used to assess patient outcomes. Multilevel analyses were conducted to assess the relation between physicians' relative face gaze time and trust, while correcting for patient background characteristics and including social anxiety as a moderator. Analyses were then repeated with perceived empathy and distress as outcomes. RESULTS: More face gaze towards patients was associated with lower trust, after correction for gender, age, education level, presence of caregivers, and social anxiety (β=-0.17, P=0.048). There was no moderation effect of social anxiety, nor a relation between face gaze and perceived empathy or distress. CONCLUSIONS: These results challenge the notion that more physician gaze is by definition beneficial for the physician-patient relationship. For example, the extent of conversation about emotional issues might explain our findings: more emotional talk could be associated with more intense gazing and feelings of discomfort in the patient. To better understand the relation between physician gaze and patient outcomes, future studies should assess bidirectional face gaze during consultations.
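
For readers wanting to reproduce this style of analysis, a multilevel (mixed-effects) regression with a random intercept per physician and a gaze-by-anxiety interaction for the moderation test can be sketched as below. All variable and column names are hypothetical stand-ins for the study's actual data:

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per patient; column names are illustrative assumptions.
df = pd.read_csv("consultations.csv")

# Random intercept per physician (patients are nested within residents);
# the face_gaze:social_anxiety interaction tests the moderation effect.
model = smf.mixedlm(
    "trust ~ face_gaze * social_anxiety + gender + age + education + caregivers",
    data=df,
    groups=df["physician"],
)
result = model.fit()
print(result.summary())
```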


Subject(s)
Physicians, Trust, Communication, Empathy, Humans, Physician-Patient Relations
3.
J Imaging ; 7(4)2021 Apr 01.
Article in English | MEDLINE | ID: mdl-34460514

ABSTRACT

Object detection models are usually trained and evaluated on highly complicated, challenging academic datasets, which results in deep networks requiring large amounts of computation. However, many operational use-cases consist of more constrained situations: they have a limited number of classes to be detected, less intra-class variance, less lighting and background variance, constrained or even fixed camera viewpoints, etc. In these cases, we hypothesize that smaller networks could be used without deteriorating the accuracy. However, there are multiple reasons why this does not happen in practice. Firstly, overparameterized networks tend to learn better, and secondly, transfer learning is usually used to reduce the necessary amount of training data. In this paper, we investigate how much we can reduce the computational complexity of a standard object detection network in such constrained object detection problems. As a case study, we focus on a well-known single-shot object detector, YoloV2, and combine three different techniques to reduce the computational complexity of the model without reducing its accuracy on our target dataset. To investigate the influence of the problem complexity, we compare two datasets: a prototypical academic one (Pascal VOC) and a real-life operational one (LWIR person detection). The three optimization steps we exploited are: swapping all the convolutions for depth-wise separable convolutions, pruning, and weight quantization. The results of our case study indeed substantiate our hypothesis that the more constrained a problem is, the more the network can be optimized. On the constrained operational dataset, combining these optimization techniques allowed us to reduce the computational complexity by a factor of 349, compared to only a factor of 9.8 on the academic dataset. When running a benchmark on an Nvidia Jetson AGX Xavier, our fastest model runs more than 15 times faster than the original YoloV2 model, whilst increasing the accuracy by 5% Average Precision (AP).
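
The first optimization step can be made concrete with a small PyTorch sketch; layer widths and the activation choice are illustrative (YoloV2-style Darknet networks typically use leaky ReLU), not the paper's exact configuration:

```python
import torch.nn as nn

def depthwise_separable(c_in: int, c_out: int, k: int = 3) -> nn.Sequential:
    """Replace a standard k x k convolution with a depth-wise convolution
    (one filter per input channel, groups=c_in) followed by a 1x1
    point-wise convolution. Per output position, the multiply-add cost
    drops from c_in*c_out*k*k to c_in*k*k + c_in*c_out."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_in, k, padding=k // 2, groups=c_in, bias=False),
        nn.Conv2d(c_in, c_out, 1, bias=False),
        nn.BatchNorm2d(c_out),
        nn.LeakyReLU(0.1, inplace=True),
    )
```

Pruning and weight quantization would then be applied on top of such a network; PyTorch exposes generic utilities for both (e.g., torch.nn.utils.prune), though the exact schemes used in the paper are not specified here.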

4.
Sensors (Basel) ; 20(23)2020 Dec 03.
Article in English | MEDLINE | ID: mdl-33287290

ABSTRACT

The extraction of permanent structures (such as walls, floors, and ceilings) is an important step in the reconstruction of building interiors from point clouds. These permanent structures are, in general, assumed to be planar. However, point clouds from building interiors often also contain clutter with planar surfaces such as furniture, cabinets, etc. Hence, not all planar surfaces that are extracted belong to permanent structures. This is undesirable as it can result in geometric errors in the reconstruction. Therefore, it is important that reconstruction methods can correctly detect and extract all permanent structures even in the presence of such clutter. We propose to perform semantic scene completion using deep learning, prior to the extraction of permanent structures to improve the reconstruction results. For this, we started from the ScanComplete network proposed by Dai et al. We adapted the network to use a different input representation to eliminate the need for scanning trajectory information as this is not always available. Furthermore, we optimized the architecture to make inference and training significantly faster. To further improve the results of the network, we created a more realistic dataset based on real-life scans from building interiors. The experimental results show that our approach significantly improves the extraction of the permanent structures from both synthetically generated as well as real-life point clouds, thereby improving the overall reconstruction results.
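
To see why planar clutter is a problem for this kind of extraction, consider the standard RANSAC approach: it returns the dominant planes in the cloud regardless of whether they belong to walls or to furniture. A hedged sketch using Open3D's built-in plane segmentation (the file name, thresholds, and plane count are placeholders, and this is not the authors' pipeline):

```python
import open3d as o3d

# Load an interior scan; the path is a placeholder.
pcd = o3d.io.read_point_cloud("interior_scan.ply")

# Iteratively extract the dominant planes. Without semantic knowledge,
# a flat cabinet top is just as valid a candidate as a wall or floor.
planes = []
rest = pcd
for _ in range(5):
    model, inliers = rest.segment_plane(distance_threshold=0.02,
                                        ransac_n=3,
                                        num_iterations=1000)
    planes.append((model, rest.select_by_index(inliers)))
    rest = rest.select_by_index(inliers, invert=True)
```

Running semantic scene completion first lets such a plane-extraction stage operate only on points labeled as permanent structure.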

5.
Chemosphere ; 252: 126477, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32222523

ABSTRACT

Digestate treatment techniques have recently been proposed as a strategy to increase the ultimate biogas yield from dairy manure and to improve the digestate quality as an organic fertilizer. These studies, however, rarely take trace element (TE) and nutrient partitioning into account. This study focusses on ozone treatment (5-40 g O3 kg⁻¹ Total Solids (TS)) as a digestate treatment technique to control the concentration of TE and nutrients in the liquid phase of the digestate. Controlling the TE and nutrient concentrations in the liquid and solid digestate can improve the agronomic value of dairy manure digestate. The ozone concentration of the gas stream entering the reactor was 48.53 g O3/Nm³, or 3.4% w/w O3 in O2 gas. The experiments were repeated using pure oxygen gas to investigate its influence. The results from ozonation and oxygenation of the dairy manure digestates revealed that O3 treatment up to 40 g O3 kg⁻¹ TS did not have a more pronounced effect on the biochemical parameters than supplementation of pure O2. Ozonation of the digestate and the supernatant showed that the TE concentration in the liquid phase followed a parabolic profile. The initial increase in this parabolic profile was explained by the release of TE from the organic matter into the supernatant, raising the TE concentration; the subsequent decrease was caused by precipitation of TE as hydroxides and sulfides as the pH and sulphur concentrations increased.
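
As a quick consistency check on the reported gas-phase figures (assuming the stream is essentially O2 at normal conditions, density about 1.43 kg/Nm³), the two stated concentrations agree:

```latex
\rho_{\mathrm{O_2}} \approx \frac{32\ \mathrm{g\,mol^{-1}}}{22.4\ \mathrm{L\,mol^{-1}}}
\approx 1.43\ \mathrm{kg\,Nm^{-3}},
\qquad
w_{\mathrm{O_3}} \approx \frac{48.53\ \mathrm{g\,Nm^{-3}}}{1430\ \mathrm{g\,Nm^{-3}}}
\approx 3.4\,\%\ \text{w/w}.
```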


Subject(s)
Manure, Micronutrients/chemistry, Ozone/chemistry, Trace Elements/chemistry, Anaerobiosis, Animals, Biofuels, Fertilizers
6.
Sensors (Basel) ; 19(4)2019 Feb 19.
Article in English | MEDLINE | ID: mdl-30791476

ABSTRACT

In this paper, we investigate whether fusing depth information on top of normal RGB data for camera-based object detection can help to increase the performance of current state-of-the-art single-shot detection networks. Indeed, depth information is easily acquired using depth cameras such as the Kinect, or using stereo setups. We investigate the optimal manner to perform this sensor fusion, with a special focus on lightweight single-pass convolutional neural network (CNN) architectures, enabling real-time processing on limited hardware. For this, we implement a network architecture allowing us to parameterize at which network layer both information sources are fused together. We performed exhaustive experiments to determine the optimal fusion point in the network, from which we can conclude that fusing in the mid-to-late layers provides the best results. Our best fusion models significantly outperform the baseline RGB network in both accuracy and localization of the detections.
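
The idea of a parameterized fusion point can be sketched as a toy two-stream backbone whose merge layer is a constructor argument. Layer counts, channel widths, and the 1x1-convolution merge are illustrative assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

def conv_block(c_in: int, c_out: int) -> nn.Sequential:
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1),
                         nn.ReLU(inplace=True))

class FusionBackbone(nn.Module):
    """Two parallel streams (RGB and depth) merged at layer `fuse_at`,
    mirroring the notion of a parameterizable fusion point."""
    def __init__(self, fuse_at: int, n_layers: int = 6, width: int = 32):
        super().__init__()
        assert 1 <= fuse_at < n_layers
        self.rgb_stem = conv_block(3, width)
        self.depth_stem = conv_block(1, width)
        # Parallel layers before the fusion point, shared layers after it.
        self.rgb_pre = nn.ModuleList(conv_block(width, width) for _ in range(fuse_at - 1))
        self.depth_pre = nn.ModuleList(conv_block(width, width) for _ in range(fuse_at - 1))
        self.fuse = nn.Conv2d(2 * width, width, 1)   # merge the concatenated streams
        self.post = nn.ModuleList(conv_block(width, width) for _ in range(n_layers - fuse_at))

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        r, d = self.rgb_stem(rgb), self.depth_stem(depth)
        for lr, ld in zip(self.rgb_pre, self.depth_pre):
            r, d = lr(r), ld(d)
        x = self.fuse(torch.cat([r, d], dim=1))
        for layer in self.post:
            x = layer(x)
        return x
```

Sweeping `fuse_at` over the available layers is then a direct way to run the kind of fusion-point experiment the abstract describes.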

7.
Article in English | MEDLINE | ID: mdl-26737424

ABSTRACT

Due to the rapidly aging population, developing automated home care systems is a very important step in taking care of elderly people. This will enable us to automatically monitor the health of senior citizens in their own living environment and prevent problems before they happen. One of the challenging tasks is to actively monitor the walking habits of elderly people, who alternate between different walking aids, and to combine this with automated fall risk assessment systems. We propose a camera-based system that uses object categorization techniques to robustly detect walking aids, like a walker, in order to improve the classification of the fall risk. By automatically integrating application-specific scene knowledge, such as the camera position and the walker type used, we succeed in detecting walking aids within a single frame with an accuracy of 68% for trajectory A and 38% for trajectory B. Furthermore, compared to current state-of-the-art detection systems, we use a rather limited set of training data to achieve this accuracy and thus create a system that is easily adaptable for other applications. Moreover, we applied spatial constraints between detections to optimize the object detection output and to limit the number of false positive detections. Finally, we evaluate the output per walking sequence, leading to a 92.3% correct classification rate of walking sequences. It can be noted that adapting this approach to other walking aids, such as a walking cane, is quite straightforward and opens up the door for many future applications.
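
One simple form of such a spatial constraint is to keep only walker detections that are plausibly attached to the tracked person. The sketch below gates detections by the distance between bottom-center points; the threshold, box format, and function name are illustrative assumptions, not the paper's exact rule:

```python
def filter_by_spatial_constraint(walker_boxes, person_box, max_dist=120):
    """Keep walker detections whose bottom-center lies within max_dist
    pixels of the person's bottom-center (both stand roughly on the
    same ground plane). Boxes are (x1, y1, x2, y2) in image coordinates."""
    def bottom_center(box):
        x1, y1, x2, y2 = box
        return ((x1 + x2) / 2.0, y2)

    px, py = bottom_center(person_box)
    kept = []
    for box in walker_boxes:
        bx, by = bottom_center(box)
        if ((bx - px) ** 2 + (by - py) ** 2) ** 0.5 <= max_dist:
            kept.append(box)
    return kept
```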


Subject(s)
Monitoring, Physiologic/methods, Video Recording, Walkers, Walking/classification, Aged, Female, Humans
8.
Article in English | MEDLINE | ID: mdl-26737890

ABSTRACT

More than thirty percent of persons over 65 years fall at least once a year and are often not able to get up again. The lack of timely aid after such a fall incident can lead to severe complications. This timely aid can, however, be assured by a camera-based fall detection system that triggers an alarm when a fall occurs. Most algorithms described in the literature extract the fall features from the biggest object detected using background subtraction. In this paper, we compare the performance of our state-of-the-art fall detection algorithm in three configurations: using only background subtraction, using a particle filter to track the person, and using a hybrid method in which the particle filter is only used to enhance the background subtraction and not for the feature extraction. We tested this using our simulation data set containing reenactments of real-life falls. The comparison shows that the hybrid method significantly increases the sensitivity and robustness of the fall detection algorithm, resulting in a sensitivity of 76.1% and a PPV of 41.2%.
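
A minimal OpenCV sketch of the hybrid idea, in which a tracker's bounding box only gates where the background-subtraction mask is trusted, is given below. The particle filter itself is assumed to exist elsewhere, and the box format and parameters are illustrative:

```python
import cv2
import numpy as np

# Background model used for feature extraction in all configurations.
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

def foreground_in_track(frame: np.ndarray, track_box) -> np.ndarray:
    """track_box = (x, y, w, h) from a person tracker (e.g., a particle
    filter, not implemented here). Returns the foreground mask zeroed
    outside the tracked region, so features still come from the mask."""
    mask = subtractor.apply(frame)
    mask[mask == 127] = 0            # MOG2 marks shadow pixels as 127; drop them
    x, y, w, h = track_box
    gated = np.zeros_like(mask)
    gated[y:y + h, x:x + w] = mask[y:y + h, x:x + w]
    return gated
```

The design point is that the tracker suppresses spurious foreground blobs elsewhere in the room, while the fall features themselves remain mask-based, matching the hybrid configuration described in the abstract.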


Subject(s)
Accidental Falls, Filtration/instrumentation, Photography/instrumentation, Aged, Algorithms, Humans
9.
BMC Geriatr ; 13: 103, 2013 Oct 04.
Article in English | MEDLINE | ID: mdl-24090211

ABSTRACT

BACKGROUND: For prevention and detection of falls, it is essential to unravel the way in which older people fall. This study aims to provide a description of video-based real-life fall events and to examine real-life falls using the classification system by Noury and colleagues, which divides a fall into four phases (the prefall, critical, postfall and recovery phase). METHODS: Observational study of three older persons at high risk for falls, residing in assisted living or residential care facilities: a camera system was installed in each participant's room covering all areas, using a centralized PC platform in combination with standard Internet Protocol (IP) cameras. After a fall, two independent researchers analyzed the recorded images using the camera position with the clearest viewpoint. RESULTS: Over 17 months, a total of 30 falls occurred, of which 26 were recorded on camera. Most falls happened in the morning or evening (62%), when no other persons were present (88%). Participants mainly fell backward (initial fall direction and landing configuration) on the pelvis or torso, and none could get up unaided. In cases where a call alarm was used (54%), an average of 70 seconds (SD=64; range 15-224) was needed to call for help. Staff responded to the call after an average of eight minutes (SD=8.4; range 2-33). Mean time on the ground was 28 minutes (SD=25.4; range 2-59) without using a call alarm, compared to 11 minutes (SD=9.2; range 3-38) when using a call alarm (p=0.445). The real-life falls were comparable with the prefall and recovery phases of Noury's classification system. The critical phase, however, showed a prolonged duration in all falls. We suggest distinguishing two separate phases: a prolonged loss-of-balance phase and the actual descending phase after failure to recover balance, resulting in the impact of the body on the ground. In contrast to the theoretical description, the postfall phase was not typically characterized by inactivity; this depended on the individual. CONCLUSIONS: This study contributes to a better understanding of the fall process in private areas of assisted living and residential care settings in older persons at high risk for falls.


Subject(s)
Accidental Falls, Activities of Daily Living/psychology, Frail Elderly/psychology, Video Recording/methods, Accidental Falls/prevention & control, Aged, 80 and over, Female, Humans, Incidence, Postural Balance/physiology, Risk Factors