Results 1 - 20 of 92
1.
Opt Express ; 32(6): 10302-10316, 2024 Mar 11.
Article in English | MEDLINE | ID: mdl-38571246

ABSTRACT

Optical coherence tomography (OCT) has become one of the most important diagnostic tools in various fields of medicine, as well as in numerous industrial applications. The most important characteristic of OCT is its high resolution, both in depth and in the transverse direction. Together with information on tissue density, OCT offers highly precise information on tissue geometry. However, the detectability of small and low-intensity features in OCT scans is limited by the presence of speckle noise. In this paper, we present a new volumetric method for noise removal in OCT volumes, which aims at improving the quality of rendered 3D volumes. In order to remove noise uniformly while preserving important details, the proposed algorithm simultaneously observes the estimated amount of noise and a sharpness measure, and iteratively enhances the volume until it reaches the required quality. We evaluate the proposed method using four quality measures, as well as visually, by assessing the visualization of OCT volumes on an auto-stereoscopic 3D screen. The results show that the proposed method outperforms reference methods both visually and in terms of objective measures.
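The iterative noise/sharpness trade-off described above can be sketched in miniature. This is a hypothetical 1-D toy, not the paper's actual volumetric algorithm; the noise and sharpness proxies below are illustrative assumptions:

```python
import statistics

def estimate_noise(signal):
    # Proxy for speckle level: spread of first differences.
    diffs = [b - a for a, b in zip(signal, signal[1:])]
    return statistics.pstdev(diffs)

def sharpness(signal):
    # Proxy for edge sharpness: largest absolute first difference.
    return max(abs(b - a) for a, b in zip(signal, signal[1:]))

def smooth(signal):
    # One pass of a 3-tap moving average (endpoints kept as-is).
    out = list(signal)
    for i in range(1, len(signal) - 1):
        out[i] = (signal[i - 1] + signal[i] + signal[i + 1]) / 3.0
    return out

def iterative_denoise(signal, noise_target, min_sharpness, max_iters=50):
    # Smooth repeatedly, stopping once the noise estimate reaches the
    # target or further smoothing would erode too much edge detail.
    for _ in range(max_iters):
        if estimate_noise(signal) <= noise_target:
            break
        candidate = smooth(signal)
        if sharpness(candidate) < min_sharpness:
            break
        signal = candidate
    return signal
```

The two stopping conditions mirror the paper's idea of observing noise and sharpness simultaneously rather than fixing the number of filtering iterations in advance.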

2.
Biomed Opt Express ; 15(2): 641-655, 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38404312

ABSTRACT

An adequate supply of oxygen-rich blood is vital to maintain cell homeostasis, cellular metabolism, and overall tissue health. While classical methods of measuring tissue ischemia are often invasive and localized, and require skin contact or contrast agents, spectral imaging shows promise as a non-invasive, wide-field, and contrast-free approach. We evaluate three novel reflectance-based spectral indices from the 460-840 nm spectral range. With the aim of enabling real-time visualization of tissue ischemia, information is extracted from only two to three spectral bands. Video-rate spectral data were acquired from arm occlusion experiments in 27 healthy volunteers. The performance of the indices was evaluated against binary Support Vector Machine (SVM) classification of healthy versus ischemic skin tissue, two other indices from the literature, and tissue oxygenation estimated using spectral unmixing. Robustness was tested by evaluating these under various lighting conditions and on both the dorsal and palmar sides of the hand. A novel index with real-time capabilities, using reflectance information only from 547 nm and 556 nm, achieves an average classification accuracy of 88.48%, compared to 92.65% for an SVM trained on all available wavelengths. Furthermore, the index has a higher accuracy than the reference methods, and its time dynamics compare well with the expected clinical responses. This holds promise for robust real-time detection of tissue ischemia, possibly contributing to improved patient care and clinical outcomes.
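What makes a two-band index real-time capable is that it reduces to a few arithmetic operations per pixel. The normalized-difference form below is a common construction and purely an assumption; the paper's actual 547/556 nm index definition is not reproduced here, and the threshold is hypothetical:

```python
def two_band_index(r_547, r_556):
    # Hypothetical normalized-difference index from two reflectance
    # bands; the small epsilon guards against division by zero.
    return (r_547 - r_556) / (r_547 + r_556 + 1e-9)

def classify_ischemia(index_value, threshold):
    # Binary decision against an assumed, pre-calibrated threshold.
    return index_value < threshold
```

Because only two bands are read per pixel, this kind of index can run at video rate on full frames, unlike an SVM evaluated over all available wavelengths.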

3.
Sensors (Basel) ; 23(24)2023 Dec 12.
Article in English | MEDLINE | ID: mdl-38139618

ABSTRACT

In this paper, we propose a new cooperative method that improves the accuracy of Turn Movement Count (TMC) under challenging conditions by introducing contextual observations from the surrounding areas. The proposed method focuses on the correct identification of movements in conditions where current methods have difficulties. Existing vision-based TMC systems are limited under heavy traffic conditions. The main problem for most existing methods is occlusion between vehicles, which prevents correct detection and tracking of vehicles through the entire intersection and hinders the assessment of each vehicle's entry and exit points, leading to incorrectly assigned movements. The proposed method overcomes this limitation by sharing information with other observation systems located at neighboring intersections. The shared information is used in a cooperative scheme to infer the missing data, thereby recovering movements that would otherwise not be counted or would be miscounted. Experimental evaluation of the system shows a clear improvement over related reference methods.

4.
Sensors (Basel) ; 23(20)2023 Oct 17.
Article in English | MEDLINE | ID: mdl-37896600

ABSTRACT

High dynamic range (HDR) imaging technology is increasingly being used in automated driving systems (ADS) to improve the safety of traffic participants in scenes with strong differences in illumination. Therefore, a combination of HDR video (that is, video with details in all illumination regimes) and (HDR) object perception techniques that can deal with this variety in illumination is highly desirable. Although progress has been made in both HDR imaging solutions and object detection algorithms in recent years, they have progressed independently of each other. This has led to a situation in which object detection algorithms are typically designed and constantly improved to operate on 8-bit-per-channel content. As a result, these algorithms are not ideally suited for HDR data processing, which natively encodes to a higher bit depth (12 or 16 bits per channel). In this paper, we present and evaluate two novel convolutional neural network (CNN) architectures that intelligently convert high-bit-depth HDR images into 8-bit images. We attempt to optimize reconstruction quality by focusing on ADS object detection quality. The first research novelty is jointly performing tone mapping and demosaicing while also successfully suppressing noise and demosaicing artifacts. The first CNN performs tone mapping with noise suppression on a full-color HDR input, while the second performs joint demosaicing and tone mapping with noise suppression on a raw HDR input. The focus is to increase the detectability of traffic-related objects in the reconstructed 8-bit content, while ensuring that the realism of the standard dynamic range (SDR) content in diverse conditions is preserved. The second research novelty is that, for the first time to the best of our knowledge, a thorough comparative analysis against state-of-the-art tone-mapping and demosaicing methods is performed with respect to ADS object detection accuracy on traffic-related content that abounds with diverse challenging (i.e., boundary-case) scenes. The evaluation results show that the two proposed networks achieve better object detection accuracy and image quality than both SDR content and content obtained with state-of-the-art tone-mapping and demosaicing algorithms.
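For orientation, the baseline operation that the two learned CNNs aim to improve on is a global conversion from high bit-depth content to 8-bit. A minimal sketch of such a conversion follows; this is a plain global gamma operator for illustration, not either of the proposed networks:

```python
def gamma_tone_map(hdr_pixels, bit_depth_in=12, gamma=2.2):
    # Global tone mapping: normalize 12-bit values, apply a gamma
    # curve to lift midtones, and quantize to 8-bit.
    max_in = (1 << bit_depth_in) - 1
    out = []
    for v in hdr_pixels:
        norm = v / max_in
        out.append(round(255 * norm ** (1.0 / gamma)))
    return out
```

A fixed global curve like this cannot adapt to scene content, which is exactly the shortfall that motivates learning the mapping with a CNN optimized for detection quality.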

5.
Appl Opt ; 62(17): F8-F13, 2023 Jun 10.
Article in English | MEDLINE | ID: mdl-37707125

ABSTRACT

One of the crucial factors in achieving a higher level of autonomy in self-driving vehicles is a sensor capable of acquiring accurate and robust information about the environment and other participants in traffic. In the past few decades, various types of sensors have been used for this purpose, such as cameras registering the visible, near-infrared, and thermal parts of the spectrum, as well as radars, ultrasonic sensors, and lidar. Due to their long range, high accuracy, and robustness, lidars are gaining popularity in numerous applications. However, in many cases, their spatial resolution does not meet the requirements of the application. To solve this problem, we propose a strategy for better utilization of the available points. In particular, we propose an adaptive paradigm that scans objects of interest with increased resolution, while the background is scanned at a lower point density. Initial region proposals are generated using an object detector that relies on an auxiliary camera. Such a strategy improves the quality of the object representation while keeping the total number of projected points unchanged. The proposed method shows improvements over regular sampling in terms of the quality of the upsampled point clouds.

6.
Sensors (Basel) ; 23(12)2023 Jun 20.
Article in English | MEDLINE | ID: mdl-37420931

ABSTRACT

Intelligent driver assistance systems are becoming increasingly popular in modern passenger vehicles. A crucial component of intelligent vehicles is the ability to detect vulnerable road users (VRUs) for an early and safe response. However, standard imaging sensors perform poorly in conditions of strong illumination contrast, such as approaching a tunnel or at night, due to their dynamic range limitations. In this paper, we focus on the use of high-dynamic-range (HDR) imaging sensors in vehicle perception systems and the subsequent need for tone mapping of the acquired data into a standard 8-bit representation. To our knowledge, no previous studies have evaluated the impact of tone mapping on object detection performance. We investigate the potential for optimizing HDR tone mapping to achieve a natural image appearance while facilitating object detection of state-of-the-art detectors designed for standard dynamic range (SDR) images. Our proposed approach relies on a lightweight convolutional neural network (CNN) that tone maps HDR video frames into a standard 8-bit representation. We introduce a novel training approach called detection-informed tone mapping (DI-TM) and evaluate its performance with respect to its effectiveness and robustness in various scene conditions, as well as its performance relative to an existing state-of-the-art tone mapping method. The results show that the proposed DI-TM method achieves the best results in terms of detection performance metrics in challenging dynamic range conditions, while both methods perform well in typical, non-challenging conditions. In challenging conditions, our method improves the detection F2 score by 13%. Compared to SDR images, the increase in F2 score is 49%.
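The F2 score reported above is the β=2 case of the F-beta measure, which weights recall twice as heavily as precision, matching the safety-critical preference for not missing road users. A minimal computation from raw detection counts:

```python
def f_beta(tp, fp, fn, beta=2.0):
    # F-beta from true positives, false positives, and false
    # negatives; beta=2 (the F2 score) emphasizes recall.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```

When recall exceeds precision, F2 exceeds F1 on the same counts, which is why a recall improvement moves the reported metric so strongly.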

7.
Article in English | MEDLINE | ID: mdl-37195853

ABSTRACT

In this article, we propose a novel bilayer low-rankness measure and two models based on it to recover a low-rank (LR) tensor. The global low-rankness of the underlying tensor is first encoded by LR matrix factorizations (MFs) of the all-mode matricizations, which can exploit multi-orientational spectral low-rankness. Presumably, the factor matrices of the all-mode decomposition are LR, since a local low-rankness property exists in the within-mode correlation. To describe the refined local LR structures of the factors/subspace in the decomposed subspace, a new low-rankness insight into the subspace is introduced: a double nuclear norm scheme is designed to explore the so-called second-layer low-rankness. By simultaneously representing the bilayer low-rankness of all modes of the underlying tensor, the proposed methods aim to model multi-orientational correlations for arbitrary N-way (N ≥ 3) tensors. A block successive upper-bound minimization (BSUM) algorithm is designed to solve the optimization problem. Subsequence convergence of our algorithms can be established, and the iterates generated by our algorithms converge to coordinatewise minimizers under some mild conditions. Experiments on several types of public datasets show that our algorithm can recover a variety of LR tensors from significantly fewer samples than its counterparts.

8.
Sensors (Basel) ; 22(22)2022 Nov 09.
Article in English | MEDLINE | ID: mdl-36433238

ABSTRACT

Pedestrian detection is an important research domain due to its relevance for autonomous and assisted driving, as well as its applications in security and industrial automation. Often, more than one type of sensor is used to cover a broader range of operating conditions than a single-sensor system would allow. However, it remains difficult to make pedestrian detection systems perform well in highly dynamic environments, often requiring extensive retraining of the algorithms for specific conditions to reach satisfactory accuracy, which, in turn, requires large, annotated datasets captured in these conditions. In this paper, we propose a probabilistic decision-level sensor fusion method based on naive Bayes to improve the efficiency of the system by combining the output of available pedestrian detectors for colour and thermal images without retraining. The results in this paper, obtained through long-term experiments, demonstrate the efficacy of our technique, its ability to work with non-registered images, and its adaptability to cope with situations when one of the sensors fails. The results also show that our proposed technique improves the overall accuracy of the system and could be very useful in several applications.
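Decision-level fusion with naive Bayes reduces, under the conditional-independence assumption, to multiplying per-sensor likelihoods and normalizing. The sketch below uses assumed per-sensor hit rates and a simplified hit/miss interface; both are illustrative, not the paper's actual detector outputs:

```python
def naive_bayes_fusion(observations, likelihoods, prior=0.5):
    # observations: dict sensor -> 'hit' or 'miss' from that detector.
    # likelihoods: dict sensor -> (P(hit | pedestrian), P(hit | background)),
    # assumed known from validation data.
    p_ped = prior
    p_bg = 1.0 - prior
    for sensor, outcome in observations.items():
        p_hit_ped, p_hit_bg = likelihoods[sensor]
        if outcome == 'hit':
            p_ped *= p_hit_ped
            p_bg *= p_hit_bg
        else:
            p_ped *= 1.0 - p_hit_ped
            p_bg *= 1.0 - p_hit_bg
    # Posterior probability of a pedestrian given all observations.
    return p_ped / (p_ped + p_bg)
```

Dropping a failed sensor from the observations dict simply removes its factor from the product, which mirrors the graceful degradation under sensor failure described above.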


Subjects
Automobile Driving, Pedestrians, Humans, Bayes Theorem, Color, Algorithms
9.
Sensors (Basel) ; 22(20)2022 Oct 16.
Article in English | MEDLINE | ID: mdl-36298202

ABSTRACT

Multi-exposure image fusion (MEF) methods for high dynamic range (HDR) imaging suffer from ghosting artifacts when dealing with moving objects in dynamic scenes. State-of-the-art methods use optical flow to align low dynamic range (LDR) images before merging, which introduces distortion into the aligned LDR images through inaccurate motion estimation under large motion and occlusion. In place of pre-alignment, attention-based methods calculate the correlation between the reference LDR image and non-reference LDR images, thus excluding misaligned regions in the LDR images. Nevertheless, they also exclude saturated details at the same time. Taking advantage of both alignment-based and attention-based methods, we propose an efficient Deep HDR Deghosting Fusion Network (DDFNet) guided by optical flow and image correlation attentions. Specifically, DDFNet estimates the optical flow of the LDR images with a motion estimation module and encodes that optical flow as a flow feature. Additionally, it extracts correlation features between the reference LDR and the other non-reference LDR images. The optical flow and correlation features are employed to adaptively combine information from the LDR inputs in an attention-based fusion module. Following the merging of features, a decoder composed of dense networks reconstructs the HDR image without ghosting. Experimental results indicate that the proposed DDFNet achieves state-of-the-art image fusion performance on several public datasets.


Subjects
Artifacts, Motion (Physics)
10.
Article in English | MEDLINE | ID: mdl-35839200

ABSTRACT

With the recent development of the joint classification of hyperspectral image (HSI) and light detection and ranging (LiDAR) data, deep learning methods have achieved promising performance owing to their ability to extract local semantic features. Nonetheless, the limited receptive field restricts the ability of convolutional neural networks (CNNs) to represent global contextual and sequential attributes, while vision transformers (ViTs) lose local semantic information. Focusing on these issues, we propose a fractional Fourier image transformer (FrIT) as a backbone network to extract both global and local contexts effectively. In the proposed FrIT framework, HSI and LiDAR data are first fused at the pixel level, and both multisource and HSI feature extractors are utilized to capture local contexts. Then, a plug-and-play image transformer, FrIT, is explored for global contextual and sequential feature extraction. Unlike the attention-based representations in the classic ViT, FrIT is capable of speeding up transformer architectures massively and learning valuable contextual information effectively and efficiently. More significantly, to reduce redundancy and the loss of information from shallow to deep layers, FrIT is devised to connect contextual features in multiple fractional domains. Five HSI and LiDAR scenes, including one newly labeled benchmark, are utilized for extensive experiments, showing improvement over both CNNs and ViTs.

11.
Sensors (Basel) ; 22(12)2022 Jun 10.
Article in English | MEDLINE | ID: mdl-35746199

ABSTRACT

Dual cameras with visible-thermal multispectral pairs provide both visual and thermal appearance, thereby enabling detecting pedestrians around the clock in various conditions and applications, including autonomous driving and intelligent transportation systems. However, due to the greatly varying real-world scenarios, the performance of a detector trained on a source dataset might change dramatically when evaluated on another dataset. A large amount of training data is often necessary to guarantee the detection performance in a new scenario. Typically, human annotators need to conduct the data labeling work, which is time-consuming, labor-intensive and unscalable. To overcome the problem, we propose a novel unsupervised transfer learning framework for multispectral pedestrian detection, which adapts a multispectral pedestrian detector to the target domain based on pseudo training labels. In particular, auxiliary detectors are utilized and different label fusion strategies are introduced according to the estimated environmental illumination level. Intermediate domain images are generated by translating the source images to mimic the target ones, acting as a better starting point for the parameter update of the pedestrian detector. The experimental results on the KAIST and FLIR ADAS datasets demonstrate that the proposed method achieves new state-of-the-art performance without any manual training annotations on the target data.


Subjects
Automobile Driving, Pedestrians, Algorithms, Humans, Lighting, Machine Learning
12.
Sensors (Basel) ; 22(10)2022 May 14.
Article in English | MEDLINE | ID: mdl-35632151

ABSTRACT

In laser powder bed fusion (LPBF), melt pool instability can lead to the development of pores in printed parts, reducing the part's structural strength. While camera-based monitoring systems have been introduced to improve melt pool stability, these systems only measure melt pool stability in limited, indirect ways. We propose that melt pool stability can be improved by explicitly encoding stability into LPBF monitoring systems through the use of temporal features and pore density modelling. We introduce the temporal features, in the form of temporal variances of common LPBF monitoring features (e.g., melt pool area, intensity), to explicitly quantify printing stability. Furthermore, we introduce a neural network model trained to link these video features directly to pore densities estimated from the CT scans of previously printed parts. This model aims to reduce the number of online printer interventions to only those that are required to avoid porosity. These contributions are then implemented in a full LPBF monitoring system and tested on prints using 316L stainless steel. Results showed that our explicit stability quantification improved the correlation between our predicted pore densities and true pore densities by up to 42%.
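The temporal features described above amount to sliding-window variances of per-frame monitoring signals such as melt pool area or intensity. A minimal sketch of that computation; the window size and the choice of population variance are assumptions for illustration:

```python
import statistics

def temporal_variance(signal, window):
    # Sliding-window variance of a monitoring feature (e.g., melt
    # pool area per frame); high values flag unstable printing.
    return [statistics.pvariance(signal[i:i + window])
            for i in range(len(signal) - window + 1)]
```

A perfectly stable signal yields zero variance everywhere, while an oscillating melt pool produces a large, easily thresholded response, which is the sense in which stability is encoded explicitly rather than inferred indirectly.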


Subjects
Lasers, Stainless Steel, Neural Networks (Computer), Porosity, Powders, Stainless Steel/chemistry
13.
Sensors (Basel) ; 22(7)2022 Mar 23.
Article in English | MEDLINE | ID: mdl-35408072

ABSTRACT

In this paper, we propose a unified and flexible framework for general image fusion tasks, including multi-exposure image fusion, multi-focus image fusion, infrared/visible image fusion, and multi-modality medical image fusion. Unlike other deep learning-based image fusion methods, which are applied to a fixed number of input sources (normally two), the proposed framework can simultaneously handle an arbitrary number of inputs. Specifically, we use a symmetric function (e.g., max-pooling) to extract the most significant features from all the input images, which are then fused with the respective features from each input source. This symmetric function makes the network permutation-invariant, meaning the network can successfully extract and fuse the salient features of each image regardless of the order of the inputs. Permutation-invariance also simplifies inference with a variable number of inputs. To handle multiple image fusion tasks within one unified framework, we adopt continual learning based on Elastic Weight Consolidation (EWC) for the different fusion tasks. Subjective and objective experiments on several public datasets demonstrate that the proposed method outperforms state-of-the-art methods on multiple image fusion tasks.
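The permutation-invariance argument hinges on the symmetric pooling step: an element-wise max over any number of feature maps gives the same result in any input order. A toy sketch of that step alone, with plain lists standing in for feature tensors:

```python
def fuse_features(feature_maps):
    # Element-wise max across an arbitrary number of equally sized
    # inputs; a symmetric function, hence invariant to input order.
    return [max(vals) for vals in zip(*feature_maps)]
```

Because max is applied position-wise over the whole set at once, the same function handles two, three, or more inputs without any architectural change, which is what lets the network accept an unfixed number of sources at inference time.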


Subjects
Image Processing (Computer-Assisted), Neural Networks (Computer), Image Processing (Computer-Assisted)/methods, Records
14.
Sensors (Basel) ; 22(3)2022 Feb 07.
Article in English | MEDLINE | ID: mdl-35161990

ABSTRACT

Today, solar energy is taking an increasing share of the total energy mix. Unfortunately, many operational photovoltaic (PV) plants suffer from numerous defects that result in non-negligible power loss, which in turn strongly impacts the overall performance of the PV site. Operators therefore need to inspect their solar parks regularly for anomalies in order to prevent severe performance drops. As this operation is naturally labor-intensive and costly, we present in this paper a novel system for improved PV diagnostics using drone-based imagery. Our solution consists of three main steps. The first step locates the solar panels within the image. The second step detects anomalies within the solar panels. The final step identifies the root cause of each anomaly. In this paper, we mainly focus on the second step, the detection of anomalies within solar panels, which is done using a region-based convolutional neural network (CNN). Experiments on six PV sites with different specifications and a variety of defects demonstrate that our anomaly detector achieves a true positive rate (recall) of more than 90% at a false positive rate of around 2-3%, tested on a dataset containing nearly 9000 solar panels. Compared to the best state-of-the-art methods, the experiments revealed that we achieve a slightly higher true positive rate at a substantially lower false positive rate, while being tested on a more realistic dataset.


Subjects
Neural Networks (Computer), Solar Energy, Power Plants, Sunlight
15.
Comput Biol Med ; 139: 104953, 2021 12.
Article in English | MEDLINE | ID: mdl-34735943

ABSTRACT

We propose a novel algorithm for segmenting cells of the corneal endothelium layer in confocal microscope images. To obtain inter-cellular space with a minimum gray-scale value and to enhance cell borders, we apply a difference-of-Gaussians filter before binarizing the image by thresholding at the minimum gray-scale value. Removal of segmented noise and artifacts is performed by automatic thresholding, using an image frequency analysis to obtain a global threshold value per image. Final segmentation of cells is achieved by fitting the largest inscribed circles into the centers of the cell regions defined by the distance map of the binary images. Parameters of interest, such as cell count and density, pleomorphism, polymegathism, and F-measure, are computed on a publicly available dataset (Confocal Corneal Endothelial Microscopy Data Set, Rotterdam Ophthalmic Data Repository) and compared against the results of the segmentation methods included with the dataset and the results of state-of-the-art automatic methods. The obtained results achieve higher accuracy than the segmentation included with the dataset (proposed versus dataset, reported as R2 and mean relative error: cell count 0.823, -0.241 versus 0.017, 0.534; cell density 0.933, -0.067 versus 0.154, 0.639; cell polymegathism 0.652, -0.079 versus 0.075, 0.886; cell pleomorphism 0.242, -0.128 versus 0.0352, -0.222, respectively), and are in good agreement with the results of the state-of-the-art method.
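The "largest inscribed circle from a distance map" step can be sketched on a binary mask: compute each foreground pixel's distance to the background, then take the maximum as the circle's center and radius. The two-pass city-block distance transform below is an illustrative stand-in for whichever distance transform the paper actually uses:

```python
def distance_map(mask):
    # Two-pass chamfer distance to the nearest background (0) pixel,
    # using 4-connectivity on a list-of-lists binary mask.
    h, w = len(mask), len(mask[0])
    inf = h + w
    d = [[0 if mask[y][x] == 0 else inf for x in range(w)] for y in range(h)]
    for y in range(h):                       # forward pass
        for x in range(w):
            if y > 0:
                d[y][x] = min(d[y][x], d[y - 1][x] + 1)
            if x > 0:
                d[y][x] = min(d[y][x], d[y][x - 1] + 1)
    for y in range(h - 1, -1, -1):           # backward pass
        for x in range(w - 1, -1, -1):
            if y < h - 1:
                d[y][x] = min(d[y][x], d[y + 1][x] + 1)
            if x < w - 1:
                d[y][x] = min(d[y][x], d[y][x + 1] + 1)
    return d

def largest_inscribed_circle(mask):
    # Center = location of the distance-map maximum; radius = its
    # value. This is the seed fitted into each cell region.
    d = distance_map(mask)
    best = max((d[y][x], x, y)
               for y in range(len(d)) for x in range(len(d[0])))
    return (best[1], best[2]), best[0]
```

In the full method, one such circle is fitted per cell region of the binarized image; here a single region illustrates the principle.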


Subjects
Endothelial Cells, Image Processing (Computer-Assisted), Algorithms, Cornea/diagnostic imaging, Microscopy (Confocal)
16.
Med Image Anal ; 73: 102188, 2021 10.
Article in English | MEDLINE | ID: mdl-34340102

ABSTRACT

This work reviews the scientific literature on digital image processing for in vivo confocal microscopy images of the cornea. We present and discuss a selection of prominent techniques designed for semi- and fully automatic analysis of four areas of the cornea (epithelium, sub-basal nerve plexus, stroma, and endothelium). The main contexts are image enhancement, detection of structures of interest, and quantification of clinical information. We have found that the preprocessing stage lacks quantitative studies regarding the quality of the enhanced image or its effect on subsequent steps of the image processing. Threshold values are widely used in the reviewed methods, although they are generally selected empirically and manually. In many cases, the image processing results are evaluated through comparison with gold standards that are not widely accepted. It is necessary to standardize the quantities reported, in terms of the sensitivity and specificity of the methods. Most of the reviewed studies do not provide an estimate of the computational cost of the image processing. We conclude that reliable, automatic, computer-assisted image analysis of the cornea is still an open issue, constituting an interesting and worthwhile area of research.


Subjects
Cornea, Image Processing (Computer-Assisted), Cornea/diagnostic imaging, Image Enhancement, Microscopy (Confocal), Sensitivity and Specificity
17.
Sensors (Basel) ; 21(14)2021 Jul 18.
Article in English | MEDLINE | ID: mdl-34300631

ABSTRACT

Depth sensing has improved rapidly in recent years, allowing structural information to be utilized in various applications, such as virtual reality, scene and object recognition, view synthesis, and 3D reconstruction. Due to the limitations of the current generation of depth sensors, the resolution of depth maps is often still much lower than the resolution of color images. This hinders applications such as view synthesis and 3D reconstruction from providing high-quality results. Therefore, super-resolution, which allows for the upscaling of depth maps while retaining sharpness, has recently drawn much attention in the deep learning community. However, state-of-the-art deep learning methods are typically designed and trained to handle a fixed set of integer scale factors. Moreover, the raw depth map collected by the depth sensor usually has many missing or misestimated depth values along the edges and corners of observed objects. In this work, we propose a novel deep learning network for both depth completion and depth super-resolution with arbitrary scale factors. Experimental results on the Middlebury stereo, NYUv2, and Matterport3D datasets demonstrate that the proposed method outperforms state-of-the-art methods.


Subjects
Virtual Reality
18.
IEEE Trans Image Process ; 30: 3084-3097, 2021.
Article in English | MEDLINE | ID: mdl-33596175

ABSTRACT

Hyperspectral image super-resolution by fusing a high-resolution multispectral image (HR-MSI) and a low-resolution hyperspectral image (LR-HSI) aims at reconstructing the high-resolution spatial-spectral information of the scene. Existing methods, mostly based on spectral unmixing and sparse representation, are often developed from a low-level vision task perspective and cannot sufficiently make use of the spatial and spectral priors available from higher-level analysis. To address this issue, this paper proposes a novel HSI super-resolution method that fully considers the spatial/spectral subspace low-rank relationships between the available HR-MSI/LR-HSI and the latent HSI. Specifically, it relies on a new subspace clustering method named "structured sparse low-rank representation" (SSLRR) to represent the data samples as linear combinations of bases in a given dictionary, where the sparse structure is induced by low-rank factorization of the affinity matrix. We then exploit the SSLRR model to learn the structured sparse low-rank representations along the spatial and spectral domains from the MSI/HSI inputs. Using the learned spatial and spectral low-rank structures, we formulate the proposed HSI super-resolution model as a variational optimization problem, which can be readily solved by the ADMM algorithm. Compared with state-of-the-art hyperspectral super-resolution methods, the proposed method shows better performance on three benchmark datasets in terms of both visual and quantitative evaluation.

19.
Sensors (Basel) ; 20(17)2020 Aug 26.
Article in English | MEDLINE | ID: mdl-32858942

ABSTRACT

This paper presents a vulnerable road user (VRU) tracking algorithm capable of handling noisy and missing detections from heterogeneous sensors. We propose a cooperative fusion algorithm for matching and reinforcing of radar and camera detections using their proximity and positional uncertainty. The belief in the existence and position of objects is then maximized by temporal integration of fused detections by a multi-object tracker. By switching between observation models, the tracker adapts to the detection noise characteristics making it robust to individual sensor failures. The main novelty of this paper is an improved imputation sampling function for updating the state when detections are missing. The proposed function uses a likelihood without association that is conditioned on the sensor information instead of the sensor model. The benefits of the proposed solution are two-fold: firstly, particle updates become computationally tractable and secondly, the problem of imputing samples from a state which is predicted without an associated detection is bypassed. Experimental evaluation shows a significant improvement in both detection and tracking performance over multiple control algorithms. In low light situations, the cooperative fusion outperforms intermediate fusion by as much as 30%, while increases in tracking performance are most significant in complex traffic scenes.

20.
Sensors (Basel) ; 20(9)2020 Apr 29.
Article in English | MEDLINE | ID: mdl-32365545

ABSTRACT

With the rapid development of sensing technology, data mining, and machine learning for human health monitoring, it has become possible to monitor personal motion and vital signs in a manner that minimizes the disruption of an individual's daily routine and to assist individuals with difficulties in living independently at home. A primary difficulty that researchers confront is acquiring an adequate amount of labeled data for model training and validation purposes. Activity discovery therefore addresses the absence of activity labels using approaches based on sequence mining and clustering. In this paper, we introduce an unsupervised method for discovering activities from a network of motion detectors in a smart home setting. First, we present an intra-day clustering algorithm to find frequent sequential patterns within a day. As a second step, we present an inter-day clustering algorithm to find the frequent patterns common between days. Furthermore, we refine the patterns to obtain more compressed and better defined cluster characterizations. Finally, we track the occurrences of various regular routines to monitor the functional health reflected in an individual's patterns and lifestyle. We evaluate our methods on two public datasets captured in real-life settings, from two apartments, during seven-month and three-month periods.
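The intra-day step, finding frequent sequential patterns within a day, can be approximated by counting contiguous event n-grams from the motion-sensor stream and keeping those above a support threshold. This is a deliberately simplified sketch; the sensor names are hypothetical and the paper's clustering algorithms are considerably richer:

```python
from collections import Counter

def frequent_patterns(events, length, min_support):
    # Count contiguous event subsequences (n-grams) within one day's
    # sensor stream and keep those meeting the support threshold.
    counts = Counter(tuple(events[i:i + length])
                     for i in range(len(events) - length + 1))
    return {p: c for p, c in counts.items() if c >= min_support}
```

Patterns surviving the threshold become candidates for cross-day comparison in the inter-day step, where routines common between days are retained.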
