Results 1 - 4 of 4
1.
Rheumatol Adv Pract; 6(3): rkac105, 2022.
Article in English | MEDLINE | ID: mdl-36540676

ABSTRACT

Objective: Clinical trials assessing systemic sclerosis (SSc)-related digital ulcers have been hampered by a lack of reliable outcome measures of healing. Our objective was to assess the feasibility of patients collecting high-quality mobile phone images of their digital lesions as a first step in developing a smartphone-based outcome measure.

Methods: Patients with SSc-related digital (finger) lesions photographed one or more lesions each day for 30 days using their smartphone and uploaded the images to a secure Dropbox folder. Image quality was assessed using six criteria: blurriness, shadow, uniformity of lighting, dot location, dot angle and central positioning of the lesion. Patients completed a feedback questionnaire.

Results: Twelve patients returned 332 photographs of 18 lesions. Each patient sent a median of 29.5 photographs [interquartile range (IQR) 15-33.5], with a median of 15 photographs per lesion (IQR 6-32). Twenty-two photographs were duplicates. Of the remaining 310 images, 256 (77%) were sufficiently in focus; 268 (81%) had some shadow; lighting was even in 56 (17%); dot location was acceptable in 233 (70%); dot angle was ideal in 107 (32%); and the lesion was centred in 255 (77%). Patient feedback suggested that 6 of 10 would be willing to record images daily in future studies, and 9 of 10 at least one to three times per week.

Conclusion: Taking smartphone photographs of digital lesions was feasible for most patients, with most lesions in focus and central in the image. These promising results will inform the next research phase (to develop a smartphone monitoring application incorporating photographs and symptom tracking).
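One of the six image-quality criteria above, blurriness, lends itself to a simple automated screen. The following is a minimal sketch, assuming OpenCV is available; the variance-of-Laplacian focus measure and the threshold value are illustrative choices and are not described in the paper.

```python
# Hypothetical focus screen for the "blurriness" criterion (not from the paper):
# the variance of the Laplacian is a common sharpness proxy; the threshold is arbitrary.
import cv2

def is_in_focus(image_path: str, threshold: float = 100.0) -> bool:
    """Return True if the image is likely in focus, judged by Laplacian variance."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    focus_measure = cv2.Laplacian(gray, cv2.CV_64F).var()
    return focus_measure >= threshold
```

In practice the threshold would need tuning against human-rated images before such a check could replace manual scoring.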

3.
J Imaging; 7(8), 2021 Aug 11.
Article in English | MEDLINE | ID: mdl-34460778

ABSTRACT

Long video datasets of facial macro- and micro-expressions remain in strong demand given the current dominance of data-hungry deep learning methods. Few methods exist for generating long videos that contain micro-expressions, and there is a lack of performance metrics to quantify the generated data. To address these research gaps, we introduce a new approach to generate synthetic long videos and recommend assessment methods to inspect dataset quality. For synthetic long video generation, we use the state-of-the-art generative adversarial network style transfer method StarGANv2. Using StarGANv2 pre-trained on the CelebA dataset, we transfer the style of a reference image from SAMM long videos (a facial micro- and macro-expression long video dataset) onto a source image of the FFHQ dataset to generate a synthetic dataset (SAMM-SYNTH). We evaluate SAMM-SYNTH by conducting an analysis based on the facial action units detected by OpenFace. For quantitative measurement, our findings show high correlation between the original and synthetic data on two Action Units (AUs), AU12 and AU6, with Pearson's correlations of 0.74 and 0.72, respectively. This is further supported by the evaluation method provided by OpenFace on those AUs, which yields scores of 0.85 and 0.59. Additionally, optical flow is used to visually compare the original facial movements and the transferred facial movements. With this article, we publish our dataset to enable future research and to increase the data pool of micro-expression research, especially in the spotting task.
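The AU-based comparison described above can be outlined from OpenFace's per-frame output. A minimal sketch follows, assuming CSVs produced by OpenFace's FeatureExtraction tool with AU intensity columns (e.g. AU06_r, AU12_r); the file paths and frame alignment are placeholders, not the authors' exact pipeline.

```python
# Hypothetical sketch: Pearson correlation between AU intensity traces of an
# original SAMM long video and its SAMM-SYNTH counterpart, both processed by OpenFace.
import pandas as pd
from scipy.stats import pearsonr

def au_correlation(original_csv: str, synthetic_csv: str, au_column: str = "AU12_r") -> float:
    # OpenFace CSV headers carry leading spaces, hence skipinitialspace=True.
    orig = pd.read_csv(original_csv, skipinitialspace=True)
    synth = pd.read_csv(synthetic_csv, skipinitialspace=True)
    n = min(len(orig), len(synth))  # align on the shorter sequence
    r, _ = pearsonr(orig[au_column][:n], synth[au_column][:n])
    return r

# Example (placeholder paths):
# print(au_correlation("samm_original.csv", "samm_synth.csv", "AU06_r"))
```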

4.
IEEE J Biomed Health Inform; 22(4): 1218-1226, 2018 Jul.
Article in English | MEDLINE | ID: mdl-28796627

ABSTRACT

Breast lesion detection using ultrasound imaging is considered an important step in computer-aided diagnosis systems. Over the past decade, researchers have demonstrated the possibility of automating the initial lesion detection. However, the lack of a common dataset makes it difficult to compare the performance of such algorithms. This paper proposes the use of deep learning approaches for breast ultrasound lesion detection and investigates three different methods: a Patch-based LeNet, a U-Net, and a transfer learning approach with a pretrained FCN-AlexNet. Their performance is compared against four state-of-the-art lesion detection algorithms (Radial Gradient Index, Multifractal Filtering, Rule-based Region Ranking and Deformable Part Models). In addition, this paper compares and contrasts two conventional ultrasound image datasets acquired from two different ultrasound systems: Dataset A comprises 306 images (60 malignant and 246 benign) and Dataset B comprises 163 images (53 malignant and 110 benign). To address the lack of public datasets in this domain, Dataset B will be made available for research purposes. The results demonstrate an overall improvement by the deep learning approaches when assessed on both datasets in terms of True Positive Fraction, False Positives per image and F-measure.
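The three evaluation metrics named in the abstract can be made concrete with a short sketch. This is an illustrative computation from aggregate detection counts, not the authors' evaluation code; the variable names and example counts are assumptions.

```python
# Hypothetical sketch of the reported metrics: True Positive Fraction (TPF),
# False Positives per image (FPs/image) and F-measure, from aggregate counts.
def detection_metrics(tp: int, fp: int, fn: int, n_images: int):
    tpf = tp / (tp + fn) if (tp + fn) else 0.0            # sensitivity / recall
    fps_per_image = fp / n_images if n_images else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    f_measure = 2 * precision * tpf / (precision + tpf) if (precision + tpf) else 0.0
    return tpf, fps_per_image, f_measure

# Example (illustrative counts only):
# print(detection_metrics(tp=240, fp=35, fn=66, n_images=306))
```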


Subject(s)
Breast Neoplasms/diagnostic imaging; Breast/diagnostic imaging; Image Interpretation, Computer-Assisted/methods; Neural Networks, Computer; Ultrasonography, Mammary/methods; Algorithms; Databases, Factual; Female; Humans