1.
Epidemics; 41: 100640, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36274569

ABSTRACT

We investigated the initial outbreak rates and subsequent social distancing behaviour over the initial phase of the COVID-19 pandemic across 29 Combined Statistical Areas (CSAs) of the United States. We used the Numerus Model Builder Data and Simulation Analysis (NMB-DASA) web application to fit the exponential phase of a SCLAIV+D (Susceptible, Contact, Latent, Asymptomatic infectious, symptomatic Infectious, Vaccinated, Dead) disease-class model to outbreaks, thereby allowing us to obtain an estimate of the basic reproductive number R0 for each CSA. Values of R0 ranged from 1.9 to 9.4, with a mean and standard deviation of 4.5±1.8. Fixing the parameters from the exponential fit, we again used NMB-DASA to estimate a set of social distancing behaviour parameters and compute an epidemic flattening index c_flatten. Finally, we applied hierarchical clustering methods using this index to divide CSA outbreaks into two clusters: those with a weaker and those with a stronger social distancing response. We found c_flatten to be more influential in the clustering process than R0. Thus, our results suggest that the behavioural response after a short initial exponential growth phase is likely to be more determinative of the rise of an epidemic than R0 itself.
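The abstract's first step, fitting the exponential phase of an outbreak to obtain R0, can be sketched without the full SCLAIV+D machinery. The paper fits a multi-compartment model with NMB-DASA; the snippet below is only a back-of-envelope sketch that regresses log-cases on time to get a growth rate r and converts it to R0 via the simple approximation R0 ≈ 1 + r·T_g. The generation time `generation_time=5.0` is an assumed illustrative value, not a parameter from the paper.

```python
import numpy as np

def estimate_r0(daily_cases, generation_time=5.0):
    """Rough R0 estimate from an outbreak's exponential growth phase.

    Fits log(cases) ~ r * t by least squares, then applies the simple
    approximation R0 ~= 1 + r * T_g, where T_g is an assumed mean
    generation time. A sketch only: the paper fits a full SCLAIV+D
    compartmental model instead.
    """
    t = np.arange(len(daily_cases), dtype=float)
    log_cases = np.log(np.asarray(daily_cases, dtype=float))
    r, _ = np.polyfit(t, log_cases, 1)   # slope = exponential growth rate
    return 1.0 + r * generation_time

# Synthetic outbreak growing at r = 0.3/day: R0 ~= 1 + 0.3 * 5 = 2.5
cases = [10 * np.exp(0.3 * d) for d in range(14)]
r0 = estimate_r0(cases)
```

With exact exponential data the fit recovers r precisely; real case counts would need smoothing and a window restricted to the genuinely exponential phase.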


Subject(s)
COVID-19 , Humans , COVID-19/epidemiology , Pandemics/prevention & control , Physical Distancing , Basic Reproduction Number , Disease Outbreaks/prevention & control
2.
Article in English | MEDLINE | ID: mdl-35984801

ABSTRACT

Real-world data often exhibits a long-tailed and open-ended (i.e., with unseen classes) distribution. A practical recognition system must balance between majority (head) and minority (tail) classes, generalize across the distribution, and flag instances of unseen (open) classes as novel. We define Open Long-Tailed Recognition++ (OLTR++) as learning from such naturally distributed data and optimizing for the classification accuracy over a balanced test set which includes both known and open classes. OLTR++ handles imbalanced classification, few-shot learning, open-set recognition, and active learning in one integrated algorithm, whereas existing classification approaches often focus only on one or two aspects and perform poorly over the entire spectrum. The key challenges are: 1) how to share visual knowledge between head and tail classes, 2) how to reduce confusion between tail and open classes, and 3) how to actively explore open classes with learned knowledge. Our algorithm, OLTR++, maps images to a feature space such that visual concepts can relate to each other through a memory association mechanism and a learned metric (dynamic meta-embedding) that both respects the closed-world classification of seen classes and acknowledges the novelty of open classes. Additionally, we propose an active learning scheme based on visual memory, which learns to recognize open classes in a data-efficient manner for future expansions. On three large-scale open long-tailed datasets we curated from ImageNet (object-centric), Places (scene-centric), and MS1M (face-centric) data, as well as three standard benchmarks (CIFAR-10-LT, CIFAR-100-LT, and iNaturalist-18), our approach, as a unified framework, consistently demonstrates competitive performance. Notably, our approach also shows strong potential for the active exploration of open classes and the fairness analysis of minority groups.
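The memory association mechanism described above (dynamic meta-embedding) can be illustrated with a minimal sketch: a direct image feature attends over a bank of memory slots (e.g. one prototype per seen class) and is enriched with the attended combination, so tail-class features can borrow structure from better-trained head classes. The function name, the softmax attention, and the additive combination below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def meta_embedding(x, memory, tau=1.0):
    """Sketch of a memory-association step in the spirit of OLTR++'s
    dynamic meta-embedding (names and scaling are assumptions).

    x      : (d,)   direct feature of one image
    memory : (k, d) visual memory, e.g. one prototype per seen class
    Returns the direct feature enriched with a softmax-attended
    combination of memory slots.
    """
    sims = memory @ x                  # similarity of x to each memory slot
    attn = np.exp(sims / tau)
    attn /= attn.sum()                 # softmax attention weights over slots
    memory_feature = attn @ memory     # (d,) convex combination of memory
    return x + memory_feature

x = np.array([1.0, 0.0])
memory = np.array([[1.0, 0.0],         # prototype of a head class
                   [0.0, 1.0]])        # prototype of another class
out = meta_embedding(x, memory)
```

Because attention is highest for the most similar slot, the enriched feature is pulled toward the matching prototype, which is the intuition behind sharing visual knowledge between head and tail classes.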

3.
Sci Rep; 9(1): 8137, 2019 May 31.
Article in English | MEDLINE | ID: mdl-31148564

ABSTRACT

The implementation of intelligent software to identify and classify objects and individuals in visual fields is a technology of growing importance to practitioners in many fields, including wildlife conservation and management. To non-experts, the methods can be abstruse and the results mystifying. Here, in the context of applying cutting-edge methods to classify wildlife species from camera-trap data, we shed light on the methods themselves and the types of features these methods extract to make efficient identifications and reliable classifications. The current state of the art is to employ convolutional neural networks (CNN) encoded within deep-learning algorithms. We outline these methods and present results obtained in training a CNN to classify 20 African wildlife species with an overall accuracy of 87.5% from a dataset containing 111,467 images. We demonstrate the application of a gradient-weighted class-activation-mapping (Grad-CAM) procedure to extract the most salient pixels in the final convolution layer. We show that these pixels highlight features in particular images that in some cases are similar to those used to train humans to identify these species. Further, we used mutual information methods to identify the neurons in the final convolution layer that consistently respond most strongly across a set of images of one particular species. We then interpret the features in the image where the strongest responses occur, and present dataset biases that were revealed by these extracted features. We also used hierarchical clustering of feature vectors (i.e., the state of the final fully-connected layer in the CNN) associated with each image to produce a visual similarity dendrogram of identified species. Finally, we evaluated the relative unfamiliarity of images that were not part of the training set when these images were one of the 20 species "known" to our CNN, in contrast to images of species that were "unknown" to our CNN.
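The Grad-CAM step used above has a compact core: average the class-score gradients over each feature map to get per-channel weights, take the weighted sum of the final-layer activations, and keep only the positive evidence. The sketch below operates on precomputed tensors; in practice the activations and gradients come from the CNN's final convolution layer via automatic differentiation, which is omitted here.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Minimal Grad-CAM on precomputed tensors (a sketch; real use
    pulls these from a trained CNN's final conv layer via autodiff).

    activations : (C, H, W) feature maps of the final conv layer
    gradients   : (C, H, W) gradient of the class score w.r.t. them
    Returns an (H, W) saliency map normalized to [0, 1].
    """
    weights = gradients.mean(axis=(1, 2))             # global-average-pool the grads
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum over channels
    cam = np.maximum(cam, 0)                          # ReLU: keep positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalize to [0, 1]
    return cam

# Toy example: one activated location in channel 0, uniform gradients
acts = np.zeros((2, 2, 2))
acts[0, 0, 0] = 1.0
grads = np.ones((2, 2, 2))
cam = grad_cam(acts, grads)
```

Upsampling the resulting map to the input resolution and overlaying it on the photograph yields the "most salient pixels" visualization the abstract describes.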


Subject(s)
Animals, Wild/classification , Deep Learning , Neural Networks, Computer , Africa , Algorithms , Animals , Biodiversity , Cluster Analysis , Computer Graphics , Ecology , Image Processing, Computer-Assisted , Pattern Recognition, Automated , Reproducibility of Results , Software , Species Specificity