Results 1 - 20 of 27
1.
PLoS Comput Biol ; 19(12): e1011757, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38150476

ABSTRACT

The most commonly reported epidemic time series in epidemiological surveillance are the daily or weekly incidence of new cases, the hospital admission count, the ICU admission count, and the death toll, all of which played a prominent role in monitoring the COVID-19 pandemic. We show that pairs of such curves are related to each other by a generalized renewal equation depending on a smooth time-varying delay and a smooth ratio that generalizes the reproduction number. The same functional relation is also explored for pairs of simultaneous curves measuring the same indicator in two neighboring countries. Given two such simultaneous time series, we develop, based on a signal-processing approach, an efficient numerical method for computing their time-varying delay and ratio curves, and we verify that its results are consistent: they experimentally satisfy symmetry and transitivity requirements, and, on realistic simulated data, the method faithfully recovers time delays and ratios. We discuss several real examples where the method seems to display interpretable time delays and ratios. The proposed method generalizes and unifies many recent related attempts to take advantage of the plurality of these health data across regions or countries and over time, providing a better understanding of the relationships between them. An implementation of the method is publicly available in the EpiInvert CRAN package.
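As a rough schematic of the functional relation studied here (notation introduced for illustration only, not taken from the paper), a second indicator curve i_2 is linked to a first one i_1 through a smooth time-varying delay d(t) and ratio r(t):

    i_2(t) \approx r(t)\, i_1\big(t - d(t)\big).

The generalized renewal equation of the paper refines this pointwise relation, and the numerical method estimates d(t) and r(t) from the two observed, noisy curves.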


Subject(s)
COVID-19 , Pandemics , Humans , Time Factors , COVID-19/epidemiology , Hospitalization , Incidence
2.
PLoS Comput Biol ; 19(6): e1010790, 2023 06.
Article in English | MEDLINE | ID: mdl-37343039

ABSTRACT

The COVID-19 pandemic has created a radically new situation where most countries provide raw measurements of their daily incidence and disclose them in real time. This enables new machine-learning forecast strategies where the prediction need no longer be based only on the past values of the current incidence curve, but can take advantage of observations in many countries. We present such a simple global machine-learning procedure using all past daily incidence trend curves. Each of the 27,418 COVID-19 incidence trend curves in our database contains the values of 56 consecutive days extracted from observed incidence curves across 61 world regions and countries. Given a current incidence trend curve observed over the past four weeks, its forecast for the next four weeks is computed by matching it with the first four weeks of all samples and ranking them by their similarity to the query curve. The 28-day forecast is then obtained by a statistical estimate combining the values of the last 28 observed days of those similar samples. Using comparisons performed by the European Covid-19 Forecast Hub against current state-of-the-art forecast methods, we verify that the proposed global learning method, EpiLearn, compares favorably to methods forecasting from a single past curve.
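The matching-and-ranking procedure described above can be illustrated by a minimal nearest-neighbour sketch; the function below is a simplification (plain Euclidean ranking, median combination) and not the exact EpiLearn estimator.

    import numpy as np

    def knn_forecast(query, database, k=50):
        """Forecast the next 28 days of a 28-day incidence trend query.

        database: array of shape (n_samples, 56); the first 28 columns are the
        observation window, the last 28 its continuation. Simplified sketch of
        the global learning idea, not the published EpiLearn method.
        """
        past, future = database[:, :28], database[:, 28:]
        # Normalize by mean level so regions with different incidence
        # magnitudes remain comparable.
        q = query / np.mean(query)
        p = past / np.mean(past, axis=1, keepdims=True)
        dist = np.linalg.norm(p - q, axis=1)           # similarity ranking
        nearest = np.argsort(dist)[:k]                 # k most similar samples
        # Rescale each neighbour's continuation to the query's level and
        # combine with a robust statistic.
        scale = np.mean(query) / np.mean(past[nearest], axis=1, keepdims=True)
        return np.median(future[nearest] * scale, axis=0)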


Subject(s)
COVID-19 , Humans , COVID-19/epidemiology , Incidence , Forecasting
3.
Environ Sci Technol ; 56(14): 10517-10529, 2022 07 19.
Article in English | MEDLINE | ID: mdl-35797726

ABSTRACT

Methane (CH4) emission estimates from top-down studies over oil and gas basins have revealed systematic underestimation of CH4 emissions in current national inventories. Sparse but extremely large CH4 releases from oil and gas production activities have been detected across the globe, resulting in a significant increase of the estimated overall oil and gas contribution. However, attribution to specific facilities remains a major challenge unless high-spatial-resolution images provide sufficient granularity within the oil and gas basin. In this paper, we monitor known oil and gas infrastructure across the globe using recurrent Sentinel-2 imagery to detect and quantify more than 1200 CH4 emissions. In combination with emission estimates from airborne and Sentinel-5P measurements, we demonstrate the robustness of the power-law fit from 0.1 tCH4/h to 600 tCH4/h. We conclude that the prevalence of ultra-emitters (>25 tCH4/h) detected globally by Sentinel-5P directly relates to the occurrence of emissions below its detection threshold, in the range above 2 tCH4/h, which correspond to large emitters covered by Sentinel-2. We also verify that this relation holds at a more local scale for two specific countries, Algeria and Turkmenistan, and for the Permian basin in the United States.
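The power-law behaviour mentioned above can be checked on a list of detected source rates with a very simple fit; the sketch below (a log-log regression on the empirical survival function) illustrates the idea only and is not the estimation procedure used in the paper.

    import numpy as np

    def power_law_exponent(rates_t_per_h):
        # rates_t_per_h: detected emission rates in tCH4/h.
        rates = np.sort(np.asarray(rates_t_per_h, dtype=float))
        # Empirical complementary CDF P(X >= rate); strictly positive by construction.
        ccdf = 1.0 - np.arange(rates.size) / rates.size
        # Straight-line fit in log-log coordinates; the slope is the exponent
        # of the survival function.
        slope, _ = np.polyfit(np.log(rates), np.log(ccdf), 1)
        return slope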


Subject(s)
Air Pollutants , Methane , Air Pollutants/analysis , Methane/analysis , Natural Gas/analysis , United States
4.
Nutrients ; 14(9)2022 Apr 20.
Article in English | MEDLINE | ID: mdl-35565680

ABSTRACT

Phytonutrients comprise many different chemicals, including carotenoids, indoles, glucosinolates, organosulfur compounds, phytosterols, polyphenols, and saponins. This review focuses on the human health benefits of seven phytochemical families and highlights the significant potential contribution of phytonutrients to the prevention and management of pathologies and symptoms in the field of family health. The structure and function of these phytochemical families and their dietary sources are presented, along with an overview of their potential activities across different health and therapeutic targets. This evaluation makes it possible to recognize complementary effects of the different phytonutrient families within the same area of health.


Subject(s)
Phytochemicals , Polyphenols , Antioxidants/pharmacology , Carotenoids/pharmacology , Delivery of Health Care , Flavonoids/pharmacology , Humans , Phytochemicals/chemistry , Phytochemicals/pharmacology , Polyphenols/chemistry , Polyphenols/pharmacology
5.
Biology (Basel) ; 11(4)2022 Mar 31.
Article in English | MEDLINE | ID: mdl-35453741

ABSTRACT

The sanitary crisis of the past two years has focused the public's attention on quantitative indicators of the spread of the COVID-19 pandemic. The daily reproduction number Rt, defined as the average number of new infections caused by a single infected individual at time t, is one of the best metrics for estimating the epidemic trend. In this paper, we provide a complete observation model for sampled epidemiological incidence signals obtained through periodic administrative measurements. The model is governed by the classic renewal equation using an empirical reproduction kernel, and is subject to two perturbations: a time-varying gain with a weekly period and a white observation noise. We estimate this noise model and its parameters by extending a variational inversion of the model that recovers its main driving variable Rt. Using Rt, a restored incidence curve, corrected for the weekly and festive-day bias, can be deduced through the renewal equation. We verify experimentally on many countries that, once the weekly and festive-day biases have been corrected, the difference between the incidence curve and its expected value is well approximated by an exponentially distributed white noise multiplied by a power of the magnitude of the restored incidence curve.
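Schematically, and in notation chosen here for illustration rather than quoted from the paper, the observation model combines the renewal equation with the two perturbations as

    i_obs(t) = q(t)\,\hat{i}(t) + a\,\hat{i}(t)^{b}\,\varepsilon(t),
    \qquad \hat{i}(t) = R_t \sum_{s \ge 1} \Phi(s)\,\hat{i}(t-s),

where \Phi is the empirical reproduction kernel, q(t) is a 7-day periodic gain accounting for the weekly and festive-day reporting bias, \varepsilon(t) is an exponentially distributed white noise, and a, b parametrize the signal-dependent noise amplitude.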

6.
Proc Natl Acad Sci U S A ; 118(50)2021 12 14.
Article in English | MEDLINE | ID: mdl-34876517

ABSTRACT

The COVID-19 pandemic has undergone frequent and rapid changes in its local and global infection rates, driven by governmental measures or the emergence of new viral variants. The reproduction number Rt indicates the average number of cases generated by an infected person at time t and is a key indicator of the spread of an epidemic. A timely estimation of Rt is a crucial tool enabling governmental organizations to adapt quickly to these changes and to assess the consequences of their policies. The EpiEstim method is the most widely accepted method for estimating Rt, but it estimates Rt with a significant temporal delay. Here, we propose a method, EpiInvert, that shows good agreement with EpiEstim but provides estimates of Rt several days in advance. We show that Rt can be estimated by inverting the renewal equation linking Rt with the observed incidence curve of new cases, i_t. Our signal-processing approach to this problem yields both Rt and a restored i_t, corrected for the "weekend effect," by applying a deconvolution and denoising procedure. The implementations of the EpiInvert and EpiEstim methods are fully open source and can be run in real time on every country in the world and on every US state.
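For reference, the renewal equation being inverted takes the classic discrete form (standard notation in this literature; the paper's exact discretization and kernel may differ):

    i_t = R_t \sum_{s \ge 1} \Phi_s\, i_{t-s},

where \Phi_s is the serial interval distribution, i.e., the probability that a secondary case appears s days after the primary one. EpiInvert deconvolves this relation from the noisy observed incidence to recover Rt, whereas EpiEstim estimates Rt from a backward-looking window of past incidence, which explains its temporal delay.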


Subject(s)
Basic Reproduction Number , COVID-19/transmission , COVID-19/epidemiology , Computer Simulation , Forecasting , Humans , Incidence , Models, Theoretical , SARS-CoV-2
7.
J Opt Soc Am A Opt Image Sci Vis ; 36(3): 450-463, 2019 Mar 01.
Article in English | MEDLINE | ID: mdl-30874182

ABSTRACT

The high spectral redundancy of hyper/ultraspectral Earth-observation satellite imaging raises three challenges: (a) to design accurate noise estimation methods, (b) to denoise images with a very high signal-to-noise ratio (SNR), and (c) to secure unbiased denoising. We solve (a) by a new noise estimation method, (b) by a novel Bayesian algorithm exploiting spectral redundancy and spectral clustering, and (c) by accurate measurements of the interchannel correlation after denoising. We demonstrate the effectiveness of our method on two ultraspectral Earth imagers, IASI and IASI-NG, one already flying and the other under development, and sketch the major resolution gain that such unbiased denoising entails for future instruments.

8.
IEEE Trans Image Process ; 26(6): 2694-2704, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28333634

ABSTRACT

This paper addresses the question of identifying the right direct or inverse camera distortion model, allowing real camera distortion to be fitted with high subpixel precision. Five classic camera distortion models are reviewed and their precision is compared for direct and inverse distortion. By definition, the three radially symmetric models can only represent a distortion that is radially symmetric around some distortion center. They can be extended to deal with non-radially symmetric distortions by adding tangential distortion components, but may still be too simple for very accurate modeling of real cameras. The polynomial and rational models, in contrast, lack a physical or optical interpretation, but can cope equally well with radially and non-radially symmetric distortions; indeed, they do not require the estimation of a distortion center. When high precision is required, we found that distortion modeling must also be evaluated as a numerical problem. All models except the polynomial one involve a nonlinear minimization, which increases the numerical risk, whereas the estimation of a polynomial distortion model leads to a linear problem, which is secure and much faster. We concluded from extensive numerical experiments that, although high-degree polynomials were required to reach a high precision of 1/100 pixel, such polynomials were easily estimated and produced a precise distortion model without overfitting. Our conclusion is validated by three independent experimental setups: the models were compared first on the lens distortion database of the Lensfun library by their distortion simulation and inversion power; second, by fitting real camera distortions estimated by a nonparametric algorithm; and finally, by the absolute correction measurement provided by photographs of tightly stretched strings, which guarantee high straightness.
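The remark that the polynomial model leads to a linear problem can be made concrete with a small sketch: given matched undistorted/distorted point coordinates, the polynomial coefficients follow from an ordinary least-squares fit. The function below is illustrative only (low degree, no coordinate normalization), not the paper's estimation pipeline.

    import numpy as np

    def fit_polynomial_distortion(undist_xy, dist_xy, degree=3):
        # undist_xy, dist_xy: (n, 2) arrays of matched point coordinates.
        x, y = undist_xy[:, 0], undist_xy[:, 1]
        # Design matrix of monomials x^i * y^j with i + j <= degree.
        exponents = [(i, d - i) for d in range(degree + 1) for i in range(d + 1)]
        A = np.stack([x**i * y**j for i, j in exponents], axis=1)
        # Linear in the coefficients, hence a secure and fast least-squares fit.
        coeffs, *_ = np.linalg.lstsq(A, dist_xy, rcond=None)
        return exponents, coeffs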

9.
Appl Opt ; 55(28): 7836-7846, 2016 Oct 01.
Article in English | MEDLINE | ID: mdl-27828013

ABSTRACT

To achieve higher resolutions, current Earth-observation satellites use larger, lightweight primary mirrors that can deform over time, affecting image quality. To overcome this problem, we evaluated the possibility of combining a deformable mirror with a Shack-Hartmann wavefront sensor (SHWFS) directly in the satellite. The SHWFS's performance depends entirely on the accuracy of the shift estimation algorithm employed, which should be computationally cheap to execute onboard. We analyze the problem of fast, accurate shift estimation in this context and propose a new algorithm, based on a global optical flow method, that estimates the shifts in linear time. Our experiments indicate that the method is more accurate, more stable, and less sensitive to noise than current state-of-the-art methods, permitting a more precise onboard wavefront estimation.

10.
Vision Res ; 126: 183-191, 2016 09.
Article in English | MEDLINE | ID: mdl-26408332

ABSTRACT

We propose a novel approach to the grouping of dot patterns by the good continuation law. Our model is based on local symmetries and on the non-accidentalness principle to determine perceptually relevant configurations. A quantitative measure of non-accidentalness is proposed and shows a good correlation with the visibility of a curve of dots. A robust, unsupervised, and scale-invariant algorithm for the detection of good continuation of dots is derived. The results of the proposed method are illustrated on various datasets, including data from classic psychophysical studies. An online demonstration of the algorithm allows the reader to evaluate the method directly.


Subject(s)
Form Perception/physiology , Models, Theoretical , Pattern Recognition, Visual/physiology , Algorithms , Gestalt Theory , Humans , Models, Psychological , Psychophysics
11.
IEEE Trans Pattern Anal Mach Intell ; 37(3): 499-512, 2015 Mar.
Article in English | MEDLINE | ID: mdl-26353257

ABSTRACT

In spite of many interesting attempts, the problem of automatically finding alignments in a 2D set of points seems to be still open. The difficulty of the problem is illustrated here by very simple examples. We then propose an elaborate solution. We show that correct alignment detection depends on no fewer than four interlaced criteria, namely the amount of masking in texture, the relative bilateral local density of the alignment, its internal regularity, and finally a redundancy reduction step. Extending tools of the a contrario detection theory, we show that all of these detection criteria can be naturally embedded in a single probabilistic a contrario model with a single user parameter, the number of false alarms. Our contribution to the a contrario theory is the use of sophisticated conditional events on random point sets, for whose expectations we nevertheless find easy bounds. These bounds yield a simple proof of the mathematical consistency of our detection model. Our final algorithm also includes a new formulation of the exclusion principle of Gestalt theory to avoid redundant detections. Aiming at reproducibility, source code and an online demo open to any data point set are provided. The method is carefully compared to three state-of-the-art algorithms and an application to real data is discussed. Limitations of the final method are also illustrated and explained.
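As a reminder of the a contrario framework used above (standard formulation; the paper's conditional events on random point sets are more elaborate), a candidate alignment e is accepted when its number of false alarms is small:

    \mathrm{NFA}(e) = N_{\mathrm{tests}} \cdot \mathbb{P}_{H_0}\big[\, e \text{ is at least as structured as observed} \,\big] \le \varepsilon,

where H_0 is a background model of uniformly random points, N_tests is the number of tested candidates, and \varepsilon (the number of false alarms) is the single user parameter; on average, at most \varepsilon detections then occur by chance in pure noise.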

12.
IEEE Trans Image Process ; 24(10): 3149-61, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26080052

ABSTRACT

Arguably, several thousand papers are dedicated to image denoising. Most papers assume a fixed noise model, mainly white Gaussian or Poissonian. This assumption is only valid for raw images. Yet, in most images handled by the public and even by scientists, the noise model is imperfectly known or unknown. End users only have access to the result of a complex image processing chain applied by uncontrolled hardware and software (and sometimes by chemical means). For such images, recent progress in noise estimation makes it possible to estimate, from a single image, a noise model that is simultaneously signal- and frequency-dependent. We propose here a multiscale denoising algorithm adapted to this broad noise model. This leads to a blind denoising algorithm, which we demonstrate on real JPEG images and on scans of old photographs for which the formation model is unknown. The consistency of this algorithm is also verified on simulated distorted images. Finally, the algorithm is compared with the only previous state-of-the-art blind denoising method.

13.
IEEE Trans Image Process ; 24(10): 3162-75, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26080053

ABSTRACT

The camera calibration parameters and the image processing chain that generated a given image are generally not available to the receiver. This happens, for example, with scanned photographs and with most JPEG images. These images have undergone various nonlinear contrast changes as well as linear and nonlinear filters. To deal with the remnant noise in such images, we introduce a general nonparametric intensity- and frequency-dependent noise model. We demonstrate, by simulations and by experiments on real images, that this model, which requires the estimation of more than 1000 parameters, performs an efficient noise estimation. The proposed noise model is a patch model; its estimation can therefore be used as a preliminary step to any patch-based denoising method. Our noise estimation method introduces several new tools for performing this complex estimation. One of them is a new sparse patch distance function that finds noisy patches with similar underlying geometry. A validation of the noise model and of its estimation method is obtained by comparing its results to ground-truth noise curves for both raw and JPEG-encoded images, and by visual inspection of the denoising results on real images. A fair comparison with the state of the art is also performed.

14.
J Opt Soc Am A Opt Image Sci Vis ; 31(4): 863-71, 2014 Apr 01.
Article in English | MEDLINE | ID: mdl-24695150

ABSTRACT

Optimal denoising works best on raw images (the image formed at the output of the focal plane, at the CCD or CMOS detector), which display white signal-dependent noise. The noise model of the raw image is characterized by a function that, given the intensity of a pixel in the noisy image, returns the corresponding noise standard deviation; the plot of this function is the noise curve. This paper develops a nonparametric approach that estimates the noise curve directly from a single raw image. An extensive cross-validation procedure is described to compare this new method with state-of-the-art parametric methods and with laboratory calibration methods that give a reliable ground truth, even for nonlinear detectors.
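A drastically simplified noise-curve estimate can convey the idea: the sketch below bins pixels by intensity and uses robust statistics of horizontal pixel differences, assuming the underlying signal is locally nearly constant. It is an illustration only, not the nonparametric method of the paper.

    import numpy as np

    def noise_curve(raw, n_bins=16):
        # Pair each pixel with its right neighbour.
        a = raw[:, :-1].ravel().astype(float)
        b = raw[:, 1:].ravel().astype(float)
        intensity = 0.5 * (a + b)
        # The difference of two pixels with (nearly) equal underlying signal has
        # a standard deviation sqrt(2) times the noise standard deviation.
        diff = (a - b) / np.sqrt(2.0)
        edges = np.linspace(intensity.min(), intensity.max(), n_bins + 1)
        centers, sigmas = [], []
        for lo, hi in zip(edges[:-1], edges[1:]):
            mask = (intensity >= lo) & (intensity < hi)
            if mask.sum() > 100:  # skip nearly empty bins
                d = diff[mask]
                # Median absolute deviation as a robust standard deviation estimate.
                sigmas.append(1.4826 * np.median(np.abs(d - np.median(d))))
                centers.append(0.5 * (lo + hi))
        return np.array(centers), np.array(sigmas)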

15.
J Opt Soc Am A Opt Image Sci Vis ; 29(10): 2134-43, 2012 Oct 01.
Article in English | MEDLINE | ID: mdl-23201661

ABSTRACT

This paper addresses the high-precision measurement of the distortion of a digital camera from photographs. Traditionally, this distortion is measured from photographs of a flat pattern containing aligned elements. Nevertheless, it is nearly impossible to fabricate a very flat pattern and to validate its flatness, which limits the attainable measurement precision. In contrast, it is much easier to obtain physically very precise straight lines by tightly stretching good-quality strings on a frame. Taking "plumb-line methods" literally, we built a "calibration harp" instead of the classic flat patterns to obtain a high-precision measurement tool, demonstrably reaching a precision of 2/100 pixel. The harp is complemented with algorithms that automatically compute, from harp photographs, two different and complementary lens distortion measurements. The precision of the method is evaluated on images corrected by state-of-the-art distortion correction algorithms and by popular software. Three applications are shown: first, an objective and reliable measurement of the result of any distortion correction; second, the harp allows us to check state-of-the-art global camera calibration algorithms and to select the right distortion model, thus avoiding the internal compensation errors inherent to these methods; third, the method replaces manual procedures in other distortion correction methods, makes them fully automatic, and increases their reliability and precision.

16.
IEEE Trans Pattern Anal Mach Intell ; 34(5): 930-42, 2012 May.
Article in English | MEDLINE | ID: mdl-22442122

ABSTRACT

This paper introduces a statistical method to decide whether two blocks in a pair of images match reliably. The method ensures that the selected block matches are unlikely to have occurred "just by chance." The new approach is based on the definition of a simple but faithful statistical background model for image blocks learned from the image itself. A theorem guarantees that under this model, not more than a fixed number of wrong matches occurs (on average) for the whole image. This fixed number (the number of false alarms) is the only method parameter. Furthermore, the number of false alarms associated with each match measures its reliability. This a contrario block-matching method, however, cannot rule out false matches due to the presence of periodic objects in the images. But it is successfully complemented by a parameterless self-similarity threshold. Experimental evidence shows that the proposed method also detects occlusions and incoherent motions due to vehicles and pedestrians in nonsimultaneous stereo.

17.
J Physiol Paris ; 106(5-6): 266-83, 2012.
Article in English | MEDLINE | ID: mdl-22343519

ABSTRACT

Gestalt theory gives a list of geometric grouping laws that could in principle give a complete account of human image perception. Based on an extensive thesaurus of clever graphical images, this theory discusses how grouping laws collaborate and conflict toward a global image understanding. Unfortunately, as shown in the bibliographical analysis herein, the attempts to formalize the grouping laws in computer vision and psychophysics have at best succeeded in computing individual partial structures (or partial gestalts), such as alignments or symmetries. Nevertheless, we show here that a clever, never formalized Gestalt experimental procedure, the Nachzeichnung, suggests a numerical setup to implement and test the collaboration of partial gestalts. The new computational procedure proposed here analyzes a digital image and performs a numerical simulation that we call Nachtanz, or Gestaltic dance. In this dance, the analyzed digital image is gradually deformed in a random way while maintaining the detected partial gestalts. The resulting dancing images should be perceptually indistinguishable if and only if the grouping process was complete. Like the Nachzeichnung, the Nachtanz permits a visual exploration of the degrees of freedom still available to a figure after all partial groups (or gestalts) have been detected. In the newly proposed procedure, instead of drawing themselves, subjects are shown samples of the automatic Gestalt dances and asked to evaluate whether the figures are similar. Several preliminary numerical results obtained with this new Gestaltic experimental setup are thoroughly discussed.


Subject(s)
Gestalt Theory , Mathematics , Psychophysics , Vision, Ocular/physiology , Visual Perception/physiology , Algorithms , Humans , Models, Biological
18.
J Opt Soc Am A Opt Image Sci Vis ; 28(2): 203-9, 2011 Feb 01.
Article in English | MEDLINE | ID: mdl-21293523

ABSTRACT

The color histogram (or color cloud) of a digital image displays the colors present in an image regardless of their spatial location and can be visualized in (R,G,B) coordinates. As such, it contains essential information about the structure of colors in natural scenes. The analysis and visual exploration of this structure are difficult: the color cloud being thick, its densest points are hidden in the clutter, so it is impossible to visualize the cloud density properly. This paper proposes a visualization method that also enables one to validate a general model for color clouds. It first argues, on physical grounds, that the color cloud must be essentially a two-dimensional (2D) manifold. A color cloud-filtering algorithm is proposed to reveal this 2D structure. A quantitative analysis shows that the reconstructed 2D manifold is strikingly close to the color cloud and depends only marginally on the filtering parameter. Thanks to this algorithm, it is finally possible to visualize the color cloud density as a gray-level function defined on the 2D manifold.

19.
IEEE Trans Image Process ; 20(1): 257-67, 2011 Jan.
Article in English | MEDLINE | ID: mdl-20550995

ABSTRACT

This paper explores the mathematical and algorithmic properties of two sample-based texture models: random phase noise (RPN) and asymptotic discrete spot noise (ADSN). These models make it possible to synthesize random phase textures. They arguably derive from linearized versions of two early Julesz texture discrimination theories. The ensuing mathematical analysis shows that, contrary to some statements in the literature, RPN and ADSN are different stochastic processes. Nevertheless, numerous experiments also suggest that the textures obtained by these algorithms from identical samples are perceptually similar. The relevance of this study is enhanced by three technical contributions providing solutions to obstacles that prevented the use of RPN or ADSN to emulate textures. First, the RPN and ADSN algorithms are extended to color images. Second, a preprocessing step is proposed to avoid artifacts due to the nonperiodicity of real-world texture samples. Finally, the method is extended to synthesize textures of arbitrary size from a given sample.
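The RPN principle, keeping the Fourier modulus of the sample while randomizing its phase, can be sketched for a single grayscale image as below; the color extension, the preprocessing for nonperiodic samples, and the arbitrary-size synthesis mentioned above are not reproduced in this sketch.

    import numpy as np

    def random_phase_noise(image, seed=None):
        rng = np.random.default_rng(seed)
        spectrum = np.fft.fft2(image)
        # The phase of the FFT of white noise is a random phase field with the
        # Hermitian symmetry required for a real-valued output.
        random_phase = np.angle(np.fft.fft2(rng.standard_normal(image.shape)))
        new_spectrum = np.abs(spectrum) * np.exp(1j * random_phase)
        new_spectrum[0, 0] = spectrum[0, 0]   # preserve the mean (DC component)
        return np.real(np.fft.ifft2(new_spectrum))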

20.
IEEE Trans Image Process ; 19(11): 2825-37, 2010 Nov.
Article in English | MEDLINE | ID: mdl-20442050

ABSTRACT

In 1964, Edwin H. Land formulated the Retinex theory, the first attempt to simulate and explain how the human visual system perceives color. His theory and an extension, the "reset Retinex," were further formalized by Land and McCann. Several Retinex algorithms have been developed since then. These color constancy algorithms modify the RGB values at each pixel to give an estimate of the color sensation without a priori information on the illumination. Unfortunately, the original Land-McCann Retinex algorithm is both complex and not fully specified. Indeed, this algorithm computes at each pixel an average over a very large set of paths in the image. For this reason, Retinex has received several interpretations and implementations which, among other aims, attempt to tune down its excessive complexity. In this paper, it is proved that if the paths are assumed to be symmetric random walks, the Retinex solutions satisfy a discrete screened Poisson equation. This formalization yields an exact and fast implementation using only two FFTs. Several experiments on color images illustrate the effectiveness of the original Retinex theory.
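The "two FFTs" remark can be illustrated with a generic solver for a discrete screened Poisson equation \lambda u - \Delta u = f under periodic boundary conditions; in the Retinex formalization, the right-hand side f is built from (thresholded) image differences, a step omitted in this sketch.

    import numpy as np

    def screened_poisson_fft(f, lam):
        n, m = f.shape
        wy = 2.0 * np.pi * np.fft.fftfreq(n)
        wx = 2.0 * np.pi * np.fft.fftfreq(m)
        # Fourier symbol of lam*Id minus the 5-point discrete Laplacian.
        denom = lam + 4.0 - 2.0 * np.cos(wy)[:, None] - 2.0 * np.cos(wx)[None, :]
        u_hat = np.fft.fft2(f) / denom        # first FFT
        return np.real(np.fft.ifft2(u_hat))   # second (inverse) FFT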


Subject(s)
Algorithms , Color Perception/physiology , Image Processing, Computer-Assisted/methods , Models, Neurological , Humans , Poisson Distribution