ABSTRACT
We address the problem of localizing waste objects from a color image and an optional depth image, which is a key perception component for robotic interaction with such objects. Specifically, our method integrates the intensity and depth information at multiple levels of spatial granularity. First, a scene-level deep network produces an initial coarse segmentation; from this coarse map we select a few candidate object regions, zoom in, and perform fine segmentation. The results of these steps are then fused by a densely connected conditional random field that learns to respect appearance, depth, and spatial affinities with pixel-level accuracy. In addition, we create a new RGBD waste object segmentation dataset, MJU-Waste, which we release publicly to facilitate future research in this area. The efficacy of our method is validated on both MJU-Waste and the Trash Annotation in Context (TACO) dataset.
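As a rough illustration of the coarse-to-fine idea (not the paper's actual model), the pipeline below replaces both the scene-level and object-level deep networks with simple intensity thresholds, and omits the CRF refinement stage. All function names, patch sizes, and thresholds here are hypothetical stand-ins:

```python
import numpy as np

def coarse_segment(img, thresh=0.5):
    """Stand-in for the scene-level network: a single global threshold."""
    return (img > thresh).astype(np.uint8)

def select_patches(mask, patch=4, k=2):
    """Rank non-overlapping patches by foreground density; return top-k corners."""
    h, w = mask.shape
    scores = []
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            scores.append((int(mask[i:i + patch, j:j + patch].sum()), i, j))
    scores.sort(reverse=True)
    return [(i, j) for _, i, j in scores[:k]]

def fine_segment(crop, thresh=0.4):
    """Stand-in for the zoomed-in object-level network: a more sensitive threshold."""
    return (crop > thresh).astype(np.uint8)

def coarse_to_fine(img, patch=4, k=2):
    """Coarse scene-level pass, then fine re-segmentation of candidate regions."""
    mask = coarse_segment(img)
    for i, j in select_patches(mask, patch, k):
        mask[i:i + patch, j:j + patch] = fine_segment(img[i:i + patch, j:j + patch])
    return mask

# Synthetic 8x8 image: a small object whose faint border pixels (0.45) are
# missed by the coarse pass but recovered by the zoomed-in fine pass.
img = np.full((8, 8), 0.1)
img[1, 1] = img[2, 2] = 0.6
img[1, 2] = img[2, 1] = 0.45

coarse_mask = coarse_segment(img)   # catches only the two 0.6 pixels
final_mask = coarse_to_fine(img)    # recovers all four object pixels
```

In the real method, the fine pass runs a second deep network on each zoomed-in crop, and a dense CRF integrates the coarse and fine predictions; the sketch only shows how region selection concentrates computation on likely objects.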
ABSTRACT
Recently, the extended projection neural network was proposed to solve constrained monotone variational inequality problems and a class of constrained nonmonotonic variational inequality problems. Its exponential convergence was established under the condition that the Jacobian matrix of the nonlinear mapping is positive definite. This note presents new results on the exponential convergence of the output trajectory of the extended projection neural network under weaker conditions, in which the Jacobian matrix of the nonlinear mapping may be only positive semidefinite, or not positive semidefinite at all. These new results further demonstrate that the extended projection neural network converges quickly when solving a class of constrained monotone variational inequality problems and nonmonotonic variational inequality problems. Illustrative examples show the significance of the obtained results.
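To make the setting concrete, the sketch below discretizes standard projection-type neural dynamics, dx/dt = P_Ω(x − αF(x)) − x, with a box constraint set and a forward-Euler step. This is an illustrative toy, not the paper's extended network; the mapping F, step sizes, and constraint set are assumptions chosen so that the Jacobian is positive semidefinite but not positive definite, matching the weaker condition discussed above:

```python
import numpy as np

def project_box(x, lo, hi):
    """Projection onto the box constraint set Omega = [lo, hi]."""
    return np.clip(x, lo, hi)

def pnn_solve(F, x0, lo, hi, alpha=0.5, dt=0.05, steps=2000):
    """Forward-Euler discretization of dx/dt = P_Omega(x - alpha*F(x)) - x."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * (project_box(x - alpha * F(x), lo, hi) - x)
    return x

# Affine monotone mapping F(x) = Mx + q with M positive SEMIdefinite
# (eigenvalues 2 and 0), so the classical positive-definiteness
# condition fails but the trajectory still converges.
M = np.array([[2.0, 0.0],
              [0.0, 0.0]])
q = np.array([-1.0, 1.0])
F = lambda x: M @ x + q
lo, hi = np.zeros(2), np.ones(2)

x_star = pnn_solve(F, np.array([0.9, 0.9]), lo, hi)
# The VI solution here is (0.5, 0): the first component of F vanishes
# in the interior, while F_2 = 1 > 0 pushes x_2 to its lower bound.
```

A point solves the variational inequality exactly when it is a fixed point of the projection map, x* = P_Ω(x* − αF(x*)), so the residual of that fixed-point equation is a natural convergence check.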