Results 1 - 4 of 4
1.
PLoS Comput Biol ; 20(1): e1011796, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38285716

ABSTRACT

Naturally occurring collective motion is a fascinating phenomenon in which swarming individuals aggregate and coordinate their motion. Many theoretical models of swarming assume idealized, perfect perceptual capabilities and ignore the underlying perception processes, particularly for agents relying on visual perception. Specifically, biological vision in many swarming animals, such as locusts, utilizes monocular, non-stereoscopic vision, which prevents perfect acquisition of distances and velocities. Moreover, swarming peers can visually occlude each other, further introducing estimation errors. In this study, we explore the conditions necessary for the emergence of ordered collective motion under such restricted perception, namely non-stereoscopic, monocular vision. We present a model of vision-based collective motion for locust-like agents: elongated in shape, equipped with an omnidirectional visual sensor parallel to the horizontal plane, and lacking stereoscopic depth perception. The model addresses (i) the non-stereoscopic estimation of distance and velocity and (ii) the presence of occlusions in the visual field. We consider and compare three strategies that an agent may use to interpret partially occluded visual information, each at a different cost in the computational complexity required for the visual perception process. Computer-simulated experiments conducted in various geometrical environments (toroidal, corridor, and ring-shaped arenas) demonstrate that the models can result in an ordered or near-ordered state, while differing in the rate at which order is achieved. Moreover, the results are sensitive to the elongation of the agents. Experiments in geometrically constrained environments reveal differences between the models and elucidate possible tradeoffs in using them to control swarming agents. These results suggest avenues for further study in biology and robotics.
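To make the monocular-perception constraint concrete, here is a minimal, hypothetical sketch (not the authors' model) of how a locust-like agent might estimate a peer's distance and approach rate purely from the angle the peer subtends on an omnidirectional horizontal sensor. The body-length constant and function names are illustrative assumptions.

```python
import math

# Assumed typical body length of a peer; a monocular agent must rely on such a
# prior because it cannot triangulate depth stereoscopically.
L_BODY = 0.05  # metres (illustrative value)

def estimate_distance(subtended_angle_rad: float) -> float:
    """Small-angle estimate: a peer of length L_BODY subtending an angle
    theta lies at roughly L_BODY / theta (valid only for small angles)."""
    return L_BODY / max(subtended_angle_rad, 1e-6)

def estimate_radial_speed(theta_now: float, theta_prev: float, dt: float) -> float:
    """Looming cue: the change in subtended angle between two frames yields a
    crude approach/retreat rate without any stereoscopic depth information."""
    return (estimate_distance(theta_now) - estimate_distance(theta_prev)) / dt

# Example: a peer shrinking from 5 to 4 degrees of visual angle over 0.1 s
# is estimated to be receding at roughly 1.4 m/s.
print(estimate_radial_speed(math.radians(4.0), math.radians(5.0), 0.1))
```

Occlusions corrupt the subtended angle, which is precisely why the three interpretation strategies mentioned in the abstract, and their computational costs, matter.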


Subject(s)
Grasshoppers , Motion Perception , Humans , Animals , Vision, Ocular , Models, Theoretical , Computer Simulation , Motion , Depth Perception
2.
Front Neurorobot ; 17: 1215085, 2023.
Article in English | MEDLINE | ID: mdl-37520677

ABSTRACT

Swarming, or collective motion, is ubiquitous in natural systems and instrumental in many technological applications. Accordingly, research interest in this phenomenon is crossing discipline boundaries. A common major question is that of the intricate interactions between the individual, the group, and the environment. There are, however, major gaps in our understanding of swarming systems, very often due to the theoretical difficulty of relating embodied properties to the physical agents, be they individual animals or robots. Recently, there has been much progress in exploiting the complementary nature of the two disciplines, biology and robotics; this, unfortunately, is still uncommon in swarm research. Specifically, there are very few examples of joint research programs that investigate multiple biological and synthetic agents concomitantly. Here we present a novel research tool that enables a unique, tightly integrated, bio-inspired, and robot-assisted study of major questions in swarm collective motion. Utilizing a quintessential model of collective behavior, locust nymphs, together with our recently developed Nymbots (locust-inspired robots), we focus on fundamental questions and gaps in the scientific understanding of swarms, providing novel interdisciplinary insights and sharing ideas across disciplines. The Nymbot-Locust bio-hybrid swarm enables the investigation of biological hypotheses that would otherwise be difficult or even impossible to test, and the discovery of technological insights that might otherwise remain hidden from view.

3.
Front Artif Intell ; 4: 737327, 2021.
Article in English | MEDLINE | ID: mdl-35156009

ABSTRACT

Recently, we have seen the emergence of plan- and goal-recognition algorithms based on the principle of rationality. These avoid the use of a plan library that compactly encodes all possible observable plans and instead generate plans dynamically to match the observations. However, recent experiments by Berkovitz (Berkovitz, The effect of spatial cognition and context on robot movement legibility in human-robot collaboration, 2018) show that in many cases humans reached quick (and correct) decisions when observing motions that were far from rational (optimal), while optimal motions were recognized more slowly. Intrigued by these findings, we experimented with a variety of rationality-based recognition algorithms on the same data. The results clearly show that none of the algorithms reported in the literature accounts for the human subjects' decisions, even in this simple task. This is our first contribution. We hypothesize that humans utilize plan recognition in the service of goal recognition, i.e., they match observations to known plans and use the set of recognized plans to infer the likely goals. To test this hypothesis, we introduce a novel offline recognition algorithm, the second contribution of this paper. While preliminary, the algorithm accounts for the results reported by Berkovitz significantly better than the existing algorithms. Moreover, the proposed algorithm marries rationality-based and plan-library-based methods seamlessly.
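As a point of reference for the rationality-based algorithms the study evaluates (not the paper's new offline algorithm), the following sketch scores candidate goals by the gap between the cheapest plan consistent with the observations and the cheapest plan overall; all names and costs are illustrative assumptions.

```python
import math

def goal_scores(goals, cost_with_obs, cost_optimal, beta=1.0):
    """Return a normalized score per goal from cost gaps.

    cost_with_obs[g]: cost of the cheapest plan for g consistent with the observations.
    cost_optimal[g]:  cost of the cheapest plan for g, ignoring the observations.
    A goal scores higher the closer the observed behavior is to optimal for it.
    """
    raw = {g: math.exp(-beta * (cost_with_obs[g] - cost_optimal[g])) for g in goals}
    total = sum(raw.values())
    return {g: s / total for g, s in raw.items()}

# Example with made-up plan costs for two candidate goals: the observations fit
# an optimal plan for "left door" but only a costly detour for "right door".
print(goal_scores(["left door", "right door"],
                  cost_with_obs={"left door": 10.0, "right door": 14.0},
                  cost_optimal={"left door": 10.0, "right door": 9.0}))
```

Under such a scheme, far-from-optimal motions are penalized for every goal, which is consistent with the abstract's observation that purely rationality-based accounts struggle to explain the quick human judgments on non-optimal motions.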

4.
Artif Life ; 23(3): 343-350, 2017.
Article in English | MEDLINE | ID: mdl-28786728

ABSTRACT

Asimov's three laws of robotics, which were shaped in the literary work of Isaac Asimov (1920-1992) and others, define a crucial code of behavior that fictional autonomous robots must obey as a condition for their integration into human society. While a general implementation of these laws in robots is widely considered impractical, limited-scope versions have been demonstrated and have proven useful in spurring scientific debate on aspects of safety and autonomy in robots and intelligent systems. In this work, we use Asimov's laws to examine these notions in molecular robots fabricated from DNA origami. We successfully programmed these robots to obey, by means of interactions between individual robots in a large population, an appropriately scoped variant of Asimov's laws, and even to emulate the key scenario from Asimov's story "Runaround," in which a fictional robot gets into trouble despite adhering to the laws. Our findings show that abstract, complex notions can be encoded and implemented at the molecular scale, when robots at this scale are understood on the basis of their interactions.
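Purely as a conceptual illustration of what an "appropriately scoped variant" of the laws could look like in software (the paper's implementation is molecular, realized through DNA origami interactions), the sketch below evaluates candidate actions against prioritized, Asimov-style rules; all predicates and action names are hypothetical.

```python
def choose_action(candidate_actions, harms_human, disobeys_order, harms_self):
    """Return the first action surviving the prioritized checks:
    1) never harm a human; 2) obey orders unless that violates rule 1;
    3) preserve self unless that violates rules 1 or 2."""
    safe = [a for a in candidate_actions if not harms_human(a)]
    obedient = [a for a in safe if not disobeys_order(a)] or safe
    preserving = [a for a in obedient if not harms_self(a)] or obedient
    return preserving[0] if preserving else None

# Example with toy, string-labeled actions and predicates.
actions = ["deliver payload", "hold position", "shut down"]
print(choose_action(actions,
                    harms_human=lambda a: a == "deliver payload",
                    disobeys_order=lambda a: a == "shut down",
                    harms_self=lambda a: a == "shut down"))  # -> "hold position"
```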


Subject(s)
Computational Biology , Robotics/instrumentation