ABSTRACT
We study practical approximations of Kolmogorov prefix complexity (K) using IMP2, a high-level programming language. Our focus is on investigating the optimality of the interpreter for this language as the reference machine for the Coding Theorem Method (CTM). This method, grounded in the principles of algorithmic probability, is designed to address applications of algorithmic complexity that differ from the popular traditional lossless-compression approach. The chosen model of computation is proven to be suitable for this task, and we compare it with other models and methods. Our findings show that CTM approximations using our model do not always correlate with the results from lower-level models of computation, suggesting that some models may require a larger program space to converge to Levin's universal distribution. Furthermore, we compare the CTM with an upper bound on Kolmogorov complexity and find a strong correlation, supporting the CTM's validity as an approximation method with finer-grained resolution of K.
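To make the CTM's counting principle concrete, the following minimal Python sketch enumerates every program of a small invented toy language (ours, for illustration only; it is not IMP2 and not the authors' program space), tallies the outputs, and estimates K(s) as -log2 of the frequency m(s) with which s is produced:

import itertools, math
from collections import Counter

# Toy language: a program is a sequence of ops applied to the seed "0".
OPS = {
    "flip": lambda s: s.translate(str.maketrans("01", "10")),
    "dup":  lambda s: s + s,
    "add0": lambda s: s + "0",
    "add1": lambda s: s + "1",
    "rev":  lambda s: s[::-1],
}

def run(program):
    s = "0"
    for op in program:
        s = OPS[op](s)
        if len(s) > 32:          # guard against runaway growth
            return None
    return s

counts, total = Counter(), 0
for length in range(6):          # all programs of 0..5 instructions
    for program in itertools.product(OPS, repeat=length):
        out = run(program)
        if out is not None:
            counts[out] += 1
            total += 1

# CTM-style estimate: strings produced by many programs get low K.
for s, c in counts.most_common(5):
    print(f"{s!r}  m={c/total:.4f}  K_CTM={-math.log2(c/total):.2f} bits")

Frequently produced strings receive low K estimates, which is the behaviour Levin's universal distribution predicts.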
ABSTRACT
Peptides modulate many processes of human physiology targeting ion channels, protein receptors, or enzymes. They represent valuable starting points for the development of new biologics against communicable and non-communicable disorders. However, turning native peptide ligands into druggable materials requires high selectivity and efficacy, predictable metabolism, and good safety profiles. Machine learning models have gradually emerged as cost-effective and time-saving solutions to predict and generate new proteins with optimal properties. In this chapter, we will discuss the evolution and applications of predictive modeling and generative modeling to discover and design safe and effective antimicrobial peptides. We will also present their current limitations and suggest future research directions, applicable to peptide drug design campaigns.
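As a rough illustration of the predictive-modeling route described above, the sketch below featurizes peptides by amino-acid composition and trains a small classifier; the sequences, labels, and model choice are placeholders of ours, not data or methods from the chapter:

from sklearn.ensemble import RandomForestClassifier

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq):
    # Fraction of each of the 20 standard residues in the peptide.
    return [seq.count(a) / len(seq) for a in AMINO_ACIDS]

# Toy training set (1 = antimicrobial, 0 = inactive) -- placeholder
# sequences: cationic/amphipathic-looking vs. neutral.
train_seqs = ["GIGKFLKKAKKFGKAFVKILKK", "KWKLLKKIEKVGRNIRNGIVK",
              "AAAAGGGGSSSSPPPP", "QQQNNNDDDEEEGGG"]
train_labels = [1, 1, 0, 0]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit([composition(s) for s in train_seqs], train_labels)

print(model.predict_proba([composition("FLPIIAKLLSGLL")]))  # hypothetical query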
Subjects
Antimicrobial Peptides, Biological Products, Humans, Artificial Intelligence, Machine Learning, Drug Design
ABSTRACT
[This corrects the article DOI: 10.3389/frobt.2023.1140901.].
ABSTRACT
The present work revisits how artificial intelligence, as technology and ideology, is grounded in rational choice theory and the techno-liberal discourse, supported by large corporations and investment funds. These actors promote algorithmic processes (such as filter bubbles and echo chambers) that create homogeneous and polarized spaces, reinforcing people's ethical, ideological, and political narratives. These mechanisms validate bubbles of choices as statements of fact and contravene the prerequisites for exercising deliberation in pluralistic societies, such as the distinction between data and values, the affirmation of reasonable dissent, and the relevance of diversity as an indispensable condition for democratic deliberation.
ABSTRACT
The objective of this study was to understand how gig workers interpret the effects of their work activity on their wellbeing. We developed a grounded theory based on interviews with 57 Brazilian gig workers. The results show that (1) workers and gig-work organizations have preferences for work relationships with more autonomy or more security; (2) when these preferences are congruent, the worker experiences greater wellbeing, and when they diverge, episodes of preference violation occur that, when repeated, reduce worker wellbeing; and (3) not everything is a matter of fit: when both individuals and organizations share the same preference (for example, for more autonomy and less security), worker wellbeing may be vulnerable to abuse, for example in the form of an unsustainable workload. Our study draws attention to an integrated discussion of the benefits and harms of algorithmic management, moving beyond a polarized view in which it is seen as only beneficial or only harmful to workers.
ABSTRACT
Racism and technology are important societal mediators, hierarchizing groups and reproducing privileges and exclusions. They can, however, render denunciations of inequality unviable, whether through the "myth of racial democracy" or through the idea of technological neutrality. We discuss how the articulation between racism and technology operates through a double opacity: the denial of racism and the political denial of technology. We examine facial recognition as a sociotechnical apparatus that, articulated with Black bodies and Brazilian realities, sometimes produces invisibilities and sometimes re-emphasizes visibilities. This theoretical research brings together concepts from Brazilian social thought on racial relations and marginal criminology, as well as authors from the field of Science, Technology and Society (STS), who help us make explicit the non-neutrality of technology and the politicization of algorithmic management. We conclude that the dissonant voices denouncing racism in the production of supposedly neutral techniques must be amplified, in a cosmopolitical proposition, so as to be able to "decide with" the people who are recognized or made invisible.
ABSTRACT
This work aims to articulate the notions of technology, work, health, and digital influencers, and to claim this articulation as an object of investigation for the field of communication. Specifically, we seek to understand the particularities of the exhaustion experienced by digital influencers, based on a literature review and on examples collected through spontaneous observation. As a result, we propose the notion of 'algorithmic exhaustion', a sensation reported by digital influencers of undergoing 'psychological problems' generated by a pace of work dictated by what they recognize as 'the algorithm'. This 'exhaustion' is characterized by a permanent feeling of dissatisfaction, discouragement, and depletion, a lack of creativity, fear of platform penalties, and fear of 'not keeping up'.
Subjects
Humans, Internet, Occupational Groups, Mental Health, Occupational Health, Social Media, Occupational Stress, Psychological Burnout
ABSTRACT
Algorithmic systems are instruments used in social policies to grant access to services. This technological tool invades public institutions, controlling work processes and user access. The assumption is that the use of algorithmic systems not only modifies institutional routine but also serves as an instrument of power, given the gigantic amount of stored data, thereby contributing to the profitability of capital. The article reflects on this dynamic on the basis of a bibliographical study.
ABSTRACT
We live in a society in which existence is directly associated with the visibility of individuals. The narrative constructions that validate this process work with images and videos that project and build our experiences on multiple digital platforms. Through them, it is possible to map much of our actions and interactions. These data are valuable indicators of our social and emotional behavior across varied themes and situations. Digital platforms use this information in the dynamics of data capitalism, extracting value through automated collection mechanisms operated by algorithmic subjects. Through the organization of Big Data, new consumption patterns are stimulated through customized deliveries to certain groups of interconnected people. This study shows the characteristics of this process, which operates in heterotopic environments in which space-time is formatted by the logic of the platforms. In addition, we present an overview of how this artifice became possible because of the need for relevance, in which individuals' autonomy in networks is proportional to their submission to the rules of surveillance and economic exploitation. On this premise, the study also presents recent data on Brazilians' trust in these digital platforms, which, paradoxically, occupy a prominent place as a primary source of information for a large part of the population in Brazil.
Subjects
Integrated Advanced Information Management Systems/statistics & numerical data, Consumer Behavior, Social Networking, Big Data, Brazil
ABSTRACT
One of the challenges of defining emergence is that one observer's prior knowledge may cause a phenomenon to present itself as emergent that to another observer appears reducible. By formalizing the act of observing as mutual perturbations between dynamical systems, we demonstrate that the emergence of algorithmic information does depend on the observer's formal knowledge, while being robust vis-à-vis other subjective factors, particularly the choice of programming language and method of measurement, errors or distortions during the observation, and the informational cost of processing. We call this observer-dependent emergence (ODE). In addition, we demonstrate that the unbounded and rapid increase of emergent algorithmic information implies asymptotically observer-independent emergence (AOIE). Unlike ODE, AOIE is a type of emergence for which emergent phenomena will be considered emergent no matter what formal theory an observer might bring to bear. We demonstrate the existence of an evolutionary model that displays the diachronic variant of AOIE and a network model that displays the holistic variant. Our results show that, restricted to the context of finite discrete deterministic dynamical systems, computable systems, and irreducible information content measures, AOIE is the strongest form of emergence that formal theories can attain. This article is part of the theme issue 'Emergent phenomena in complex physical and socio-technical systems: from cells to societies'.
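In standard algorithmic-information notation, the observer dependence can be pictured schematically (this is our shorthand, not the article's formal definitions) by conditioning prefix complexity on the observer's theory:

\[
\Delta_{O}(x) \;=\; K(x) - K\!\left(x \mid T_{O}\right),
\]

where T_O encodes the formal knowledge of observer O, so that x looks emergent to O when \Delta_{O}(x) is large. ODE then corresponds to \Delta_{O} varying with the theory T_O, while AOIE corresponds to \Delta_{O}(x_t) growing without bound along the system's evolution for every fixed computable theory T_O.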
Subjects
Biological Evolution, Knowledge
ABSTRACT
In this article, we investigate limitations of importing methods based on algorithmic information theory from monoplex networks into multidimensional networks (such as multilayer networks) that have a large number of extra dimensions (i.e., aspects). In the worst-case scenario, it has been previously shown that node-aligned multidimensional networks with non-uniform multidimensional spaces can display exponentially larger algorithmic information (or lossless compressibility) distortions with respect to their isomorphic monoplex networks, so that these distortions grow at least linearly with the number of extra dimensions. In the present article, we demonstrate that node-unaligned multidimensional networks, either with uniform or non-uniform multidimensional spaces, can also display exponentially larger algorithmic information distortions with respect to their isomorphic monoplex networks. However, unlike the node-aligned non-uniform case studied in previous work, these distortions in the node-unaligned case grow at least exponentially with the number of extra dimensions. On the other hand, for node-aligned multidimensional networks with uniform multidimensional spaces, we demonstrate that any distortion can only grow up to a logarithmic order of the number of extra dimensions. Thus, these results establish that isomorphisms between finite multidimensional networks and finite monoplex networks do not preserve algorithmic information in general and highlight that the algorithmic information of the multidimensional space itself needs to be taken into account in multidimensional network complexity analysis.
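Writing p for the number of extra dimensions (aspects) and D(p) for the worst-case algorithmic-information distortion between a multidimensional network and its isomorphic monoplex network (our schematic notation), the regimes above can be summarized as

\[
D(p) =
\begin{cases}
O(\log p), & \text{node-aligned, uniform multidimensional space},\\
\Omega(p), & \text{node-aligned, non-uniform (previous work)},\\
\Omega(c^{\,p}),\ c>1, & \text{node-unaligned, uniform or non-uniform}.
\end{cases}
\]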
ABSTRACT
Visceral adipose tissue (VAT) is associated with various metabolic disorders, and adipokines, secreted by adipose tissue, are involved in their pathogenesis. This study investigated associations between the VAT/subcutaneous adipose tissue (SAT) ratio, inflammatory markers, and cardiovascular (CV) risk score in adults. Plasma adipokine levels, plasma lipid profile, blood pressure, and body composition (by dual-energy X-ray absorptiometry) were determined. The CV risk score, based on the American College of Cardiology and American Heart Association (ACC/AHA) score, was calculated in a sample of 309 Brazilian civil servants aged <60 years. Participants' VAT/SAT ratios were categorized into quartiles. Among males, plasma leptin (2.8 ng/mL) and C-reactive protein (CRP) (0.2 mg/dL) levels (P<0.05) were higher at P75 and P50 than at P5, and the highest calculated CV risk score was observed at P75 (7.1%). Among females, higher plasma adiponectin levels were observed at P25 (54.3 ng/mL) than at P75 (36 ng/mL) (P<0.05); higher plasma CRP levels were observed at P75 (0.4 mg/dL) than at P5 (0.1 mg/dL) (P<0.05); and a higher CV risk score was observed at P75 (2.0%) than at P5 (0.7%). In both sexes, VAT and the VAT/SAT ratio were directly associated with plasma leptin, CRP, and CV risk score, and inversely associated with adiponectin; SAT was directly associated with plasma leptin and CRP (P<0.01); and interleukin (IL)-10 and CRP were directly associated with adiponectin and leptin, respectively (P<0.05). Among men only, IL-10 (inversely) and CRP (directly) were associated with the CV risk score (P=0.02). Our results strengthen the relevance of the VAT/SAT ratio to cardiovascular risk.
ABSTRACT
In this article, we discuss a new regime of power, named Algorithmic Governmentality (AG) by Antoinette Rouvroy and Thomas Berns, which has increasingly been operating in the conduct of our conduct. Unlike disciplinary power and biopolitics, such governmentality does not have individuals or populations as its center of gravity. Rather, through data mining and profiling, AG acts at both the infra-individual and the supra-individual levels. In order to problematize it, we analyze and put into question one of its modes of operation, recommendation systems, focusing on the case of Netflix. Lastly, we examine the effects that its constituent algorithms can have on our modes of subjectivation, since they often tend to exclude from our experience what is unforeseen and thus capable of making something happen to us and transform us.
Subjects
Psychological Power, Data Mining, Behavior, Algorithms
ABSTRACT
Fuzzy logic is an artificial intelligence technique with applications in many areas, owing to its importance in handling uncertain inputs. Despite the great recent success of other branches of AI, such as deep neural networks, fuzzy logic remains a very powerful machine learning technique, based on expert reasoning, that can help in many areas of musical creativity, such as composing music, synthesizing sounds, gestural mapping in electronic instruments, parametric control of sound synthesis, audiovisual content generation, and sonification. We propose that fuzzy logic is a very suitable framework for thinking and operating not only with sound and acoustic signals but also with symbolic representations of music. In this article, we discuss the application of fuzzy logic ideas to music, introduce the Fuzzy Logic Control Toolkit, a set of tools for using fuzzy logic inside the MaxMSP real-time sound synthesis environment, and show how some fuzzy logic concepts can be used and incorporated into fields such as algorithmic composition, sound synthesis, and parametric control of computer music. Finally, we discuss the composition of Incerta, an acousmatic multichannel piece, as a concrete example of the application of fuzzy concepts to musical creation.
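The Fuzzy Logic Control Toolkit itself targets MaxMSP, but the core idea of fuzzy parametric control can be sketched in a few lines of stand-alone Python; the membership breakpoints and the brightness-to-cutoff rule below are invented for illustration (a zero-order Sugeno-style rule base with weighted-average defuzzification):

def tri(x, a, b, c):
    # Triangular membership function peaking at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def cutoff_from_brightness(brightness):
    # Degrees of membership in the fuzzy sets 'dark' and 'bright'.
    dark = tri(brightness, -0.5, 0.0, 0.6)
    bright = tri(brightness, 0.4, 1.0, 1.5)
    # Rules: dark -> low cutoff (400 Hz), bright -> high cutoff (4000 Hz);
    # singleton outputs combined by a weighted average.
    num = dark * 400.0 + bright * 4000.0
    den = dark + bright
    return num / den if den else 1000.0  # default when no rule fires

for b in (0.1, 0.5, 0.9):
    print(b, round(cutoff_from_brightness(b)))

The same pattern extends to any continuous synthesis parameter driven by an uncertain perceptual descriptor.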
ABSTRACT
We show how complexity theory can be introduced into machine learning to help bring together apparently disparate areas of current research. We show that this model-driven approach may require less training data and can potentially be more generalizable, as it shows greater resilience to random attacks. In an algorithmic space, the order of the elements is given by their algorithmic probability, which arises naturally from computable processes. We investigate the shape of a discrete algorithmic space when performing regression or classification using a loss function parametrized by algorithmic complexity, demonstrating that differentiability is not required to achieve results similar to those obtained using differentiable-programming approaches such as deep learning. In doing so, we use examples that enable the two approaches to be compared (small ones, given the computational power required to estimate algorithmic complexity). We find and report that (1) machine learning can successfully be performed on a non-smooth surface using algorithmic complexity; (2) solutions can be found using an algorithmic-probability classifier, establishing a bridge between a fundamentally discrete theory of computability and a fundamentally continuous mathematical theory of optimization methods; (3) an algorithmically directed search technique over non-smooth manifolds can be defined and conducted; and (4) exploitation techniques and numerical methods for algorithmic search can navigate these discrete, non-differentiable spaces. We apply these methods to (a) the identification of generative rules from data observations; (b) image classification problems, yielding solutions more resilient to pixel attacks than neural networks; (c) the identification of equation parameters from a small data set in the presence of noise in a continuous ODE system; and (d) the classification of Boolean NK networks by network topology, underlying Boolean function, and number of incoming edges.
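As a hedged sketch of what an algorithmic-probability-style classifier can look like, the Python snippet below uses zlib compressed length as a crude stand-in for the CTM/BDM complexity estimators the authors employ (so it illustrates the principle, not their implementation): a test string is assigned to the class whose training corpus makes it cheapest to describe.

import zlib

def C(b):
    # Compressed length as a crude complexity proxy.
    return len(zlib.compress(b, 9))

def conditional_cost(x, corpus):
    # Extra compressed bytes x costs given the corpus: a rough
    # approximation of K(x | class), limited by the compressor.
    return C(corpus + x) - C(corpus)

classes = {
    "periodic": b"01" * 40,
    "random-ish": b"011010001110101101001011101000110110",
}
test = b"0101010101"

best = min(classes, key=lambda k: conditional_cost(test, classes[k]))
print("assigned class:", best)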
ABSTRACT
Aims: To determine the lactate threshold (LT) by three different methods (visual inspection, algorithmic adjustment, and Dmax) during an incremental protocol performed on the 45° leg press, and to evaluate the correlation and agreement among these methods. Methods: Twenty male long-distance runners participated in this study. First, participants performed a one-repetition maximum (1RM) dynamic strength test. In the next session, they completed an incremental protocol consisting of progressive stages of 1 min or 20 repetitions at loads of 10, 20, 25, 30, 35, and 40% 1RM. From 40% 1RM onward, increments of 10% 1RM were applied until a load at which participants could not complete the 20 repetitions. A 2-min rest interval was observed between stages for blood collection and adjustment of the workload for the next stage. Results: Our results showed no significant difference in relative load (% 1RM), good correlations, and high intraclass correlation coefficients (ICC) between algorithmic adjustment and Dmax (p = 0.680, r = 0.92, ICC = 0.959), algorithmic adjustment and visual inspection (p = 0.266, r = 0.91, ICC = 0.948), and Dmax and visual inspection (p = 1.000, r = 0.88, ICC = 0.940). In addition, Bland-Altman plots and linear regression showed agreement between algorithmic adjustment and Dmax (r2 = 0.855), algorithmic adjustment and visual inspection (r2 = 0.834), and Dmax and visual inspection (r2 = 0.781). Conclusion: The good correlation and high agreement among the three methods support their applicability for determining LT during an incremental protocol performed on the 45° leg press. However, the best agreement, found between the mathematical methods, suggests better accuracy.
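The Dmax method lends itself to a compact implementation: fit a third-order polynomial to the lactate-load curve and take the point of maximal perpendicular distance from the line joining the first and last samples. A Python sketch with invented sample data (not the study's measurements):

import numpy as np

load = np.array([10, 20, 25, 30, 35, 40, 50, 60], dtype=float)   # % 1RM
lactate = np.array([1.2, 1.3, 1.5, 1.8, 2.4, 3.4, 5.1, 7.9])     # mmol/L

# Fit a 3rd-order polynomial to the lactate-load curve.
coeffs = np.polyfit(load, lactate, 3)
xs = np.linspace(load[0], load[-1], 500)
ys = np.polyval(coeffs, xs)

# Perpendicular distance of each curve point from the line joining
# the first and last points; Dmax is the farthest point.
p0 = np.array([xs[0], ys[0]])
p1 = np.array([xs[-1], ys[-1]])
dx, dy = p1 - p0
dist = np.abs(dx * (ys - p0[1]) - dy * (xs - p0[0])) / np.hypot(dx, dy)

print(f"Dmax lactate threshold at ~{xs[np.argmax(dist)]:.1f}% 1RM")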
Subjects
Humans, Running, Anaerobic Threshold, Endurance Training, Algorithms, Anthropometry
ABSTRACT
The goal of our research is the development of algorithmic tools for the analysis of chemical reaction networks proposed as models of biological homochirality. We focus on two algorithmic problems: detecting whether or not a chemical mechanism admits mirror symmetry-breaking, and, given such a network as input, sampling the set of racemic steady states that can produce mirror symmetry-breaking. Algorithmic solutions to these two problems allow us to compute the parameter values for the emergence of homochirality. We found a mathematical criterion for the occurrence of mirror symmetry-breaking. This criterion allows us to compute semialgebraic definitions of the sets of racemic steady states that produce homochirality. Although those semialgebraic definitions can be processed algorithmically, their algorithmic analysis becomes infeasible in most cases, given their nonlinear character. We use Clarke's system of convex coordinates to linearize those semialgebraic definitions as much as possible. As a result of this work, we obtain an efficient algorithm that solves both problems for networks containing only one enantiomeric pair, and a heuristic algorithm that can be used in the general case of two or more enantiomeric pairs.
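As a concrete instance of such a mechanism, the classic Frank (1953) model — a standard toy example in the homochirality literature, not necessarily one of the networks analyzed here — shows how an unstable racemic steady state amplifies a tiny enantiomeric excess; a minimal Python simulation:

from scipy.integrate import solve_ivp

k, mu, a = 1.0, 1.0, 1.0   # autocatalysis, antagonism, achiral feed (chemostatted)

def frank(t, y):
    # A + L -> 2L, A + D -> 2D (autocatalysis); L + D -> inert (antagonism).
    L, D = y
    return [k * a * L - mu * L * D,
            k * a * D - mu * L * D]

# Start just off the racemic steady state L = D = k*a/mu.
y0 = [1.0 + 1e-6, 1.0 - 1e-6]
sol = solve_ivp(frank, (0.0, 25.0), y0, rtol=1e-8)

L, D = sol.y[:, -1]
print("enantiomeric excess:", (L - D) / (L + D))   # grows toward +1

A linear-stability check of the racemic steady state (the sign of the Jacobian's leading eigenvalue) gives the same verdict analytically, which is the kind of criterion the algorithmic tools above automate for far larger networks.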
ABSTRACT
Natural selection explains how life has evolved over millions of years from more primitive forms. The speed at which this happens, however, has sometimes defied formal explanations based on random (uniformly distributed) mutations. Here, we investigate the application of a simplicity bias based on a natural but algorithmic distribution of mutations (no recombination) in various examples, particularly binary matrices, in order to compare evolutionary convergence rates. Results on both synthetic and small biological examples indicate an accelerated rate when mutations are algorithmically rather than statistically uniform. We show that algorithmic distributions can evolve modularity and genetic memory by preserving structures when they first occur, sometimes leading to an accelerated production of diversity but also to population extinctions, possibly explaining naturally occurring phenomena such as diversity explosions (e.g. the Cambrian) and massive extinctions (e.g. the End Triassic) whose causes are still debated. The natural approach introduced here appears to be a better approximation of biological evolution than models based exclusively on random uniform mutations, and it also approaches a formal version of open-ended evolution grounded in previous formal results. These results support suggestions that computation may be an equally important driver of evolution. We also show that applying the method to optimization problems, such as genetic algorithms, has the potential to accelerate the convergence of artificial evolutionary algorithms.
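To give the flavor of the comparison, the toy Python experiment below evolves a binary string toward a structured target under uniform versus simplicity-biased single-bit mutations, and compares the number of hill-climbing steps each scheme needs; zlib compressed length stands in (crudely) for the algorithmic-probability bias, whereas the authors use CTM/BDM-style estimators on binary matrices, so everything here is our simplification:

import random, zlib

random.seed(0)
TARGET = "01" * 12   # a highly structured (compressible) 24-bit target

def C(s):
    # Compressed length as a crude stand-in for algorithmic complexity.
    return len(zlib.compress(s.encode(), 9))

def fitness(s):
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, biased):
    # All single-bit flips of s; under bias, prefer simpler results.
    flips = [s[:i] + ("1" if s[i] == "0" else "0") + s[i + 1:]
             for i in range(len(s))]
    if not biased:
        return random.choice(flips)
    weights = [2.0 ** (-C(f)) for f in flips]   # simplicity bias
    return random.choices(flips, weights=weights)[0]

for biased in (False, True):
    s = "".join(random.choice("01") for _ in range(len(TARGET)))
    steps = 0
    while s != TARGET and steps < 20000:
        m = mutate(s, biased)
        if fitness(m) >= fitness(s):   # simple hill climb
            s = m
        steps += 1
    print("biased" if biased else "uniform", "steps:", steps)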
ABSTRACT
Computer science has become one of the fundamental pillars of contemporary social development. The knowledge society requires training competent professionals with an integral development of personality, one that gives significance to both the intellectual and the social. We analyzed the particularities of the socialization of five talented third- and fourth-year students of the computer science program. To this end, we used a mixed design with a predominance of qualitative methodology. The case study was carried out through an in-depth interview with each of the selected students. To characterize the phenomenon of socialization in the program, expert interviews and a questionnaire were used as data-collection techniques. The socialization model that the program has built, and to which its talented students are exposed, privileges the intellectual over the social in professional training and performance, to the point of forming a prejudice against socialization. The tendency toward a logical-algorithmic projection of the intellect in the talented students analyzed predisposes the construction of their cognitive map of the social and accounts for the particularities and the sense of their socialization. Moreover, informatic socialization, a product of establishing social relations through new information technologies, does not compensate for insufficient interpersonal socialization.