ABSTRACT
The COVID-19 pandemic has hit the Indonesian economy hard. Many businesses had to close because they could not cover operational costs, and many workers were laid off, creating an unemployment crisis. Unemployment causes people's productivity and income to decrease, leading to poverty and other social problems, making it a crucial problem and a great concern for the nation. Economic conditions during the pandemic have also produced unusual patterns in economic data, in which outliers may occur, leading to biased parameter estimates. For that reason, it is necessary to handle outliers in research data appropriately. This study aims to find within-group estimators for an unbalanced panel data regression model of the Open Unemployment Rate (OUR) in East Kalimantan Province and the factors that influence it. The method used is the within transformation, with mean centering and median centering as processing methods. The results of this study may provide advice on factors that can increase or decrease the OUR of East Kalimantan Province. The results show that the best model for estimating the OUR data of East Kalimantan Province is the within-transformation estimator using median centering. According to the best model, the Human Development Index (HDI) and Gross Regional Domestic Product (GRDP) are two factors that influence the OUR of East Kalimantan Province. © 2023, International Association of Engineers. All rights reserved.
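To make the estimation step concrete, the sketch below is a minimal illustration of a within-group estimator with mean or median centering; it is not the authors' code, and the column names (regency, OUR, HDI, GRDP) are assumed for illustration.

```python
# Illustrative sketch: within transformation of an unbalanced panel by centering
# each variable on its entity-level mean or median, then estimating the slope
# coefficients by ordinary least squares on the centered data.
import numpy as np
import pandas as pd

def within_ols(df, entity, y, X, center="median"):
    """Within-group estimator: center y and X per entity, then pooled OLS."""
    centered = df.copy()
    for col in [y] + X:
        stat = df.groupby(entity)[col].transform(center)  # per-entity mean or median
        centered[col] = df[col] - stat
    A = centered[X].to_numpy()
    b = centered[y].to_numpy()
    beta, *_ = np.linalg.lstsq(A, b, rcond=None)          # no intercept after centering
    return pd.Series(beta, index=X)

# Hypothetical usage with an unbalanced panel of regencies/cities over years:
# panel = pd.read_csv("our_panel.csv")   # columns: regency, year, OUR, HDI, GRDP
# print(within_ols(panel, "regency", "OUR", ["HDI", "GRDP"], center="median"))
```

Median centering is less sensitive to outlying observations within an entity, which is why it can be preferable when the panel contains pandemic-era anomalies.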
ABSTRACT
Social media platforms such as Twitter provide opportunities for governments to connect to foreign publics and influence global public opinion. In the current study, we used social and semantic network analysis to investigate China's digital public diplomacy campaign during COVID-19. Our results show that Chinese state-affiliated media and diplomatic accounts created hashtag frames and targeted stakeholders to challenge the United States or to cooperate with other countries and international organizations, especially the World Health Organization. Telling China's stories was the central theme of the digital campaign. From the perspective of social media platform affordance, we addressed the lack of attention paid to hashtag framing and stakeholder targeting in the public diplomacy literature.
ABSTRACT
Clinical trial patient recruitment is arguably the most difficult aspect of pharmaceutical development, because it involves a variety of factors beyond study sponsors' control. "The aggregation of data across 80 hospitals and 20 systems, for the purpose of understanding patients, doing feasibility studies, or engaging in decentralized recruitment, is the trend we're seeing." Nimita Limaye, PhD, is the vice president of research for the life sciences R&D strategy and technology division at the International Data Corporation (IDC), a market research and advisory firm specializing in the technology industry and headquartered in Boston, Mass. Limaye says the rise of social media-based patient recruitment has opened the door for sponsors and investigators to mine real-world data and to give patients a more central focus in research.
ABSTRACT
To combat the ongoing COVID-19 pandemic, many new approaches have been proposed to automate the process of finding infected people, also called contact tracing. A special focus has been put on preserving the privacy of users. Bluetooth Low Energy has the most promising properties as a base technology, so this survey focuses on automated contact tracing techniques using Bluetooth Low Energy. We define multiple classes of methods and identify two major groups: systems that rely on a server for finding new infections and systems that distribute this process. Existing approaches are systematically classified with regard to security and privacy criteria. Copyright © 2021 ACM.
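As a rough illustration of the decentralized group of systems described above, the sketch below derives rotating ephemeral IDs from a daily key and performs the infection match locally on the device. It is a deliberate simplification for illustration, not a specific protocol from the survey.

```python
# Simplified sketch of decentralized exposure matching: phones broadcast rotating
# ephemeral IDs derived from a daily key; infected users publish their daily keys,
# and each phone re-derives the IDs locally to check for overlap with what it saw.
import hashlib

def ephemeral_ids(daily_key: bytes, slots: int = 96) -> set:
    """Derive one ephemeral ID per 15-minute slot from a daily key (illustrative)."""
    return {hashlib.sha256(daily_key + slot.to_bytes(2, "big")).digest()[:16]
            for slot in range(slots)}

def exposed(observed_ids: set, published_daily_keys: list) -> bool:
    """On-device check: is any observed ID derivable from a published infected key?"""
    return any(ephemeral_ids(k) & observed_ids for k in published_daily_keys)
```

The privacy property sketched here is that the server only ever sees keys of users who report an infection, while the matching itself never leaves the device.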
ABSTRACT
Currently, the volume of sensitive content on the Internet, such as pornography and child pornography, and the amount of time that people spend online (especially children) have led to an increase in the distribution of such content (e.g., images of children being sexually abused, real-time videos of such abuse, grooming activities, etc.). It is therefore essential to have effective IT tools that automate the detection and blocking of this type of material, as manual filtering of huge volumes of data is practically impossible. The goal of this study is to carry out a comprehensive review of different learning strategies for the detection of sensitive content available in the literature, from the most conventional techniques to the most cutting-edge deep learning algorithms, highlighting the strengths and weaknesses of each, as well as the datasets used. The performance and scalability of the different strategies proposed in this work depend on the heterogeneity of the dataset, the feature extraction techniques (hashes, visual, audio, etc.) and the learning algorithms. Finally, new lines of research in sensitive-content detection are presented.
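As a minimal example of the most conventional strategy surveyed above, hash-based matching of known material, the sketch below flags files whose cryptographic digest appears in a blocklist. It is illustrative only: exact-match hashing catches known copies but not altered or novel content, which is precisely why the learned approaches reviewed here are needed.

```python
# Illustrative hash-based filtering: exact-match detection of known sensitive
# files via cryptographic digests. The blocklist entry is a placeholder value.
import hashlib

def file_digest(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

KNOWN_HASHES = {"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"}  # placeholder

def is_known_sensitive(path: str) -> bool:
    return file_digest(path) in KNOWN_HASHES
```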
ABSTRACT
Contradictions as a data quality indicator are typically understood as impossible combinations of values in interdependent data items. While the handling of a single dependency between two data items is well established, to our knowledge no common notation or structured evaluation method has yet been established for more complex interdependencies. Defining such contradictions requires specific biomedical domain knowledge, while informatics domain knowledge is responsible for the efficient implementation in assessment tools. We propose a notation of contradiction patterns that reflects the information provided and required by the different domains. We consider three parameters (α, β, θ): the number of interdependent items as α, the number of contradictory dependencies defined by domain experts as β, and the minimal number of Boolean rules required to assess these contradictions as θ. Inspection of the contradiction patterns in existing R packages for data quality assessment shows that all six examined packages implement the (2,1,1) class. We investigate more complex contradiction patterns in the biobank and COVID-19 domains, showing that the minimal number of Boolean rules may be considerably lower than the number of described contradictions. While the number of contradictions formulated by domain experts may vary, we are confident that such a notation and structured analysis of contradiction patterns helps to handle the complexity of multidimensional interdependencies within health data sets. A structured classification of contradiction checks will allow scoping of different contradiction patterns across multiple domains and effectively support the implementation of a generalized contradiction assessment framework.
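A hedged illustration of the simplest class mentioned above, (2,1,1): two interdependent items, one contradictory dependency defined by domain experts, and one Boolean rule to assess it. The field names are assumed examples, not taken from any particular assessment package.

```python
# (alpha=2, beta=1, theta=1): two items, one expert-defined contradiction,
# one Boolean rule. Here the rule checks that discharge does not precede admission.
import datetime as dt

def contradiction_discharge_before_admission(record: dict) -> bool:
    """Returns True when the record violates the dependency (contradiction found)."""
    return record["discharge_date"] < record["admission_date"]

record = {"admission_date": dt.date(2021, 3, 10),
          "discharge_date": dt.date(2021, 3, 2)}
print(contradiction_discharge_before_admission(record))  # True -> contradiction flagged
```

More complex patterns raise α and β while θ, the number of Boolean rules actually needed, can stay much smaller, which is the efficiency point made in the abstract.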
Subject(s)
COVID-19, Data Accuracy, Humans
ABSTRACT
Comprehensively identifying and monitoring health facilities where care is delivered is critical to care coordination as well as public health. This became poignantly clear during the COVID-19 pandemic. Currently, few sources exist which can provide canonical identification of healthcare facilities. Furthermore, quantifying facility-specific services and infrastructure in a standard manner ranges from insufficient to nonexistent. A health facility registry provides a central authority to store, manage, and share health facility identification, services, and resources data with a wide range of stakeholders. Such universal collection and standardization of these data may support care coordination, public health responsiveness, quality improvement, health services research, health service planning, and health policy development. This chapter introduces the concept of a facility registry and provides scenarios in which stakeholders would benefit from facility data. The chapter further discusses unique identifiers, data collection, and the metadata necessary for establishing and maintaining a facility registry. © 2023 Elsevier Inc. All rights reserved.
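The shape of such a registry entry might look like the following sketch: a canonical unique identifier paired with service and resource metadata. The fields shown are assumed examples for illustration, not a published standard.

```python
# Illustrative facility registry record: unique identifier plus services,
# infrastructure, and provenance metadata. Field names are assumptions.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class FacilityRecord:
    facility_id: str                      # canonical unique identifier
    name: str
    facility_type: str                    # e.g. "hospital", "clinic"
    location: Tuple[float, float]         # latitude, longitude
    services: List[str] = field(default_factory=list)   # e.g. ["ICU", "COVID-19 testing"]
    bed_capacity: Optional[int] = None
    last_verified: Optional[str] = None   # provenance / currency metadata

registry = {
    "FAC-000123": FacilityRecord("FAC-000123", "Central Hospital", "hospital",
                                 (40.71, -74.01), ["ICU", "COVID-19 testing"],
                                 bed_capacity=220, last_verified="2023-01-15"),
}
```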
ABSTRACT
According to the authors, the digital transformation of the global economic system, which has affected all areas of business and sectors of the economy, has led to the formation of a new business model aimed at creating a single financial and economic space without borders. This model contributes to new forms of obtaining added value and "digital dividends" by combining various technologies (for example, cloud technologies, sensors, big data, and 3D printing), as well as to the development of markets for goods and services, labor reserves, and capital through transformations at all social levels. The authors believe that all of the above opens up expanded opportunities for organizing and doing business and increases the potential for creating radically new products, services, and innovative business models focused on sustainable business development in the new conditions of digitization of the economic system. In this regard, the paper explores key approaches to the definition of the term "digital transformation of business." It identifies the trends of business digitalization and, accordingly, the factors that inhibit and drive the development of a new business model of cooperation among modern organizations. In the process of analysis, the authors determined the vector of development of business models in the context of the digital transformation of the global economic system. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
ABSTRACT
Mental health is a critical societal issue, and early screening is vital to enabling timely treatment. The rise of text-based communication provides new modalities that can be used to passively screen for mental illnesses. In this paper we present an approach to screen for anxiety and depression through the reply latency of text messages, by constructing machine learning models with reply latency features. Our models screen for anxiety with a balanced accuracy of 0.62 and an F1 of 0.73, a notable improvement over prior approaches. With the same participants, our models likewise screen for depression with a balanced accuracy of 0.70 and an F1 of 0.80. We additionally compare these results to those of models trained on data collected prior to the COVID-19 pandemic. Finally, we demonstrate generalizability of the screening by combining datasets, which yields comparable accuracy. Latency features could thus be useful in multimodal mobile mental illness screening. © 2022 ACM.
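The sketch below shows the general shape of such a pipeline: turning per-participant reply latencies into summary features and fitting a classifier evaluated with balanced accuracy and F1. It is not the authors' pipeline; the feature set, the stand-in random forest, and the placeholder data are assumptions.

```python
# Illustrative sketch: reply-latency summary features feeding a simple classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score, f1_score
from sklearn.model_selection import train_test_split

def latency_features(latencies_s):
    """Summary statistics of reply latencies (seconds) for one participant."""
    x = np.asarray(latencies_s, dtype=float)
    return [x.mean(), np.median(x), x.std(), np.percentile(x, 90)]

rng = np.random.default_rng(0)
# One feature row per participant; labels are random placeholders, not real screens.
X = np.array([latency_features(rng.lognormal(5, 1, 200)) for _ in range(100)])
y = rng.integers(0, 2, 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print(balanced_accuracy_score(y_te, pred), f1_score(y_te, pred))
```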
ABSTRACT
Intro: The COVID-19 pandemic highlighted the need for an open-source repository of line-list case data for infectious disease surveillance and research efforts. Global.health was launched in January 2020 as a global resource for public health data research. Here, we describe the data and systems underlying the Global.health datasets and summarize the project's 2.5 years of operations and the curation of the COVID-19 and monkeypox repositories. Method(s): The COVID-19 repository is curated daily through an automated system and verified by a team of researchers. The monkeypox dataset is curated manually by a team of researchers, Monday-Friday. Both repositories include metadata fields on demographics, symptomology, disease confirmation date, and others [1,2]. Data are de-identified and ingested from trusted sources, such as government public health agencies, trusted media outlets, and established open-access repositories. Finding(s): The Global.health COVID-19 dataset is the largest repository of publicly available validated line-list data in the world, with over 100 million cases from more than 100 countries, including 60+ fields of metadata, comprising over 1 billion unique data points. The monkeypox dataset has over 35,000 data entries from 100 different countries. 7,325 users accessed the COVID-19 repository and 3,005 accessed the monkeypox repository. Conclusion(s): The Global.health repositories provide verified, de-identified case data for two global outbreaks and are used by the CDC, the WHO, and other national public health organizations for surveillance and forecasting efforts. The repositories were utilized to share insights into the COVID-19 pandemic and track the monkeypox outbreak using real-time data [3-6]. We are collaborating with the WHO Hub for Pandemic and Epidemic Intelligence to improve coordination, data schemas, and downstream use of data to inform and evaluate public health policy [7]. Future work will focus on creating a 'turnkey' data system to be used in future outbreaks for quicker infectious disease surveillance. Copyright © 2023
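To illustrate the kind of check a line-list curation pipeline performs, the sketch below validates an ingested case record against a minimal set of rules. The field names and rules are assumed for illustration and are not the Global.health schema.

```python
# Illustrative line-list ingestion check: required fields present and the
# confirmation date is a plausible ISO date. Not the project's actual validator.
import datetime as dt

REQUIRED_FIELDS = {"case_id", "country", "confirmation_date"}

def validate_case(record: dict) -> list:
    errors = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    try:
        d = dt.date.fromisoformat(record.get("confirmation_date", ""))
        if d > dt.date.today():
            errors.append("confirmation_date is in the future")
    except ValueError:
        errors.append("confirmation_date is not an ISO date")
    return errors

print(validate_case({"case_id": "X1", "country": "GB",
                     "confirmation_date": "2022-07-01"}))  # [] -> record passes
```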
ABSTRACT
COVID-19 has been a worldwide emergency and continues to spread in the environment. It is crucial to keep following up on current solutions to this pandemic and to think about future epidemic prevention. Herein, a comprehensive bibliometric analysis was performed to examine different facets of research output on the environmental response to COVID-19. The relevant bibliographic dataset was queried in PubMed for literature published since the COVID-19 outbreak. A Python program was used to extract the metadata from the dataset describing research production on the environmental response to the pandemic. Key points covered in the analysis included the contributions of authors and countries to the scientific output, the strength of the collaborative network, and the main research themes. Regarding contributions, the USA was the most productive country in terms of publications and authorships, followed by China, the UK, Italy, and India. Using the activity index as a relative indicator of research reactivity, Pakistan, Saudi Arabia, and India, followed by the USA and the UK, were highly reactive to environmental and COVID-19 studies. For research collaboration, the USA demonstrated the highest level of domestic independence and Saudi Arabia had an extremely high level of international collaboration. The global research production could be covered in 20 major topics and grouped into four themes: control and prevention, public healthcare, disease research, and COVID-19 impacts. Overall, this study visualized global research reactivity and interactive networks in the environmental response to COVID-19 and provided a basis for utilizing Python in rapid literature reviews for strategizing scientific solutions to future epidemic prevention. Copyright © 2023 John Libbey Eurotext. All rights reserved.
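The activity index used above is a standard relative indicator: a country's share of the topic's output divided by its share of a baseline corpus. The sketch below computes it from publication counts; it is not the authors' script, and the counts shown are placeholders.

```python
# Activity index (AI) per country:
#   AI = (country share in topic corpus) / (country share in baseline corpus)
# AI > 1 means the country is more active on the topic than its overall output suggests.
from collections import Counter

def activity_index(topic_counts: Counter, world_counts: Counter) -> dict:
    topic_total = sum(topic_counts.values())
    world_total = sum(world_counts.values())
    return {c: (topic_counts[c] / topic_total) / (world_counts[c] / world_total)
            for c in topic_counts if world_counts.get(c)}

topic = Counter({"USA": 120, "China": 90, "Pakistan": 30})        # placeholder counts
world = Counter({"USA": 4000, "China": 3500, "Pakistan": 400})
print(activity_index(topic, world))   # Pakistan's AI > 1 indicates high reactivity
```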
ABSTRACT
Since their proposal in 2016, the FAIR principles have been widely discussed by the different communities and initiatives involved in developing infrastructures to enhance support for data findability, accessibility, interoperability, and reuse. One of the challenges in implementing these principles lies in defining a well-delimited process with organized and detailed actions. This paper presents a workflow of actions that is being adopted in the VODAN BR pilot for generating FAIR (meta)data for COVID-19 research. It provides an understanding of each step of the process and establishes its contribution. In this work, we also evaluate potential tools to (semi-)automate (meta)data treatment whenever possible. Although defined for a particular use case, this workflow is expected to be applicable to other epidemiological research and other domains, benefiting the entire scientific community.
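As a minimal illustration of the machine-actionable (meta)data such a workflow produces, the sketch below serializes a schema.org Dataset description as JSON-LD. All values are placeholders, not actual VODAN BR records.

```python
# Illustrative FAIR metadata record: a schema.org Dataset described in JSON-LD.
import json

dataset_metadata = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "Example COVID-19 clinical research dataset",
    "identifier": "https://example.org/id/placeholder",         # persistent identifier (Findable)
    "license": "https://creativecommons.org/licenses/by/4.0/",  # explicit reuse terms (Reusable)
    "keywords": ["COVID-19", "FAIR", "clinical data"],
    "distribution": {
        "@type": "DataDownload",
        "encodingFormat": "text/csv",                            # open format (Interoperable)
        "contentUrl": "https://example.org/data/cases.csv",      # resolvable access point (Accessible)
    },
}
print(json.dumps(dataset_metadata, indent=2))
```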
ABSTRACT
Companies are investing in big data analytics capabilities as they look for ways to understand and innovate their business models by leveraging digital transformation. We explore this phenomenon from the perspective of the retail grocery business, where evolving consumer attitudes and behaviors, rapid technological advances, new competitive pressures, laser-thin margins, and the COVID-19 pandemic have accelerated the pace of digital transformation. We specifically analyze the role of the big data analytics capabilities of the top five grocery companies in the United States in light of their digital transformation initiatives. We find that retailers are making major investments in big data analytics capabilities to power all aspects of their digital ecosystem (the online shopping experience for the digital consumer, digital store operations, and pickup and delivery mechanisms) to enhance shopping experience, customer loyalty, revenue, and ultimately profit. © 2022 IEEE Computer Society. All rights reserved.
ABSTRACT
This paper presents the Coronavirus Disease Ontology (CovidO), a superset of the available Coronavirus (COVID-19) ontologies that covers all the possible dimensions. CovidO consists of an ontological network of distinct dimensions for storing coronavirus information. CovidO has 175 classes, 169 properties, 4141 triples, and 645 individuals, with 264 nodes and 308 edges. CovidO is based on standard inputs from coronavirus disease data sources, activities, and related resources; it collects and validates records for decision-making that are used to set guidelines and recommend resources. We present CovidO to a growing community of artificial intelligence project developers as pure metadata and illustrate its importance, quality, and impact. The ontology developed in this work addresses the grouping of existing ontologies to build a global data model. © 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
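To show the kind of structure an ontology network like this stores, the sketch below builds and queries a few RDF triples with rdflib. The class and property names are invented for illustration and are not CovidO's actual vocabulary.

```python
# Illustrative RDF triples and a simple pattern query using rdflib.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/covido#")   # placeholder namespace
g = Graph()
g.add((EX.Case_001, RDF.type, EX.ConfirmedCase))
g.add((EX.Case_001, EX.hasSymptom, EX.Fever))
g.add((EX.Case_001, EX.reportedIn, Literal("Wuhan")))

print(len(g))                                   # number of triples stored: 3
for s, p, o in g.triples((None, EX.hasSymptom, None)):
    print(s, "has symptom", o)
```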
ABSTRACT
BACKGROUND: Biomedical researchers are strongly encouraged to make their research outputs more Findable, Accessible, Interoperable, and Reusable (FAIR). While many biomedical research outputs are more readily accessible through open data efforts, finding relevant outputs remains a significant challenge. Schema.org is a metadata vocabulary standardization project that enables web content creators to make their content more FAIR. Leveraging Schema.org could benefit biomedical research resource providers, but it can be challenging to apply Schema.org standards to biomedical research outputs. We created an online browser-based tool that empowers researchers and repository developers to utilize Schema.org or other biomedical schema projects. RESULTS: Our browser-based tool includes features that help address many of the barriers to Schema.org compliance, such as: the ability to easily browse for relevant Schema.org classes, the ability to extend and customize a class to be more suitable for biomedical research outputs, the ability to create data validation to ensure adherence of a research output to a customized class, and the ability to register a custom class to our schema registry, enabling others to search and re-use it. We demonstrate the use of our tool with the creation of the Outbreak.info schema, a large multi-class schema for harmonizing various COVID-19 related resources. CONCLUSIONS: We have created a browser-based tool to empower biomedical research resource providers to leverage Schema.org classes to make their research outputs more FAIR.
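The sketch below illustrates the validation idea in general terms: checking that a resource description carries the properties a customized Dataset class marks as required. The JSON Schema shown is an assumed example, not the tool's actual output format.

```python
# Illustrative validation of a resource description against a customized class,
# expressed here as a JSON Schema with required properties and simple constraints.
import jsonschema

dataset_schema = {
    "type": "object",
    "required": ["@type", "name", "description", "url"],
    "properties": {
        "@type": {"const": "Dataset"},
        "name": {"type": "string"},
        "description": {"type": "string", "minLength": 50},
        "url": {"type": "string"},
    },
}

record = {"@type": "Dataset",
          "name": "Example outbreak surveillance dataset",
          "description": "A placeholder description long enough to satisfy the minimum length rule.",
          "url": "https://example.org/dataset/1"}
jsonschema.validate(record, dataset_schema)   # raises ValidationError if non-compliant
```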
Subject(s)
Biomedical Research, COVID-19, Humans, Metadata
ABSTRACT
Considering how many materials and formats can fall under the rubric of "special collections," it seems like a daunting endeavor to compile a single handbook which covers all their management and care, but Alison Cullingford has done so with great finesse. The book is patently a product of its time: in the introduction the author addresses the impact of the COVID-19 pandemic and how "the rapid digital pivot or shift meant remote access to collections and metadata became more important than ever, for staff and users" (xix). In addition, the "voices for Black Lives Matter" have made the special collections community reexamine practices where "Special Collections have been shaped by legacies of empire, colonialism and slavery" (xix). Throughout the text the impact of this zeitgeist can be seen.
ABSTRACT
Optimising HVAC operations towards human wellness and energy efficiency is a major challenge for smart facilities management, especially amid COVID-19 conditions. Although IoT sensors and deep learning have been applied to support HVAC operations, the loss of forecasting accuracy in recursive prediction largely hinders their application. This study presents a data-driven predictive control method with time-series forecasting (TSF) and reinforcement learning (RL) to examine various sensor metadata for HVAC system optimisation. This involves the development and validation of 16 Long Short-Term Memory (LSTM) based architectures with bi-directional processing, convolution, and attention mechanisms. The TSF models are comprehensively evaluated under independent, short-term recursive, and long-term recursive prediction scenarios. The optimal TSF models are integrated with a Soft Actor-Critic RL agent to analyse sensor metadata and optimise HVAC operations, achieving 17.4% energy savings and 16.9% thermal comfort improvement in the surrogate environment. The results show that recursive prediction leads to a significant reduction in model accuracy, and the effect is more pronounced in the temperature-humidity prediction model. The attention mechanism significantly improves prediction performance in both recursive and independent prediction scenarios. This study contributes new data-driven methods for smart HVAC operations in IoT-enabled intelligent buildings towards a human-centric built environment. © 2023 The Authors
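The recursive prediction scenario evaluated above amounts to feeding a one-step model's own outputs back as inputs for subsequent steps, which is where error accumulates. The sketch below shows that loop with a trivial persistence model standing in for the study's LSTM architectures; the window length and stand-in model are assumptions.

```python
# Recursive (multi-step) forecasting: the one-step model's prediction is appended
# to the input window and used for the next step, so errors compound over the horizon.
import numpy as np

def recursive_forecast(model, history: np.ndarray, steps: int) -> np.ndarray:
    window = list(history)
    preds = []
    for _ in range(steps):
        next_val = model(np.asarray(window[-24:]))   # last 24 observations as input
        preds.append(next_val)
        window.append(next_val)                      # feed the prediction back in
    return np.asarray(preds)

persistence = lambda window: float(window[-1])       # stand-in one-step forecaster
print(recursive_forecast(persistence, np.sin(np.arange(48) / 4.0), steps=6))
```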
ABSTRACT
When modeling and fitting various kinds of epidemic outbreaks, choosing parameter values has always been an important practical problem for many scholars. In most existing studies, the authors select fixed parameters by referring to the relevant literature or by combining it with medical experiments. With the help of the Euler difference transformation and the characteristics of the solutions of linear equations, we propose a data-driven dynamic update strategy for epidemic diffusion parameters in this study to overcome the above limitation. The method can help decision-makers calculate the optimal parameters of epidemic spread by incorporating real-time updated data. A case study is conducted with the COVID-19 data of Wuhan. The results show that the dynamic parameter update strategy designed in this paper can effectively improve the accuracy of epidemic outbreak evolution predictions, providing important decision support for the accurate allocation of government emergency resources. © 2023 Northeast University. All rights reserved.
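A hedged sketch of the general idea (not the paper's exact scheme): Euler discretisation of the SIR equations turns each day's increments into linear equations in the transmission and recovery rates, which can be re-solved by least squares as new data arrive.

```python
# Data-driven parameter refresh for a standard SIR model:
#   dS = -beta * S * I / N * dt      dR = gamma * I * dt
# Each equation is linear in its parameter, so beta and gamma follow from
# one-regressor least squares over the observed daily increments.
import numpy as np

def update_sir_parameters(S, I, R, N, dt=1.0):
    S, I, R = map(np.asarray, (S, I, R))
    dS, dR = np.diff(S), np.diff(R)
    x_beta = -(S[:-1] * I[:-1] / N) * dt
    x_gamma = I[:-1] * dt
    beta = float(np.dot(x_beta, dS) / np.dot(x_beta, x_beta))
    gamma = float(np.dot(x_gamma, dR) / np.dot(x_gamma, x_gamma))
    return beta, gamma

# Each day, append the newest counts and call update_sir_parameters again to
# obtain refreshed beta and gamma for the next forecast.
```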
ABSTRACT
The impact of technology on people's lives has grown continuously. The consumption of online news is an important trend as the share of the population with internet access grows rapidly over time. Global statistics show that internet and social media usage continues to increase, and recent developments like the COVID-19 pandemic have amplified this trend even more. However, the credibility of online news is a critical issue to consider, since it directly impacts society and people's mindsets. The majority of users tend to instinctively believe what they encounter and draw conclusions based upon it. It is essential that consumers have an understanding or prior knowledge of the news and its source before drawing conclusions. This research proposes a hybrid model to predict the accuracy of a particular news article in Sinhala text. The model combines general content-based analysis techniques using machine learning/deep learning classifiers with social network related features of the news source to make predictions. A scoring mechanism provides an overall score for a given news item by combining two independent scores: an Accuracy Score (from analyzing the news content) and a Credibility Score (from a scoring mechanism on social network features of the news source). The hybrid model containing the Passive Aggressive Classifier achieved the highest accuracy, 88%, and the models containing deep neural networks achieved accuracies of around 75-80%. These results highlight that the proposed method could efficiently serve as a fake news detection mechanism for news content in the Sinhala language. Also, since there is no publicly available dataset for fake news detection in Sinhala, the datasets produced in this work can be considered an additional contribution of this research. © 2022 IEEE.
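The sketch below shows the shape of the two-part scoring idea: a content-based score from a Passive Aggressive classifier over TF-IDF features combined with a separate source-credibility score. The placeholder English texts, the tiny training set, and the equal 0.5/0.5 weighting are assumptions for illustration, not the paper's configuration or data.

```python
# Illustrative hybrid scoring: content model (Passive Aggressive over TF-IDF)
# plus an externally supplied source credibility score.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import PassiveAggressiveClassifier
from sklearn.pipeline import make_pipeline

# Placeholder texts stand in for Sinhala news articles; labels: 1 = real, 0 = fake.
texts = ["government announces new relief programme details",
         "miracle cure claim spreads across social media",
         "central bank publishes quarterly inflation report",
         "celebrity secretly replaced by lookalike says post"]
labels = [1, 0, 1, 0]

content_model = make_pipeline(TfidfVectorizer(), PassiveAggressiveClassifier(random_state=0))
content_model.fit(texts, labels)

def overall_score(article_text: str, credibility_score: float) -> float:
    """Combine the content-based Accuracy Score with the source Credibility Score."""
    accuracy_score = float(content_model.decision_function([article_text])[0] > 0)
    return 0.5 * accuracy_score + 0.5 * credibility_score   # assumed equal weighting

print(overall_score("relief programme details announced today", credibility_score=0.8))
```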