1.
Sci Rep ; 12(1): 13468, 2022 08 05.
Article in English | MEDLINE | ID: mdl-35931710

ABSTRACT

We approach the task of detecting the illicit movement of cultural heritage from a machine learning perspective by presenting a framework for detecting a known artefact in a new and unseen image. To this end, we explore the machine learning problem of instance classification for large archaeological image datasets, i.e. where each individual object (instance) is itself a class to which all of the multiple images of that object belong. We focus on a wide variety of objects in the Durham Oriental Museum, from which we build a dataset of over 24,502 images of 4332 unique object instances. We experiment with state-of-the-art convolutional neural network models, the smaller variations of which are suitable for deployment in mobile applications. We find that the exact object instance of a given image can be predicted from among 4332 others with ~ 72% accuracy, showing how effectively machine learning can detect a known object from a new image. We demonstrate that accuracy significantly improves as the number of images per object instance increases (up to ~ 83%), with an ensemble of classifiers scoring as high as 84%. We find that the correct instance is found in the top 3, 5, or 10 predictions of our best models ~ 91%, ~ 93%, or ~ 95% of the time respectively. Our findings contribute to the emerging overlap of machine learning and cultural heritage, and highlight the potential available to future applications and research.


Subject(s)
Deep Learning , Artifacts , Machine Learning , Neural Networks, Computer
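The top-3/5/10 figures reported in the abstract above are top-k accuracies: a prediction counts as correct if the true instance appears among the model's k highest-scoring classes. A minimal sketch of that metric (the function name, toy scores, and labels are illustrative, not from the paper):

```python
import numpy as np

def top_k_accuracy(logits, labels, k):
    """Fraction of samples whose true class is among the k highest-scoring classes."""
    # Sort class scores in descending order and keep the top-k indices per sample.
    topk = np.argsort(-logits, axis=1)[:, :k]
    hits = (topk == labels[:, None]).any(axis=1)
    return hits.mean()

# Toy example: 4 samples, 5 classes (real use: 4332 classes, one per object instance).
logits = np.array([
    [0.1, 0.9, 0.0, 0.0, 0.0],   # top prediction: class 1
    [0.2, 0.1, 0.7, 0.0, 0.0],   # top prediction: class 2
    [0.5, 0.4, 0.0, 0.1, 0.0],   # top prediction: class 0
    [0.0, 0.3, 0.3, 0.4, 0.0],   # top prediction: class 3
])
labels = np.array([1, 2, 1, 0])

print(top_k_accuracy(logits, labels, 1))  # 0.5
print(top_k_accuracy(logits, labels, 3))  # 0.75
```

With k=1 this is ordinary classification accuracy; widening k shows how often the correct instance is at least shortlisted, which is the practically useful quantity for a museum lookup tool.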
2.
PeerJ Comput Sci ; 8: e974, 2022.
Article in English | MEDLINE | ID: mdl-35721409

ABSTRACT

Bilinear pooling (BLP) refers to a family of operations recently developed for fusing features from different modalities, predominantly for visual question answering (VQA) models. Successive BLP techniques have yielded higher performance at lower computational expense, yet at the same time they have drifted further from the original motivational justification of bilinear models, instead becoming empirically motivated by task performance. Furthermore, despite significant success in text-image fusion in VQA, BLP has not yet gained such prominence in video question answering (video-QA). Though BLP methods have continued to perform well on video tasks when fusing vision and non-textual features, BLP has recently been overshadowed by other vision and textual feature fusion techniques in video-QA. We aim to add a new perspective to the empirical and motivational drift in BLP. We take a step back and discuss the motivational origins of BLP, highlighting the often-overlooked parallels to neurological theories (Dual Coding Theory and the Two-Stream Model of Vision). We seek to carefully and experimentally ascertain the empirical strengths and limitations of BLP as a multimodal text-vision fusion technique in video-QA using two models (the TVQA baseline and the heterogeneous-memory-enhanced 'HME' model) and four datasets (TVQA, TGIF-QA, MSVD-QA, and EgoVQA). We examine the impact both of simply replacing feature concatenation in the existing models with BLP, and of a modified version of the TVQA baseline that accommodates BLP, which we name the 'dual-stream' model. We find that our relatively simple integration of BLP does not increase, and mostly harms, performance on these video-QA benchmarks. Drawing on our results, recent work in BLP for video-QA, and recently proposed theoretical multimodal fusion taxonomies, we offer insight into why BLP-driven performance gain for video-QA benchmarks may be more difficult to achieve than in earlier VQA models.
We share our perspective on, and suggest solutions for, the key issues we identify with BLP techniques for multimodal fusion in video-QA. We look beyond the empirical justification of BLP techniques and propose both alternatives and improvements to multimodal fusion by drawing neurological inspiration from Dual Coding Theory and the Two-Stream Model of Vision. We qualitatively highlight the potential for neurological inspirations in video-QA by identifying the relative abundance of psycholinguistically 'concrete' words in the vocabularies for each of the text components (e.g., questions and answers) of the four video-QA datasets we experiment with.
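To make the fusion step concrete: concatenation simply stacks the two modality vectors, while the Hadamard-product low-rank family of BLP operators (one common variant of the bilinear pooling the abstract discusses, not necessarily the exact operator used in the paper) projects both modalities into a shared low-rank space and fuses them multiplicatively. A minimal sketch with illustrative dimensions and randomly initialised weights:

```python
import numpy as np

rng = np.random.default_rng(0)

def low_rank_bilinear_pool(x, y, U, V, P):
    """Hadamard-product low-rank bilinear pooling: project each modality
    to a shared rank-d space, fuse by element-wise product, then project
    to the output dimension. Approximates a full bilinear form without
    materialising the (d_x * d_y)-sized outer product."""
    return P @ (np.tanh(U @ x) * np.tanh(V @ y))

# Illustrative sizes (assumptions, not from the paper).
d_text, d_vis, rank, d_out = 300, 512, 64, 128
U = rng.standard_normal((rank, d_text)) * 0.01
V = rng.standard_normal((rank, d_vis)) * 0.01
P = rng.standard_normal((d_out, rank)) * 0.01

q = rng.standard_normal(d_text)   # e.g. a pooled question embedding
v = rng.standard_normal(d_vis)    # e.g. a pooled video-frame feature

fused = low_rank_bilinear_pool(q, v, U, V, P)
concat = np.concatenate([q, v])   # the baseline fusion BLP would replace

print(fused.shape, concat.shape)  # (128,) (812,)
```

The multiplicative interaction lets every text dimension gate every visual dimension (through the shared rank space), whereas concatenation leaves cross-modal interaction entirely to the downstream layers; the paper's finding is that in video-QA this stronger interaction does not straightforwardly translate into better benchmark performance.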
