Conscious multisensory integration: Introducing a universal contextual field in biological and deep artificial neural networks
Abstract
Conscious awareness plays a major role in human cognition and adaptive behaviour, though its function in multisensory integration is not yet fully understood. Hence, questions remain: How does the brain integrate incoming multisensory signals with respect to different external environments? How are the roles of these multisensory signals defined so that they adhere to the anticipated behavioural constraints of the environment? This work seeks to articulate a novel theory of conscious multisensory integration that addresses these research challenges. Specifically, the well-established contextual field (CF) in pyramidal cells and coherent infomax theory [1][2] is split into two functionally distinct integrated input fields: the local contextual field (LCF) and the universal contextual field (UCF). The LCF defines the modulatory sensory signal coming from other parts of the brain (in principle from anywhere in space-time), and the UCF defines the outside environment and anticipated behaviour (based on past learning and reasoning). Both the LCF and the UCF are integrated with the receptive field (RF) to develop a new class of contextually-adaptive neuron (CAN), which adapts to changing environments. The proposed theory is evaluated using human contextual audio-visual (AV) speech modelling. Simulation results provide new insights into contextual modulation and selective multisensory information amplification/suppression. The central hypothesis reviewed here suggests that the pyramidal cell, in addition to the classical excitatory and inhibitory signals, receives LCF and UCF inputs. The UCF (as a steering force or tuner) plays a decisive role in precisely selecting whether to amplify or suppress the transmission of relevant or irrelevant feedforward signals, without changing their content (e.g., which information is worth paying more attention to?). This, as opposed to the unconditional excitatory and inhibitory activity in existing deep neural networks (DNNs), is termed conditional amplification/suppression.
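As a minimal illustration of the proposed integration, the sketch below (Python/NumPy) shows one way a contextually-adaptive neuron could be expressed computationally. The multiplicative gating, the sigmoid squashing, and all weight and parameter names are assumptions made for illustration, not the paper's exact formulation; the only property the sketch is meant to convey is that the LCF and UCF scale the transmission of the RF drive without altering its content.

```python
# Minimal, illustrative sketch of a contextually-adaptive neuron (CAN).
# The receptive field (RF) carries the content; the local contextual field (LCF)
# and universal contextual field (UCF) only scale how strongly that content is
# transmitted. Gating form and parameter names are illustrative assumptions.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ContextuallyAdaptiveNeuron:
    def __init__(self, n_rf, n_lcf, n_ucf, seed=0):
        rng = np.random.default_rng(seed)
        self.w_rf = rng.normal(scale=0.1, size=n_rf)    # feedforward (content) weights
        self.w_lcf = rng.normal(scale=0.1, size=n_lcf)  # local contextual weights
        self.w_ucf = rng.normal(scale=0.1, size=n_ucf)  # universal contextual weights

    def forward(self, x_rf, x_lcf, x_ucf):
        r = self.w_rf @ x_rf      # RF drive: the information content
        c_l = self.w_lcf @ x_lcf  # local context, e.g. another sensory stream
        c_u = self.w_ucf @ x_ucf  # universal context: environment / anticipated behaviour
        # Conditional amplification/suppression: the contexts set a gain in (0, 2)
        # that scales the RF drive without changing its content.
        gain = 2.0 * sigmoid(c_u) * sigmoid(c_l)
        return gain * r

# Example: identical RF input under two different contexts gives different transmission.
neuron = ContextuallyAdaptiveNeuron(n_rf=4, n_lcf=3, n_ucf=2)
x_rf = np.array([0.5, -0.2, 0.8, 0.1])
print(neuron.forward(x_rf, x_lcf=np.ones(3), x_ucf=np.ones(2)))
print(neuron.forward(x_rf, x_lcf=-np.ones(3), x_ucf=-np.ones(2)))
```

By contrast, a conventional DNN unit would sum all of these inputs into a single drive, so contextual inputs could change the content itself rather than merely its transmission.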
Citation: Adeel, A. (2020) Conscious multisensory integration: Introducing a universal contextual field in biological and deep artificial neural networks, Frontiers in Computational Neuroscience, 14:15
Publisher: Frontiers Media S.A.
Journal: Frontiers in Computational Neuroscience
Additional Links: https://www.frontiersin.org/articles/10.3389/fncom.2020.00015/full
Type: Journal article
Language: en
Description: © 2020 The Authors. Published by Frontiers Media. This is an open access article available under a Creative Commons licence. The published version can be accessed at the following link on the publisher's website: https://doi.org/10.3389/fncom.2020.00015
ISSN: 1662-5188
DOI: 10.3389/fncom.2020.00015
License: Except where otherwise noted, this item's license is described at https://creativecommons.org/licenses/by/4.0/ (CC BY 4.0).
Related items
Showing items related by title, author, creator and subject.
-
Contextual deep learning-based audio-visual switching for speech enhancement in real-world environments
Adeel, Ahsan; Gogate, Mandar; Hussain, Amir (Elsevier BV, 2019-08-19)
Human speech processing is inherently multi-modal, where visual cues (lip movements) help better understand speech in noise. Our recent work [1] has shown that lip-reading-driven, audio-visual (AV) speech enhancement can significantly outperform benchmark audio-only approaches at low signal-to-noise ratios (SNRs). However, consistent with our cognitive hypothesis, visual cues were found to be relatively less effective for speech enhancement at high SNRs or low levels of background noise, whereas audio-only cues worked well enough. Therefore, a more cognitively-inspired, context-aware AV approach is required that contextually utilises both visual and noisy audio features, and thus more effectively accounts for different noisy conditions. In this paper, we introduce a novel context-aware AV framework that contextually exploits AV cues with respect to different operating conditions to estimate clean audio, without requiring any prior SNR estimation. The switching module is developed by integrating a convolutional neural network (CNN) and a long short-term memory (LSTM) network, which learns to switch between visual-only (V-only), audio-only (A-only), and combined audio-visual cues at low, high, and moderate SNR levels, respectively. For testing, the estimated clean audio features are used by an enhanced visually-derived Wiener filter (EVWF) for noisy speech filtering. The context-aware AV speech enhancement framework is evaluated under dynamic real-world scenarios (including cafe, street, bus, and pedestrian) at different SNR levels (ranging from low to high SNRs), using the benchmark Grid and CHiME-3 corpora. For objective testing, perceptual evaluation of speech quality (PESQ) is used to evaluate the quality of the restored speech. For subjective testing, the standard mean opinion score (MOS) method is used. Comparative experimental results show the superior performance of our context-aware AV approach over A-only, V-only, spectral subtraction (SS), and log-minimum mean square error (LMMSE) based speech enhancement methods, at both low and high SNRs. These preliminary findings demonstrate the capability of our proposed approach to deal with spectro-temporal variations in any real-world noisy environment by contextually exploiting the complementary strengths of audio and visual cues. In conclusion, our deep learning-driven AV framework is posited as a benchmark resource for the multi-modal speech processing and machine learning communities.
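As a rough structural illustration only, the sketch below (TensorFlow/Keras) shows one way a CNN + LSTM switching module of the kind described above could be wired: a small convolutional front end over noisy spectrogram frames feeds an LSTM that makes a three-way choice between audio-only, visual-only, and audio-visual cues. The input shape, layer sizes, and softmax head are assumptions for illustration and do not reproduce the authors' architecture.

```python
# Illustrative CNN + LSTM switching module: classify which cue set to rely on.
# Input shape and layer sizes are assumptions, not the published architecture.
import tensorflow as tf
from tensorflow.keras import layers, models

N_FRAMES, N_FREQ = 25, 128  # assumed spectrogram window: 25 frames x 128 frequency bins

switch = models.Sequential([
    layers.Input(shape=(N_FRAMES, N_FREQ, 1)),
    layers.Conv2D(16, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((1, 2)),
    layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((1, 2)),
    # Collapse frequency and channel axes so each time frame becomes a feature vector.
    layers.Reshape((N_FRAMES, -1)),
    layers.LSTM(64),
    layers.Dense(3, activation="softmax"),  # 0: audio-only, 1: visual-only, 2: audio-visual
])
switch.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
               metrics=["accuracy"])
switch.summary()
```

At inference time, the predicted class would simply route the corresponding features to the chosen enhancement path (e.g. the EVWF stage), without any explicit prior SNR estimation.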
-
A contextual AR model based system for on-site construction planning
Heesom, David; Moore, Nigel Jonathan (University of Wolverhampton, 2013)
The creation of an effective construction schedule is fundamental to the successful completion of a construction project. Effectively communicating the temporal and spatial details of this schedule is vital; however, current planning approaches often lead to multiple interpretations or misinterpretations of the schedule throughout the planning team. Four Dimensional Computer Aided Design (4D CAD) has emerged over the last twenty years as an effective tool during construction project planning. In recent years, Building Information Modelling (BIM) has emerged as a valuable approach to construction informatics throughout the whole lifecycle of a building. Additionally, emerging trends in location-aware and wearable computing provide future potential for untethered, contextual visualisation and data delivery away from the office. The purpose of this study was to develop a novel computer-based approach to facilitate on-site 4D construction planning through interaction with a 3D construction model and corresponding building information data in outdoor Augmented Reality (AR). Based on a wide-ranging literature review, a conceptual framework was put forward to represent the software development requirements to support the sequencing of construction tasks in AR. Based on this framework, an approach was developed that represented the main processes required to plan a construction sequence using an on-site, model-based 4D methodology. Using this proposed approach, a prototype software tool, 4DAR, was developed. The implemented tool facilitated the mapping of elements within an interactive 3D model to corresponding BIM data objects, providing an interface for two-way communication with the underlying Industry Foundation Classes (IFC) data model. Positioning data from RTK-GPS and an electronic compass enabled the geo-located 3D model to be registered in world coordinates and visualised using a head-mounted display fitted with a forward-facing video camera. The scheduling of construction tasks was achieved using a novel interactive technique that negated the need for a previous construction schedule to be input into the system. The resulting 4D simulation can be viewed at any time during the scheduling process, allowing an iterative approach to project planning to be adopted. Furthermore, employing the IFC file as a central read/write repository for schedule data reduces the amount of disparate documentation and centralises the storage of schedule information, while improving communication and facilitating collaborative working practices within a project planning team. Postgraduate students and construction professionals evaluated the implemented prototype tool to test its usefulness for construction planning requirements. It emerged from the evaluation sessions that the implemented tool had achieved the essential requirements highlighted in the conceptual framework and proposed approach. Furthermore, the evaluators expressed that the implemented software and the proposed novel approach to construction planning had the potential to assist with the planning process for both experienced and inexperienced construction planners.
The following contributions to knowledge have been made by this study in the areas of 4D CAD, construction applications of augmented reality, and Building Information Modelling:
- 4D construction planning in outdoor Augmented Reality (AR)
- The development of a novel 4D planning approach through decomposition
- The deployment of Industry Foundation Classes (IFC) in AR
- Leveraging IFC files for centralised data management within a real-time planning and visualisation environment
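Purely as a hypothetical illustration of the data-management idea above (schedule data held centrally and keyed to model elements rather than spread across separate planning documents), the short Python sketch below uses invented class names, fields, and placeholder GUIDs; a real implementation would read and write IfcTask entities in the IFC file itself, for example via a library such as IfcOpenShell.

```python
# Hypothetical sketch: schedule data keyed to model-element GUIDs in one central store.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ScheduledTask:
    name: str
    sequence: int                                            # position in the construction sequence
    element_guids: List[str] = field(default_factory=list)   # linked model elements

@dataclass
class CentralSchedule:
    tasks: Dict[str, ScheduledTask] = field(default_factory=dict)

    def assign(self, task_name: str, sequence: int, guid: str) -> None:
        """Attach a model element to a task, creating the task if needed."""
        task = self.tasks.setdefault(task_name, ScheduledTask(task_name, sequence))
        task.element_guids.append(guid)

    def simulation_order(self) -> List[ScheduledTask]:
        """Return tasks in the order a 4D simulation would reveal them."""
        return sorted(self.tasks.values(), key=lambda t: t.sequence)

# Placeholder GUIDs for illustration only.
schedule = CentralSchedule()
schedule.assign("Erect ground-floor walls", 1, "2O2Fr$t4X7Zf8NOew3FLOH")
schedule.assign("Install first-floor slab", 2, "1hOSvn6df7F8_7GcBWlRrW")
for task in schedule.simulation_order():
    print(task.sequence, task.name, task.element_guids)
```

Keeping a single store of task-to-element links is what allows the 4D simulation to be regenerated at any point during planning.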
-
The importance of contextualization when developing pressure intervention: An illustration among age-group professional soccer players
Devonport, Tracey; Kent, Sofie; Lane, Andrew; Nicholls, Wendy (Psychreg, 2020-06-01)
The need for interventions that help adolescents cope with pressure is widely recognised (Yeager et al., 2018). However, a recent systematic review indicates that contextualising the pressure intervention is often overlooked (Kent et al., 2018), which likely detracts from intervention effectiveness. The focus of contextualisation is to identify, from the perspective of intended intervention recipients, pressure-inducing incentives and factors facilitative and debilitative of performance under pressure. The present case study illustrates a process of contextualisation among age-group professional soccer players. Thirty-two male academy soccer players (11–12 years, n = 8; 13–14 years, n = 8; 15–16 years, n = 8; 17–18 years, n = 8) participated in one of eight focus groups. Informed by Baumeister and Showers' (1986) definition of pressure, five situational and two personal incentives were deductively identified. Fletcher and Sarkar's (2012) model of psychological resilience was used to identify perceived protective and debilitative factors of performance under pressure. Supporting contextualisation, recommendations for integrating the identified incentives and protective factors into a pressure training intervention are presented. The resultant understandings are also of value to those working with adolescents.