<p><strong>UH Biocomputation Group - Emil Dmitruk</strong></p>
<p><strong>Edge-Centric Functional Network Representations of Human Cerebral Cortex Reveal Overlapping System-Level Architecture</strong> (2023-03-30)</p>
<p>In this week's Journal Club session, Emil Dmitruk will talk about the paper "Edge-Centric Functional Network Representations of Human Cerebral Cortex Reveal Overlapping System-Level Architecture".</p>
<hr class="docutils" />
<p>Network neuroscience has relied on a node-centric network model in which cells,
populations and regions are linked to one another via anatomical or functional
connections. This model cannot account for interactions of edges with one
another. In this study, we developed an edge-centric network model that
generates the constructs "edge time series" and "edge functional connectivity"
(eFC). Using network analysis, we show that, at rest, eFC is consistent across
datasets and reproducible within the same individual over multiple scan
sessions. We demonstrate that clustering eFC yields communities of edges that
naturally divide the brain into overlapping clusters, with regions in
sensorimotor and attentional networks exhibiting the greatest levels of
overlap. We show that eFC is systematically modulated by variation in sensory
input. In future work, the edge-centric approach could be useful for
identifying novel biomarkers of disease, characterizing individual variation
and mapping the architecture of highly resolved neural circuits.</p>
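<p>As a concrete illustration of the two constructs named above (a minimal sketch in plain Python with made-up inputs, not the authors' code): the edge time series for a pair of regions is the element-wise product of their z-scored signals, and eFC between two edges is the normalised dot product of their edge time series.</p>

```python
import math

def zscore(ts):
    """Standardise a time series to zero mean and unit (population) variance."""
    n = len(ts)
    mu = sum(ts) / n
    sd = math.sqrt(sum((x - mu) ** 2 for x in ts) / n)
    return [(x - mu) / sd for x in ts]

def edge_time_series(ts_i, ts_j):
    """Co-fluctuation of regions i and j at each time point; its time
    average equals the Pearson correlation of the two regions."""
    return [a * b for a, b in zip(zscore(ts_i), zscore(ts_j))]

def efc(ets_a, ets_b):
    """Edge functional connectivity: normalised dot product of two
    edge time series."""
    num = sum(a * b for a, b in zip(ets_a, ets_b))
    den = math.sqrt(sum(a * a for a in ets_a) * sum(b * b for b in ets_b))
    return num / den
```

<p>Clustering the full eFC matrix over all region pairs is what yields the overlapping edge communities described in the abstract.</p>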
<div class="line-block">
<div class="line"><br /></div>
</div>
<p>Papers:</p>
<ul class="simple">
<li>J. Faskowitz, F. Esfahlani, Y. Jo, O. Sporns, R. Betzel, <a class="reference external" href="https://doi.org/10.1038/s41593-020-00719-y">"Edge-Centric Functional Network Representations of Human Cerebral Cortex Reveal Overlapping System-Level Architecture"</a>, 2020, Nature Neuroscience, 23, 1644--1654</li>
<li>J. Faskowitz, R. Betzel, O. Sporns, <a class="reference external" href="https://doi.org/10.1162/netn_a_00204">"Edges in Brain Networks: Contributions to Models of Structure and Function"</a>, 2022, Network Neuroscience, 6, 1--28</li>
<li>L. Novelli, A. Razi, <a class="reference external" href="https://doi.org/10.1038/s41467-022-29775-7">"A Mathematical Perspective on Edge-Centric Brain Functional Connectivity"</a>, 2022, Nature Communications, 13, 2693</li>
</ul>
<p><strong>Date:</strong> 2023/03/31 <br />
<strong>Time:</strong> 14:00 <br />
<strong>Location:</strong> online</p>
<p><strong>Uncovering the Topology of Time-Varying fMRI Data Using Cubical Persistence</strong> (2022-03-16)</p>
<p>In this week's Journal Club session, Emil Dmitruk will talk about the paper "Uncovering the Topology of Time-Varying fMRI Data Using Cubical Persistence".</p>
<hr class="docutils" />
<p>Functional magnetic resonance imaging (fMRI) is a crucial technology
for gaining insights into cognitive processes in humans. Data amassed
from fMRI measurements result in volumetric data sets that vary over
time. However, analysing such data presents a challenge due to the
large degree of noise and person-to-person variation in how
information is represented in the brain. To address this challenge, we
present a novel topological approach that encodes each time point in
an fMRI data set as a persistence diagram of topological features,
i.e. high-dimensional voids present in the data. This representation
naturally does not rely on voxel-by-voxel correspondence and is robust
to noise. We show that these time-varying persistence diagrams can be
clustered to find meaningful groupings between participants, and that
they are also useful in studying within-subject brain state
trajectories as participants perform a particular task. Here, we apply
both clustering and trajectory analysis techniques to a group of
participants watching the movie 'Partly Cloudy'. We observe
significant differences in both brain state trajectories and overall
topological activity between adults and children watching the same
movie.</p>
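<p>The persistence diagrams mentioned above can be illustrated in the simplest setting: 0-dimensional sublevel-set persistence of a 1-D signal, computed with a union-find sweep. This is a self-contained toy, not the paper's cubical-complex pipeline, which applies the same idea to 3-D voxel grids and higher-dimensional features.</p>

```python
def sublevel_persistence_0d(values):
    """0-dimensional persistence of the sublevel-set filtration of a 1-D
    signal: each connected component is born at a local minimum and dies
    when it merges into an older (deeper) component (the 'elder rule').
    Returns (birth, death) pairs; the global minimum's component never
    dies (death = None)."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    parent = [None] * n          # union-find; None = not yet in filtration

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    birth = {}                   # root -> birth value
    pairs = []
    for i in order:
        parent[i] = i
        birth[i] = values[i]
        for j in (i - 1, i + 1):
            if 0 <= j < n and parent[j] is not None:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                # the younger component (higher birth value) dies here
                young, old = (ri, rj) if birth[ri] >= birth[rj] else (rj, ri)
                pairs.append((birth[young], values[i]))
                parent[young] = old
    roots = {find(i) for i in range(n)}
    pairs.extend((birth[r], None) for r in roots)
    return sorted(pairs, key=lambda p: p[0])
```

<p>For the signal [0, 2, 1, 3] this yields one long-lived pair (0, None) and one finite feature (1, 2): the local minimum at value 1 merges into the deeper component at value 2.</p>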
<div class="line-block">
<div class="line"><br /></div>
</div>
<p>Papers:</p>
<ul class="simple">
<li>B. Rieck, T. Yates, C. Bock, K. Borgwardt, G. Wolf, N. Turk-Browne, S. Krishnaswamy, <a class="reference external" href="http://arxiv.org/abs/2006.07882">"Uncovering the Topology of Time-Varying fMRI Data Using Cubical Persistence"</a>, 2020, arXiv:2006.07882 [cs, eess, math, q-bio, stat],</li>
</ul>
<p><strong>Date:</strong> 2022/03/18 <br />
<strong>Time:</strong> 14:00 <br />
<strong>Location:</strong> online</p>
<p><strong>Brain Network Dynamics during Working Memory Are Modulated by Dopamine and Diminished in Schizophrenia</strong> (2021-11-03)</p>
<p>In this week's Journal Club session, Emil Dmitruk will talk about the paper "Brain Network Dynamics during Working Memory Are Modulated by Dopamine and Diminished in Schizophrenia".</p>
<hr class="docutils" />
<p>Dynamical brain state transitions are critical for flexible
working memory, but the network mechanisms are incompletely understood.
Here, we show that working memory performance entails brain-wide
switching between activity states using a combination of functional
magnetic resonance imaging in healthy controls and individuals with
schizophrenia, pharmacological fMRI, genetic analyses and network
control theory. The stability of states relates to dopamine D1
receptor gene expression while state transitions are influenced by D2
receptor expression and pharmacological modulation. Individuals with
schizophrenia show altered network control properties, including a
more diverse energy landscape and decreased stability of working
memory representations. Our results demonstrate the relevance of
dopamine signaling for the steering of whole-brain network dynamics
during working memory and link these processes to schizophrenia
pathophysiology.</p>
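<p>The network control theory referenced above treats brain dynamics as a linear system x(t+1) = A x(t) + B u(t); quantities such as average controllability derive from the controllability Gramian. A minimal plain-Python sketch with hypothetical small matrices (not the paper's analysis code):</p>

```python
def matmul(X, Y):
    """Naive matrix product of two lists-of-lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

def controllability_gramian(A, B, T):
    """Finite-horizon controllability Gramian W = sum_{t<T} A^t B B' (A')^t
    for x(t+1) = A x(t) + B u(t); trace(W) is the 'average controllability'
    used in network control theory."""
    n = len(A)
    W = [[0.0] * n for _ in range(n)]
    At_B = B                     # holds A^t B, starting at t = 0
    for _ in range(T):
        term = matmul(At_B, transpose(At_B))
        for i in range(n):
            for j in range(n):
                W[i][j] += term[i][j]
        At_B = matmul(A, At_B)
    return W
```

<p>In the paper's setting, A is estimated from the functional data and B selects the regions being driven; here both are placeholders.</p>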
<div class="line-block">
<div class="line"><br /></div>
</div>
<p>Papers:</p>
<ul class="simple">
<li>U. Braun, A. Harneit, G. Pergola, T. Menara, A. Schäfer, R. Betzel, Z. Zang, J. Schweiger, X. Zhang, K. Schwarz, J. Chen, G. Blasi, A. Bertolino, D. Durstewitz, F. Pasqualetti, E. Schwarz, A. Meyer-Lindenberg, D. Bassett, H. Tost, <a class="reference external" href="https://doi.org/10.1038/s41467-021-23694-9">"Brain Network Dynamics during Working Memory Are Modulated by Dopamine and Diminished in Schizophrenia"</a>, 2021, Nature Communications, 12, 3478</li>
<li>J. Kim, D. Bassett, <a class="reference external" href="http://arxiv.org/abs/1902.03309">"Linear Dynamics & Control of Brain Networks"</a>, 2019, arXiv:1902.03309 [physics, q-bio],</li>
</ul>
<p><strong>Date:</strong> 2021/11/05 <br />
<strong>Time:</strong> 14:00 <br />
<strong>Location:</strong> online</p>
<p><strong>Organization of Cell Assemblies in the Hippocampus</strong> (2021-04-14)</p>
<p>In this week's Journal Club session, Emil Dmitruk will talk about the paper "Organization of Cell Assemblies in the Hippocampus" and will briefly
present how this subject can be approached with topological data analysis.</p>
<hr class="docutils" />
<p>Neurons can produce action potentials with high temporal precision. A
fundamental issue is whether, and how, this capability is used in
information processing. According to the "cell assembly" hypothesis,
transient synchrony of anatomically distributed groups of neurons
underlies processing of both external sensory input and internal
cognitive mechanisms. Accordingly, neuron populations should be
arranged into groups whose synchrony exceeds that predicted by common
modulation by sensory input. Here we find that the spike times of
hippocampal pyramidal cells can be predicted more accurately by using
the spike times of simultaneously recorded neurons in addition to the
animal's location in space. This improvement remained when the spatial
prediction was refined with a spatially dependent theta phase
modulation. The time window in which spike times are best predicted
from simultaneous peer activity is 10-30 ms, suggesting
that cell assemblies are synchronized at this timescale. Because this
temporal window matches the membrane time constant of pyramidal
neurons, the period of the hippocampal gamma oscillation and the time
window for synaptic plasticity, we propose that cooperative activity
at this timescale is optimal for information transmission and storage
in cortical circuits.</p>
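<p>The peer-prediction result above rests on counting synchrony within a short time window. A toy sketch of such a coincidence count (hypothetical spike times in seconds; not the paper's peer-prediction model, which is a full statistical fit):</p>

```python
import bisect

def coincidences(spikes_a, spikes_b, window=0.015):
    """Count spikes of neuron A that have at least one spike of neuron B
    within +/- `window` seconds: a crude peer-synchrony measure over the
    10-30 ms assembly timescale discussed in the abstract."""
    b = sorted(spikes_b)
    count = 0
    for t in spikes_a:
        # first spike of B at or after the window's left edge
        k = bisect.bisect_left(b, t - window)
        if k < len(b) and b[k] <= t + window:
            count += 1
    return count
```

<p>Comparing such counts against a shuffled-spike-train baseline is one simple way to ask whether synchrony exceeds what common sensory drive predicts.</p>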
<div class="line-block">
<div class="line"><br /></div>
</div>
<p>Papers:</p>
<ul class="simple">
<li>K. Harris, J. Csicsvari, H. Hirase, G. Dragoi, G. Buzsaki, <a class="reference external" href="https://doi.org/10.1038/nature01834">"Organization of Cell Assemblies in the Hippocampus"</a>, 2003, Nature, 424, 552--556</li>
<li>C. Giusti, R. Ghrist, D. Bassett, <a class="reference external" href="https://doi.org/10.1007/s10827-016-0608-6">"Two's Company, Three (or More) Is a Simplex: Algebraic-Topological Tools for Understanding Higher-Order Structure in Neural Data"</a>, 2016, Journal of Computational Neuroscience, 41, 1--14</li>
</ul>
<p><strong>Date:</strong> 2021/04/16 <br />
<strong>Time:</strong> 14:00 <br />
<strong>Location:</strong> online</p>
<p><strong>Cliques and cavities in human connectome</strong> (2020-12-02)</p>
<p>In this week's Journal Club session, Emil Dmitruk will talk about the paper "Cliques and cavities in human connectome".</p>
<hr class="docutils" />
<p>Encoding brain regions and their connections as a network of nodes and edges
captures many of the possible paths along which information can be transmitted
as humans think and behave. Relations among many regions at once are naturally expressed in the language of
algebraic topology. These tools can be used to study mesoscale network
structures that arise from the arrangement of densely connected substructures
called cliques in otherwise sparsely connected brain networks. We detect cliques
(all-to-all connected sets of brain regions) in the average structural
connectomes of 8 healthy adults scanned in triplicate and discover the presence
of more large cliques than expected in null networks constructed via wiring
minimization, providing an architecture through which the brain network can perform
rapid, local processing. We then locate topological cavities of different
dimensions, around which information may flow in either diverging or converging
patterns. These cavities exist consistently across subjects, differ from those
observed in null model networks, and - importantly - link regions of early and
late evolutionary origin in long loops, underscoring their unique role in
controlling brain function. These results offer a first demonstration that
techniques from algebraic topology offer a novel perspective on structural
connectomics, highlighting loop-like paths as crucial features in the human
brain’s structural architecture.</p>
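<p>The cliques the study detects are all-to-all connected node sets; maximal cliques can be enumerated with the classic Bron-Kerbosch algorithm. A plain-Python sketch on a toy adjacency structure (not the authors' pipeline, which runs on weighted structural connectomes):</p>

```python
def bron_kerbosch(R, P, X, adj, out):
    """Enumerate all maximal cliques (Bron-Kerbosch, no pivoting).
    R: current clique; P: candidates; X: already-processed vertices."""
    if not P and not X:
        out.append(sorted(R))
        return
    for v in list(P):
        bron_kerbosch(R | {v}, P & adj[v], X & adj[v], adj, out)
        P.remove(v)
        X.add(v)

def maximal_cliques(adj):
    """adj: dict mapping each vertex to the set of its neighbours."""
    out = []
    bron_kerbosch(set(), set(adj), set(), adj, out)
    return out
```

<p>On a triangle {0,1,2} with a pendant node 3 attached to 2, this returns the two maximal cliques [0,1,2] and [2,3].</p>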
<div class="line-block">
<div class="line"><br /></div>
</div>
<p>Papers:</p>
<ul class="simple">
<li>A.E. Sizemore, C. Giusti, A. Kahn, J.M. Vettel, R.F. Betzel, D.S. Bassett, <a class="reference external" href="https://doi.org/10.1007/s10827-017-0672-6">"Cliques and cavities in human connectome"</a>, 2018, Journal of Computational Neuroscience, 44, 115--145</li>
</ul>
<p><strong>Date:</strong> 04/12/2020 <br />
<strong>Time:</strong> 16:00 <br />
<strong>Location:</strong> online</p>
<p><strong>Prediction of Electrode Position inside rat brain using Neuromorphic Hardware</strong> (2020-11-18)</p>
<p>In this week's Journal Club session, Shavika Rastogi will talk about her master's thesis, entitled "Prediction of Electrode Position inside rat brain using Neuromorphic Hardware".</p>
<hr class="docutils" />
<p>Neural probes with a large number of closely packed recording sites, each
comprising 32 electrodes, are being developed for large-scale neuronal
recordings from multiple brain areas simultaneously, to understand complex
brain activity in vivo. By precisely mapping the position of each site inside
the rat brain, they help us to characterize neural activity on the basis of
the cortical depth from which it is obtained. One application lies in
neurosurgery, where it is important to locate the target of surgical interest
inside the brain in real time. In this work, we first compared various methods
from the literature for analysing extracellular activity recorded with CMOS
neural probes from different cortical depths, and from different locations
along the same lateral axis of the rat brain, to find a criterion by which
recordings can be classified. After identifying the most promising criterion,
we implemented it on the neuromorphic hardware SpiNNaker. We tested both a
single neuron and a spiking excitatory-inhibitory network and found that the
excitatory-inhibitory network is more robust to noise present in the signal,
and that its output can be improved by introducing lateral inhibition. Our
results show that SpiNNaker can be used to give a rough indication of
cortical depth.</p>
<div class="line-block">
<div class="line"><br /></div>
</div>
<p><strong>Date:</strong> 20/11/2020 <br />
<strong>Time:</strong> 16:00 <br />
<strong>Location:</strong> online</p>
<p><strong>Navigation Along Windborne Plumes of Pheromone and Resource-Linked Odors</strong> (2020-11-06)</p>
<p>In this week's Journal Club session, Samuel Sutton will talk about the paper "Navigation Along Windborne Plumes of Pheromone and Resource-Linked Odors".</p>
<hr class="docutils" />
<p>Many insects locate resources such as a mate, a host, or food by flying upwind along the odor plumes that these resources emit to their source. A windborne plume has a turbulent structure comprised of odor filaments interspersed with clean air. As it propagates downwind, the plume becomes more dispersed and dilute, but filaments with concentrations above the threshold required to elicit a behavioral response from receiving organisms can persist for long distances. Flying insects orient along plumes by steering upwind, triggered by the optomotor reaction. Sequential measurements of differences in odor concentration are unreliable indicators of distance to or direction of the odor source. Plume intermittency and the plume's fine-scale structure can play a role in setting an insect's upwind course. The prowess of insects in navigating to odor sources has spawned bioinspired virtual models and even odor-seeking robots, although some of these approaches use mechanisms that are unnecessarily complex and probably exceed an insect's processing capabilities.</p>
<p>Papers:</p>
<ul class="simple">
<li>R.T. Carde, <a class="reference external" href="https://www.annualreviews.org/doi/pdf/10.1146/annurev-ento-011019-024932#article-denial">"Navigation Along Windborne Plumes of Pheromone and Resource-Linked Odors"</a>, 2021, Annual Review of Entomology, 66:1</li>
</ul>
<div class="line-block">
<div class="line"><br /></div>
</div>
<p><strong>Date:</strong> 06/11/2020 <br />
<strong>Time:</strong> 16:00 <br />
<strong>Location:</strong> online</p>
<p><strong>Spiking Neural Networks Evolved to Perform Multiplicative Operations</strong> (2020-07-15)</p>
<p>In this week's Journal Club session, Amir Khan will talk about the paper "Spiking Neural Networks Evolved to Perform Multiplicative Operations".</p>
<hr class="docutils" />
<p>Multiplicative or divisive changes in tuning curves of individual neurons to one stimulus ("input") as another stimulus ("modulation") is applied, called gain modulation, play an important role in perception and decision making. Since the presence of modulatory synaptic stimulation results in a multiplicative operation by proportionally changing the neuronal input-output relationship, such a change affects the sensitivity of the neuron but not its selectivity. Multiplicative gain modulation has commonly been studied at the level of single neurons. Much less is known about arithmetic operations at the network level. In this work we have evolved small networks of spiking neurons in which the output neurons respond to input with non-linear tuning curves that exhibit gain modulation—the best network showed an over 3-fold multiplicative response to modulation. Interestingly, we have also obtained a network with only 2 interneurons showing an over 2-fold response.</p>
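<p>The multiplicative operation described above can be stated compactly: the modulatory input scales a neuron's tuning curve without shifting its preferred stimulus, changing sensitivity but not selectivity. A toy sketch (hypothetical Gaussian tuning curve with made-up parameters, not the evolved spiking networks themselves):</p>

```python
import math

def tuning(stimulus, preferred=0.0, width=1.0, r_max=10.0):
    """Hypothetical Gaussian tuning curve: firing rate vs. stimulus."""
    return r_max * math.exp(-((stimulus - preferred) ** 2) / (2 * width ** 2))

def gain_modulated(stimulus, modulation_gain):
    """Multiplicative gain modulation: the modulatory input scales the
    response proportionally, so the whole curve is multiplied by the gain
    and the preferred stimulus (the peak location) is unchanged."""
    return modulation_gain * tuning(stimulus)
```

<p>A 3-fold gain, as in the best evolved network, would correspond to modulation_gain = 3 in this caricature.</p>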
<p>Papers:</p>
<ul class="simple">
<li>V. Steuber, N. Davey, B. Wrobel, <a class="reference external" href="https://www.researchgate.net/publication/327897745_Spiking_Neural_Networks_Evolved_to_Perform_Multiplicative_Operations_27th_International_Conference_on_Artificial_Neural_Networks_Rhodes_Greece_October_4-7_2018_Proceedings_Part_I">"Spiking Neural Networks Evolved to Perform Multiplicative Operations"</a>, 2018, 27th International Conference on Artificial Neural Networks, Rhodes, Greece, October 4-7, 2018, Proceedings, Part I, doi:10.1007/978-3-030-01418-6_31</li>
</ul>
<div class="line-block">
<div class="line"><br /></div>
</div>
<p><strong>Date:</strong> 17/07/2020 <br />
<strong>Time:</strong> 16:00 <br />
<strong>Location:</strong> online</p>
<p><strong>Towards a new approach to reveal dynamical organization of the brain using topological data analysis</strong> (2020-07-15)</p>
<p>In this week's Journal Club session, Emil Dmitruk will talk about the paper "Towards a new approach to reveal dynamical organization of the brain using topological data analysis".</p>
<hr class="docutils" />
<p>Little is known about how our brains dynamically adapt for efficient functioning. Most
previous work has focused on analyzing changes in co-fluctuations between a set of
brain regions over several temporal segments of the data. We argue that by
collapsing data in space or time, we stand to lose useful information about the brain’s
dynamical organization. Here we use Topological Data Analysis to reveal the overall
organization of whole-brain activity maps at a single-participant level—as an
interactive representation—without arbitrarily collapsing data in space or time. Using
existing multitask fMRI datasets, with the known ground truth about the timing of
transitions from one task-block to next, our approach tracks both within- and
between-task transitions at a much faster time scale (~4–9 s) than before. The
individual differences in the revealed dynamical organization predict task
performance. In summary, our approach distills complex brain dynamics into
interactive and behaviorally relevant representations.</p>
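<p>The interactive representations above are built with the Mapper algorithm from Topological Data Analysis: cover the range of a filter function with overlapping intervals, cluster the points falling in each interval, and connect clusters that share points. A deliberately simplified 1-D sketch (toy cover and clustering choices, not the paper's pipeline):</p>

```python
def mapper_1d(points, filter_vals, n_bins=4, overlap=0.25, eps=1.0):
    """Toy Mapper: cover the filter range with overlapping intervals,
    cluster the points in each interval (single-linkage: split at gaps
    larger than eps), and connect clusters that share points.
    Returns (nodes, edges); each node is a set of point indices."""
    lo, hi = min(filter_vals), max(filter_vals)
    width = (hi - lo) / n_bins
    nodes = []
    for b in range(n_bins):
        a = lo + b * width - overlap * width          # widened interval
        z = lo + (b + 1) * width + overlap * width
        idx = sorted((i for i, f in enumerate(filter_vals) if a <= f <= z),
                     key=lambda i: points[i])
        cluster = []
        for i in idx:                                  # 1-D single-linkage
            if cluster and points[i] - points[cluster[-1]] > eps:
                nodes.append(set(cluster))
                cluster = []
            cluster.append(i)
        if cluster:
            nodes.append(set(cluster))
    edges = [(u, v) for u in range(len(nodes)) for v in range(u + 1, len(nodes))
             if nodes[u] & nodes[v]]
    return nodes, edges
```

<p>In the paper, the points are whole-brain activity maps, the filter comes from a low-dimensional embedding, and the resulting graph is what is explored interactively.</p>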
<p>Papers:</p>
<ul class="simple">
<li>M. Saggar, O. Sporns, J. Gonzalez-Castillo, et al., <a class="reference external" href="https://www.nature.com/articles/s41467-018-03664-4">"Towards a new approach to reveal dynamical organization of the brain using topological data analysis"</a>, 2018, Nature Communications, 9, 1399, <a class="reference external" href="https://doi.org/10.1038/s41467-018-03664-4">https://doi.org/10.1038/s41467-018-03664-4</a></li>
</ul>
<div class="line-block">
<div class="line"><br /></div>
</div>
<p><strong>Date:</strong> 17/07/2020 <br />
<strong>Time:</strong> 16:00 <br />
<strong>Location:</strong> online</p>
<p><strong>Probabilistic Optimization Algorithms for Real-Coded Problems And Its Application in Latin Hypercube Problem</strong> (2020-06-17)</p>
<p>In this week's Journal Club session, Mohammad Tayarani-Najaran will talk about his paper "Probabilistic Optimization Algorithms for Real-Coded Problems And Its Application in Latin Hypercube Problem".</p>
<hr class="docutils" />
<p>This paper proposes a novel optimization algorithm for real-coded problems called the Probabilistic Optimization Algorithm (POA). In the proposed algorithm, rather than a binary or integer representation, a probabilistic representation is used for the individuals. Each individual in the proposed algorithm is a probability density function and is capable of representing the entire search space simultaneously. In the search process, each individual performs a local search and climbs the local optima, and at the same time, the interaction among the probabilistic individuals in the population offers a global search. The parameters of the proposed algorithm are studied in this paper and their effect on the search process is presented. A structured population is proposed for the algorithm and the effect of different structures is analyzed. The algorithm is used to solve the Latin Hypercube problem and experimental studies suggest promising results. Different benchmark functions are also used to test the algorithm and the results are presented. The analyses suggest that the improvement is more significant for large-scale problems.</p>
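<p>One loose reading of the idea, sketched for illustration only: each individual is a Gaussian density over the search space that hill-climbs locally, while the population provides global coverage. The update rules below are invented for this sketch and are not the paper's actual POA operators.</p>

```python
import random

def sketch_poa(objective, n_individuals=4, iters=200, seed=0):
    """Loose illustrative sketch (NOT the paper's algorithm): each
    individual is a Gaussian with a mean and shared sigma; it accepts a
    sampled candidate only if it improves the objective (local search),
    and the densities are slowly narrowed over time."""
    rng = random.Random(seed)
    means = [rng.uniform(-5.0, 5.0) for _ in range(n_individuals)]
    sigma = 1.0
    for _ in range(iters):
        for k in range(n_individuals):
            cand = rng.gauss(means[k], sigma)
            if objective(cand) < objective(means[k]):   # minimisation
                means[k] = cand
        sigma *= 0.99                                    # anneal the densities
    return min(means, key=objective)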
<p>Papers:</p>
<ul class="simple">
<li>M.H. Tayarani Najaran, M.R. Akbarzadeh Tootounchi, <a class="reference external" href="http://www.sciencedirect.com/science/article/pii/S0957417420304139">"Probabilistic Optimization Algorithms for Real-Coded Problems And Its Application in Latin Hypercube Problem"</a>, 2020, Expert Systems with Applications, 113589, ISSN 0957-4174, <a class="reference external" href="https://doi.org/10.1016/j.eswa.2020.113589">https://doi.org/10.1016/j.eswa.2020.113589</a></li>
</ul>
<div class="line-block">
<div class="line"><br /></div>
</div>
<p><strong>Date:</strong> 19/06/2020 <br />
<strong>Time:</strong> 16:00 <br />
<strong>Location:</strong> online</p>
<p><strong>Extraordinary performance of semiconducting metal oxide gas sensors using dielectric excitation</strong> (2020-06-10)</p>
<p>In this week's Journal Club session, Ritesh Kumar will talk about impedance spectroscopy and its use in the design of electronic tongue and nose systems. Specifically, he will refer to the paper by Radislav A. Potyrailo et al., along with some of his previous works.</p>
<hr class="docutils" />
<p>Impedance spectroscopy is a powerful technique that has been applied to the design of instruments for characterising liquids and solids. The ‘impedance fingerprints’ obtained at various frequencies can be used to classify samples and to define the sensitivity, selectivity and linearity of such systems. It uses a sweep of sinusoidal frequencies as the perturbation signal, at low voltage, so as to remain in the linear and causal domain. In this talk, I will present impedance spectroscopy and its use in the design of electronic tongue and nose systems in general, and specifically the paper by Radislav A. Potyrailo et al., along with some of our previous works on the design of electronic tongue systems. The paper by Radislav A. Potyrailo et al. shows that run-of-the-mill metal oxide gas sensors can act as high-performance sensors using impedance measurements. They show exemplary performance in terms of linearity, limit of detection, cross-sensitivity, etc. This can pave the way for the design of low-cost and efficient electronic nose systems.</p>
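<p>As a minimal illustration of an impedance fingerprint (a textbook parallel-RC model of a sensor interface with made-up component values, not the paper's sensor physics): sweep the frequency and record the magnitude and phase of Z = R / (1 + jωRC).</p>

```python
import cmath
import math

def z_parallel_rc(r, c, freq_hz):
    """Impedance of a resistor in parallel with a capacitor, a minimal
    model of a sensing interface: Z = R / (1 + j*omega*R*C)."""
    omega = 2 * math.pi * freq_hz
    return r / (1 + 1j * omega * r * c)

def fingerprint(r, c, freqs):
    """'Impedance fingerprint': (frequency, |Z|, phase) across a sweep."""
    return [(f, abs(z_parallel_rc(r, c, f)), cmath.phase(z_parallel_rc(r, c, f)))
            for f in freqs]
```

<p>At low frequency |Z| approaches R; at the corner frequency 1/(2πRC) it drops to R/√2 with a 45-degree phase lag, and it is the shape of this curve across the sweep that serves as the classification feature.</p>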
<p>Papers:</p>
<ul class="simple">
<li>R.A. Potyrailo, S. Go, D. Sexton, et al., <a class="reference external" href="https://www.nature.com/articles/s41928-020-0402-3">"Extraordinary performance of semiconducting metal oxide gas sensors using dielectric excitation"</a>, 2020, Nature Electronics, 3, 280--289, <a class="reference external" href="https://doi.org/10.1038/s41928-020-0402-3">https://doi.org/10.1038/s41928-020-0402-3</a></li>
<li>R. Kumar, A.P. Bhondekar, R. Kaur, S. Vig, A. Sharma, P. Kapur, <a class="reference external" href="https://www.sciencedirect.com/science/article/pii/S0925400512006065">"A simple electronic tongue"</a>, 2012, Sensors and Actuators B: Chemical, 171--172, 1046--1053, ISSN 0925-4005, <a class="reference external" href="https://doi.org/10.1016/j.snb.2012.06.031">https://doi.org/10.1016/j.snb.2012.06.031</a></li>
<li>A.P. Bhondekar, M. Dhiman, A. Sharma, A. Bhakta, A. Ganguli, S.S. Bari, R. Vig, P. Kapur, M.L. Singla, <a class="reference external" href="https://www.sciencedirect.com/science/article/pii/S0925400510004612">"A novel iTongue for Indian black tea discrimination"</a>, 2010, Sensors and Actuators B: Chemical, 148(2), 601--609, ISSN 0925-4005, <a class="reference external" href="https://doi.org/10.1016/j.snb.2010.05.053">https://doi.org/10.1016/j.snb.2010.05.053</a></li>
</ul>
<div class="line-block">
<div class="line"><br /></div>
</div>
<p><strong>Date:</strong> 12/06/2020 <br />
<strong>Time:</strong> 16:00 <br />
<strong>Location:</strong> online</p>
<p><strong>A Kirchhoff-Nernst-Planck framework for modeling large scale extracellular electrodiffusion surrounding morphologically detailed neurons</strong> (2020-06-02)</p>
<p>In this week's Journal Club session, Reinoud Maex will talk about the paper "A Kirchhoff-Nernst-Planck framework for modeling large scale extracellular electrodiffusion surrounding morphologically detailed neurons".</p>
<hr class="docutils" />
<p>Many pathological conditions, such as seizures, stroke, and spreading depression, are associated with substantial changes in ion concentrations in the extracellular space (ECS) of the brain. An understanding of the mechanisms that govern ECS concentration dynamics may be a prerequisite for understanding such pathologies. To estimate the transport of ions due to electrodiffusive effects, one must keep track of both the ion concentrations and the electric potential simultaneously in the relevant regions of the brain. Although this is currently unfeasible experimentally, it is in principle achievable with computational models based on biophysical principles and constraints. Previous computational models of extracellular ion-concentration dynamics have required extensive computing power, and therefore have been limited to either phenomena on very small spatiotemporal scales (micrometers and milliseconds), or simplified and idealized 1-dimensional (1-D) transport processes on a larger scale. Here, we present the 3-D Kirchhoff-Nernst-Planck (KNP) framework, tailored to explore electrodiffusive effects on large spatiotemporal scales. By assuming electroneutrality, the KNP-framework circumvents charge-relaxation processes on the spatiotemporal scales of nanometers and nanoseconds, and makes it feasible to run simulations on the spatiotemporal scales of millimeters and seconds on a standard desktop computer. In the present work, we use the 3-D KNP framework to simulate the dynamics of ion concentrations and the electrical potential surrounding a morphologically detailed pyramidal cell. In addition to elucidating the single neuron contribution to electrodiffusive effects in the ECS, the simulation demonstrates the efficiency of the 3-D KNP framework. We envision that future applications of the framework to more complex and biologically realistic systems will be useful in exploring pathological conditions associated with large concentration variations in the ECS.</p>
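<p>At the core of the framework is the Nernst-Planck flux, which combines diffusion down a concentration gradient with drift of charged ions in the electric field; the Kirchhoff/electroneutrality condition then closes the system for the potential. A one-line sketch of the 1-D flux term (illustrative values, not the paper's solver):</p>

```python
F = 96485.332   # Faraday constant, C/mol
R = 8.314462    # gas constant, J/(mol K)

def nernst_planck_flux(D, z, c, dc_dx, dphi_dx, T=310.0):
    """Electrodiffusive flux density for one ion species:
    J = -D * (dc/dx + (z*F)/(R*T) * c * dphi/dx),
    i.e. a diffusion term plus a drift term in the potential gradient.
    These are the per-species fluxes the KNP framework keeps track of."""
    return -D * (dc_dx + (z * F) / (R * T) * c * dphi_dx)
```

<p>With the potential gradient set to zero this reduces to Fick diffusion, which is the limit simpler ECS models assume.</p>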
<p>Papers:</p>
<ul class="simple">
<li>A. Solbrå, A.W. Bergersen, J. van den Brink, A. Malthe-Sørenssen, G.T. Einevoll, G. Halnes, <a class="reference external" href="https://doi.org/10.1371/journal.pcbi.1006510">"A Kirchhoff-Nernst-Planck framework for modeling large scale extracellular electrodiffusion surrounding morphologically detailed neurons"</a>, 2018, PLoS Computational Biology, 14(10), e1006510, <a class="reference external" href="https://doi.org/10.1371/journal.pcbi.1006510">https://doi.org/10.1371/journal.pcbi.1006510</a></li>
</ul>
<div class="line-block">
<div class="line"><br /></div>
</div>
<p><strong>Date:</strong> 05/06/2020 <br />
<strong>Time:</strong> 16:00 <br />
<strong>Location:</strong> online</p>
<p><strong>On Robot Compliance: A Cerebellar Control Approach</strong> (2020-05-27)</p>
<p>This week on Journal Club session Volker Steuber will talk about the paper "On Robot Compliance: A Cerebellar Control Approach".</p>
<hr class="docutils" />
<p>The work presented here is a novel biological approach for the compliant control of a robotic arm in real time (RT). We integrate a spiking cerebellar network at the core of a feedback control loop performing torque-driven control. The spiking cerebellar controller provides torque commands allowing for accurate and coordinated arm movements. To compute these output motor commands, the spiking cerebellar controller receives the robot's sensorial signals, the robot's goal behavior, and an instructive signal. These input signals are translated into a set of evolving spiking patterns representing univocally a specific system state at every point of time. Spike-timing-dependent plasticity (STDP) is then supported, allowing for building adaptive control. The spiking cerebellar controller continuously adapts the torque commands provided to the robot from experience as STDP is deployed. Adaptive torque commands, in turn, help the spiking cerebellar controller to cope with built-in elastic elements within the robot's actuators mimicking human muscles (inherently elastic). We propose a natural integration of a bioinspired control scheme, based on the cerebellum, with a compliant robot. We prove that our compliant approach outperforms the accuracy of the default factory-installed position control in a set of tasks used for addressing cerebellar motor behavior: controlling six degrees of freedom (DoF) in smooth movements, fast ballistic movements, and unstructured scenario compliant movements.</p>
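<p>The adaptation at the core of the controller is spike-timing-dependent plasticity. As a minimal sketch of a pair-based STDP rule (parameter values are illustrative assumptions, not taken from the paper), the relative timing of pre- and postsynaptic spikes maps to a weight change:</p>

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=0.02, tau_minus=0.02):
    """Pair-based STDP weight change for one spike pair.
    dt = t_post - t_pre in seconds: pre-before-post (dt >= 0) potentiates,
    post-before-pre (dt < 0) depresses; both decay exponentially with |dt|."""
    if dt >= 0:
        return a_plus * math.exp(-dt / tau_plus)
    return -a_minus * math.exp(dt / tau_minus)

# pre fires 10 ms before post -> LTP; post fires 10 ms before pre -> LTD
print(stdp_dw(0.010) > 0, stdp_dw(-0.010) < 0)  # -> True True
```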
<p>Papers:</p>
<ul class="simple">
<li>Abadia I., Naveros F., Garrido J.A., Ros E., Luque N.R., <a class="reference external" href="https://pubmed.ncbi.nlm.nih.gov/31647453/">"On Robot Compliance: A Cerebellar Control Approach"</a> [published online ahead of print, 2019 Oct 23]. IEEE Trans Cybern. 2019;10.1109/TCYB.2019.2945498. doi:10.1109/TCYB.2019.2945498</li>
</ul>
<p><strong>Date:</strong> 29/05/2020 <br />
<strong>Time:</strong> 16:00 <br />
<strong>Location</strong>: online</p>
The control of plastic inhibition over excitatory synaptic plasticity leads to the joint emergence of sensory coding and contrast invariance2020-04-24T14:02:30+01:002020-04-24T14:02:30+01:00Emil Dmitruktag:biocomputation.herts.ac.uk,2020-04-24:/2020/04/24/the-control-of-plastic-inhibition-over-excitatory-synaptic-plasticity-leads-to-the-joint-emergence-of-sensory-coding-and-contrast-invariance.html<p class="first last">Damien Drix's journal club session where he will talk about the paper "The control of plastic inhibition over excitatory synaptic plasticity leads to the joint emergence of sensory coding and contrast invariance".</p>
<p>This week on Journal Club session Damien Drix will talk about the paper "The control of plastic inhibition over excitatory synaptic plasticity leads to the joint emergence of sensory coding and contrast invariance".</p>
<hr class="docutils" />
<p>Visual stimuli are represented by a highly efficient code in the primary visual cortex,
but the development of this code is still unclear. Two distinct factors control coding
efficiency: Representational efficiency, which is determined by neuronal tuning diversity,
and metabolic efficiency, which is influenced by neuronal gain. How these determinants of
coding efficiency are shaped during development, supported by excitatory and inhibitory
plasticity, is only partially understood. We investigate a fully plastic spiking network
of the primary visual cortex, building on phenomenological plasticity rules. Our results
show that inhibitory plasticity is key to the emergence of tuning diversity and accurate
input encoding. Additionally, inhibitory feedback increases the metabolic efficiency by
implementing a gain control mechanism. Interestingly, this led to the spontaneous
emergence of contrast-invariant tuning curves. Our findings highlight the role of
interneuron plasticity during the development of receptive fields and in shaping sensory
representations.</p>
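<p>The paper builds on phenomenological plasticity rules. As a hedged stand-in, the sketch below uses a Vogels-Sprekeler-style homeostatic rule for inhibitory synapses, which implements the kind of gain control the abstract describes; all parameter values are assumptions.</p>

```python
def inhibitory_update(w, pre_rate, post_rate, target_rate=5.0, eta=1e-3):
    """Homeostatic inhibitory plasticity (Vogels-Sprekeler style): the
    inhibitory weight grows when the postsynaptic rate exceeds its target
    and shrinks when it falls below, implementing gain control. Rates in Hz."""
    return max(0.0, w + eta * pre_rate * (post_rate - target_rate))

w = 1.0
w_up = inhibitory_update(w, pre_rate=10.0, post_rate=8.0)    # too active -> more inhibition
w_down = inhibitory_update(w, pre_rate=10.0, post_rate=2.0)  # too quiet -> less inhibition
print(w_up > w > w_down)  # -> True
```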
<p>Papers:</p>
<ul class="simple">
<li>René Larisch, Lorenz Gönner, Michael Teichmann, Fred H. Hamker (2020) <a class="reference external" href="https://doi.org/10.1101/2020.04.07.029157">" The control of plastic inhibition over excitatory synaptic plasticity leads to the joint emergence of sensory coding and contrast invariance"</a> , bioRxiv 2020.04.07.029157; doi: <a class="reference external" href="https://doi.org/10.1101/2020.04.07.029157">https://doi.org/10.1101/2020.04.07.029157</a></li>
</ul>
<div class="line-block">
<div class="line"><br /></div>
</div>
<p><strong>Date:</strong> 24/04/2020 <br />
<strong>Time:</strong> 16:00 <br />
<strong>Location</strong>: online</p>
Cyberspecies Proximity2020-04-01T22:36:53+01:002020-04-01T22:36:53+01:00Emil Dmitruktag:biocomputation.herts.ac.uk,2020-04-01:/2020/04/01/cyberspecies-proximiy.html<p class="first last">Anna Dumitriu's journal club session where she will talk about her project named "Cyberspecies Proximity".</p>
<p>This week on Journal Club session Anna Dumitriu will talk about her project named "Cyberspecies Proximity" on which she is working together with Alex May.</p>
<hr class="docutils" />
<p>Cyberspecies Proximity explores what it will mean to share our sidewalks, elevators and transport systems in close proximity with mobile intelligent robots. The work challenges audiences to confront the technological, ethical, and societal questions raised by the advent of urban socially-aware robots.</p>
<p>The robotic artwork combines the way-finding technologies of delivery and maintenance robots with an ability to communicate non-verbally, and to manipulate our emotions through body-language. The robot is able to move around an exhibition space using a predefined map created using SLAM technology. It reacts and responds to the body language of audience members through a multi-layered face, skeleton and movement tracking algorithm.</p>
<p>The small and fragile humanoid form is dressed in the clothes of a worker, its frail and insignificant body reminds us of the social groups that will be most affected by future automation. The robot’s head and hands are made from 3D printed grey PLA and it intentionally avoids categorisations of race and gender. The project forces us to consider issues of ownership of public spaces as well as the broader ethical implications of how we design robots and behave towards them.</p>
<p>Cyberspecies Proximity is a project by Anna Dumitriu and Alex May, created in collaboration with the Human Robot Co-Mobility Project of the New Technologies Team at Schindler. The project was supported by a Vertigo STARTS Residency, and the artwork was programmed in C++ and FUGIO, the Open Source Visual Programming System created by Alex May.</p>
<hr class="docutils" />
<p>NOTICE!</p>
<p>Due to current unusual circumstances, this Journal Club will be held online as a conference on 8COM0108-0000-2019 module. In order to attend, you must be a member of that module. The conference room will be created on Friday and all of the module members will be notified about that with an email. Once the conference is created and started, module members will be able to join it by visiting the module website on Canvas, and then selecting "Conferences" section. Any queries should be directed to Emil Dmitruk via email (<a class="reference external" href="mailto:e.dmitruk@herts.ac.uk">e.dmitruk@herts.ac.uk</a>).</p>
<div class="line-block">
<div class="line"><br /></div>
</div>
<p><strong>Date:</strong> 03/04/2020 <br />
<strong>Time:</strong> 16:00 <br />
<strong>Location</strong>: Online conference on the 8COM0108-0000-2019 module - more details will be announced on the Biocomputation Slack group.</p>
Spatiotemporal network coding of physiological mossy fiber inputs by the cerebellar granular layer2020-03-11T17:34:06+00:002020-03-11T17:34:06+00:00Emil Dmitruktag:biocomputation.herts.ac.uk,2020-03-11:/2020/03/11/spatiotemporal-network-coding-of-physiological-mossy-fiber-inputs-by-the-cerebellar-granular-layer.html<p class="first last">Ohki Katakura's journal club session where he will talk about the paper "Spatiotemporal network coding of physiological mossy fiber inputs by the cerebellar granular layer".</p>
<p>This week on Journal Club session Ohki Katakura will talk about the paper "Spatiotemporal network coding of physiological mossy fiber inputs by the cerebellar granular layer".</p>
<hr class="docutils" />
<p>The granular layer, which mainly consists of granule and Golgi cells, is the first stage of the cerebellar cortex and processes spatiotemporal information transmitted by mossy fiber inputs with a wide variety of firing patterns. To study its dynamics at multiple time scales in response to inputs approximating real spatiotemporal patterns, we constructed a large-scale 3D network model of the granular layer. Patterned mossy fiber activity induces rhythmic Golgi cell activity that is synchronized by shared parallel fiber input and by gap junctions. This leads to long distance synchrony of Golgi cells along the transverse axis, powerfully regulating granule cell firing by imposing inhibition during a specific time window. The essential network mechanisms, including tunable Golgi cell oscillations, on-beam inhibition and NMDA receptors causing first winner keeps winning of granule cells, illustrate how fundamental properties of the granule layer operate in tandem to produce (1) well timed and spatially bound output, (2) a wide dynamic range of granule cell firing and (3) transient and coherent gating oscillations. These results substantially enrich our understanding of granule cell layer processing, which seems to promote spatial group selection of granule cell activity as a function of timing of mossy fiber input.</p>
<p>Papers:</p>
<ul class="simple">
<li>Sudhakar, S. et al. (2017) <a class="reference external" href="https://doi.org/10.1371/journal.pcbi.1005754">"Spatiotemporal network coding of physiological mossy fiber inputs by the cerebellar granular layer"</a> ,
PLoS Computational Biology 13(9): e1005754</li>
</ul>
<div class="line-block">
<div class="line"><br /></div>
</div>
<p><strong>Date:</strong> 13/03/2020 <br />
<strong>Time:</strong> 16:00 <br />
<strong>Location</strong>: C154</p>
Cliques of Neurons Bound into Cavities Provide a Missing Link between Structure and Function2020-02-26T10:32:21+00:002020-02-26T10:32:21+00:00Emil Dmitruktag:biocomputation.herts.ac.uk,2020-02-26:/2020/02/26/cliques-of-neurons-bound-into-cavities-provide-a-missing-link-between-structure-and-function.html<p class="first last">Emil Dmitruk's journal club session where he will talk about the paper "Cliques of Neurons Bound into Cavities Provide a Missing Link between Structure and Function".</p>
<p>This week on Journal Club session Emil Dmitruk will talk about the paper "Cliques of Neurons Bound into Cavities Provide a Missing Link between Structure and Function".</p>
<hr class="docutils" />
<p>The lack of a formal link between neural network structure and its emergent function
has hampered our understanding of how the brain processes information. We have now come
closer to describing such a link by taking the direction of synaptic transmission into
account, constructing graphs of a network that reflect the direction of information
flow, and analyzing these directed graphs using algebraic topology. Applying this
approach to a local network of neurons in the neocortex revealed a remarkably intricate
and previously unseen topology of synaptic connectivity. The synaptic network contains
an abundance of cliques of neurons bound into cavities that guide the emergence of
correlated activity. In response to stimuli, correlated activity binds synaptically
connected neurons into functional cliques and cavities that evolve in a stereotypical
sequence toward peak complexity. We propose that the brain processes stimuli by forming
increasingly complex functional cliques and cavities.</p>
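<p>A directed clique in this sense is an all-to-all connected set of neurons whose edges respect a single ordering (a directed simplex). The brute-force sketch below, feasible only for toy networks (the paper relies on specialised algebraic-topology software), counts them directly:</p>

```python
from itertools import combinations, permutations

def count_directed_cliques(edges, nodes, k):
    """Count directed k-cliques: node sets admitting an ordering
    v1..vk with an edge vi -> vj for every i < j (directed simplices)."""
    edge_set = set(edges)
    count = 0
    for combo in combinations(nodes, k):
        if any(all((p[i], p[j]) in edge_set
                   for i in range(k) for j in range(i + 1, k))
               for p in permutations(combo)):
            count += 1
    return count

# toy circuit: one fully directed triangle (0 -> 1 -> 2, 0 -> 2) plus a stray edge
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
print(count_directed_cliques(edges, range(4), 3))  # -> 1
```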
<p>Papers:</p>
<ul class="simple">
<li>Reimann, M. et al. (2017) <a class="reference external" href="http://journal.frontiersin.org/article/10.3389/fncom.2017.00048/full">"Cliques of Neurons Bound into Cavities Provide a Missing Link between Structure and Function"</a> ,
Front. Comput. Neurosci. 11:48. doi: 10.3389/fncom.2017.00048</li>
</ul>
<div class="line-block">
<div class="line"><br /></div>
</div>
<p><strong>Date:</strong> 28/02/2020 <br />
<strong>Time:</strong> 16:00 <br />
<strong>Location</strong>: B200</p>
Autaptic Connections Shift Network Excitability and Bursting2020-02-19T15:26:55+00:002020-02-19T15:26:55+00:00Emil Dmitruktag:biocomputation.herts.ac.uk,2020-02-19:/2020/02/19/autaptic-connections-shift-network-excitability-and-bursting.html<p class="first last">Yaqoob Muhammad's journal club session where he will talk about the paper "Autaptic Connections Shift Network Excitability and Bursting".</p>
<p>This week on Journal Club session Yaqoob Muhammad will talk about the paper "Autaptic Connections Shift Network Excitability and Bursting".</p>
<hr class="docutils" />
<p>We examine the role of structural autapses, when a neuron synapses onto itself, in driving network-wide bursting behavior. Using a simple spiking model of neuronal activity, we study how autaptic
connections affect activity patterns, and evaluate if controllability significantly affects changes in
bursting from autaptic connections. Adding more autaptic connections to excitatory neurons increased
the number of spiking events and the number of network-wide bursts. We observed excitatory synapses
contributed more to bursting behavior than inhibitory synapses. We evaluated if neurons with high
average controllability, predicted to push the network into easily achievable states, affected bursting
behavior differently than neurons with high modal controllability, thought to influence the network
into difficult to reach states. Results show autaptic connections to excitatory neurons with high average
controllability led to higher burst frequencies than adding the same number of self-looping connections
to neurons with high modal controllability. The number of autapses required to induce bursting was
lowered by adding autapses to high degree excitatory neurons. These results suggest a role of autaptic
connections in controlling network-wide bursts in diverse cortical and subcortical regions of mammalian
brain. Moreover, they open up new avenues for the study of dynamic neurophysiological correlates of
structural controllability.</p>
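<p>To make the manipulation concrete, the toy model below (not the paper's spiking model; a binary-threshold caricature with assumed parameters) adds autapses by filling the diagonal of the connectivity matrix and compares total spiking with and without them:</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(W, steps=50, thresh=1.0):
    """Toy binary-threshold network: a neuron fires when last step's
    summed synaptic input reaches `thresh`. Returns the total spike count."""
    s = np.zeros(W.shape[0])
    s[:5] = 1.0                          # kick five neurons to start activity
    total = 0
    for _ in range(steps):
        s = (W @ s >= thresh).astype(float)
        total += int(s.sum())
    return total

n = 20
W = (rng.random((n, n)) < 0.2) * 0.6     # sparse excitatory weights
np.fill_diagonal(W, 0.0)
base = simulate(W)

W_aut = W.copy()
np.fill_diagonal(W_aut, 1.2)             # add a strong autapse to every neuron
print(base, simulate(W_aut))             # autapses can only sustain or raise activity
```

Because the weights are non-negative, adding self-excitation can never reduce the active set at any step, so the autaptic total is always at least the baseline.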
<p>Papers:</p>
<ul class="simple">
<li>Wiles, L. et al. (2017) <a class="reference external" href="https://www.nature.com/articles/srep44006#Sec23">"Autaptic Connections Shift Network Excitability and Bursting"</a> ,
Sci. Rep. 7, 44006; doi: 10.1038/srep44006</li>
</ul>
<div class="line-block">
<div class="line"><br /></div>
</div>
<p><strong>Date:</strong> 21/02/2020 <br />
<strong>Time:</strong> 16:00 <br />
<strong>Location</strong>: B200</p>
Dynamic Hierarchical Structure for Cloud Computing Job Scheduling Utilizing Artificial Intelligence Technologies2020-02-12T13:25:05+00:002020-02-12T13:25:05+00:00Emil Dmitruktag:biocomputation.herts.ac.uk,2020-02-12:/2020/02/12/dynamic-hierarchical-structure-for-cloud-computing-job-scheduling-utilizing-artificial-intelligence-technologies.html<p class="first last">Na Helian's journal club session where she will talk about the paper "Dynamic Hierarchical Structure for Cloud Computing Job Scheduling Utilizing Artificial Intelligence Technologies".</p>
<p>This week on Journal Club session Na Helian will talk about the paper "Dynamic Hierarchical Structure for Cloud Computing Job Scheduling Utilizing Artificial Intelligence Technologies".</p>
<hr class="docutils" />
<p>Cloud computing is widely used due to its cost-effectiveness and starvation-free
execution of processes. There has been substantial research done
on job scheduling algorithms in cloud computing to improve scheduling
performance, but little attention has been paid to structure design for
job scheduling. This paper aims to improve job scheduling makespan (max
processing time for given jobs) in a cloud environment. A dynamic
hierarchical structure, which introduces sub-schedulers between scheduler
and servers, is proposed to dynamically change the connection pattern
between sub-schedulers and servers by using artificial intelligence search
algorithms. Due to its dynamic and flexible nature, this design enables
the system to adaptively accommodate the heterogeneity of jobs and resources
in order to make most use of the resources available. Experimental results
demonstrate that a dynamic hierarchical structure can significantly reduce
the total makespan of the heterogeneous tasks allocated to heterogeneous
resources, compared with a one-layer structure. This reduction is
particularly pronounced when resources are scarce.</p>
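<p>Makespan itself is easy to state in code. The sketch below computes it for the classic longest-processing-time greedy heuristic, a one-layer baseline of the kind the proposed hierarchical structure is compared against (the heuristic choice here is ours, not the paper's):</p>

```python
def greedy_makespan(job_times, n_servers):
    """Longest-processing-time greedy scheduling: each job goes to the
    currently least-loaded server. Returns the makespan, i.e. the
    finish time of the busiest server."""
    loads = [0] * n_servers
    for t in sorted(job_times, reverse=True):
        loads[loads.index(min(loads))] += t
    return max(loads)

print(greedy_makespan([7, 5, 4, 3, 3, 2], 3))  # -> 9 (total work 24, ideal 8)
```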
<p>Papers:</p>
<ul class="simple">
<li>"Dynamic Hierarchical Structure for Cloud Computing Job Scheduling
Utilizing Artificial Intelligence Technologies"</li>
</ul>
<div class="line-block">
<div class="line"><br /></div>
</div>
<p><strong>Date:</strong> 14/02/2020 <br />
<strong>Time:</strong> 16:00 <br />
<strong>Location</strong>: B200</p>
Hyperbolic geometry of the olfactory space2020-02-05T12:56:32+00:002020-02-05T12:56:32+00:00Emil Dmitruktag:biocomputation.herts.ac.uk,2020-02-05:/2020/02/05/hyperbolic-geometry-of-the-olfactory-space.html<p class="first last">Emil Dmitruk's journal club session where he will talk about the paper "Hyperbolic geometry of the olfactory space".</p>
<p>This week on Journal Club session Emil Dmitruk will talk about the paper "Hyperbolic geometry of the olfactory space".</p>
<hr class="docutils" />
<p>In the natural environment, the sense of smell, or olfaction, serves to detect
toxins and judge nutritional content by taking advantage of the associations
between compounds as they are created in biochemical reactions. This suggests
that the nervous system can classify odors based on statistics of their
co-occurrence within natural mixtures rather than from the chemical structures
of the ligands themselves. We show that this statistical perspective makes
it possible to map odors to points in a hyperbolic space. Hyperbolic coordinates
have a long but often underappreciated history of relevance to biology. For
example, these coordinates approximate the distance between species computed
along dendrograms and, more generally, between points within hierarchical
tree-like networks. We find that both natural odors and human perceptual
descriptions of smells can be described using a three-dimensional hyperbolic
space. This match in geometries can avoid distortions that would otherwise
arise when mapping odors to perception.</p>
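<p>Distances in such a model live in hyperbolic space; in the Poincare-ball representation, the distance between two points is d(u, v) = arccosh(1 + 2|u - v|&#178; / ((1 - |u|&#178;)(1 - |v|&#178;))). A small sketch (the coordinates are made up):</p>

```python
import math

def poincare_distance(u, v):
    """Hyperbolic distance between points inside the unit Poincare ball:
    d(u, v) = arccosh(1 + 2|u - v|^2 / ((1 - |u|^2) (1 - |v|^2)))."""
    sq = lambda p: sum(x * x for x in p)
    diff = sq([a - b for a, b in zip(u, v)])
    return math.acosh(1.0 + 2.0 * diff / ((1.0 - sq(u)) * (1.0 - sq(v))))

# equal Euclidean steps cover ever more hyperbolic distance near the boundary
d_center = poincare_distance((0.0, 0.0, 0.0), (0.5, 0.0, 0.0))
d_edge = poincare_distance((0.4, 0.0, 0.0), (0.9, 0.0, 0.0))
print(d_center < d_edge)  # -> True
```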
<p>Papers:</p>
<ul class="simple">
<li>Yuansheng Zhou et al. (2018) <a class="reference external" href="https://advances.sciencemag.org/content/4/8/eaaq1458">"Hyperbolic geometry of the olfactory space"</a> ,
Science Advances, 29 Aug 2018: Vol. 4, no. 8, eaaq1458.</li>
</ul>
<div class="line-block">
<div class="line"><br /></div>
</div>
<p><strong>Date:</strong> 07/02/2020 <br />
<strong>Time:</strong> 16:00 <br />
<strong>Location</strong>: B200</p>
Towards Discriminative Representation Learning for Speech Emotion Recognition2020-01-29T11:03:39+00:002020-01-29T11:03:39+00:00Emil Dmitruktag:biocomputation.herts.ac.uk,2020-01-29:/2020/01/29/towards-discriminative-representation-learning-for-speech-emotion-recognition.html<p class="first last">Yi Sun's journal club session where he will talk about the paper "Towards Discriminative Representation Learning for Speech Emotion Recognition".</p>
<p>This week on Journal Club session Yi Sun will talk about the paper "Towards Discriminative Representation Learning for Speech Emotion Recognition".</p>
<hr class="docutils" />
<p>In intelligent speech interaction, automatic speech emotion recognition (SER)
plays an important role in understanding user intention. While sentimental
speech has different speaker characteristics but similar acoustic attributes,
one vital challenge in SER is how to learn robust and discriminative
representations for emotion inferring. In this paper, inspired by human
emotion perception, we propose a novel representation learning component (RLC)
for SER system, which is constructed with Multi-head Self-attention and Global
Context-aware Attention Long Short-Term Memory Recurrent Neural Network
(GCA-LSTM). With the ability of Multi-head Self-attention mechanism in
modeling the element-wise correlative dependencies, RLC can exploit the
common patterns of sentimental speech features to enhance emotion-salient
information importing in representation learning. By employing GCA-LSTM,
RLC can selectively focus on emotion-salient factors with the consideration
of entire utterance context, and gradually produce discriminative representation
for emotion inferring. Experiments on public emotional benchmark database
IEMOCAP and a tremendous realistic interaction database demonstrate the
outperformance of the proposed SER framework, with 6.6% to 26.7% relative
improvement on unweighted accuracy compared to state-of-the-art techniques.</p>
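<p>The core operation inside Multi-head Self-attention is scaled dot-product attention. A single-head NumPy sketch with made-up "speech feature" inputs (the paper's RLC adds multiple heads and GCA-LSTM on top of this):</p>

```python
import numpy as np

def self_attention(X):
    """Single-head scaled dot-product self-attention:
    softmax(X X^T / sqrt(d)) X. Returns outputs and attention weights."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ X, attn

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 8))     # 5 frames of 8-dim "speech" features
out, attn = self_attention(X)
print(out.shape, attn.shape)        # (5, 8) (5, 5)
```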
<p>Papers:</p>
<ul class="simple">
<li>Runnan Li et al. (2019) <a class="reference external" href="https://www.ijcai.org/Proceedings/2019/703">"Towards Discriminative Representation Learning for Speech Emotion Recognition"</a> ,
Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, Main track, pages 5060-5066</li>
</ul>
<div class="line-block">
<div class="line"><br /></div>
</div>
<p><strong>Date:</strong> 31/01/2020 <br />
<strong>Time:</strong> 16:00 <br />
<strong>Location</strong>: B200</p>
What an Ehm Leaks About You: Mapping Fillers into Personality Traits with Quantum Evolutionary Feature Selection Algorithms2019-11-29T15:49:37+00:002019-11-29T15:49:37+00:00Emil Dmitruktag:biocomputation.herts.ac.uk,2019-11-29:/2019/11/29/what-an-ehm-leaks-about-you-mapping-fillers-into-personality-traits-with-quantum-evolutionary-feature-selection-algorithms.html<p class="first last">Mohammad Tayarani-Najaran's journal club where he discusses the paper <a class="reference external" href="https://ieeexplore.ieee.org/abstract/document/8770161">What an "Ehm" Leaks About You: Mapping Fillers into Personality Traits with Quantum Evolutionary Feature Selection Algorithms</a>.</p>
<p>Mohammad Tayarani-Najaran's journal club where he discusses the paper <a class="reference external" href="https://ieeexplore.ieee.org/abstract/document/8770161">What an "Ehm" Leaks About You: Mapping Fillers into Personality Traits with Quantum Evolutionary Feature Selection Algorithms</a>. The abstract is below:</p>
<hr class="docutils" />
<p>This work shows that fillers - short utterances like "ehm" and "uhm" - allow one to predict whether someone is above the median along the Big-Five personality traits. The experiments have been performed over a corpus of 2,988 fillers uttered by 120 different speakers in spontaneous conversations. The results show that the prediction accuracies range between 74% and 82% depending on the particular trait. The proposed approach includes a feature selection step - based on Quantum Evolutionary Algorithms - that has been used to detect the personality markers, i.e., the subset of the features that better account for the prediction outcomes and, indirectly, for the personality of the speakers. The results show that only relatively few features tend to be consistently selected, thus acting as reliable personality markers.</p>
<div class="line-block">
<div class="line"><br /></div>
</div>
<p><strong>Date:</strong> 06/12/2019 <br />
<strong>Time:</strong> 16:00 <br />
<strong>Location</strong>: D449</p>
Efficient codes and balanced networks2019-11-21T10:14:04+00:002019-11-21T10:14:04+00:00Emil Dmitruktag:biocomputation.herts.ac.uk,2019-11-21:/2019/11/21/efficient-codes-and-balanced-networks.html<p class="first last">Samuel Sutton journal club session where he will talk about the paper "Efficient codes and balanced networks".</p>
<p>This week on Journal Club session Samuel Sutton will talk about the paper "Efficient codes and balanced networks".</p>
<hr class="docutils" />
<p>Recent years have seen a growing interest in inhibitory interneurons and their circuits. A striking property of cortical inhibition
is how tightly it balances excitation. Inhibitory currents not only match excitatory currents on average, but track them on a
millisecond time scale, whether they are caused by external stimuli or spontaneous fluctuations. We review, together with
experimental evidence, recent theoretical approaches that investigate the advantages of such tight balance for coding and
computation. These studies suggest a possible revision of the dominant view that neurons represent information with firing rates
corrupted by Poisson noise. Instead, tight excitatory/inhibitory balance may be a signature of a highly cooperative code, orders of
magnitude more precise than a Poisson rate code. Moreover, tight balance may provide a template that allows cortical neurons to
construct high-dimensional population codes and learn complex functions of their inputs.</p>
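<p>A minimal caricature of tight balance (our toy, with assumed time constants): let inhibition be a fast leaky copy of a fluctuating excitatory drive and observe how little net current survives the cancellation:</p>

```python
import numpy as np

rng = np.random.default_rng(2)

# Excitatory drive: a slowly wandering positive signal
dt, tau = 1e-3, 2e-3                                       # seconds
E = 5.0 + 0.05 * np.cumsum(rng.standard_normal(1000))
I = np.zeros_like(E)
for t in range(1, len(E)):
    I[t] = I[t - 1] + (dt / tau) * (E[t - 1] - I[t - 1])   # inhibition chases excitation
net = E - I
print(np.abs(net[100:]).mean() < np.abs(E[100:]).mean())   # balance cancels most drive
```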
<p>Papers:</p>
<ul class="simple">
<li>Sophie Denève & Christian K Machens <a class="reference external" href="https://www.nature.com/articles/nn.4243">"Efficient codes and balanced networks"</a> Nat Neurosci 19, 375–382 (2016) doi:10.1038/nn.4243</li>
</ul>
<div class="line-block">
<div class="line"><br /></div>
</div>
<p><strong>Date:</strong> 22/11/2019 <br />
<strong>Time:</strong> 16:00 <br />
<strong>Location</strong>: D449</p>
Unsupervised Feature Selection for Large Data sets2019-11-13T10:07:56+00:002019-11-13T10:07:56+00:00Emil Dmitruktag:biocomputation.herts.ac.uk,2019-11-13:/2019/11/13/unsupervised-feature-selection-for-large-data-sets.html<p class="first last">Deepak Panday's journal club session where he will talk about the paper "Unsupervised Feature Selection for Large Data sets".</p>
<p>This week on Journal Club session Deepak Panday will talk about the paper "Unsupervised Feature Selection for Large Data sets".</p>
<hr class="docutils" />
<p>The last decade saw a considerable increase in the availability
of data. Unfortunately, this increase was overshadowed by various
technical difficulties that arise when analysing large data sets.
These include long processing times, large requirements for data
storage, and other technical issues related to the analysis of
high-dimensional data sets. By consequence, reducing the cardinality
of data sets (with minimum information loss) has become of interest
to virtually any data scientist. Many feature selection algorithms
have been introduced in the literature, however, there are two main
issues with these. First, the vast majority of such algorithms
require labelled samples to learn from. One should note it is often
too expensive to label a meaningful amount of data, particularly
when dealing with large data sets. Second, these algorithms were
not designed to deal with the volume of data we have nowadays.
This paper introduces a novel unsupervised feature selection
algorithm designed specifically to deal with large data sets.
Our experiments demonstrate the superiority of our method.</p>
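<p>As a hedged illustration of label-free feature selection, the sketch below simply ranks features by variance and keeps the top k; this is a deliberately crude stand-in, much simpler than the paper's algorithm:</p>

```python
import numpy as np

def select_features(X, k):
    """Keep the k features with the largest variance -- no labels needed.
    (A crude unsupervised criterion; the paper's method is considerably
    more sophisticated and built for very large data sets.)"""
    return np.argsort(X.var(axis=0))[::-1][:k]

rng = np.random.default_rng(3)
X = rng.standard_normal((100, 5))
X[:, 2] *= 10.0      # feature 2 carries far more spread
X[:, 4] *= 0.01      # feature 4 is nearly constant
keep = select_features(X, 2)
print(keep)          # feature 2 is retained, feature 4 dropped
```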
<p>Papers:</p>
<ul class="simple">
<li>Renato Cordeiro de Amorim (2019) <a class="reference external" href="https://www.sciencedirect.com/science/article/pii/S0167865518304963">"Unsupervised feature selection for large data sets"</a> ,
Pattern Recognition Letters, vol 128, Pages 183-189.</li>
</ul>
<div class="line-block">
<div class="line"><br /></div>
</div>
<p><strong>Date:</strong> 15/11/2019 <br />
<strong>Time:</strong> 16:00 <br />
<strong>Location</strong>: D449</p>
Towards Explainable Artificial Intelligence2019-11-06T11:00:58+00:002019-11-06T11:00:58+00:00Emil Dmitruktag:biocomputation.herts.ac.uk,2019-11-06:/2019/11/06/towards-explainable-artificial-intelligence.html<p class="first last">Muhammad Yaqoob's journal club session where he will talk about the paper "Towards Explainable Artificial Intelligence".</p>
<p>This week on Journal Club session Muhammad Yaqoob will talk about the paper "Towards Explainable Artificial Intelligence".</p>
<hr class="docutils" />
<p>In recent years, machine learning (ML) has become a key enabling
technology for the sciences and industry. Especially through
improvements in methodology, the availability of large databases
and increased computational power, today's ML algorithms are able
to achieve excellent performance (at times even exceeding the
human level) on an increasing number of complex tasks. Deep
learning models are at the forefront of this development. However,
due to their nested non-linear structure, these powerful models
have been generally considered "black boxes", not providing
any information about what exactly makes them arrive at their
predictions. Since in many applications, e.g., in the medical
domain, such lack of transparency may be not acceptable, the
development of methods for visualizing, explaining and
interpreting deep learning models has recently attracted
increasing attention. This introductory paper presents recent
developments and applications in this field and makes a plea
for a wider use of explainable learning algorithms in practice.</p>
<p>Papers:</p>
<ul class="simple">
<li>Samek W., Müller KR. (2019) <a class="reference external" href="https://link.springer.com/chapter/10.1007/978-3-030-28954-6_1">"Towards Explainable Artificial Intelligence"</a> ,
In: Samek W., Montavon G., Vedaldi A., Hansen L., Müller KR. (eds)
Explainable AI: Interpreting, Explaining and Visualizing Deep Learning.
Lecture Notes in Computer Science, vol 11700. Springer, Cham.</li>
</ul>
<div class="line-block">
<div class="line"><br /></div>
</div>
<p><strong>Date:</strong> 08/11/2019 <br />
<strong>Time:</strong> 16:00 <br />
<strong>Location</strong>: D449</p>
Clique topology reveals intrinsic geometric structure in neural correlations2019-10-30T17:04:05+00:002019-10-30T17:04:05+00:00Emil Dmitruktag:biocomputation.herts.ac.uk,2019-10-30:/2019/10/30/clique-topology-reveals-intrinsic-geometric-structure-in-neural-correlations.html<p class="first last">Shabnam Kadir's journal club session where she will present the paper "Clique topology reveals intrinsic geometric structure in neural correlations" (C. Giusti et al., 2015).</p>
<p>This week on Journal Club session Shabnam Kadir will present the paper
"Clique topology reveals intrinsic geometric structure in neural correlations",
by C. Giusti, E. Pastalkova, C. Curto and V. Itskov, Arxiv Prepr., pp. 1–29, 2015.</p>
<hr class="docutils" />
<p>Detecting meaningful structure in neural activity and connectivity data is
challenging in the presence of hidden nonlinearities, where traditional
eigenvalue-based methods may be misleading. We introduce a novel approach
to matrix analysis, called clique topology, that extracts features of the
data invariant under nonlinear monotone transformations. These features
can be used to detect both random and geometric structure, and depend only
on the relative ordering of matrix entries. We then analyzed the activity
of pyramidal neurons in rat hippocampus, recorded while the animal was
exploring a two-dimensional environment, and confirmed that our method
is able to detect geometric organization using only the intrinsic pattern
of neural correlations. Remarkably, we found similar results during
non-spatial behaviors such as wheel running and REM sleep. This suggests
that the geometric structure of correlations is shaped by the underlying
hippocampal circuits, and is not merely a consequence of position coding.
We propose that clique topology is a powerful new tool for matrix analysis
in biological settings, where the relationship of observed quantities to
more meaningful variables is often nonlinear and unknown.</p>
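<p>The key invariance is easy to demonstrate: any strictly monotone transformation of a matrix leaves the relative ordering of its entries, and hence clique topology, unchanged. A small sketch with a made-up symmetric matrix:</p>

```python
import numpy as np

def entry_order(M):
    """Relative ordering of the off-diagonal entries -- the only
    information clique topology retains from the matrix."""
    iu = np.triu_indices_from(M, k=1)
    return np.argsort(M[iu])

rng = np.random.default_rng(4)
C = rng.random((6, 6))
C = (C + C.T) / 2                 # toy symmetric "correlation" matrix
warped = np.tanh(3.0 * C)         # an unknown nonlinear monotone distortion
print(np.array_equal(entry_order(C), entry_order(warped)))  # -> True
```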
<div class="line-block">
<div class="line"><br /></div>
</div>
<p><strong>Date:</strong> 01/11/2019 <br />
<strong>Time:</strong> 16:00 <br />
<strong>Location</strong>: D449</p>
Neuronal modelling of cerebellar Purkinje cell2019-10-23T10:04:59+01:002019-10-23T10:04:59+01:00Emil Dmitruktag:biocomputation.herts.ac.uk,2019-10-23:/2019/10/23/neuronal-modelling-of-cerebellar-purkinje-cell.html<p class="first last">Ohki Katakura's journal club session where he will talk about neuronal modelling of cerebellar Purkinje cell while referencing various papers.</p>
<p>This week on Journal Club session Ohki Katakura will talk about neuronal modelling of the cerebellar Purkinje cell while referencing various papers.</p>
<hr class="docutils" />
<p>The cerebellar Purkinje cell is the largest and most complex neuron
in the cerebellum. As with the cerebellar cortical network as a whole,
some researchers think the cell has rich computational capacity because
of its complexity. Based on experimental studies, it was realistically
modelled in 1994 (De Schutter &amp; Bower, 1994a,b), and the model has
recently been updated (Masoli et al., 2015; Zang et al., 2018). In this
session, I will introduce the details of the models (i.e., morphology and
embedded ion channels) and the differences between all three of them.</p>
<p>Papers:</p>
<ul class="simple">
<li>De Schutter, E., Bower, J.M., 1994a. <a class="reference external" href="https://doi.org/10.1152/jn.1994.71.1.375">"An active membrane model of the
cerebellar Purkinje cell. I. Simulation of current clamps in slice"</a>.
Journal of Neurophysiology 71, 375–400.</li>
<li>De Schutter, E., Bower, J.M., 1994b. <a class="reference external" href="https://doi.org/10.1152/jn.1994.71.1.401">"An active membrane model of the
cerebellar Purkinje cell II. Simulation of synaptic responses"</a>. Journal
of Neurophysiology 71, 401–419.</li>
<li>Masoli, S., Solinas, S., D’Angelo, E., 2015. <a class="reference external" href="https://doi.org/10.3389/fncel.2015.00047">"Action potential
processing in a detailed Purkinje cell model reveals a critical role for
axonal compartmentalization"</a>. Front. Cell. Neurosci. 9.</li>
<li>Zang, Y., Dieudonné, S., De Schutter, E., 2018. <a class="reference external" href="https://doi.org/10.1016/j.celrep.2018.07.011">"Voltage- and
Branch-Specific Climbing Fiber Responses in Purkinje Cells"</a>. Cell Reports
24, 1536–1549.</li>
</ul>
<div class="line-block">
<div class="line"><br /></div>
</div>
<p><strong>Date:</strong> 25/10/2019 <br />
<strong>Time:</strong> 16:00 <br />
<strong>Location</strong>: D449</p>
Re-uptake of potassium by neurons and glial cells2019-10-14T11:38:08+01:002019-10-14T11:38:08+01:00Emil Dmitruktag:biocomputation.herts.ac.uk,2019-10-14:/2019/10/14/re-uptake-of-potassium-by-neurons-and-glial-cells.html<p class="first last">Reinoud Maex's journal club session where he will present the paper "Computer simulations of neuron-glia interactions mediated by ion flux (Somjen et al., 2008)".</p>
<p>This week on Journal Club session Reinoud Maex will present the paper <a class="reference external" href="https://link.springer.com/article/10.1007%2Fs10827-008-0083-9">"Computer simulations of neuron-glia interactions mediated by ion flux"</a>
, by Somjen, Kager and Wadman (2008) J. Comput. Neurosci. 25, 349-365.</p>
<hr class="docutils" />
<p>This paper resolves two questions that have bothered me for a long time:</p>
<ul class="simple">
<li>How can (neuronal or glial) potassium channels contribute to the re-uptake of potassium, hence how can the outward flow of potassium become an inward flow?</li>
<li>Why is the sodium-potassium pump electrogenic, hence why are three sodium ions exchanged for only two potassium ions?</li>
</ul>
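<p>The first question can be illustrated with the Nernst equation (a back-of-the-envelope sketch with illustrative concentrations, not figures from the paper): when activity raises extracellular [K+], the potassium equilibrium potential E<sub>K</sub> depolarizes, and a membrane whose potential sits below the new E<sub>K</sub> (as a glial membrane typically does) then carries potassium <em>inward</em> through the very same channels.</p>

```python
import math

def nernst_K(K_out_mM, K_in_mM, T_kelvin=310.0):
    """Nernst equilibrium potential for K+ in millivolts."""
    R, F = 8.314, 96485.0          # J/(mol*K), C/mol
    return 1000.0 * (R * T_kelvin / F) * math.log(K_out_mM / K_in_mM)

K_in = 140.0                        # illustrative intracellular [K+] (mM)
e_rest = nernst_K(4.0, K_in)        # normal [K+]o: E_K near -95 mV
e_high = nernst_K(12.0, K_in)       # elevated [K+]o: E_K near -66 mV

v_glia = -85.0                      # typical glial resting potential (mV)
# Driving force V_m - E_K: positive -> outward K+ current, negative -> inward.
direction = "inward" if (v_glia - e_high) < 0 else "outward"
print(round(e_rest, 1), round(e_high, 1), direction)
```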
<div class="line-block">
<div class="line"><br /></div>
</div>
<p><strong>Date:</strong> 18/10/2019 <br />
<strong>Time:</strong> 16:00 <br />
<strong>Location</strong>: D449</p>
Variational AutoEncoders for fragrant molecule data2019-09-25T15:14:12+01:002019-09-25T15:14:12+01:00Emil Dmitruktag:biocomputation.herts.ac.uk,2019-09-25:/2019/09/25/variational-autoencoders-for-fragrant-molecule-data.html<p class="first last">Vinesh Bhunjun's journal club session, where he will present the results of his collaboration with our laboratory.</p>
<p>Vinesh Bhunjun's journal club session, where he will present the results of his collaboration with our laboratory.</p>
<hr class="docutils" />
<p>In this talk, I will present a summary of the work I have done over the past 4 weeks on Variational AutoEncoders (VAEs) as applied to fragrant molecule data. VAEs provide a compact latent representation of the input from which it is possible to reconstruct the input.</p>
<p><strong>Date:</strong> 27/09/2019 <br />
<strong>Time:</strong> 16:00 <br />
<strong>Location</strong>: D449</p>