<p>UH Biocomputation Group - Deepak Panday (http://biocomputation.herts.ac.uk/)</p>
<p><strong>CEPS: An Open Access MATLAB Graphical User Interface (GUI) for the Analysis of Complexity and Entropy in Physiological Signals</strong> (2024-03-02)</p>
<p class="first last">Deepak Panday's Journal Club session where he will talk about his paper "CEPS: An Open Access MATLAB Graphical User Interface (GUI) for the Analysis of Complexity and Entropy in Physiological Signals".</p>
<p>In this week's Journal Club session, Deepak Panday will talk about his paper "CEPS: An Open Access MATLAB Graphical User Interface (GUI) for the Analysis of Complexity and Entropy in Physiological Signals".</p>
<hr class="docutils" />
<p>Background: We developed CEPS as an open access MATLAB® GUI (graphical user interface) for
the analysis of Complexity and Entropy in Physiological Signals (CEPS), and demonstrate
its use with an example data set that shows the effects of paced breathing (PB) on
variability of heart, pulse and respiration rates. CEPS is also sufficiently adaptable to
be used for other time series physiological data such as EEG (electroencephalography),
postural sway or temperature measurements. Methods: Data were collected from a convenience
sample of nine healthy adults in a pilot for a larger study investigating the effects on
vagal tone of breathing paced at various rates, part of a development programme
for a home training stress reduction system. Results: The current version of CEPS focuses
on those complexity and entropy measures that appear most frequently in the literature,
together with some recently introduced entropy measures which may have advantages over
those that are more established. Ten methods of estimating data complexity are currently
included, and some 28 entropy measures. The GUI also includes a section for data pre-
processing and standard ancillary methods to enable parameter estimation of embedding
dimension m and time delay τ (‘tau’) where required. The software is freely available
under version 3 of the GNU Lesser General Public License (LGPLv3) for non-commercial
users. CEPS can be downloaded from Bitbucket. In our illustration on PB, most complexity
and entropy measures decreased significantly in response to breathing at 7 breaths per
minute, differentiating more clearly than conventional linear, time- and frequency-domain
measures between breathing states. In contrast, Higuchi fractal dimension increased during
paced breathing. Conclusions: We have developed the CEPS software as a physiological data
visualiser able to integrate state-of-the-art techniques. The interface is designed for
clinical research and is structured to accommodate new tools. The aim is to
strengthen collaboration between clinicians and the biomedical community, as demonstrated
here by using CEPS to analyse various physiological responses to paced breathing.</p>
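<p>CEPS itself is a MATLAB GUI, but the Higuchi fractal dimension mentioned above is easy to illustrate. The following is a minimal Python sketch of the standard Higuchi estimator; it is not CEPS code, and the default <code>kmax</code> is a common choice of ours, not a CEPS setting:</p>

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Estimate the Higuchi fractal dimension of a 1-D time series."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    ks = np.arange(1, kmax + 1)
    lk = []
    for k in ks:
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)          # subsampled series x[m], x[m+k], ...
            if len(idx) < 2:
                continue
            # normalised curve length of the subsampled series
            lengths.append(np.sum(np.abs(np.diff(x[idx]))) * (n - 1)
                           / ((len(idx) - 1) * k * k))
        lk.append(np.mean(lengths))
    # the fractal dimension is the slope of log L(k) against log(1/k)
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(lk), 1)
    return slope
```

<p>For a straight line the estimate is exactly 1, while for white noise it approaches 2, which matches the interpretation of an increased fractal dimension as greater signal irregularity.</p>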
<div class="line-block">
<div class="line"><br /></div>
</div>
<p>Papers:</p>
<ul class="simple">
<li>D. Mayor, D. Panday, H. Kandel, T. Steffert, D. Banks, <a class="reference external" href="https://doi.org/10.3390/e23030321">"CEPS: An Open Access MATLAB Graphical User Interface (GUI) for the Analysis of Complexity and Entropy in Physiological Signals"</a>, 2021, Entropy, 23, 321</li>
</ul>
<p><strong>Date:</strong> 2024/03/08 <br />
<strong>Time:</strong> 14:00 <br />
<strong>Location:</strong> C258 &amp; online</p>
<p><strong>Minkowski Metric, Feature Weighting and Anomalous Cluster Initializing in K-Means Clustering</strong> (2021-03-10)</p>
<p class="first last">Deepak Panday's Journal Club session where he will talk about the paper "Minkowski Metric, Feature Weighting and Anomalous Cluster Initializing in K-Means Clustering".</p>
<p>In this week's Journal Club session, Deepak Panday will talk about the paper "Minkowski Metric, Feature Weighting and Anomalous Cluster Initializing in K-Means Clustering".</p>
<hr class="docutils" />
<p>This paper represents another step in overcoming a drawback of
K-Means, its lack of defense against noisy features, using feature
weights in the criterion. The Weighted K-Means method by Huang et al.
(2008, 2004, 2005) [5, 7] is extended to the corresponding Minkowski
metric for measuring distances. Under Minkowski metric the feature
weights become intuitively appealing feature rescaling factors in a
conventional K-Means criterion. To see how this can be used in
addressing another issue of K-Means, the initial setting, a method to
initialize K-Means with anomalous clusters is adapted. The Minkowski
metric based method is experimentally validated on datasets from the
UCI Machine Learning Repository and generated sets of Gaussian
clusters, both as they are and with additional uniform random noise
features, and appears to be competitive in comparison with other
K-Means based feature weighting algorithms.</p>
<div class="line-block">
<div class="line"><br /></div>
</div>
<p>Papers:</p>
<ul class="simple">
<li>R. Cordeiro de Amorim, B. Mirkin, <a class="reference external" href="https://doi.org/10.1016/j.patcog.2011.08.012">"Minkowski Metric, Feature Weighting and Anomalous Cluster Initializing in K-Means Clustering"</a>, 2012, Pattern Recognition, 45, 1061--1075</li>
</ul>
<p><strong>Date:</strong> 2021/03/10 <br />
<strong>Time:</strong> 14:00 <br />
<strong>Location:</strong> online</p>
<p><strong>Reverse Nearest Neighbor Queries: emergence and implementation issues</strong> (2018-06-11)</p>
<p class="first last">Deepak Panday's journal club session on "Reverse Nearest Neighbor Queries: emergence and implementation issues".</p>
<p>K-Nearest Neighbor (KNN) search is a well-known tool in data mining. A KNN query returns the objects in a data set that are nearest to a query object q. One shortcoming of KNN is that it is asymmetric: the fact that a query point q has a data point p as its nearest neighbor does not imply that p's nearest neighbor is q. In decision support systems we may instead be concerned with finding the influence set of a query object q. For example, the decision to open a new outlet of company A at a particular location rests largely on the segment of company B's customers (B being a competitor of A) who are likely to find the new outlet more convenient than B's location. Such a segment of customers can loosely be referred to as an influence set, and this reverse relationship is addressed by the reverse nearest neighbor (RNN) query.</p>
<p>In this talk, Deepak will go through some of the papers on the implementation of RNN queries.</p>
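<p>For illustration, the bichromatic form of the RNN query behind the outlet example can be answered by brute force: a customer belongs to the influence set of a candidate site q exactly when q is closer to that customer than every existing facility. The Python sketch below (names and data are ours, not taken from any of the papers) shows the idea; the papers discuss index structures that avoid this full scan over customers and facilities:</p>

```python
import math

def bichromatic_rnn(customers, facilities, q):
    """Brute-force bichromatic reverse-nearest-neighbour query.

    Returns the customers whose nearest facility, once q is added,
    would be q itself, i.e. the influence set of candidate site q.
    """
    influence = []
    for c in customers:
        nearest_existing = min(math.dist(c, f) for f in facilities)
        if math.dist(c, q) < nearest_existing:
            influence.append(c)
    return influence

# customers near (9, 0) and (10, 0) would switch to the new site
customers = [(0, 0), (1, 0), (9, 0), (10, 0)]
facilities = [(0, 1)]          # company B's existing outlet
q = (10, 1)                    # candidate site for company A
print(bichromatic_rnn(customers, facilities, q))  # → [(9, 0), (10, 0)]
```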
<p><strong>Date:</strong> 15/06/2018 <br />
<strong>Time:</strong> 16:00 <br />
<strong>Location:</strong> LB252</p>
<p><strong>Feature weighting as a tool for clustered based imputation model</strong> (2017-02-16)</p>
<p class="first last">Deepak Panday's journal club session where he discusses feature weighting.</p>
<p>Imputation of missing attribute values is an important data pre-processing
step. Missing attribute values are an inevitable problem in real-world
data collection, yet most data-processing models have no mechanism for
dealing with them. One solution is to add an imputation step to data
pre-processing. An imputation model relies on an initial prediction of the
missing values and has no mechanism to distinguish observed from imputed
values, yet the imputed values are only as good as the assumptions used to
create them. In this research, we introduce an unsupervised
cluster-dependent feature-weighting imputation model. This model uses the
feature-weighting factor to rescale the data and so nullify the effect of
the initial prediction of the missing attribute values.</p>
<p><strong>Date:</strong> 17/02/2017 <br />
<strong>Time:</strong> 16:00 <br />
<strong>Location:</strong> LB252</p>
<p><strong>Removing noisy features via feature weights: preliminary results in mixed-model Gaussian distributions</strong> (2016-09-08)</p>
<p class="first last">Deepak Panday's journal club session on removing noisy features via feature weights.</p>
<p>In this article, we propose an unsupervised feature selection algorithm that removes uniform noisy features encapsulated
in a mixed-model Gaussian distribution. The method is based on the feature-weighting principle and assumes that noisy
features receive the least feature weight and therefore contribute little or nothing to cluster recovery. Experiments show that the proposed feature selection algorithm is more efficient at identifying noisy features than other similar algorithms such as feature selection based on feature similarity (FSFS) or intelligent K-Means feature selection (iKFS).</p>
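<p>The weighting principle can be illustrated with a short Python sketch (ours, not the algorithm of the talk): given cluster labels, each feature's weight is taken inversely proportional to its within-cluster dispersion, and features whose weight falls below the uniform share 1/d are flagged as noise:</p>

```python
import numpy as np

def noisy_feature_flags(X, labels):
    """Flag likely-noisy features from cluster labels via feature weights.

    A uniformly noisy feature has roughly the same dispersion inside
    every cluster as over the whole data set, so its inverse-dispersion
    weight ends up well below the uniform share 1/d.
    """
    X = np.asarray(X, dtype=float)
    n, d = X.shape
    disp = np.zeros(d)
    for j in np.unique(labels):
        members = X[labels == j]
        # accumulate per-feature within-cluster sum of squared deviations
        disp += ((members - members.mean(axis=0)) ** 2).sum(axis=0)
    weights = 1.0 / (disp + 1e-12)
    weights /= weights.sum()
    return weights < 1.0 / d
```

<p>Informative features, whose values concentrate around their cluster centres, keep weights above the uniform share and survive the cut.</p>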
<p><strong>Date:</strong> 09/09/2016 <br />
<strong>Time:</strong> 16:00 <br />
<strong>Location:</strong> LB252</p>