Distribution Optimization: An evolutionary algorithm to separate Gaussian mixtures
Abstract
Finding subgroups in biomedical data is a key task in biomedical research and precision medicine. Already one-dimensional data, such as many different readouts from cell experiments, preclinical or human laboratory experiments or clinical signs, often reveal a more complex distribution than a single mode. Gaussian mixtures play an important role in the multimodal distribution of one-dimensional data. However, although fitting of Gaussian mixture models (GMM) is often aimed at obtaining the separate modes composing the mixture, current technical implementations, often using the Expectation Maximization (EM) algorithm, are not optimized for this task. This occasionally results in poorly separated modes that are unsuitable for determining a distinguishable group structure in the data. Here, we introduce “Distribution Optimization”, an evolutionary algorithm for GMM fitting that uses an adjustable error function based on chi-square statistics and the probability density. The algorithm can be directly targeted at the separation of the modes of the mixture by employing an additional criterion for the degree to which single modes overlap. The obtained GMM fits were comparable with those obtained with classical EM based fits, except for data sets where the EM algorithm produced unsatisfactory results with overlapping Gaussian modes. There, the proposed algorithm successfully separated the modes, providing a basis for meaningful group separation while fitting the data satisfactorily. Through its optimization toward mode separation, the evolutionary algorithm proved a particularly suitable basis for group separation in multimodally distributed data, outperforming alternative EM based methods.

Introduction
Finding subgroups in biomedical data is a key task in biomedical research and precision medicine. In complex multifactorial data, this is achieved by various methods such as cluster analysis, principal component analysis or several implementations of unsupervised machine learning. However, simpler one-dimensional data, such as many different readouts from cell experiments, preclinical or human laboratory experiments or clinical signs, also often reveal a distribution that is more complex than a single mode. Indeed, various tests for multimodal distributions of one-dimensional data have been proposed (for a review, see1), emphasizing the relevance of the problem. Given the frequency of normally distributed biomedical data, Gaussian mixtures play a particularly important role in the multimodal distribution of one-dimensional data that are composed of different subgroups generated by the influence of various biological factors on the research readout. A Gaussian mixture model (GMM), expressed as \(p(x\mid \Theta)\), is defined as the weighted sum of M > 1 components \(p(x\mid {\theta }_{m})\) in \({\mathbb{R}}\),

$$p(x\mid \Theta )=\sum_{m=1}^{M}{\alpha }_{m}\,p(x\mid {\theta }_{m}),$$ (1)

where \(x\in X=\{{x}_{1},\ldots ,{x}_{N}\}\) is a data sample and \(\Theta =({\alpha }_{1},\ldots ,{\alpha }_{M};{\theta }_{1},\ldots ,{\theta }_{M})\) the parameterization of the GMM, with \({\alpha }_{m}\) corresponding to the weight of each component \(m=1,\ldots ,M\). Each component \(p(x\mid {\theta }_{m})\) is represented as a normal distribution with parameters \({\theta }_{m}=({\mu }_{m},{\sigma }_{m})\), i.e., mean value and standard deviation. However, although GMM fitting is often aimed at obtaining the separate modes composing the mixture, current technical implementations are not optimized for this task.
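For illustration, Eq. (1) translates directly into a few lines of base R. This is a minimal sketch; the function and argument names are ours for readability and are not part of the “DistributionOptimization” package.

```r
# Density of a univariate Gaussian mixture, Eq. (1):
# p(x | Theta) = sum over m of alpha_m * N(x; mu_m, sigma_m)
dgmm <- function(x, alpha, mu, sigma) {
  stopifnot(isTRUE(all.equal(sum(alpha), 1)))   # component weights must sum to 1
  comp <- vapply(seq_along(alpha),
                 function(m) alpha[m] * dnorm(x, mu[m], sigma[m]),
                 numeric(length(x)))
  rowSums(matrix(comp, nrow = length(x)))       # weighted sum over the M components
}

# Example: a trimodal mixture with mode locations as reported for data set 1;
# the weights and standard deviations here are illustrative only.
x <- seq(0, 35, length.out = 500)
y <- dgmm(x, alpha = c(1, 1, 1) / 3, mu = c(24.5, 18.1, 7.5), sigma = c(2, 2, 2))
```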
The common approach to model GMMs is maximizing the likelihood for a given dataset, using some variant of Expectation Maximization (EM)2,3 or a Markov-chain Monte-Carlo algorithm4. However, one problem with maximizing only the model’s likelihood is that there may be multiple models with high likelihood that pass statistical tests but are structurally different, some of which may be better suited for modelling classes. A possible solution has been proposed as restricted EM5, which can lead to different solutions, but it tends to be complicated, and the restraints are limited to relations within the parameters of a GMM rather than general properties of the GMM as a whole. Thus, maximizing the likelihood, or an equivalent measure, is a common approach to GMM fitting with several limitations that have been incompletely addressed. Moreover, it does not directly address the separation of the modes of which the GMM is composed. Therefore, we propose an evolutionary algorithm for GMM fitting that uses an adjustable error function based on χ2 statistics and the probability density function. The algorithm can be directly targeted at the separation of the modes of the mixture by employing an additional criterion for the degree to which single modes overlap. Using Bayesian decision borders6, a simple and effective classifier can be created that allows a valid association of data set members with distinct groups defined by the modes of the Gaussian mixture.

Methods
Distribution optimization algorithm
“Distribution Optimization” minimizes a given distribution error by utilizing an evolutionary algorithm. Hence, a “population” of GMMs is processed through many iterations with multiple phases, which mutate, filter (select) and recombine the GMMs (Fig. 1). In general, after randomly initializing a first “generation” of GMMs, in each iteration GMM parameters are randomly changed and recombined, and individuals with better fitness are chosen, to finally form a new “generation” of GMMs.

Figure 1: Workflow diagram of the “Distribution Optimization” evolutionary algorithm.

The first phase, the initialization step, creates the population of GMMs and draws a value for every parameter of every GMM. It therefore depends on the minima and maxima of every structural model parameter. The individuals are drawn from a uniform distribution within those limits. The means and standard deviations are drawn within the minimum and maximum of the original data. The weights of the components are randomly drawn from [0,1] and divided by their sum so that they add up to a value of 1.

The core of the “Distribution Optimization” algorithm is its error function, which in its negated form also serves as the fitness function. It is defined as the χ2 value, which can be used to compare a theoretical and an empirical distribution. The data space is split into B adjacent intervals (B being derived from the standard deviation7,8 for our data sets), with \({k}_{j}\) being the number of empirical data points falling into the jth interval and \({p}_{j}\) being the probability of the GMM on the same interval, so that \({p}_{j}N\) is the expected number of data points in that interval. The error is then defined as:

$${\chi }^{2}=\sum_{j=1}^{B}\frac{{({k}_{j}-{p}_{j}N)}^{2}}{{p}_{j}N}$$ (2)
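A base-R sketch of Eq. (2) follows. For simplicity it uses B equally wide intervals; the actual package derives B from the standard deviation of the data7,8, so the binning here is our simplification.

```r
# Chi-square error of a GMM against the empirical data, Eq. (2).
# k_j: observed number of data points in interval j
# p_j: probability mass of the GMM on interval j, so p_j * N is the expected count
chisq_error <- function(data, alpha, mu, sigma, B = 15) {
  breaks <- seq(min(data), max(data), length.out = B + 1)
  k <- hist(data, breaks = breaks, plot = FALSE)$counts
  gmm_cdf <- function(q) sum(alpha * pnorm(q, mu, sigma))   # mixture CDF at q
  p <- diff(vapply(breaks, gmm_cdf, numeric(1)))            # mass per interval
  N <- length(data)
  # bins with p_j close to 0 would need merging in practice; ignored for brevity
  sum((k - p * N)^2 / (p * N))
}
```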
In addition to the χ2 value, a second error term is included, the so-called “overlap error”. It approximates the fraction of the area overlapped between modes, relative to the total area under the total density curve. The overall overlap error of a GMM is the highest overlap error among all components, given as

$$OverlapErro{r}_{m}=\sum_{i=1}^{N}\min \left(\frac{\mathop{\max }\limits_{j\in \{1,\ldots,M\}\setminus m}\,p({x}_{i}\mid {\theta }_{j})\,{\alpha }_{j}}{p({x}_{i}\mid {\theta }_{m})\,{\alpha }_{m}},\,1\right)$$ (3)

$$OverlapError=\mathop{\max }\limits_{m=1,\ldots,M}(OverlapErro{r}_{m})$$ (4)
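Eqs. (3) and (4) can likewise be sketched in base R. Again, the naming is ours, and numerical edge cases (e.g., density underflow far from all modes) are ignored for brevity.

```r
# Overlap error, Eqs. (3) and (4): for each component m, sum over all data
# points the ratio of the strongest competing weighted component density to
# the weighted density of component m itself, capped at 1; the GMM-level
# error is the maximum over all components.
overlap_error <- function(data, alpha, mu, sigma) {
  M <- length(alpha)
  # N x M matrix of weighted component densities alpha_j * p(x_i | theta_j)
  w <- vapply(seq_len(M),
              function(j) alpha[j] * dnorm(data, mu[j], sigma[j]),
              numeric(length(data)))
  per_component <- vapply(seq_len(M), function(m) {
    competitor <- apply(w[, -m, drop = FALSE], 1, max)  # max over j != m
    sum(pmin(competitor / w[, m], 1))                   # Eq. (3)
  }, numeric(1))
  max(per_component)                                    # Eq. (4)
}
```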
During the selection step, the individuals with high fitness values are chosen. The selection is not limited to just the best individuals but uses the tournament selection algorithm9, which allows a broader choice of individuals. A random number of individuals form a tournament that one of them wins at random, weighted by the individual’s rank of fitness. Therefore, some less fit individuals will be drawn into the next generation, enforcing a broader diversity of potential solutions. The selection is repeated until the size of a generation is reached; the same individual can be selected repeatedly.

The mutation and recombination steps modify the individuals in each generation. During the mutation step, random individuals are selected and their parameters changed. Here, a random uniform mutation is employed: a random number of parameters of an individual are chosen and then randomly drawn again within their respective minimum and maximum values. This process introduces the variety of solutions. The recombination step modifies individuals by choosing two random individuals and replacing them with their “children”. A pair of children \({c}_{1},{c}_{2}\) is defined by the weighted average of their parents \({p}_{1},{p}_{2}\) with a uniformly drawn weight \(\alpha\) out of [0,1].

$${c}_{1}=\alpha {p}_{1}+(1-\alpha ){p}_{2}$$ (5)

$${c}_{2}=(1-\alpha ){p}_{1}+\alpha {p}_{2}$$ (6)

The recombination operation mainly fulfills the role of local optimization, which is why we have chosen the weighted average operation, while the mutation operation is used for introducing variance. Finally, the algorithm finishes after a fixed number of iterations by returning the GMM with the highest fitness.
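The two variation operators are compact enough to sketch directly. The sketch below represents an individual simply as a numeric vector of GMM parameters with per-parameter bounds; this encoding is our simplification, not the package’s internal representation.

```r
# Intermediate recombination, Eqs. (5) and (6): the children are weighted
# averages of the two parents with a uniformly drawn weight alpha.
recombine <- function(p1, p2) {
  a <- runif(1)
  list(c1 = a * p1 + (1 - a) * p2,
       c2 = (1 - a) * p1 + a * p2)
}

# Random uniform mutation: redraw a randomly sized subset of the parameters
# uniformly within their respective lower and upper bounds.
mutate <- function(ind, lower, upper) {
  n_mut <- sample(seq_along(ind), 1)      # how many parameters to mutate
  idx <- sample(seq_along(ind), n_mut)    # which parameters to mutate
  ind[idx] <- runif(n_mut, lower[idx], upper[idx])
  ind
}
```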
Implementation
The implementation of the “Distribution Optimization” evolutionary algorithm in the similarly named R library (https://cran.r-project.org/package=DistributionOptimization) makes use of the genetic algorithms provided by the R package “GA” (https://cran.r-project.org/package=GA)10. The functionality can be accessed through the function of the same name, “DistributionOptimization()”. As input, a data vector and the desired number of Gaussians are expected. As a balance between the “OverlapError” and the χ2 value is desired, a ratio between both in favor of the χ2 value may be set through the hyperparameter “OverlapTolerance”, which when set at its maximum value of 1 results in a pure optimization of fitness. The general parameters defining the evolution can be set as provided by the “GA” package. Parameters for the population size and the number of iterations may be chosen depending on the difficulty of the task and increased manually when necessary. The output consists of all parameters necessary to reproduce the final GMM, and of the seed needed to reproduce the evolutionary algorithm. The user can monitor the process through the “Monitor” parameter, which either silences the call or outputs stepwise fitness improvements.
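Based on this description, a call could look as follows. The function name and the “OverlapTolerance” and “Monitor” parameters are taken from the text; the names of the data and mode-number arguments (“Data”, “Modes”) are our assumption, so the package manual (?DistributionOptimization) should be consulted for the authoritative signature.

```r
library(DistributionOptimization)

# Simulated one-dimensional data with three Gaussian subgroups,
# loosely resembling data set 1 (cold pain thresholds).
set.seed(1)
x <- c(rnorm(50, 24.5, 2), rnorm(50, 18.1, 2), rnorm(48, 7.5, 2))

fit <- DistributionOptimization(
  Data = x,                # one-dimensional data vector (assumed argument name)
  Modes = 3,               # desired number of Gaussian components (assumed name)
  OverlapTolerance = 0.5,  # balance between chi-square error and overlap error
  Monitor = 1              # report stepwise fitness improvements
)
```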
Further utilities needed for clustering, such as the likelihood ratio test or the Akaike information criterion (AIC11), require the R package “AdaptGauss” (https://CRAN.R-project.org/package=AdaptGauss)12.

Data sets and analyses
To assess the suitability of “Distribution Optimization” for the automated separation of the components of a GMM, the algorithm was applied to four different data sets available from published reports as specified below.

Data set 1 is used as the main example data set to demonstrate the function of the algorithm and the usage of the associated R library. It comprises cold pain thresholds acquired from n = 148 healthy volunteers13. Noxious cold stimuli had been applied using a 3 × 3 cm2 thermode placed on the volar forearm. The thermode was cooled down at −8 °C/s, starting from 32 °C. Following establishment of the individual cold pain threshold to stimuli with slower cooling (1 °C/s), where the subject had to indicate when the sensation changed from cool to painful, 11 stimuli with fast decreasing temperature were applied, ranging from −5 to +5 °C in steps of 1 °C from that threshold. Stimuli were applied in randomized order and the subjects rated each stimulus with respect to its painfulness. The “yes”/“no” responses were submitted to binary logistic regression to obtain the phasic cold pain threshold. The obtained pain thresholds to fast-cooling (“phasic”) thermal stimuli had been analyzed with respect to modal distribution. Specifically, the parameters of the GMM had been optimized using the EM algorithm as implemented in our interactive R package “AdaptGauss”. The analysis had resulted in a trimodal distribution of phasic cold pain thresholds, with Gaussian modes located at mean temperatures of 24.5, 18.1 and 7.5 °C in decreasing order of cold pain sensitivity. To determine the optimum number of components, model optimization was done for M = 2 to 4 components, i.e., one mode less or more than in the original analysis. The final model was reestablished based on the Akaike information criterion. To test the robustness of the results, the “Distribution Optimization” algorithm was run 100 times using different values of “seed”. The 95% confidence intervals of the GMM parameter estimates were obtained as the 2.5th and 97.5th percentiles of the estimates from the 100 runs. This was compared with GMM fits in which the EM algorithm was used for model adaptation.

Data set 2 consists of heat pain thresholds that had been used for the original publication of the interactive “AdaptGauss” R library12. The data had been obtained by locating a thermode at the skin of the forearm of n = 127 healthy volunteers and raising the temperature at 1 °C/s until the subject indicated a painful sensation. Interactive GMM analysis, based on the EM algorithm but with visually guided correction of the fit versus the observed PDE of the data, had identified a distribution pattern with M = 4 Gaussian modes located at temperatures of 32.3, 37.2, 41.4, and 45.4 °C.

Data set 3 had been acquired in a cohort of n = 31 healthy subjects in whom cortical excitability had been modulated by applying transcranial magnetic stimulation14. Following inhibitory stimulation of the primary motor cortex, the changes in the amplitudes of motor evoked potentials displayed a trimodal distribution with modes located at 69.7, 115.1 and 158.4%.

Finally, data set 4 contains a sample of n = 10,000 microarray derived gene expression values, observed in subjects with and without leukemia15. A total of 7,747 different genes were assessed in 554 subjects, of whom 109 were healthy while 445 patients had been diagnosed with some kind of leukemia. Every sample represents the logarithmic intensity of a gene expression.

The automated “Distribution Optimization” GMM algorithm was run on these data sets and the obtained fits were compared with the original fits and with fits using the non-interactive standard EM algorithm implemented in the R library “mclust” (https://cran.r-project.org/package=mclust)16. In addition, the quality of the fit was assessed by visual inspection of the fit and of a derived quantile-quantile (QQ) plot of the observed and predicted data distributions.
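For the non-interactive EM baseline, a univariate “mclust” fit can be obtained as sketched below; Mclust() fits one-dimensional mixtures with equal- or unequal-variance models and returns the mixture parameters for comparison. The simulated data merely stand in for the real data sets, which are described in the cited reports.

```r
# EM baseline with mclust: fit a univariate GMM with M = 3 components.
library(mclust)

set.seed(42)
x <- c(rnorm(50, 24.5, 2), rnorm(50, 18.1, 2), rnorm(48, 7.5, 2))

em_fit <- Mclust(x, G = 3)                 # EM fit with 3 components
em_fit$parameters$mean                     # component means
sqrt(em_fit$parameters$variance$sigmasq)   # component standard deviations
em_fit$parameters$pro                      # component weights
```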
Results
Comparison of GMM fits using DistributionOptimization or the EM algorithm
The GMM fit of data set 1 using M = 3 modes as in the original publication13 resulted in mean values of m1,…,3 = 24.8, 14.7, and 8 °C. The “Distribution Optimization” algorithm well separated the three Gaussian modes; however, the Bayesian decision limits slightly differed from those obtained by either the interactive or the non-interactive GMM fit based on the EM algorithm (Fig. 2). In this data set, the interactive EM based fit had provided the best model solution according to an Akaike information criterion of AIC = 988.81; “Distribution Optimization” and the non-interactive EM based fit were associated with AIC values of 994.96 and 992.36, respectively.

Figure 2: Fit of a GMM with M = 3 modes to data set 1 (pain thresholds to cold stimuli acquired from healthy volunteers13). The distribution of the data is shown as the probability density function (PDF) estimated by means of the Pareto density estimation (PDE23; black line) and overlaid on a histogram. The GMM fit is shown as a red line and the M = 3 single mixes are indicated as differently colored dashed lines (M#1, …, M#3). The Bayesian boundaries between the Gaussians are indicated as perpendicular magenta lines. At the right of the distributions, the respective QQ-plots are shown. Top: original fit as published previously, obtained with an interactive EM based GMM adaptation13. Middle: fit obtained with the automated “Distribution Optimization” algorithm. Bottom: fit obtained using the EM algorithm without manual interaction. The figure has been created using the R software package (version 3.5.3 for Linux; http://CRAN.R-project.org/)24 and the R libraries “AdaptGauss” (https://cran.r-project.org/package=AdaptGauss)12 and “DistributionOptimization” (https://cran.r-project.org/package=DistributionOptimization).
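The Bayesian decision limits shown in the figures are the points at which the weighted densities of two adjacent components are equal, i.e., where \({\alpha }_{i}\,p(x\mid {\theta }_{i})={\alpha }_{j}\,p(x\mid {\theta }_{j})\). A sketch of their numerical computation in base R (our helper, not a function of the cited packages, which provide comparable utilities):

```r
# Bayesian decision boundary between two adjacent Gaussian components:
# the root of alpha_i * N(x; mu_i, sigma_i) - alpha_j * N(x; mu_j, sigma_j),
# searched between the two component means.
bayes_boundary <- function(alpha_i, mu_i, sigma_i, alpha_j, mu_j, sigma_j) {
  f <- function(x) alpha_i * dnorm(x, mu_i, sigma_i) - alpha_j * dnorm(x, mu_j, sigma_j)
  uniroot(f, lower = min(mu_i, mu_j), upper = max(mu_i, mu_j))$root
}

# Example with two of the mode locations reported for data set 1
# (weights and standard deviations are illustrative only)
bayes_boundary(0.4, 18.1, 2, 0.3, 24.5, 2)
```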
Fitting various GMMs to data set 1 showed that the “Distribution Optimization” algorithm always aims at the least overlap among the Gaussian modes (Fig. 3). The present experiment also indicated that reassessment of the originally trimodal distribution of cold pain thresholds verified the original M = 3 modes, since this provided the lowest value of the AIC (Fig. 3, lower right panel).
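The AIC underlying these model comparisons follows directly from the GMM log-likelihood. Below is a base-R sketch, counting 3M − 1 free parameters (M means, M standard deviations, and M − 1 free weights, since the weights sum to 1); it reuses the dgmm() helper sketched earlier.

```r
# AIC of a fitted univariate GMM: AIC = 2 * k - 2 * logLik,
# with k = 3 * M - 1 free parameters.
gmm_aic <- function(data, alpha, mu, sigma) {
  M <- length(alpha)
  loglik <- sum(log(dgmm(data, alpha, mu, sigma)))  # dgmm() as defined above
  2 * (3 * M - 1) - 2 * loglik
}
```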
The focus of the “Distribution Optimization” algorithm on the separation of the Gaussian modes, disfavoring overlaps, became more evident in data set 2, where the non-interactive EM algorithm produced a solution with a counterintuitive separation of the subjects into clusters of heat pain thresholds (Fig. 4). This is indicated by the location of the Bayesian decision limits, which delineated a narrow second cluster not corresponding to the displayed data distribution. By contrast, the “Distribution Optimization” algorithm produced a GMM with means close to the original result obtained with the interactive EM based GMM fit, located at temperatures of 35, 37.2, 41 and 44.2 °C. Of note, while the solution provided by the non-interactive EM algorithm was unusable for topical interpretation, it nevertheless provided the lowest AIC of 651.01, while the better groupings obtained with the interactive EM or “Distribution Optimization” based fits were associated with slightly higher AIC values of 654.64 and 659.2, respectively.

Figure 3: Assessment of the possible number of modes in the distribution of data set 1. Fitting of a Gaussian (mixture) model (GMM) with M = 2, …, 4 modes (similar panels throughout the figure) to the distribution, shown as PDE (black line) and overlaid on a histogram. The GMM fit is shown as a red line and the M = 2, …, 4 single mixes are indicated as differently colored dashed lines (M#1, …, M#4). The Bayesian boundaries between the Gaussians are indicated as perpendicular magenta lines. The best fit was obtained with M = 3 modes (dots) as indicated by the lowest value of the Akaike information criterion. The figure has been created using the R software package (version 3.5.3 for Linux; http://CRAN.R-project.org/)24 and the R libraries “AdaptGauss” (https://cran.r-project.org/package=AdaptGauss)12 and “DistributionOptimization” (https://cran.r-project.org/package=DistributionOptimization).

Figure 4: Fit of a GMM with M = 4 modes to data set 2 (pain thresholds to heat stimuli acquired in healthy volunteers12). The distribution of the data is shown as the probability density function (PDF) estimated by means of the Pareto density estimation (PDE23; black line) and overlaid on a histogram. The GMM fit is shown as a red line and the M = 4 single mixes are indicated as differently colored dashed lines (M#1, …, M#4). The Bayesian boundaries between the Gaussians are indicated as perpendicular magenta lines. At the right of the distributions, the respective QQ-plots are shown. Top: original fit as published previously, obtained with an interactive EM based GMM adaptation12. Middle: fit obtained with the automated “Distribution Optimization” algorithm. Bottom: fit obtained using the EM algorithm without manual interaction. The figure has been created using the R software package (version 3.5.3 for Linux; http://CRAN.R-project.org/)24 and the R libraries “AdaptGauss” (https://cran.r-project.org/package=AdaptGauss)12 and “DistributionOptimization” (https://cran.r-project.org/package=DistributionOptimization).

Fits of data set 3 again showed that the “Distribution Optimization” algorithm produced results comparable to those obtained with the alternatives tested in this analysis (Fig. 5). For data set 4, using M = 3 modes and aiming at their separation, “Distribution Optimization” terminated with Gaussian modes located at −44.1, −3.2 and 29% relative gene expression, whereas the raw EM based fit found almost superimposed modes with means at −1.1, −0.2 and 3% relative expression, and non-meaningful Bayesian decision limits (Fig. 6).

Figure 5: Fit of a GMM with M = 3 modes to data set 3 (amplitudes of muscle potentials evoked in healthy volunteers14). The distribution of the data is shown as the probability density function (PDF) estimated by means of the Pareto density estimation (PDE23; black line) and overlaid on a histogram. The GMM fit is shown as a red line and the M = 3 single mixes are indicated as differently colored dashed lines (M#1, …, M#3). The Bayesian boundaries between the Gaussians are indicated as perpendicular magenta lines. At the right of the distributions, the respective QQ-plots are shown. Top: original fit as published previously, obtained with an interactive EM based GMM adaptation14. Middle: fit obtained with the automated “Distribution Optimization” algorithm. Bottom: fit obtained using the EM algorithm without manual interaction. The figure has been created using the R software package (version 3.5.3 for Linux; http://CRAN.R-project.org/)24 and the R libraries “AdaptGauss” (https://cran.r-project.org/package=AdaptGauss)12 and “DistributionOptimization” (https://cran.r-project.org/package=DistributionOptimization).

Figure 6: Fit of a GMM with M = 3 modes to data set 4 (microarray derived gene expression data from patients with leukemia and controls15). The distribution of the data is shown as the probability density function (PDF) estimated by means of the Pareto density estimation (PDE23; black line) and overlaid on a histogram. The GMM fit is shown as a red line and the M = 3 single mixes are indicated as differently colored dashed lines (M#1, …, M#3). The Bayesian boundaries between the Gaussians are indicated as perpendicular magenta lines. At the right of the distributions, the respective QQ-plots are shown. Top: fit obtained with the automated “Distribution Optimization” algorithm. Bottom: fit obtained using the EM algorithm without manual interaction. The figure has been created using the R software package (version 3.5.3 for Linux; http://CRAN.R-project.org/)24 and the R libraries “AdaptGauss” (https://cran.r-project.org/package=AdaptGauss)12 and “DistributionOptimization” (https://cran.r-project.org/package=DistributionOptimization).

Robustness of the distribution optimization results
When running 100 fits of data set 1 with M = 4 modes and setting the seed parameter to i in the ith run, the “Distribution Optimization” algorithm produced parameter values of the GMM that varied between 3.3 and 12.9% (Table 1). By contrast, the EM algorithm produced the same results in every run, i.e., the between-run variance of the GMM parameter values was 0%.

Table 1: Parameters of the GMM obtained for data set 1, using 100 runs with different values of seed and either the “Distribution Optimization” or the EM algorithm (means, standard deviations (SD), and coefficients of variation (CV)).
Discussion
The proposed method of fitting GMMs with a focus on maximum separation of the single mixes, while allowing least overlap, was successful in providing satisfactory fits of the probability density distributions of the data. The results were comparable with those obtained with classical EM based fits, except for data sets where the EM algorithm produced unsatisfactory results. The observation of such results, as shown with data set 2 where the GMM provided no meaningful basis for group separation in a biomedical context, had been the motive for developing the present alternative GMM analysis method. Hence, differences from the automated EM based fit are expected. Of note, when using visually guided EM based GMM fitting, the results were similar to those obtained with “Distribution Optimization”. The main advantages of the latter algorithm, however, are that it excludes the subjective component of visually guided fitting, is better suited for automated GMM fitting of many data sets, and makes the results better reproducible.

The present algorithm has been developed to replace algorithms that aim at maximizing the GMM’s likelihood, such as EM2,3, or an equivalent measure of goodness-of-fit, such as used in Markov-chain Monte-Carlo algorithms4. EM is divided into an expectation and a maximization step, which are repeated to incrementally improve the likelihood of the model. The algorithm stops when the improvement falls below a certain threshold. During the expectation step, the a priori likelihood is calculated based on the current model parameters. A likelihood function can therefore be constructed, based on a classification of every data point to the GMM component with the highest Bayesian probability. The maximization step then maximizes the parameters, given this information. For the general problem of local maxima, many different modifications of the algorithm have been proposed. For example, in CEM2,17 and ECM18, only a single component is maximized in each iteration. SMEM19, SSMEM19 and Competitive EM20 are variants of EM that include split and merge operations on GMM components. Randomly mutating EM algorithms are the Random Swap EM21 or the genetic based EM. However, these algorithms do not aim at replacing the EM algorithm but rather at reducing one of its weaknesses, the risk of ending in local rather than global optima. An alternative to EM are the Markov-chain Monte-Carlo algorithms4. They nevertheless follow the same idea as EM by alternately determining weightings, using Bayesian classification, and deriving expectation values and standard deviations from those weightings. Instead of maximizing those values in each iteration, Monte Carlo sampling is used, and the final GMM is given by the stationary distribution of the Markov chain. This, however, amounts to a maximization of likelihood as in the EM algorithm. All covered methods ultimately differ only in their way of searching for the maximal likelihood, which we argue is not necessarily the best or most appropriate measure.

The reproducibility of the results, however, was lower than that obtained with classical EM based fitting, as shown in the robustness experiment. This is due to the genetic algorithm being naturally dependent on a degree of randomness. This, however, allows the algorithm to overcome local maxima and has the advantage of approximating different solutions. It allows for an automated generation of multiple significant models, which can offer different interpretations and has the effect that the fits differ from those obtained with EM. Specifically, the effect demonstrated in the sample data sets is mainly caused by the choice of quality measures during the fitting process. We have shown that statistically sound GMMs can be reached by optimization of a quality measure other than likelihood and that an additional quality criterion can be introduced, producing a model that is significantly different for classification purposes from what is reached by EM. Since the search space for complex problems such as GMMs is not transparent and there is always explicit or implicit bias introduced through such optimizations, one can never ultimately conclude that there are no models that reach higher performance under the given quality measures while having a less desirable structure.

The preference of “Distribution Optimization” over EM based alternative methods for GMM fitting not only depends on the reproducibility of the results, the goodness of fit, and the robustness of the results, but also has a clear contextual component. This is shown with data set 4. The EM fit of a GMM consisted in a mixture of almost superimposed Gaussian modes with nearly identical means but different standard deviations (Fig. 6). By contrast, “Distribution Optimization” produced three distinct modes with a main mode in the center and two modes of low weights at its margins. In biomedical research, such as represented by the leukemia derived data set 4, the usual criterion to define an effect is a difference in means, for example as defined in the effect size measure Cohen’s d22. Research often aims at the identification of subgroups in the data, and these subgroups are characterized by different central values of the selection parameter, such as different means. With the EM based fit, a useful group separation was impossible, whereas “Distribution Optimization” provided this readily. However, when group separation is not a topical focus and, by contrast, the data are considered to represent groups with similar means but different variances, “Distribution Optimization” would be unsuitable, while EM based GMM fitting may provide the desired result. Thus, the choice of the method should consider both the statistical soundness and the topical context.

By its optimization toward mode separation, the proposed “Distribution Optimization” evolutionary algorithm for GMM fitting provides a suitable basis for group separation in multimodally distributed data. It introduces a choice between multiple significant models, which would not have emerged from pure likelihood optimization as in alternative approaches to GMM fitting. When group separation is intended, the “Distribution Optimization” algorithm may outperform alternative EM based methods in some data sets.

Data availability
The “DistributionOptimization” R package is freely available at https://cran.r-project.org/package=DistributionOptimization.

References
1. Ameijeiras-Alonso, J., Crujeiras, R. M. & Rodríguez-Casal, A. Mode testing, critical bandwidth and excess mass. ArXiv e-prints (2016).
2. Dempster, A. P., Laird, N. M. & Rubin, D. B. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B 39, 1–38 (1977).
3. Bishop, C. Pattern Recognition and Machine Learning (Springer, 2006).
4. Frühwirth-Schnatter, S. Finite Mixture and Markov Switching Models (Springer New York, 2006).
5. Kim, D. K. & Taylor, J. M. G. The restricted EM algorithm for maximum likelihood estimation under linear restrictions on the parameters. Journal of the American Statistical Association 90, 708–716 (1995).
6. Bayes, M. & Price, M. An essay towards solving a problem in the doctrine of chances. By the late Rev. Mr. Bayes, F. R. S., communicated by Mr. Price, in a letter to John Canton, A. M. F. R. S. Philosophical Transactions 53, 370–418 (1763).
7. Keating, J. P. & Scott, D. W. A primer on density estimation for the great home run race of ’98. STATS #25, 16–22 (1999).
8. Ultsch, A. Optimal density estimation in data containing clusters of unknown structure. Technical Report No. 34 (Dept. of Mathematics and Computer Science, University of Marburg, Marburg, Germany, 2003).
9. Goldberg, D. & Deb, K. A comparative analysis of selection schemes used in genetic algorithms. Foundations of Genetic Algorithms (1991).
10. Scrucca, L. GA: a package for genetic algorithms in R. Journal of Statistical Software 53, 1–37 (2013).
11. Akaike, H. A new look at the statistical model identification. IEEE Transactions on Automatic Control 19, 716–723 (1974).
12. Ultsch, A., Thrun, M. C., Hansen-Goos, O. & Lötsch, J. Identification of molecular fingerprints in human heat pain thresholds by use of an interactive mixture model R toolbox (AdaptGauss). International Journal of Molecular Sciences 16, 25897–25911 (2015).
13. Weyer-Menkhoff, I., Thrun, M. C. & Lötsch, J. Machine-learned analysis of quantitative sensory testing responses to noxious cold stimulation in healthy subjects. European Journal of Pain 22, 862–874 (2018).
14. Heidegger, T., Hansen-Goos, O., Batlaeva, O., Ziemann, U. & Lötsch, J. A data-driven approach to responder subgroup identification after paired continuous theta burst stimulation. Frontiers in Human Neuroscience 4, 382 (2017).
15. Thrun, M. C. & Ultsch, A. Quality measurements of projections to evaluate discontinuous structures of high-dimensional data. Journal of Machine Learning Research 17 (2016).
16. Scrucca, L., Fop, M., Murphy, T. B. & Raftery, A. E. mclust 5: clustering, classification and density estimation using Gaussian finite mixture models. The R Journal 8, 205–233 (2016).
17. Celeux, G., Chretien, S., Forbes, F. & Mkhadri, A. A component-wise EM algorithm for mixtures. Journal of Computational and Graphical Statistics 10, 697–712 (2001).
18. Meng, X.-L. & Rubin, D. B. Maximum likelihood estimation via the ECM algorithm: a general framework. Biometrika 80, 267–278 (1993).
19. Wang, H. X., Luo, B., Zhang, Q. B. & Wei, S. Estimation for the number of components in a mixture model using stepwise split-and-merge EM algorithm. Pattern Recognition Letters 25, 1799–1809 (2004).
20. Zhang, B., Zhang, C. & Yi, X. Competitive EM algorithm for finite mixture models. Pattern Recognition 37, 131–144 (2004).
21. Zhao, Q., Hautamäki, V., Kärkkäinen, I. & Fränti, P. Random swap EM algorithm for Gaussian mixture models. Pattern Recognition Letters 33, 1–27 (2012).
22. Cohen, J. A coefficient of agreement for nominal scales. Educational and Psychological Measurement 20 (1960).
23. Ultsch, A. In Innovations in Classification, Data Science, and Information Systems – Proceedings of the 27th Annual Conference of the German Classification Society (GfKl) (eds Baier, D. & Wernecke, K. D.) (Springer).
24. R Development Core Team. R: A Language and Environment for Statistical Computing (2008).

Acknowledgements
This work has been funded by the Landesoffensive zur Entwicklung wissenschaftlich-ökonomischer Exzellenz (LOEWE), LOEWE-Zentrum für Translationale Medizin und Pharmakologie (J.L.).
The funders had no role in the decision to publish or in the preparation of the manuscript.

Author information
Affiliations
DataBionics Research Group, University of Marburg, Hans-Meerwein-Straße 22, 35032 Marburg, Germany: Florian Lerch & Alfred Ultsch.
Institute of Clinical Pharmacology, Goethe-University, Theodor-Stern-Kai 7, 60590 Frankfurt am Main, Germany: Jörn Lötsch.
Fraunhofer Institute of Molecular Biology and Applied Ecology – Project Group Translational Medicine and Pharmacology (IME-TMP), Theodor-Stern-Kai 7, 60590 Frankfurt am Main, Germany: Jörn Lötsch.

Contributions
F.L. – conceptualization and implementation of the algorithm, testing of the algorithm, writing of the manuscript, design and implementation of Fig. 1. A.U. – conceptualization of the algorithm, testing of the algorithm, critical discussion of the implementation. J.L. – testing of the algorithm, writing of the manuscript, data analyses and creation of Figs. 2–6.

Corresponding author
Correspondence to Jörn Lötsch.

Ethics declarations
Competing interests: The authors declare no competing interests.

Additional information
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions
Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

Cite this article
Lerch, F., Ultsch, A. & Lötsch, J. Distribution Optimization: An evolutionary algorithm to separate Gaussian mixtures. Sci Rep 10, 648 (2020). https://doi.org/10.1038/s41598-020-57432-w
Received: 01 September 2019; Accepted: 22 December 2019; Published: 20 January 2020.