
20 Hastings, W.K. (1970) Monte Carlo sampling methods using Markov chains and their applications. Biometrika, 57 (1), 97–109. doi: 10.1093/biomet/57.1.97

21 Holbrook, A.J., Lemey, P., Baele, G. et al. (2020) Massive parallelization boosts big Bayesian multidimensional scaling. J. Comput. Graph. Stat., 1–34.

22 Holbrook, A.J., Loeffler, C.E., Flaxman, S.R. et al. (2021) Scalable Bayesian inference for self‐excitatory stochastic processes applied to big American gunfire data. Stat. Comput., 31, 4.

23 Seber, G.A. and Lee, A.J. (2012) Linear Regression Analysis, vol. 329, John Wiley & Sons.

24 Trefethen, L.N. and Bau, D. (1997) Numerical Linear Algebra, Soc. Ind. Appl. Math.

25 Gelman, A., Roberts, G.O., and Gilks, W.R. (1996) Efficient Metropolis jumping rules. Bayesian Stat., 5, 42.

26 Van Dyk, D.A. and Meng, X.‐L. (2001) The art of data augmentation. J. Comput. Graph. Stat., 10, 1–50.

27 Neal, R.M. (2011) MCMC using Hamiltonian dynamics, in Handbook of Markov Chain Monte Carlo (eds S. Brooks, A. Gelman, G. Jones and X.L. Meng), Chapman and Hall/CRC Press, pp. 113–162.

28 Holbrook, A., Vandenberg‐Rodes, A., Fortin, N., and Shahbaba, B. (2017) A Bayesian supervised dual‐dimensionality reduction model for simultaneous decoding of LFP and spike train signals. Stat, 6, 53–67.

29 Bouchard‐Côté, A., Vollmer, S.J., and Doucet, A. (2018) The bouncy particle sampler: a nonreversible rejection‐free Markov chain Monte Carlo method. J. Am. Stat. Assoc., 113, 855–867.

30 Murty, K.G. and Kabadi, S.N. (1985) Some NP‐Complete Problems in Quadratic and Nonlinear Programming. Tech. Rep.

31 Kennedy, J. and Eberhart, R. (1995) Particle Swarm Optimization. Proceedings of ICNN'95‐International Conference on Neural Networks, vol. 4, pp. 1942–1948, IEEE.

32 Davis, L. (1991) Handbook of Genetic Algorithms, Van Nostrand Reinhold, New York.

33 Hunter, D.R. and Lange, K. (2004) A tutorial on MM algorithms. Am. Stat., 58, 30–37.

34 Boyd, S.P. and Vandenberghe, L. (2004) Convex Optimization, Cambridge University Press.

35 Fisher, R.A. (1922) On the mathematical foundations of theoretical statistics. Philos. Trans. R. Soc. London, Ser. A, 222, 309–368.

36 Beale, E., Kendall, M., and Mann, D. (1967) The discarding of variables in multivariate analysis. Biometrika, 54, 357–366.

37 Hocking, R.R. and Leslie, R. (1967) Selection of the best subset in regression analysis. Technometrics, 9, 531–540.

38 Tibshirani, R. (1996) Regression shrinkage and selection via the lasso. J. R. Stat. Soc., Ser. B, 58, 267–288.

39 Geyer, C. (1991) Markov Chain Monte Carlo Maximum Likelihood. Computing Science and Statistics: Proceedings of the 23rd Symposium on the Interface, Interface Foundation, Fairfax Station, pp. 156–163.

40 Tjelmeland, H. and Hegstad, B.K. (2001) Mode jumping proposals in MCMC. Scand. J. Stat., 28, 205–223.

41 Lan, S., Streets, J., and Shahbaba, B. (2014) Wormhole Hamiltonian Monte Carlo. Twenty‐Eighth AAAI Conference on Artificial Intelligence.

42 Nishimura, A. and Dunson, D. (2016) Geometrically tempered Hamiltonian Monte Carlo. arXiv preprint arXiv:1604.00872.

43 Mitchell, T.J. and Beauchamp, J.J. (1988) Bayesian variable selection in linear regression. J. Am. Stat. Assoc., 83, 1023–1032.

44 Madigan, D. and Raftery, A.E. (1994) Model selection and accounting for model uncertainty in graphical models using Occam's window. J. Am. Stat. Assoc., 89, 1535–1546.

45 George, E.I. and McCulloch, R.E. (1997) Approaches for Bayesian variable selection. Statistica Sinica, 7, 339–373.

46 Hastie, T., Tibshirani, R., and Wainwright, M. (2015) Statistical Learning with Sparsity: The Lasso and Generalizations, CRC Press.

47 Friedman, J., Hastie, T., and Tibshirani, R. (2010) Regularization paths for generalized linear models via coordinate descent. J. Stat. Softw., 33, 1.

48 Bhattacharya, A., Chakraborty, A., and Mallick, B.K. (2016) Fast sampling with Gaussian scale mixture priors in high‐dimensional regression. Biometrika, 103, 985–991.

49 Suchard, M.A., Schuemie, M.J., Krumholz, H.M. et al. (2019) Comprehensive comparative effectiveness and safety of first‐line antihypertensive drug classes: a systematic, multinational, large‐scale analysis. The Lancet, 394, 1816–1826.

50 Passos, I.C., Mwangi, B., and Kapczinski, F. (2019) Personalized Psychiatry: Big Data Analytics in Mental Health, Springer.

51 Svensson, V., da Veiga Beltrame, E., and Pachter, L. (2019) A curated database reveals trends in single‐cell transcriptomics. bioRxiv 742304.

52 Nott, D.J. and Kohn, R. (2005) Adaptive sampling for Bayesian variable selection. Biometrika, 92, 747–763.

53 Ghosh, J. and Clyde, M.A. (2011) Rao–Blackwellization for Bayesian variable selection and model averaging in linear and binary regression: a novel data augmentation approach. J. Am. Stat. Assoc., 106, 1041–1052.

54 Carvalho, C.M., Polson, N.G., and Scott, J.G. (2010) The horseshoe estimator for sparse signals. Biometrika, 97, 465–480.

55 Polson, N.G. and Scott, J.G. (2010) Shrink globally, act locally: sparse Bayesian regularization and prediction. Bayesian Stat., 9, 501–538.

56 Polson, N.G., Scott, J.G., and Windle, J. (2013) Bayesian inference for logistic models using Pólya–Gamma latent variables. J. Am. Stat. Assoc., 108, 1339–1349.

57 Nishimura, A. and Suchard, M.A. (2018) Prior‐preconditioned conjugate gradient for accelerated Gibbs sampling in "large n & large p" sparse Bayesian logistic regression models. arXiv preprint arXiv:1810.12437.

58 Rue, H. and Held, L. (2005) Gaussian Markov Random Fields: Theory and Applications, CRC Press.

59 Hestenes, M.R. and Stiefel, E. (1952) Methods of conjugate gradients for solving linear systems. J. Res. Nat. Bur. Stand., 49, 409–436.

60 Lanczos, C. (1952) Solution of systems of linear equations by minimized iterations. J. Res. Nat. Bur. Stand., 49, 33–53.

61 Van der Vorst, H.A. (2003) Iterative Krylov Methods for Large Linear Systems, vol. 13, Cambridge University Press.

62 Cipra, B.A. (2000) The best of the 20th century: editors name top 10 algorithms. SIAM News, 33, 1–2.

63 Dongarra, J., Heroux, M.A., and Luszczek, P. (2016) High‐performance conjugate‐gradient benchmark: a new metric for ranking high‐performance computing systems. Int. J. High Perform. Comput. Appl., 30, 3–10.

64 Zhang, L., Zhang, L., Datta, A., and Banerjee, S. (2019) Practical Bayesian modeling and inference for massive spatial data sets on modest computing environments. Stat. Anal. Data Min., 12, 197–209.

65 Golub, G.H. and Van Loan, C.F. (2012) Matrix Computations, vol. 3, Johns Hopkins University Press.

66 Pybus, O.G., Tatem, A.J., and Lemey, P. (2015) Virus evolution and transmission in an ever more connected world. Proc. R. Soc. B: Biol. Sci., 282, 20142878.
