Johannes Schmidt-Hieber is Professor of Statistics at the University of Twente.

Expertise

  • Mathematics

    • Gaussian Process
    • Bounds
    • Kriging
    • Linear Models
    • Regularization
    • Parameters
    • Bayesian
    • Square Estimator

Organisations

Publications


2025

  • Ordinal pattern-based change point detection (2025). Test, 34(4), 927–980. Betken, A., Micali, G. & Schmidt-Hieber, J. https://doi.org/10.1007/s11749-025-00983-9
  • Regularization through noise: A study of algorithmic randomness in gradient descent training (2025). [Thesis › PhD Thesis - Research UT, graduation UT]. University of Twente. Clara, G. https://doi.org/10.3990/1.9789036568289
  • Statistical machine learning beyond standard supervised learning (2025). [Thesis › PhD Thesis - Research UT, graduation UT]. University of Twente. Wen, H. https://doi.org/10.3990/1.9789036568203
  • Challenges and Opportunities for Statistics in the Era of Data Science (2025). Harvard Data Science Review, 7(2). Kirch, C., Lahiri, S., Binder, H., Brannath, W., Cribben, I., Dette, H., Doebler, P., Feng, O., Gandy, A., Greven, S., Hammer, B., Harmeling, S., Hotz, T., Kauermann, G., Krause, J., Krempl, G., Nieto-Reyes, A., Okhrin, O., Ombao, H., … Lederer, J. https://doi.org/10.1162/99608f92.abf14c9d
  • On the expressivity of deep Heaviside networks (2025). [Working paper › Preprint]. ArXiv.org. Kong, I., Chen, J., Langer, S. & Schmidt-Hieber, J. https://doi.org/10.48550/arXiv.2505.00110
  • Training Diagonal Linear Networks with Stochastic Sharpness-Aware Minimization (2025). [Working paper › Preprint]. ArXiv.org. Clara, G., Langer, S. & Schmidt-Hieber, J. https://doi.org/10.48550/arXiv.2503.11891
  • Transfer learning, generative modelling, and nonparametric regression (2025). [Thesis › PhD Thesis - Research UT, graduation UT]. University of Twente. Zamolodtchikov, P. https://doi.org/10.3990/1.9789036564649
  • Ordinal Patterns Based Change Points Detection (2025). [Working paper › Preprint]. ArXiv.org. Betken, A., Micali, G. & Schmidt-Hieber, J. https://doi.org/10.48550/arXiv.2502.03099

2024

  • Convergence guarantees for forward gradient descent in the linear regression model (2024). Journal of statistical planning and inference, 233, Article 106174. Bos, T. & Schmidt-Hieber, J. https://doi.org/10.1016/j.jspi.2024.106174
  • Improving the Convergence Rates of Forward Gradient Descent with Repeated Sampling (2024). [Working paper › Preprint]. ArXiv.org. Dexheimer, N. & Schmidt-Hieber, J. https://doi.org/10.48550/arXiv.2411.17567
  • Understanding the Effect of GCN Convolutions in Regression Tasks (2024). [Working paper › Preprint]. ArXiv.org. Chen, J., Schmidt-Hieber, J., Donnat, C. & Klopp, O. https://doi.org/10.48550/arXiv.2410.20068
  • On the VC dimension of deep group convolutional neural networks (2024). [Working paper › Preprint]. ArXiv.org. Sepliarskaia, A., Langer, S. & Schmidt-Hieber, J. https://doi.org/10.48550/arXiv.2410.15800
  • A novel statistical approach to analyze image classification (2024). [Working paper › Preprint]. ArXiv.org. Chen, J., Langer, S. & Schmidt-Hieber, J. https://doi.org/10.48550/arXiv.2206.02151
  • Lower bounds for the trade-off between bias and mean absolute deviation (2024). Statistics & probability letters, 213, Article 110182. Derumigny, A. & Schmidt-Hieber, J. https://doi.org/10.1016/j.spl.2024.110182
  • Local convergence rates of the nonparametric least squares estimator with applications to transfer learning (2024). Bernoulli, 30(3), 1845–1877. Schmidt-Hieber, J. & Zamolodtchikov, P. https://doi.org/10.3150/23-BEJ1655
  • Dropout Regularization Versus ℓ2-Penalization in the Linear Model (2024). Journal of machine learning research, 25, 1–48, Article 204. Clara, G., Langer, S. & Schmidt-Hieber, J. https://www.jmlr.org/papers/v25/23-0803.html
  • Codivergences and information matrices (2024). Information Geometry, 7, 253–282. Derumigny, A. & Schmidt-Hieber, J. https://doi.org/10.1007/s41884-024-00135-2
  • Dropout Regularization Versus ℓ2-Penalization in the Linear Model (2024). [Working paper › Preprint]. ArXiv.org. Clara, G., Langer, S. & Schmidt-Hieber, A. J. https://doi.org/10.48550/arXiv.2306.10529
  • On the inability of Gaussian process regression to optimally learn compositional functions (2024). In NIPS'22: Proceedings of the 36th International Conference on Neural Information Processing Systems (pp. 22341–22353), Article 1623. Curran Associates Inc. Giordano, M., Ray, K. & Schmidt-Hieber, J. https://doi.org/10.5555/3600270
  • Johannes Schmidt-Hieber's contribution to the Discussion of 'the Discussion Meeting on Probabilistic and statistical aspects of machine learning' (2024). Journal of the Royal Statistical Society. Series B: Statistical Methodology, 86(2), 329. Schmidt-Hieber, J. https://doi.org/10.1093/jrsssb/qkae007
  • Correction to “Nonparametric regression using deep neural networks with ReLU activation function” (2024). Annals of statistics, 52(1), 413–414. Schmidt-Hieber, J. & Vu, D. https://doi.org/10.1214/24-AOS2351
  • A supervised deep learning method for nonparametric density estimation (2024). Electronic Journal of Statistics, 18(2), 5601–5658. Bos, T. & Schmidt-Hieber, J. https://doi.org/10.1214/24-EJS2332

Research profiles
