About me
News (2022):
- 12 December - Our paper "You Can Have Better Graph Neural Networks by Not Training Weights at All: Finding Untrained GNNs Tickets" received the best paper award at the Learning on Graphs (LoG 2022) conference (link)
- 24 November - One paper on sparsity and graph neural networks accepted at the LoG 2022 conference as a spotlight presentation (link)
- 16 November - One paper on human-robot cooperation and reinforcement learning accepted in the NCAA journal (link: coming soon)
- 14 September - Two sparse training papers (on feature selection and time series classification) accepted at NeurIPS 2022 (Links: 1, 2)
- 1 September - Ghada Sokar is doing a five-month internship at Google Brain, Montreal
- 1 September - Aleksandra Nowak from the GMUM group, Jagiellonian University, is visiting us for three months
- 1 September - Shiwei Liu is moving to the University of Texas at Austin as a postdoctoral fellow to continue his research in sparse neural networks
- 15 July - Invited talk during the AI Seminar at the University of Alberta/Alberta Machine Intelligence Institute titled "Sparse training in supervised, unsupervised, and deep reinforcement learning" (link)
- 13 July - We are organising the second edition of the "Sparsity in Neural Networks: Advancing Understanding and Practice" workshop - SNN Workshop 2022 (https://www.sparseneural.net/)
- 5 July - One paper on sparse training for high sparsity regimes accepted in the Machine Learning journal (ECMLPKDD 2022 journal track) (link)
- 14 June - One paper on sparse training and continual learning accepted at ECMLPKDD 2022 (link)
- 10 June - Invited talk at Calgary AI, University of Calgary titled "Sparse training in supervised, unsupervised, and deep reinforcement learning"
- 21 May - I am doing a research visit to the group of Dr. Matthew Taylor at the University of Alberta
- 16 May - One sparse training paper accepted at UAI 2022 (link)
- 10 May - Our paper "Dynamic Sparse Training for Deep Reinforcement Learning" received the best paper award at ALA 2022, co-located with AAMAS 2022 (link)
- 25 April - We had the pleasure of hosting Utku Evci, Research Engineer at Google Brain Montreal, to give a very engaging in-person talk (link)
- 20 April - One paper on sparse training and deep reinforcement learning accepted at IJCAI-ECAI 2022 (link)
- 15 April - Our tutorial "Sparse Neural Networks Training" has been accepted at ECMLPKDD 2022 (link)
- 6 April - Shiwei Liu defended his outstanding PhD thesis (link) cum laude
- 28 January - Two sparse training papers accepted at ICLR 2022 (Links: 1, 2)
Narrative CV:
Decebal Mocanu is Assistant Professor in Artificial Intelligence and Machine Learning within the DMB group, Faculty of Electrical Engineering, Mathematics, and Computer Science at the University of Twente; and Guest Assistant Professor within the Data Mining group, Department of Mathematics and Computer Science at the Eindhoven University of Technology (TU/e).
From September 2017 until February 2020, Decebal was Assistant Professor in Artificial Intelligence and Machine Learning within the Data Mining group, Department of Mathematics and Computer Science, TU/e, and a member of the TU/e Young Academy of Engineering. In 2017, he received his PhD in Artificial Intelligence and Network Science from TU/e. During his doctoral studies, Decebal undertook three research visits: at the University of Pennsylvania (2014), Julius Maximilians University of Würzburg (2015), and the University of Texas at Austin (2016).
Prior to this, in 2013, he obtained his MSc in Artificial Intelligence from Maastricht University. During his master's studies, Decebal also worked as a part-time software developer at We Focus BV in Maastricht. In the last year of his master's studies, he worked as an intern at Philips Research in Eindhoven, where he prepared his internship and master's thesis projects. Decebal obtained his Licensed Engineer degree from University Politehnica of Bucharest. While in Bucharest, between 2001 and 2010, Decebal started MDC Artdesign SRL (a software house specialised in web development), worked as a computer laboratory assistant at the University Nicolae Titulescu, and as a software engineer at Namedrive LLC.
Research
Decebal and his co-authors laid the groundwork (connected papers) for sparse training in deep learning (training sparse artificial neural networks from scratch), introducing both static and dynamic sparsity; a minimal illustrative sketch follows the list below. Besides the expected computational benefits, sparse training in many cases achieves better generalisation than dense training.
- Static sparsity in "A topological insight into restricted Boltzmann machines", Machine Learning (2016), preprint: https://arxiv.org/abs/1604.05978 (2016)
- Dynamic sparsity in "Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science", Nature Communications (2018), preprint: https://arxiv.org/abs/1707.04780 (2017)
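To make the two notions concrete, below is a minimal NumPy sketch of a single topology-evolution step in the spirit of dynamic sparse training (the SET procedure from the Nature Communications paper above): a fixed random (Erdős-Rényi) mask provides the static sparse connectivity, and the dynamic step prunes the smallest-magnitude connections and regrows the same number at random inactive positions. The function names, layer sizes, and pruning fraction `zeta` here are illustrative assumptions, not the reference implementation (see the GitHub link below for that).

```python
import numpy as np

rng = np.random.default_rng(0)

def erdos_renyi_mask(n_in, n_out, density=0.05):
    """Static sparsity: a fixed random (Erdos-Renyi) connectivity mask."""
    return rng.random((n_in, n_out)) < density

def prune_and_regrow(weights, mask, zeta=0.3):
    """Dynamic sparsity (SET-style step): remove the fraction `zeta` of the
    smallest-magnitude active connections, then regrow the same number of
    connections at random inactive positions, keeping total density fixed."""
    active = np.flatnonzero(mask)
    n_change = int(zeta * active.size)
    # prune: the n_change active weights closest to zero
    drop = active[np.argsort(np.abs(weights.flat[active]))[:n_change]]
    mask.flat[drop] = False
    weights.flat[drop] = 0.0
    # regrow: n_change connections at uniformly random inactive positions
    inactive = np.flatnonzero(~mask)
    grow = rng.choice(inactive, size=n_change, replace=False)
    mask.flat[grow] = True
    weights.flat[grow] = rng.normal(0.0, 0.1, size=n_change)
    return weights, mask

# Toy usage: one sparse 784x100 layer and a single evolution step.
mask = erdos_renyi_mask(784, 100)
weights = rng.normal(0.0, 0.1, size=(784, 100)) * mask
weights, mask = prune_and_regrow(weights, mask)
print(f"connections: {mask.sum()}, density: {mask.mean():.3f}")
```

In full sparse training, this prune-and-regrow step alternates with ordinary gradient-based weight updates (typically once per epoch), so the topology evolves during training while the number of connections, and hence the computational cost, stays constant.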
Decebal's short-term research interest is to conceive scalable deep artificial neural network models and their corresponding learning algorithms using principles from network science, evolutionary computing, optimisation, and neuroscience. Such models should have sparse and evolutionary connectivity, make use of previous knowledge, and have strong generalisation capabilities, so that they can learn and reason from few examples in a continuous and adaptive manner.
Most science carried out throughout human history follows the traditional reductionist paradigm, which, although very successful, still has limitations. Aristotle wrote in Metaphysics "The whole is more than the sum of its parts". Inspired by this quote, in the long term, Decebal would like to follow the alternative complex-systems paradigm and study the synergy between artificial intelligence, neuroscience, and network science for the benefit of science and society.
GitHub: http://github.com/dcmocanu/
Google Scholar Link
Teaching
Current PhD students:
- Boqian Wu (2022 - present)
- Bram Grooten (2021 - present)
- Qiao Xiao (2021 - present)
- Zahra Atashgahi (2019 - present)
- Ghada Sokar (2019 - present)
Graduated PhD students:
- Shiwei Liu (cum laude), Sparse Neural Network Training with In-Time Over-Parameterization (graduated 2022, Postdoctoral Researcher - UT Austin)
- Anil Yaman, Evolution of biologically inspired learning in artificial neural networks (graduated 2019, Assistant Professor - VU Amsterdam)
Graduated PDEng students:
- Pranav Bhatnagar, Automatic Microscope Alignment via Machine Learning (at Thermo Fisher Scientific, Eindhoven), September 2019
- Eleftherios Koulierakis, Detection of outbreak of infectious diseases: a data science perspective (at GDD, Eindhoven), July 2018
Graduated MSc students:
- Emiel Steerneman, Exploring the effect of merging techniques on the performance of merged sparse neural networks in a highly distributed setting, July 2022
- Anubrata Bhowmick, Markers of Brain Resilience (at Philips Research), July 2021
- Samarjeet Singh Patil (3rd supervisor), Automated Vulnerability Detection in Java Source Code using J-CPG and Graph Neural Network, February 2021
- Mickey Beurskens, Pass the Ball! - Learning Strategic Behavioural Patterns for Distributed Multi Agent Robotic Systems, November 2020
- Sonali Fotedar (cum laude), Information Extraction on Free-Text Sleep Narratives using Natural Language Processing (at Philips Research, Eindhoven), November 2020
- Selima Curci (cum laude), Large Scale Sparse Neural Networks, October 2020
- Manuel Muñoz Sánchez (cum laude), Domain Knowledge-based Drivable Space Estimation (at TNO Helmond), September 2020
- Chuan-Bin Huang, Novel Evolutionary Algorithms for Robust Training of Very Small Multilayer Perceptron Models, August 2020
- Jeroen Brouns, Bridging the Domain-Gap in Computer Vision Tasks (at Philips Research, Eindhoven), December 2019
- Daniel Ballesteros Castilla, Deep Reinforcement Learning for Intraday Power Trading (at ENGIE Global Markets, Brussels), December 2019
- Mauro Comi (cum laude), Deep Reinforcement Learning for Light Transport Path Guiding, November 2019
- Saurabh Bahulikar, Unsupervised Learning for Early Detection of Merchant Risk in Payments (at Payvision, Amsterdam), November 2019
- Thomas Hagebols, Block-sparse evolutionary training using weight momentum evolution: training methods for hardware efficient sparse neural networks (at Philips Research, Eindhoven), March 2019
- Bram Linders, Prediction and reduction of MRP nervousness by parameterization from a cost perspective (2nd supervisor, at Prodrive Technologies), February 2019
- Joost Pieterse (cum laude), Evolving sparse neural networks using cosine similarity, July 2018
Conference tutorials
- S. Liu, G. Sokar, Z. Atashgahi, D.C. Mocanu, E. Mocanu, Sparse Neural Networks Training, European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECMLPKDD 2022), Materials
- D.C. Mocanu, E. Mocanu, T. Pinto, Z. Vale, Scalable Deep Learning: How far is one billion neurons?, European Conference on Artificial Intelligence (ECAI 2020), Materials
- D.C. Mocanu, E. Mocanu, T. Pinto, Z. Vale, Scalable Deep Learning: How far is one billion neurons?, International Joint Conference on Artificial Intelligence (IJCAI 2020), Materials
- D.C. Mocanu, E. Mocanu, P.H. Nguyen, M. Gibescu, Z. Vale, D. Ernst, Scalable Deep Learning: from theory to practice (T11), International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2019), Link, Materials
- D.C. Mocanu, E. Mocanu, Z. Vale, D. Ernst, Scalable Deep Learning: from theory to practice (T24), International Joint Conference on Artificial Intelligence (IJCAI 2019), Link, Materials
- E. Mocanu, D.C. Mocanu, Scalable Deep Learning: from theory to practice, European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECMLPKDD 2019), Link, Materials
Affiliated Study Programmes
Bachelor
Master
Courses Academic Year 2022/2023
Courses Academic Year 2021/2022
In the Press
- Numenta, 2020, The Case For Sparsity in Neural Networks, Part 2: Dynamic Sparsity
- TechXplore, 2019, A bio-inspired approach to enhance learning in ANNs
- Nature Collections, 2018, The multidisciplinary nature of machine intelligence
- Towards Data Science, 2018, The Sparse Future of Deep Learning
- Phys.org, 2018, New AI method increases the power of artificial neural networks
- E&T magazine, 2018, Artificial neural networks could run on cheap computers using new method
- Technologist, 2018, AI method increases the power of artificial neural networks
- Elektor magazine, 2018, Nieuw algoritme versnelt kunstmatige intelligentie kwadratisch (New algorithm speeds up artificial intelligence quadratically)
- EMERCE, 2018, Nieuwe AI-methode vergroot de kracht van kunstmatige neurale netwerken (New AI method increases the power of artificial neural networks)
- de Volkskrant, 2018, Ook het stroomnet denkt straks na (Soon the power grid will think too)
- Cursor, 2017, De lusten en de lasten van AI (The benefits and burdens of AI)
Contact details
Visiting address
Universiteit Twente
Drienerlolaan 5
7522 NB Enschede
Postal address
Universiteit Twente
Postbus 217
7500 AE Enschede