When browsing Quora, Reddit and similar sites, you often see people asking how much math is required for machine learning. In addition, some courses on Udemy and other online learning platforms list a lot of prerequisites. In this post I try to draw the distinction between applied machine learning and machine learning in research.
Undoubtedly there are some basic math requirements in both applied and research ML. To work with machine learning, basic math and linear algebra are required: vectors, matrices and all the related basic operations are a must. Basic statistics and probability theory are also required. But if you have a bachelor's degree in computer science, you should already know these things.
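To make "basic operations" concrete, here is a minimal sketch in NumPy (the data values are made up for illustration): a dot product, a matrix-vector product, a transpose, and a mean and standard deviation.

```python
import numpy as np

# A vector and a matrix -- the bread and butter of applied ML.
v = np.array([1.0, 2.0, 3.0])
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0]])

# Basic linear algebra: dot product, matrix-vector product, transpose.
dot = v @ v   # 1 + 4 + 9 = 14.0
Av = A @ v    # shape (2,)
At = A.T      # shape (3, 2)

# Basic statistics: mean and standard deviation of a sample.
sample = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
mean = sample.mean()  # 5.0
std = sample.std()    # population standard deviation: 2.0
```

If operations like these feel routine, you already have the baseline that both applied and research ML build on.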
When working as a research scientist you need a lot more math and a very deep understanding of algorithms and statistical modelling. Research scientists in machine learning invent new techniques and models. The amount of math needed for this is enormous and at a high level. To understand things like stochastic gradient descent, principal component analysis or hidden Markov models, you need to know about optimization theory, differential equations etc.
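As a taste of what is behind such names, here is a minimal, illustrative sketch of stochastic gradient descent fitting a one-parameter linear model to synthetic data (the data, learning rate and epoch count are assumptions for the example, not a recipe):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = 3x + noise, so fitting w in "y = w*x" should recover w near 3.
X = rng.uniform(-1, 1, size=200)
y = 3.0 * X + rng.normal(scale=0.1, size=200)

w = 0.0    # the single parameter we learn
lr = 0.1   # learning rate

for epoch in range(20):
    for i in rng.permutation(len(X)):         # visit samples in random order: the "stochastic" part
        grad = 2 * (w * X[i] - y[i]) * X[i]   # gradient of the squared error (w*x - y)^2 w.r.t. w
        w -= lr * grad                        # one SGD update per sample

print(w)  # should end up close to 3.0
```

The research-level math is about *why* and *when* such updates converge; the applied side mostly needs the intuition that each step nudges the parameter against the error gradient.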
However, only a few are really heading for a career as a research scientist. The majority are trying to solve real-world problems with all these ML techniques and models. For this applied machine learning, a deep understanding of all the details of these different models is helpful but not necessary. In fact it is more important to know which methods and models are available, how they roughly work and where to apply which model in terms of scalability, reliability and comprehensibility. I would say that software development, visualization and data analytics are more important than understanding in depth how each model works from the math perspective.
When I think about the majority of machine learning projects, about 90% of the work is ordinary software development or data understanding: collecting, transforming and managing data flows, and dealing with empty or broken data. Only about 10% of the time goes into model selection and hyperparameter tuning. Very often it happens that standard ML approaches solve the problem well enough, and libraries offer pipelines to compare different models without much effort. One exception might be working with (deep) neural networks, because there it is a bit more complicated to find a proper architecture, tune the model and deal with the training times.
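A rough sketch of what such a library-driven comparison looks like, assuming scikit-learn is available (the dataset and the two candidate models are just placeholders for illustration):

```python
# Compare two off-the-shelf models with cross-validation -- a few lines,
# no model-internal math required. Dataset and models are illustrative.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)

candidates = {
    "logreg": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validation accuracy
    print(f"{name}: {scores.mean():.3f}")
```

The point is not these particular models, but that the comparison itself is a small, standardized step compared to the data work around it.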
Less math != less understanding
This does not mean you should just use libraries, copy-paste code from Stack Overflow and not understand what you are doing. I just want to make clear that there is a difference between research and applied machine learning in terms of how much math is really needed in practice.