A core principle of Machine Learning

There are of course many, but for someone coming from computer science and software engineering, where the environment is relatively clean and certain (deterministic), it is usually a leap to understand that Machine Learning (and other areas of #AI) is not.

Machine learning is based on probability theory and deals with stochastic (non-deterministic) elements all the time. Nearly all activities in machine learning require the ability to factor in, and more importantly, represent and reason with, uncertainty.

To that end, when designing a system, it is recommended to use a simple but uncertain rule (one with some non-deterministic aspects), rather than a complex but certain rule.

For example, having a simple but uncertain rule saying “most birds fly” is easier and more effective than a certain rule such as “Birds can fly, except flightless species, or those that are sick, or babies, etc.”
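One way to picture this in code: instead of enumerating every exception in a deterministic rule, represent the rule as a probability with room for specific evidence. A minimal sketch in Python; the 0.9 prior and the exception values are made-up numbers for illustration, not real ornithological data.

```python
def p_flies(bird, known_exceptions=None):
    """Return the probability that a given bird can fly.

    The simple but uncertain rule: "most birds fly" (here, a 0.9 default,
    an illustrative figure). Specific evidence, when we have it, overrides
    the default -- without the rule itself having to list every exception.
    """
    known_exceptions = known_exceptions or {}
    return known_exceptions.get(bird, 0.9)


# The default rule covers the common case...
print(p_flies("sparrow"))  # 0.9

# ...and evidence about a specific bird overrides it.
print(p_flies("penguin", {"penguin": 0.0}))  # 0.0
```

The deterministic version would need a clause for every flightless species, injury, and age; the probabilistic version stays simple and simply gets updated as evidence arrives.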

As one starts digging deeper into Machine Learning, a trip down memory lane around probability distributions, expectation, variance, and covariance won’t hurt.
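As a quick refresher, those quantities are a few lines of NumPy away. A sketch over synthetic data (the distribution parameters below are arbitrary choices for the example):

```python
import numpy as np

rng = np.random.default_rng(42)

# Draw samples from a normal distribution with mean 5 and std dev 2.
x = rng.normal(loc=5.0, scale=2.0, size=100_000)
# A second variable that is correlated with x, plus some noise.
y = 3 * x + rng.normal(size=100_000)

print(np.mean(x))     # expectation E[x]: close to 5
print(np.var(x))      # variance Var(x): close to 2^2 = 4
print(np.cov(x, y))   # 2x2 covariance matrix; off-diagonal is positive,
                      # reflecting that x and y move together
```

Sample estimates like these converge to the true values as the sample size grows, which is exactly the bridge between the probability theory and the data-driven practice.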
