Reliable machine learning
Sunday, August 26, 2018
We will talk about sequential tests. The objective is to avoid deciding the sample size in advance. https://drive.google.com/file/d/1EX_ZwHvFgl1gRrxcHi9VamyPiEVQub/view?usp=sharing
Wednesday, August 22, 2018
ML crash directory
Are you familiar with regression (05.59)? One way to view ML is as regression on steroids... which means a harder optimization problem (one that does not have a closed-form analytic solution and/or is not convex) with many parameters.
Let's consider supervised learning first. You are given n labeled data points, (x1, y1), ..., (xn, yn). Your objective is to find a function f(x) = y that best predicts y on a new batch of x's. When y is continuous the task is called regression, and when it is discrete it is called classification.
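As a minimal sketch of the regression case, here is ordinary least squares for a single feature, using the closed-form analytic solution. The function name and the data points are made up for illustration.

```python
def fit_line(xs, ys):
    """Return (slope, intercept) minimizing the squared error on (xs, ys)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# toy data, roughly y = 2x
xs = [1, 2, 3, 4]
ys = [2.1, 3.9, 6.2, 7.8]
slope, intercept = fit_line(xs, ys)
```

With more parameters and a nonlinear f, no such closed form exists and we must optimize numerically, which is where the "regression on steroids" view comes from.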
There are two things to notice right away:
1. To solve this, an optimization problem is defined, e.g., minimization of squared error in our original regression problem.
2. Trying to explain the given data completely, i.e., interpolating the training points exactly, is actually a pitfall: you may capture random trends, and your prediction power may be hindered. This is called overfitting.
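The overfitting pitfall can be shown in a small sketch (all data made up: noisy samples around y = 2x). A degree-3 polynomial that passes through all four training points exactly has zero training error, yet predicts worse on held-out points than a simple least-squares line.

```python
def fit_line(xs, ys):
    """Least-squares line; returns a prediction function."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    w = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - w * mx
    return lambda x: w * x + b

def fit_interp(points):
    """Lagrange polynomial passing through every training point exactly."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(points):
            term = yi
            for j, (xj, _) in enumerate(points):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

def mse(model, pts):
    return sum((model(x) - y) ** 2 for x, y in pts) / len(pts)

train = [(0, 0.2), (1, 1.8), (2, 4.3), (3, 5.9)]  # noisy samples of y = 2x
test = [(0.5, 1.0), (1.5, 3.0), (2.5, 5.0)]

line = fit_line([x for x, _ in train], [y for _, y in train])
interp = fit_interp(train)
# interp has ~zero error on train but larger error than line on test
```

The interpolating polynomial "explains" the training noise, and that is exactly what hurts it on new x's.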
The basic intuition underlying many approaches to the classification problem is this: had we known p(x, y), then given a new x we would have calculated p(x, y) for each y and chosen the y with the greatest probability. The difficulty is that p(x, y) is not easy to estimate.
View reference two below up to 8.15.
To estimate p(x, y) we could proceed as follows. Recall that p(x, y) = p(x|y)p(y) = p(y|x)p(x). Thus, estimating p(x), p(y), and p(x|y) from the training data lets us estimate p(x, y) and p(y|x), and thus decide, given a new x, its class y. A simplifying independence assumption leads to the naive Bayes approach, which is intuitively covered in the first part of Ariel Kleiner's crash course on ML (up to slide 25). This is an instance of what is referred to as a generative model.
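A minimal sketch of that recipe (toy data and function names are made up): estimate p(y) and each p(x_j|y) by counting, then pick the class maximizing p(y) times the product of the p(x_j|y) — the product is the naive independence assumption.

```python
from collections import Counter, defaultdict

# toy data: x = (weather, temperature), y = "play outside?"
data = [
    (("sunny", "hot"), "no"),
    (("sunny", "mild"), "yes"),
    (("rain", "mild"), "yes"),
    (("rain", "cool"), "yes"),
    (("sunny", "hot"), "no"),
    (("rain", "hot"), "no"),
]

def train_nb(data):
    class_counts = Counter(y for _, y in data)           # for p(y)
    feat_counts = defaultdict(Counter)                    # for p(x_j | y)
    for x, y in data:
        for j, v in enumerate(x):
            feat_counts[(j, y)][v] += 1
    return class_counts, feat_counts

def predict(x, class_counts, feat_counts):
    n = sum(class_counts.values())
    best, best_p = None, -1.0
    for y, cy in class_counts.items():
        p = cy / n                                        # p(y)
        for j, v in enumerate(x):
            p *= feat_counts[(j, y)][v] / cy              # p(x_j | y), assumed independent
        if p > best_p:
            best, best_p = y, p
    return best

model = train_nb(data)
```

A real implementation would smooth the counts (e.g., Laplace smoothing) so an unseen feature value does not zero out the whole product.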
Yet another approach is to define an optimization that attempts to maximize performance on the training data while keeping f(x) simple. This is done in a variety of ways.
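One common way to trade training performance against simplicity is to add a penalty on the size of the parameters (ridge-style regularization). A minimal sketch for a one-parameter line through the origin, with made-up data; the closed form follows from minimizing sum((y - w*x)^2) + lam * w^2.

```python
def fit_ridge_slope(xs, ys, lam):
    """Slope minimizing sum((y - w*x)^2) + lam * w^2 (no intercept)."""
    return sum(x * y for x, y in zip(xs, ys)) / \
        (sum(x * x for x in xs) + lam)

xs, ys = [1, 2, 3], [2, 4, 6]        # exactly y = 2x
w_free = fit_ridge_slope(xs, ys, 0.0)    # no penalty: fits the data
w_reg = fit_ridge_slope(xs, ys, 14.0)    # heavy penalty: shrinks the slope
```

Larger lam shrinks w toward zero, i.e., it keeps f(x) simpler at the cost of a worse fit on the training data.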
To deep dive on ML concepts, see reference three below. Iterate between reference three and a simple ML tutorial in Python or R to master the subject.
References
1. Introduction for programmers on why ML is useful to master. Notice that this introduction ignores the challenges of applying it where it excels and of dealing with drift.
2. Nice overview that starts with classification. The only thing to be careful of is the claim that neural networks are not statistical models; estimating a neural network's performance should be done using the same standard statistical tools, e.g., cross validation.
3. An intuitive deep dive on the concepts of machine learning is given by Hal Daumé III.
Under the Bayesian setting, probabilities represent our belief on the state of the world, which we can update incrementally after each experim...

We'll continue with convex optimization: https://drive.google.com/drive/folders/0BzUXUMab8u_ZU0h3ZEc5Z2VrMm8

Bayesian inference recording. For more details see chapter 24 in the understanding book.

Back to Bayesian inference: https://drive.google.com/file/d/1NUioDotuKeA8kKg341qRjyUESnUjxkos/view?usp=sharing