Sunday, July 22, 2018

We'll cover LDA in tw's meeting.   Here is the slide - https://drive.google.com/open?id=1KRoCA4vo9H9oJOl3iD-qRqIHl9qQq9vf

This is part of our deep dive into generative models, which will eventually loop us back to BN but will also shed light on GAN approaches.  Here are some background and relevant resources -


Generative models

Under the generative model approach we attempt to model the joint distribution p(x, y). Given a new x, we apply Bayes' rule to our model and classify x as the y for which p(y | x) is largest.

A straightforward approach is to estimate the probabilities appearing in Bayes' rule p(y | x) p(x) = p(x | y) p(y) directly.  With the typically large number of dimensions of the vector x, density estimation of the required quantities is really hard.  See the first 30 minutes of https://m.youtube.com/watch?v=_m7TMkzZzus#fauxfullscreen for details.
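
Spelling the rule out (a standard restatement, not taken from the slides): since p(x) does not depend on y, maximizing the posterior is the same as maximizing the class-conditional density times the prior, which is exactly what the generative approach estimates.

```latex
\hat{y}(x) \;=\; \arg\max_{y}\, p(y \mid x)
           \;=\; \arg\max_{y}\, \frac{p(x \mid y)\, p(y)}{p(x)}
           \;=\; \arg\max_{y}\, p(x \mid y)\, p(y)
```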

As modeling the joint distribution p(x, y) is hard, simplifying assumptions are introduced, leading to different, more concrete classification techniques.

LDA

LDA models each class-conditional p(x | y) as a Gaussian distribution.  This StatQuest video describes how the mean and standard deviation of the distributions are chosen to maximize the separation between the classes over the training set - https://m.youtube.com/watch?v=azXCzI57Yfc

The second 30 minutes of the same lecture derives LDA and explains what happens when the covariance matrices of all classes are the identity I - https://m.youtube.com/watch?v=_m7TMkzZzus#
The estimation of the covariance matrix of a random vector is explained in detail here - https://en.m.wikipedia.org/wiki/Estimation_of_covariance_matrices
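
To make the plug-in rule concrete, here is a minimal LDA sketch in plain numpy on made-up data (my own illustration, not from the lecture): estimate class priors, class means, and a pooled covariance matrix, then classify with the linear discriminant.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up 2-class data: each class-conditional p(x | y) is Gaussian with a shared covariance.
n = 200
mu = {0: np.array([0.0, 0.0]), 1: np.array([2.0, 1.0])}
cov = np.array([[1.0, 0.3], [0.3, 1.0]])
X = np.vstack([rng.multivariate_normal(mu[c], cov, n) for c in (0, 1)])
y = np.repeat([0, 1], n)

# Plug-in estimates: class priors, class means, and a pooled (shared) covariance matrix.
classes = np.unique(y)
priors = {c: np.mean(y == c) for c in classes}
means = {c: X[y == c].mean(axis=0) for c in classes}
pooled = sum((X[y == c] - means[c]).T @ (X[y == c] - means[c]) for c in classes)
pooled /= len(X) - len(classes)
prec = np.linalg.inv(pooled)

def lda_predict(x):
    # Linear discriminant: delta_c(x) = x' S^-1 mu_c - 0.5 mu_c' S^-1 mu_c + log prior_c
    scores = {c: x @ prec @ means[c] - 0.5 * means[c] @ prec @ means[c] + np.log(priors[c])
              for c in classes}
    return max(scores, key=scores.get)

preds = np.array([lda_predict(x) for x in X])
print("training accuracy:", np.mean(preds == y))
```

In practice sklearn.discriminant_analysis.LinearDiscriminantAnalysis computes the same quantities and would be the usual choice.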

See chapter 24 of the Understanding Machine Learning book for broader coverage of generative methods - https://www.cs.huji.ac.il/~shais/UnderstandingMachineLearning/understanding-machine-learning-theory-algorithms.pdf

The background required on the Gaussian distribution and the covariance matrix is covered here - http://cs229.stanford.edu/section/gaussians.pdf
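
As a small companion to those notes, here is a tiny example (my own, with made-up data) of estimating a covariance matrix from samples and evaluating the fitted multivariate Gaussian density, which is exactly the p(x | y) that LDA plugs into Bayes' rule.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

# Draw samples from a 2-d Gaussian with a known covariance, then re-estimate it.
true_cov = np.array([[2.0, 0.8], [0.8, 1.0]])
samples = rng.multivariate_normal([0.0, 0.0], true_cov, size=5000)

est_mean = samples.mean(axis=0)
est_cov = np.cov(samples, rowvar=False)  # sample covariance; rows are observations
print("estimated covariance:\n", est_cov)

# Density of the fitted Gaussian at a test point.
print("density at the origin:", multivariate_normal(mean=est_mean, cov=est_cov).pdf([0.0, 0.0]))
```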

Tuesday, July 3, 2018

AI is fundamentally concerned with the creation of higher, more abstract representations of the world from simpler representations, automatically by a machine.  Ideally, such representations should come with statistical guarantees of their correctness.

Previous attempts to this end identified homomorphism in algebraic structures as a fundamental tool for abstraction.  Early AI attempts applied it to solve simple board games by abstracting the board states.  In addition, more recent advances in image processing suggest that symmetries in groups are a good way to capture abstraction by ignoring unimportant changes to the image (https://www.microsoft.com/en-us/research/video/symmetry-based-learning/). More concretely, we say s is a symmetry of f(x) = y if f(s(x)) = f(x).  These two notions together suggest focusing on groups augmented with a probability measure to study the question of automatic abstraction.

We thus focus next on representation, symmetry, and homomorphism in groups.
https://m.youtube.com/watch?v=qpGDNKgfHHg# is a nice introduction to the concept of group representations with examples. 

For any set X, the set of all 1-1 onto functions f : X -> X with the composition operation forms a group.  As mentioned above, a symmetry of f is an s : X -> X such that f(s(x)) = f(x).  The first half of https://m.youtube.com/watch?v=MVoxtgVCo5g# by Alex Flournoy (up to ~32 minutes) motivates symmetries of transformations f : X -> X and introduces some relevant language such as continuous, discrete, infinite, compact, local, and global symmetries.  The associated lecture notes are here - https://inside.mines.edu/~aflourno/Particle/Lecture2Groups%20and%20Representations.pdf.
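
As a toy illustration of these definitions (my own sketch, not from the lecture): the permutations of a finite set form a group under composition, and every permutation is a symmetry of a function that ignores the order of its inputs.

```python
import itertools
import numpy as np

# X = {0, 1, 2}. Its 1-1 onto maps X -> X are the 3! = 6 permutations,
# which form the symmetric group S_3 under composition.
perms = list(itertools.permutations(range(3)))

def compose(s, t):
    # (s o t)(i) = s(t(i)); the result is again a permutation (closure).
    return tuple(s[t[i]] for i in range(3))

assert all(compose(s, t) in perms for s in perms for t in perms)  # closure

# f sums the entries of a vector indexed by X, so it ignores their order.
def f(x):
    return np.sum(x)

x = np.array([1.0, 4.0, 2.5])
# Every permutation s is a symmetry of f: f(s(x)) == f(x).
assert all(np.isclose(f(x[list(s)]), f(x)) for s in perms)
print("all permutations of X are symmetries of f")
```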

Some highlights from the symmetry-based learning work by Pedro Domingos et al. - https://www.microsoft.com/en-us/research/video/symmetry-based-learning/
1. Symmetries are changes in the data obtained by group operations, such as rotations of a chair, under which you want the classifier to be invariant (a small augmentation sketch follows this list).
2. Symmetries may reduce the number of features, so we can learn with less data and still keep a healthy ratio between the number of features and the size of the training set.
3. Symmetry may reduce a search space.
4. The approach does not depend on the ML method being used.
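
A minimal illustration of point 1 (my own sketch, not from the talk): if we believe labels are invariant under 90-degree rotations, we can apply that group of transformations to enlarge the training set.

```python
import numpy as np

def augment_with_rotations(images, labels):
    """Apply the group of 90-degree rotations to every image.

    images: array of shape (n, h, w); labels: array of shape (n,).
    Returns a dataset 4x the original size, with labels copied, reflecting
    the assumption that rotation does not change the label.
    """
    rotated = [np.rot90(images, k=k, axes=(1, 2)) for k in range(4)]
    return np.concatenate(rotated), np.tile(labels, 4)

# Tiny made-up dataset: 10 random 8x8 "images" with binary labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 8, 8))
y = rng.integers(0, 2, size=10)
X_aug, y_aug = augment_with_rotations(X, y)
print(X_aug.shape, y_aug.shape)  # (40, 8, 8) (40,)
```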

Study group meeting slide -
https://drive.google.com/file/d/1SSDcrvE5uCM6J9xsRI2lvwhtVb61S7XP/view?usp=sharing.
YouTube recordings (in Hebrew) of the ML study group meetings -
https://www.youtube.com/playlist?list=PLRPue8gCw66-8mizHl7s0ZQzATzdL8FZJ




Some related papers follow.

1. An algebraic abstraction approach to reinforcement learning is given here http://www.cse.iitm.ac.in/~ravi/papers/WALS03.pdf
2. Here is an approximate homomorphism approach https://web.eecs.umich.edu/~baveja/Papers/fp543-jiang.pdf
3. Symmetry-based semantics uses the concept of an orbit of a group to represent a set of paraphrases that implicitly defines the semantics of a sentence - https://homes.cs.washington.edu/~pedrod/papers/sp14.pdf
4. Work on deep symmetry networks - https://homes.cs.washington.edu/~pedrod/papers/nips14.pdf

Sunday, July 1, 2018

ML crash directory

Are you familiar with regression? See https://m.youtube.com/watch?v=aq8VU5KLmkY.  One way to view ML is as regression on steroids, which means a harder optimization problem (one that does not have a closed-form analytic solution and/or is not convex) with many parameters.

Let's consider supervised learning first.  You are given n labeled data points,
(x1, y1), ..., (xn, yn). Your objective is to find a function f(x) = y that best predicts y on a new batch of x's.  When y is continuous this is called regression, and when it is discrete it is called classification.

There are two things to notice right away
1. To solve this, an optimization problem is defined, e.g., minimization of the squared error in our original regression problem (a small sketch follows this list)
2. Trying to explain the given data completely (i.e., fitting it exactly) is actually a pitfall: you may capture random trends and your predictive power may be hindered.  This is called overfitting
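
A minimal sketch of both points with made-up data (my own illustration, not from the original notes): ordinary least squares minimizes the squared error, and cranking up the polynomial degree to explain the training data almost perfectly is exactly the overfitting trap.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up 1-d regression data: a noisy line y = 2x + 1 + noise.
n = 20
x = np.linspace(0, 1, n)
y = 2.0 * x + 1.0 + rng.normal(scale=0.3, size=n)

def fit_poly(degree):
    # Least squares: pick coefficients minimizing sum_i (f(x_i) - y_i)^2.
    return np.poly1d(np.polyfit(x, y, degree))

for degree in (1, 12):
    f = fit_poly(degree)
    train_mse = np.mean((f(x) - y) ** 2)
    # A fresh batch of x's from the same noisy line - the data we actually care about.
    x_new = rng.uniform(0, 1, 200)
    y_new = 2.0 * x_new + 1.0 + rng.normal(scale=0.3, size=200)
    test_mse = np.mean((f(x_new) - y_new) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```

The degree-12 fit has a lower training error but typically a worse error on the new batch, which is the overfitting that point 2 warns about.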

The basic intuition underlying many approaches to the classification problem is that had we known p(x, y), then given a new x we would have calculated p(x, y) for each y and chosen the y with the greatest probability.  The difficulty is that it is not easy to estimate p(x, y).

A simplifying independence assumption leads to the naive Bayes approach, which is covered intuitively in the first part of Ariel Kleiner's crash course on ML at http://ampcamp.berkeley.edu/wp-content/uploads/2012/06/ariel-kleiner-ampcamp-2012-machine-learning-part-1.pdf.
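
To make the independence assumption concrete, here is a minimal sketch (my own illustration, with made-up data): model each feature as independent given the class, estimate a per-class distribution for every feature, and classify with Bayes' rule. scikit-learn's GaussianNB implements this with Gaussian per-feature models.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

# Made-up data: two classes, four features, each feature independent given the class.
n = 300
y = rng.integers(0, 2, size=n)
class_means = np.array([[0.0, 0.0, 0.0, 0.0],
                        [1.5, 1.0, -1.0, 0.5]])
X = class_means[y] + rng.normal(size=(n, 4))

clf = GaussianNB().fit(X, y)  # estimates per-class priors and per-feature means/variances
print("training accuracy:", clf.score(X, y))
print("p(y | x) for the first point:", clf.predict_proba(X[:1]))
```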

Yet another approach is to define an optimization problem that attempts to maximize performance on the training data while keeping f(x) simple.  This is done in a variety of ways; one common way is sketched below.
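
One common instance of "fit the training data but keep f simple" is adding an L2 penalty on the coefficients (ridge regression). Here is a minimal sketch with made-up data; the weight alpha is the knob that trades training fit for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up linear data with many features, of which only a few matter.
n, d = 50, 30
X = rng.normal(size=(n, d))
true_w = np.zeros(d)
true_w[:3] = [2.0, -1.0, 0.5]
y = X @ true_w + rng.normal(scale=0.5, size=n)

def ridge_fit(X, y, alpha):
    # Minimize ||Xw - y||^2 + alpha * ||w||^2; closed form: (X'X + alpha I)^-1 X'y.
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)

for alpha in (0.0, 1.0, 10.0):
    w = ridge_fit(X, y, alpha)
    print(f"alpha={alpha}: train MSE {np.mean((X @ w - y) ** 2):.3f}, ||w|| = {np.linalg.norm(w):.2f}")
```

Larger alpha gives a slightly worse training fit but a smaller, "simpler" coefficient vector, which is the trade-off described above.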

To dive deep into ML concepts, see reference 3 below.  Iterate between reference 3 and simple ML tutorials in Python or R to master the subject.

References

1. An introduction for programmers on why ML is useful to master -
https://m.youtube.com/watch?v=0mK52UsOj-U
It ignores the challenges of applying ML where it excels and of dealing with drift.
2. A nice overview that starts with classification - https://m.youtube.com/watch?v=z-EtmaFJieY. The only thing to be careful of is the claim that neural networks are not statistical models; estimating a neural network's performance should be done using the same standard statistical tools, e.g., cross-validation.
3. An intuitive deep dive into the concepts of machine learning is given by Hal Daumé III at http://ciml.info/dl/v0_8/ciml-v0_8-all.pdf
