Sunday, February 10, 2019

Multi-armed bandits and the UCB heuristic - be an optimist -
https://drive.google.com/file/d/1G4ezjBXpJDEQPPfD6tH_XUowOZgvDeck/view?usp=sharing.
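The slides themselves are behind the link above, so as a quick reference here is a minimal sketch of the standard UCB1 rule they discuss: estimate each arm's mean reward and add an optimism bonus that shrinks the more an arm is sampled. The `pull` function, arm probabilities, and horizon below are purely illustrative, not taken from the slides.

```python
import math
import random

def ucb1(pull, n_arms, horizon):
    """UCB1: pull each arm once, then always pick the arm with the
    highest empirical mean plus an optimism (confidence) bonus."""
    counts = [0] * n_arms          # times each arm was pulled
    means = [0.0] * n_arms         # empirical mean reward per arm
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1            # play every arm once to initialise
        else:
            # "be an optimist": add a bonus of sqrt(2 ln t / n_i)
            arm = max(range(n_arms),
                      key=lambda i: means[i] + math.sqrt(2 * math.log(t) / counts[i]))
        reward = pull(arm)
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]
    return means, counts

# Toy usage: three Bernoulli arms with unknown success probabilities.
probs = [0.2, 0.5, 0.7]
means, counts = ucb1(lambda i: 1.0 if random.random() < probs[i] else 0.0,
                     n_arms=3, horizon=2000)
print(counts)  # most pulls should end up on the best arm (index 2)
```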

1 comment:

  1. From Sam - Here is a good resource on the multi-armed bandit problem, for those who are interested, ahead of tomorrow's meeting:
    https://itnext.io/reinforcement-learning-with-multi-arm-bandit-decf442e02d2

