Sequential learning projects

The project consists of reading, understanding, and explaining a research paper on a topic related to the class [you don’t need to understand all the proofs when there are some, but you do need to understand the results]. If possible, you are encouraged to implement the described algorithm yourself and report your own findings, on benchmarks close to or different from those presented in the paper.

What is expected? A short report (8 pages) explaining the topic in your own words (no copy-paste from the paper you study!), relating it to the course, discussing links with other algorithms you know, reporting the findings of your own experiments, and giving your opinion about the paper (possible weaknesses?). In short, any personal thoughts inspired by the paper.

You can team up for the project: groups of two students are welcome. At most one group per project.

Possible format: you may use the NeurIPS template if you are familiar with LaTeX, but any other format is accepted. Overleaf lets you write LaTeX in a browser with nothing to install on your computer, and the NeurIPS template is already there: https://fr.overleaf.com/latex/templates/neurips-2022/kxymzbjpwsqx

Deadline: send your report to me by email by January 26th, 2023 at 11:59pm. You can either submit a PDF file with your report, or a zip file with your PDF report and the source code of your experiments (a GitHub link in the report is also OK). In either case, the name of your submission should be SL22_Name1_Name2.pdf

List of possible projects

You can also propose your own; please email me a short description of the project for approval.

Bandits

Bootstrapping and bandits: researchers have developed alternatives to UCB or Thompson Sampling whose exploration is more data-driven and directly reuses the history of observed rewards. This paper shows that the standard statistical bootstrap cannot work on its own, and proposes a fix (a minimal sketch of the general idea follows this entry):
https://arxiv.org/abs/1811.05154
Project chosen by: Alexandre Dezalot, Clément Jurat-Petiadou
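
For intuition, here is a minimal sketch of bootstrap-based exploration on a Bernoulli bandit, in the spirit of the paper’s fix of adding pseudo-rewards to the resampled history. The pseudo-reward scheme and all constants below are illustrative assumptions, not the paper’s exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_bandit(arm_means, horizon=5000):
    """Bernoulli bandit where each arm's index is the mean of a
    bootstrap resample of its history, augmented with one fake 0 and
    one fake 1 per real observation (the pseudo-rewards keep the
    resampled mean random enough to force exploration)."""
    K = len(arm_means)
    history = [[] for _ in range(K)]
    total = 0.0
    for t in range(horizon):
        idx = np.empty(K)
        for a in range(K):
            if not history[a]:
                idx[a] = np.inf          # play every arm at least once
            else:
                augmented = history[a] + [0.0, 1.0] * len(history[a])
                resample = rng.choice(augmented, size=len(augmented))
                idx[a] = resample.mean()
        a = int(np.argmax(idx))
        r = float(rng.random() < arm_means[a])
        history[a].append(r)
        total += r
    return total

print(bootstrap_bandit([0.3, 0.5, 0.7]))   # cumulated reward over the run
```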

Bandits with delayed rewards: In real-world problems, the reward is often observed only after some random delay. This paper presents a way to cope with a known delay distribution: https://arxiv.org/abs/1706.09186. Maybe you want to try your own Thompson Sampling variant? (A possible starting point is sketched below.)
Project chosen by: Caroline Charreteur, Jean-Paul Duboux
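
As a starting point for such a variant, here is a sketch of Bernoulli Thompson Sampling where each reward arrives after a random delay. The geometric delay distribution and all constants are my own illustrative choices, not the paper’s model.

```python
import heapq
import numpy as np

rng = np.random.default_rng(1)

def delayed_thompson(arm_means, horizon=5000, mean_delay=20):
    """Bernoulli Thompson Sampling when the reward of the arm played at
    time t is only observed at time t + delay; pending rewards wait in
    a heap and only update the Beta posteriors once they arrive."""
    K = len(arm_means)
    a_post = np.ones(K)                  # Beta(1, 1) priors
    b_post = np.ones(K)
    pending = []                         # (arrival_time, arm, reward)
    for t in range(horizon):
        while pending and pending[0][0] <= t:
            _, arm, r = heapq.heappop(pending)
            a_post[arm] += r
            b_post[arm] += 1 - r
        theta = rng.beta(a_post, b_post)  # one posterior draw per arm
        arm = int(np.argmax(theta))
        r = float(rng.random() < arm_means[arm])
        delay = int(rng.geometric(1.0 / mean_delay))
        heapq.heappush(pending, (t + delay, arm, r))
    return a_post, b_post

print(delayed_thompson([0.4, 0.6]))
```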

Non-stationary bandits: Assuming the rewards are i.i.d. is often a big limitation in practice, and researchers have studied so-called piecewise i.i.d. models, in which there are breakpoints at which the reward distributions change abruptly. This paper proposes two ways to cope with non-stationarity, which you can study and compare (one of them is sketched after this entry):
https://arxiv.org/abs/0805.3415
Project chosen by: Zakaria Farah, Ilyas Ouardi
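
To give a feel for the windowed approach, here is a minimal sketch of a sliding-window UCB in the spirit of one of the paper’s strategies, on a piecewise-stationary Bernoulli problem. The window size, the exploration constant xi, and the breakpoint scenario are illustrative assumptions; the paper’s exact index may differ.

```python
from collections import deque
import numpy as np

rng = np.random.default_rng(2)

def sw_ucb(reward_fn, K, horizon=10000, window=1000, xi=0.6):
    """Sliding-window UCB: indices are computed from the last `window`
    plays only, so rewards observed before a breakpoint are forgotten."""
    recent = deque(maxlen=window)        # (arm, reward) pairs in the window
    total = 0.0
    for t in range(horizon):
        counts = np.zeros(K)
        sums = np.zeros(K)
        for a, r in recent:
            counts[a] += 1
            sums[a] += r
        if np.any(counts == 0):
            a = int(np.argmin(counts))   # play arms absent from the window
        else:
            n = min(t, window)
            ucb = sums / counts + np.sqrt(xi * np.log(n) / counts)
            a = int(np.argmax(ucb))
        r = reward_fn(a, t)
        recent.append((a, r))
        total += r
    return total

# piecewise-stationary Bernoulli scenario: the best arm switches at t=5000
means = lambda t: [0.8, 0.4] if t < 5000 else [0.3, 0.9]
print(sw_ucb(lambda a, t: float(rng.random() < means(t)[a]), K=2))
```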

Thompson Sampling for logistic bandits: a logistic model is particularly well suited to modeling binary rewards while taking features into account, and could be useful for contextual recommendation. This paper discusses the implementation of Thompson Sampling in this setting (a simpler baseline is sketched below for comparison):
http://papers.neurips.cc/paper/7713-pg-ts-improved-thompson-sampling-for-logistic-contextual-bandits
Project chosen by:
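
The paper’s approach relies on Pólya-Gamma augmentation; a natural baseline to compare it against is Thompson Sampling with a Laplace (Gaussian) approximation of the posterior, sketched below. The interface (history matrices, per-round context matrix) and the number of Newton steps are my own assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def laplace_ts_step(X_hist, y_hist, contexts, lam=1.0):
    """One round of Thompson Sampling for a logistic reward model using
    a Laplace (Gaussian) approximation of the posterior.
    X_hist: (n, d) contexts of previously played arms (may be empty),
    y_hist: (n,) binary rewards, contexts: (n_arms, d) current features,
    lam: precision of the Gaussian prior on the weights."""
    d = contexts.shape[1]
    w = np.zeros(d)
    H = lam * np.eye(d)
    # a few Newton steps towards the MAP weights
    for _ in range(25):
        if len(X_hist):
            p = 1.0 / (1.0 + np.exp(-X_hist @ w))
            grad = lam * w - X_hist.T @ (y_hist - p)
            H = lam * np.eye(d) + X_hist.T @ (X_hist * (p * (1 - p))[:, None])
        else:
            grad = lam * w
        w = w - np.linalg.solve(H, grad)
    # draw weights from N(w_MAP, H^{-1}) and play the best-looking arm
    w_tilde = rng.multivariate_normal(w, np.linalg.inv(H))
    return int(np.argmax(contexts @ w_tilde))
```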

RL algorithms, miscellaneous

Conservative policy iteration: How can we do policy iteration if the policies cannot be evaluated exactly?
https://people.eecs.berkeley.edu/~pabbeel/cs287-fa09/readings/KakadeLangford-icml2002.pdf
Project chosen by: Victor Cilleros, Victor Niaussat
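
The mechanism at the heart of the paper is easy to state: rather than switching fully to a greedy policy computed from approximate evaluations, take a small, conservative step towards it. A minimal sketch of that mixture update follows; the choice of alpha, which the paper derives from estimated advantages, is left as a parameter here.

```python
import numpy as np

def conservative_update(pi, pi_new, alpha):
    """Conservative policy iteration step: mix the current policy pi
    with the (approximately) greedy policy pi_new instead of replacing
    it. pi, pi_new: arrays of shape (n_states, n_actions) whose rows
    are probability distributions; the result is again a valid policy.
    The paper shows how to choose alpha from the estimated advantage of
    pi_new so that improvement is guaranteed despite evaluation error."""
    return (1.0 - alpha) * pi + alpha * pi_new
```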

Kernel-Based Reinforcement Learning: A stable method of solving MDPs with continuous states when a simulator is available, with asymptotic convergence guarantees.
https://link.springer.com/article/10.1023/A:1017928328829
Project chosen by: Alexandre Antunes, Mounir Guerrab
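
As a rough illustration of the approach, here is a sketch of kernel-based approximate Q-iteration on a one-dimensional state space with a Gaussian kernel. The data layout, the kernel, and the bandwidth are my own simplifying assumptions; the paper’s estimator and convergence analysis are more general.

```python
import numpy as np

def kernel_q_iteration(samples, gamma=0.95, bandwidth=0.2, n_iter=100):
    """Kernel-based approximate Q-iteration on a 1-d state space.
    samples[a] = (S, R, S2): states where action a was simulated,
    rewards received, and next states reached (arrays of equal length).
    The value function is represented by its values at all sampled
    next states, and each backup is a Nadaraya-Watson kernel average."""
    actions = sorted(samples)
    offsets, pts = {}, []
    for a in actions:                    # evaluation points = all next states
        offsets[a] = len(pts)
        pts.extend(samples[a][2])
    pts = np.asarray(pts, dtype=float)
    V = np.zeros(len(pts))               # current estimate of max_a Q
    for _ in range(n_iter):
        Q = np.empty((len(pts), len(actions)))
        for j, a in enumerate(actions):
            S, R, S2 = map(np.asarray, samples[a])
            o = offsets[a]
            targets = R + gamma * V[o:o + len(S2)]   # r_i + gamma V(s'_i)
            # Gaussian kernel weights of action a's samples at every point
            Kmat = np.exp(-0.5 * ((pts[:, None] - S[None, :]) / bandwidth) ** 2)
            Q[:, j] = (Kmat @ targets) / Kmat.sum(axis=1)
        V = Q.max(axis=1)
    return V, pts
```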

Transfer in Reinforcement Learning: how to use reinforcement learning to learn several tasks at the same time, and get a more adaptable algorithm?
https://proceedings.neurips.cc/paper/2017/file/350db081a661525235354dd3e19b8c05-Paper.pdf
Project chosen by: Pierre Mauron, Victor Calamy

Improving the DQN algorithm

Deep Reinforcement Learning with Double Q-learning: A way to improve Deep Q Learning.
https://arxiv.org/pdf/1509.06461.pdf
Project chosen by: Mohamed Ben Ali, Mohamed Makni
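
The core of the paper fits in one line: decouple action selection from action evaluation in the bootstrap target. Here is a sketch of the two targets side by side, where q_next_online and q_next_target are (batch, n_actions) arrays of next-state Q-values; the array-based framing is mine.

```python
import numpy as np

def dqn_target(q_next_target, r, gamma, done):
    """Standard DQN target: the target network both selects and
    evaluates the next action, which tends to over-estimate values."""
    return r + gamma * (1 - done) * q_next_target.max(axis=1)

def double_dqn_target(q_next_online, q_next_target, r, gamma, done):
    """Double DQN target: the online network selects the action, the
    target network evaluates it, decoupling selection from evaluation."""
    a_star = q_next_online.argmax(axis=1)
    return r + gamma * (1 - done) * q_next_target[np.arange(len(r)), a_star]
```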

Prioritized Experience Replay: Another way to improve Deep Q Learning.
https://arxiv.org/abs/1511.05952
Project chosen by:
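
A minimal sketch of proportional prioritization, keeping priorities in a flat list for clarity (the paper uses a sum-tree for O(log n) sampling and also studies a rank-based variant); the default hyperparameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

class PrioritizedReplay:
    """Proportional prioritized replay: transitions are sampled with
    probability proportional to (|TD error| + eps)^alpha, and importance
    weights correct the resulting bias."""
    def __init__(self, capacity, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.prio = [], []

    def add(self, transition):
        p = max(self.prio, default=1.0)   # new items get maximal priority
        if len(self.data) == self.capacity:
            self.data.pop(0)
            self.prio.pop(0)
        self.data.append(transition)
        self.prio.append(p)

    def sample(self, batch_size, beta=0.4):
        p = np.asarray(self.prio) ** self.alpha
        probs = p / p.sum()
        idx = rng.choice(len(self.data), size=batch_size, p=probs)
        w = (len(self.data) * probs[idx]) ** (-beta)   # importance weights
        return idx, [self.data[i] for i in idx], w / w.max()

    def update(self, idx, td_errors, eps=1e-3):
        for i, e in zip(idx, td_errors):
            self.prio[i] = abs(e) + eps
```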

Dueling Network Architectures for Deep Reinforcement Learning: Again another way to improve Deep Q Learning.
https://arxiv.org/pdf/1511.06581.pdf
Project chosen by:
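
The architectural change is small enough to sketch directly; here is a PyTorch version of the value/advantage decomposition with the mean-subtracted aggregation (layer sizes are arbitrary).

```python
import torch.nn as nn

class DuelingHead(nn.Module):
    """Dueling architecture: split the network into a state-value
    stream V(s) and an advantage stream A(s, a), then recombine as
    Q = V + A - mean(A); subtracting the mean keeps the decomposition
    identifiable."""
    def __init__(self, in_dim, n_actions, hidden=128):
        super().__init__()
        self.value = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, 1))
        self.adv = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_actions))

    def forward(self, x):
        v, a = self.value(x), self.adv(x)
        return v + a - a.mean(dim=1, keepdim=True)
```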

Policy Optimization / Policy Gradient

Policy optimization via the cross-entropy method: the cross-entropy (CE) method is a Monte Carlo method for importance sampling and optimization that can be used to optimize the value function in reinforcement learning (a generic sketch follows below).
https://www.aaai.org/Papers/ICML/2003/ICML03-068.pdf
Project chosen by: Jeremy Jean
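
The CE loop itself is short; here is a generic sketch for maximizing any black-box score over policy parameters. Population size, elite fraction, and the toy objective are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(5)

def cem(score_fn, dim, n_iter=50, pop=100, elite_frac=0.2):
    """Cross-entropy method: sample parameters from a Gaussian, keep
    the elite fraction with the best scores, refit the Gaussian to the
    elites, repeat."""
    mu, sigma = np.zeros(dim), np.ones(dim)
    n_elite = int(pop * elite_frac)
    for _ in range(n_iter):
        thetas = mu + sigma * rng.standard_normal((pop, dim))
        scores = np.array([score_fn(th) for th in thetas])
        elites = thetas[np.argsort(scores)[-n_elite:]]
        mu = elites.mean(axis=0)
        sigma = elites.std(axis=0) + 1e-6   # keep some spread
    return mu

# toy usage: maximize -||theta - 3||^2, optimum at theta = 3
print(cem(lambda th: -np.sum((th - 3.0) ** 2), dim=5))
```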

A Natural Policy Gradient: Policy gradient methods perform gradient ascent on the value function to optimize the policy. The natural policy gradient allows us to improve the convergence speed of this approach.
https://papers.nips.cc/paper/2001/file/4b86abe48d358ecf194c56c69108433e-Paper.pdf
Project chosen by:
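
The key formula is the preconditioning of the vanilla gradient by the inverse Fisher information matrix. A numpy sketch of one update, where the sampled score-function gradients and advantage estimates are assumed given; the damping term is a common practical addition, not part of the paper.

```python
import numpy as np

def natural_pg_step(grads_logpi, advantages, theta, lr=0.1, damping=1e-3):
    """One natural policy gradient update.
    grads_logpi: (n, d) array whose rows are grad_theta log pi(a_t|s_t)
    at sampled state-action pairs; advantages: (n,) advantage estimates.
    The vanilla gradient g is preconditioned by the inverse of the
    Fisher information F = E[grad log pi (grad log pi)^T]."""
    n = len(advantages)
    g = grads_logpi.T @ advantages / n
    F = grads_logpi.T @ grads_logpi / n
    F += damping * np.eye(F.shape[0])    # damping keeps F invertible
    return theta + lr * np.linalg.solve(F, g)
```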

Exploration in RL

Posterior Sampling for RL: the Thompson Sampling principle can be naturally extended to RL, provided one is able to maintain a posterior over the parameters of a Markov Decision Process. You can try to understand how it’s done here, and what theoretical guarantees the authors are able to give for this exploration strategy (a tabular sketch follows below).
https://arxiv.org/abs/1306.0940
Project chosen by: Céline Wang
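
A tabular sketch of one posterior-sampling step under simple conjugate-style assumptions of mine (Dirichlet posterior over transitions, a Gaussian approximation for mean rewards, discounted value iteration); the paper analyzes the episodic finite-horizon setting.

```python
import numpy as np

rng = np.random.default_rng(6)

def psrl_policy(counts, reward_sums, reward_counts, gamma=0.95, n_vi=200):
    """One PSRL step: sample a plausible MDP from the posterior, solve
    it, and act greedily in it until the next episode.
    counts: (S, A, S) transition counts; reward_sums, reward_counts:
    (S, A) sufficient statistics of observed rewards."""
    S, A, _ = counts.shape
    # transition kernel ~ Dirichlet posterior (uniform prior)
    P = np.empty((S, A, S))
    for s in range(S):
        for a in range(A):
            P[s, a] = rng.dirichlet(1 + counts[s, a])
    # mean rewards ~ Gaussian posterior approximation
    n = np.maximum(reward_counts, 1)
    R = rng.normal(reward_sums / n, 1.0 / np.sqrt(1 + reward_counts))
    # solve the sampled MDP by value iteration
    Q = np.zeros((S, A))
    for _ in range(n_vi):
        Q = R + gamma * P @ Q.max(axis=1)
    return Q.argmax(axis=1)              # greedy policy in the sampled MDP
```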

Bootstrap DQN: the DQN algorithm is often used in conjunction with an epsilon-greedy policy to perform exploration. Bootstrapped DQN instead introduces randomized, temporally consistent exploration (a tabular caricature is sketched after this entry):
https://arxiv.org/abs/1602.04621
Project chosen by:
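
A tabular caricature of the idea, with Q-tables standing in for the paper’s neural-network heads; the environment interface (env_reset, env_step) and all constants are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

# K independent Q-"heads", each trained on its own bootstrapped data
# stream; at the start of every episode one head is drawn at random and
# followed greedily, giving temporally consistent (deep) exploration
# instead of per-step epsilon noise.
K, n_states, n_actions = 10, 20, 4
Q = np.zeros((K, n_states, n_actions))

def run_episode(env_reset, env_step, lr=0.1, gamma=0.99, p_mask=0.5):
    """env_reset() -> s and env_step(s, a) -> (s2, r, done) are a
    hypothetical tabular environment interface."""
    k = rng.integers(K)                  # head followed for this episode
    s, done = env_reset(), False
    while not done:
        a = int(Q[k, s].argmax())
        s2, r, done = env_step(s, a)
        mask = rng.random(K) < p_mask    # each head sees ~half the data
        for i in np.nonzero(mask)[0]:
            target = r + gamma * (0.0 if done else Q[i, s2].max())
            Q[i, s, a] += lr * (target - Q[i, s, a])
        s = s2
```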

Exploration: A Study of Count-Based Exploration for Deep Reinforcement Learning: Improving exploration in Deep RL using bonuses built with hash codes of the states.
https://arxiv.org/pdf/1611.04717.pdf
Project chosen by:
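
The bonus construction is simple to sketch: hash each state with random projections, count visits per hash code, and add beta/sqrt(count) to the reward. The number of bits and the value of beta below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)

class SimHashBonus:
    """Count-based exploration bonus with SimHash-style hashing:
    continuous states are mapped to binary codes by the signs of random
    projections, visits per code are counted, and the reward bonus is
    beta / sqrt(count)."""
    def __init__(self, state_dim, n_bits=16, beta=0.1):
        self.A = rng.standard_normal((n_bits, state_dim))
        self.counts = {}
        self.beta = beta

    def bonus(self, state):
        code = tuple((self.A @ state > 0).astype(int))
        self.counts[code] = self.counts.get(code, 0) + 1
        return self.beta / np.sqrt(self.counts[code])
```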

Unifying Count-Based Exploration and Intrinsic Motivation: Improving exploration in RL using bonuses built with pseudo-counts based on a density model.
https://arxiv.org/pdf/1606.01868.pdf
Project chosen by:
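
The central quantity can be written down directly: if rho is the density model’s probability of a state before observing it and rho' its probability just after one update on that state, the paper derives the pseudo-count below, which then feeds a bonus of the form beta / sqrt(pseudo-count + c).

```python
def pseudo_count(rho, rho_prime):
    """Pseudo-count of a state derived from a density model.
    rho: the model's probability of the state before observing it,
    rho_prime: its probability just after one update on that state
    (the 'recoding probability'); requires rho_prime > rho.
    Solving rho = N/n and rho_prime = (N+1)/(n+1) for N gives:"""
    return rho * (1.0 - rho_prime) / (rho_prime - rho)
```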

AlphaZero:
https://arxiv.org/abs/1712.01815, http://www.dcsc.tudelft.nl/~sc4081/2018/assign/pap/alphago_paper1.pdf
Project chosen by: Kenza Makhlouf

Miscellaneous

From TD-Gammon to DQN: a journey into the history of RL. The goal of this project is to understand the algorithm that led to the first big success of RL and already used neural networks, and to contrast it with the DQN algorithm.
https://bkgm.com/articles/tesauro/TDGammonAchievesMasterLevelPlay.pdf
Project chosen by: Judicael Leger, Alexandre Caumette

Bandits to manage inventory problems: How bandit models can be useful to solve inventory management problems when the demand is unknown.
https://arxiv.org/abs/1905.04337
Project chosen by: Amine Belblidia, Julien Airault