Why do we need function approximation in RL

1. Question 1
Why do we need function approximation in RL? Check all that apply. (1 point)
- Learning with tabular methods is much more unstable compared to learning with function approximation.
- Because the state and action space may be big or combinatorially enormous, rendering tabular methods impossible to use.
- Because we want our agent to be memory-, space-, and data-efficient.
- Relying on function approximation allows us to achieve greater reward in any environment.

2. Question 2
Monte-Carlo learning (MC) vs. Temporal-Difference learning (TD). Select all options that apply. (1 point)
- TD targets have small variance.
- In TD learning we cannot update the model until the end of an episode is reached.
- MC targets have small variance.
- In MC learning we cannot update the model until the end of an episode is reached.
- In TD learning we can use as few as one step of experience (s, a, r, s') to update the model.
- In MC learning we can use as few as one step of experience (s, a, r, s') to update the model.

3. Question 3
In TD learning we approximate… (1 point)
- Reward function.
- Policy function.
- Discount factor γ.
- Value function.
- Expectation of targets.

4. Question 4
What is correct about Offline and Online methods? (1 point)
- MC is online.
- MC is offline.
- TD is offline.
- TD is online.
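To see why function approximation helps when the state space is enormous (Question 1) and that what TD approximates is the value function (Question 3), here is a minimal sketch of a linear value-function approximator with a semi-gradient TD(0) update. The feature map `phi`, the state ids, and the step sizes are all made-up illustrations, not part of the quiz:

```python
import numpy as np

# Sketch: linear value function V(s) = w . phi(s). The agent stores only
# n_features weights instead of one table entry per state, so memory stays
# fixed even when the number of distinct states is combinatorially huge.
n_features = 4
w = np.zeros(n_features)

def phi(state):
    """Made-up feature map: a one-hot encoding derived from the state id."""
    enc = np.zeros(n_features)
    enc[state % n_features] = 1.0
    return enc

def v(state):
    """Approximate value of a state under the current weights."""
    return w @ phi(state)

# Semi-gradient TD(0) weight update for a single transition (s, r, s_next):
# move w toward the bootstrapped target r + gamma * V(s_next).
alpha, gamma = 0.1, 0.99

def td_update(s, r, s_next):
    global w
    target = r + gamma * v(s_next)
    w += alpha * (target - v(s)) * phi(s)

td_update(s=7, r=1.0, s_next=8)
print(v(7))
```

After one update the weight for state 7's feature moves from 0 toward the target, illustrating that TD adjusts an approximation of the value function, not of the reward or the policy.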
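The MC-vs-TD distinctions in Questions 2 and 4 can be made concrete with a small tabular sketch: TD(0) updates online from a single step (s, r, s'), while MC must wait for the episode to terminate and then uses the actual returns. The three-state episode and the step sizes below are invented for illustration:

```python
alpha, gamma = 0.5, 0.9

# One finished episode of (state, reward, next_state) transitions;
# next_state None marks the terminal step.
episode = [("s0", 0.0, "s1"), ("s1", 0.0, "s2"), ("s2", 1.0, None)]

def td0_update(V, s, r, s_next):
    """TD(0): online update from one step, bootstrapping from V[s_next]."""
    target = r + gamma * (V[s_next] if s_next is not None else 0.0)
    V[s] += alpha * (target - V[s])

def mc_update(V, episode):
    """Every-visit MC: offline update using the full return of each state."""
    G = 0.0
    for s, r, _ in reversed(episode):
        G = r + gamma * G           # actual discounted return from s
        V[s] += alpha * (G - V[s])

V_td = {s: 0.0 for s in ("s0", "s1", "s2")}
for s, r, s_next in episode:        # TD can update after every single step
    td0_update(V_td, s, r, s_next)

V_mc = {s: 0.0 for s in ("s0", "s1", "s2")}
mc_update(V_mc, episode)            # MC waits for the terminal state

print(V_td, V_mc)
```

Note how the MC targets (full returns G) accumulate randomness from every remaining step of the episode, while each TD target depends on only one reward plus a bootstrapped estimate, which is why TD targets have smaller variance.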