Role Of Dynamic Programming In Machine Learning


Machine learning (ML) models work with large amounts of data and computationally intensive algorithms, so they benefit from techniques that make the search for optimal solutions efficient. This is where dynamic programming comes into the picture. It is used chiefly in the context of reinforcement learning (RL) applications in ML, and it suits applications where decisions must be made sequentially in a highly uncertain environment. In this article, we explore the nuances of dynamic programming with respect to ML.

What is Dynamic Programming?

Dynamic Programming (DP) is one of the techniques available to solve self-learning problems. It is widely used in areas such as operations research, economics and automatic control systems, among others. Artificial intelligence is a core application area of DP, since AI systems often have to learn from a highly uncertain environment.

Definition And The Underlying Concept

Richard Sutton and Andrew Barto, distinguished computer science professors from the University of Alberta and the University of Massachusetts, Amherst, respectively, gave a concise definition of the concept in their book Reinforcement Learning: An Introduction.

“The term dynamic programming refers to a collection of algorithms which can be used to compute optimal policies given a perfect model of the environment as a Markov decision process.”

Drawing from the definition, DP assumes the input model to be ideal (with all the transition probabilities and reward functions known beforehand), so that the problem can be cast as a Markov decision process (MDP). On this basis, solutions are obtained for problems defined by a set of states and action spaces, which may be discrete or continuous. Altogether, DP finds optimal or good policies given a correct model structure. The authors, however, contend that even though it is theoretically feasible, DP can be computation-intensive.
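To make the "perfect model" assumption concrete, here is a minimal sketch of a finite MDP in Python. The states, actions, transition probabilities and rewards are purely hypothetical, chosen only to show what it means for the model to be fully known in advance.

```python
# A minimal, hypothetical finite MDP: every transition probability and
# reward is known in advance, which is exactly the assumption DP makes.
from typing import Dict, List, Tuple

States = List[str]
Actions = List[str]
# (state, action) -> list of (next_state, probability, reward)
Transitions = Dict[Tuple[str, str], List[Tuple[str, float, float]]]

mdp_states: States = ["s0", "s1", "s2"]
mdp_actions: Actions = ["left", "right"]
mdp_transitions: Transitions = {
    ("s0", "right"): [("s1", 0.9, 1.0), ("s0", 0.1, 0.0)],
    ("s0", "left"):  [("s0", 1.0, 0.0)],
    ("s1", "right"): [("s2", 1.0, 10.0)],   # s2 acts as a terminal state
    ("s1", "left"):  [("s0", 1.0, 0.0)],
    ("s2", "right"): [("s2", 1.0, 0.0)],
    ("s2", "left"):  [("s2", 1.0, 0.0)],
}
```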

Ultimately, in any DP problem, the core idea is to maximise returns from reward functions over time. In addition, DP aims to furnish a model of the learning environment for reinforcement learning (RL), where a full-blown learning model is generally absent.

DP algorithms can be classified into three subclasses (a sketch of the first follows the list below).

  1. Value iteration algorithms
  2. Policy iteration algorithms
  3. Policy search algorithms

All these algorithms have their respective advantages in various learning applications.
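Of these, value iteration is the simplest to illustrate. The sketch below reuses the hypothetical MDP defined earlier and repeatedly applies the Bellman optimality backup until the value estimates stop changing; it is only an illustration of the idea, not a production implementation.

```python
def value_iteration(states, actions, transitions, gamma=0.9, tol=1e-6):
    """Minimal value iteration over a known, finite MDP (a sketch)."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            # Bellman optimality backup: best expected return over actions.
            best = max(
                sum(p * (r + gamma * V[s2])
                    for s2, p, r in transitions[(s, a)])
                for a in actions
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

# Using the hypothetical MDP sketched above.
optimal_values = value_iteration(mdp_states, mdp_actions, mdp_transitions)
```

Policy iteration follows the same pattern but alternates between evaluating a fixed policy and greedily improving it, while policy search optimises the policy parameters directly.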

As mentioned earlier, DP is also very useful for large and continuous-space problems in real time. It handles complex policy representations through a technique called 'approximation', which means estimating, rather than enumerating, the huge number of values that value or policy iteration would otherwise have to compute. Sampling large amounts of data can therefore have a great effect on how well any of these algorithms learn.

The approximation technique, in turn, has two classes: parametric and nonparametric approximators. The former maps a fixed set of parameters onto the problem's functions, while the latter builds the approximation directly from the data in a problem.
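To make the parametric case concrete, here is a minimal sketch of a linear value-function approximator: a small weight vector is fitted so that a feature mapping of the state predicts sampled returns. The feature map and the sample data are hypothetical, chosen only to show the idea of mapping a parameter space onto the problem's value function.

```python
import numpy as np

def features(state: float) -> np.ndarray:
    """Hypothetical feature map: represent a state by [1, s, s^2]."""
    return np.array([1.0, state, state ** 2])

def fit_linear_values(sampled_states, sampled_returns, lr=0.01, epochs=500):
    """Parametric approximation: learn weights w so that w . phi(s) ~ V(s)."""
    w = np.zeros(3)
    for _ in range(epochs):
        for s, g in zip(sampled_states, sampled_returns):
            phi = features(s)
            error = g - w @ phi          # regression error on the sampled return
            w += lr * error * phi        # gradient step on the squared error
    return w

# Hypothetical samples of (state, observed return) pairs.
w = fit_linear_values([0.0, 0.5, 1.0, 1.5], [0.0, 0.4, 1.1, 2.2])
estimated_value = w @ features(0.75)     # approximate value of an unseen state
```

A nonparametric approximator, by contrast, would keep the samples themselves (for example, a nearest-neighbour or kernel estimate) rather than compressing them into a fixed weight vector.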

Q-Learning: A Paradigm In Dynamic Programming

In DP, there is a host of algorithms for solving MDPs. The most popular among them, and the one that finds the widest use in applications, is Q-learning. This method relies on "Q-values", which are based on the actions and states in RL.

Basically, Q-learning considers three crucial parameters in RL, namely the transition function (T), the reward function (R) and the value function (V), together with a learning rate α and a discount factor γ. It is best suited for applications where the parameters T and R are unknown (a typical scenario in an MDP).

In order to get an optimal solution from a set of actions and states, Q-learning is preferred when the maximum values cannot be determined through value iteration because these parameters are unknown. In other words, Q-values are computed in place of maximum value iterations. A standard formula for the Q-value update is given below:

$$Q_{k+1}(s,a) = \sum_{s'} T(s,a,s')\left[R(s,a,s') + \gamma \max_{a'} Q_k(s',a')\right]$$

where $Q_{k+1}(s,a)$ is the iterated Q-function for state $s$ and action $a$, obtained by summing over the parameters T and R together with the discount factor $\gamma$. The primed symbols $s'$ and $a'$ denote the next state and next action, so the values are updated all along the process.
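Translated directly into code, one sweep of this update looks as follows. This is a sketch that reuses the hypothetical MDP structure from earlier, with `gamma` standing in for the discount factor.

```python
def q_value_iteration_step(Q, states, actions, transitions, gamma=0.9):
    """One sweep of Q_{k+1}(s,a) = sum_{s'} T(s,a,s')[R(s,a,s') + gamma * max_{a'} Q_k(s',a')]."""
    Q_next = {}
    for s in states:
        for a in actions:
            Q_next[(s, a)] = sum(
                p * (r + gamma * max(Q[(s2, a2)] for a2 in actions))
                for s2, p, r in transitions[(s, a)]
            )
    return Q_next

# Start from zero Q-values and iterate until the values settle.
Q = {(s, a): 0.0 for s in mdp_states for a in mdp_actions}
for _ in range(100):
    Q = q_value_iteration_step(Q, mdp_states, mdp_actions, mdp_transitions)
```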

This equation serves as the basis for Q-learning. Updating these parameters also yields an optimal policy for the RL results. The equation can further be extended with RL concepts such as exploration and exploitation for problems with a very uncertain environment.
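When T and R are unknown, the same update can be estimated from sampled transitions instead of the full summation. The sketch below shows a tabular Q-learning update with a simple epsilon-greedy rule for balancing exploration and exploitation; the environment interface (`env.reset()`, `env.step()`) is a hypothetical stand-in, not a specific library.

```python
import random

def epsilon_greedy(Q, state, actions, epsilon=0.1):
    """Explore with probability epsilon, otherwise exploit the current Q-values."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def q_learning_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """Model-free version of the update: a sampled transition replaces the sum over T."""
    target = r + gamma * max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (target - Q[(s, a)])

# Hypothetical training loop: 'env' is any environment exposing reset()/step().
# for episode in range(num_episodes):
#     s = env.reset()
#     done = False
#     while not done:
#         a = epsilon_greedy(Q, s, actions)
#         s_next, r, done = env.step(a)
#         q_learning_update(Q, s, a, r, s_next, actions)
#         s = s_next
```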

Conclusion

It can be seen that there is a fine line between DP and RL. Though the approach is similar, DP assumes a model of the environment is available for representation, which is a striking contrast with RL. A big disadvantage, however, lies in the exactness of the solutions it provides for highly complex problems; it all depends on the control systems in context, which are generally dynamic in nature. Ultimately, DP should help derive the best values in self-learning systems for AI and ML.
