In a stylised Robinson Crusoe economy, we illustrate basic dynamic programming techniques. In a first step, we define state-like and control-like variables. In a second step, we introduce the value function. While the former step reduces the number of variables that have to be considered when solving the model, the latter step reduces the dimensionality of the Bellman equation associated with the optimisation problem. The model's solution is shown to be saddle-path stable, so that the phase diagram associated with the Bellman equation has two solution branches. The simplicity of our model allows us to state both the stable and the unstable branch explicitly. We also explain the usefulness of logarithmic preferences when studying the continuous-time Hamilton-Jacobi-Bellman equation. In this case, the utility maximisation problem can be transformed into an initial value problem for an ordinary differential equation.
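The tractability of logarithmic preferences can be illustrated numerically. The sketch below is not taken from the paper: it solves a discrete-time analogue of the Crusoe problem (the Brock-Mirman specification, with log utility, Cobb-Douglas production `k**alpha`, full depreciation, and discount factor `beta`) by value function iteration on the Bellman equation, where capital `k` is the state and next-period capital the control. Under these assumed functional forms the policy has the known closed form `k' = alpha*beta*k**alpha`, which the grid solution can be checked against.

```python
import numpy as np

# Hypothetical parameterisation (not from the abstract):
# u(c) = ln c, f(k) = k**alpha, full depreciation, discount factor beta.
alpha, beta = 0.3, 0.95
grid = np.linspace(0.05, 0.5, 200)   # grid over the state variable (capital)

V = np.zeros_like(grid)
for _ in range(600):                  # value function iteration on the Bellman eq.
    # Consumption implied by each (k, k') pair; the control is k'.
    c = grid[:, None] ** alpha - grid[None, :]
    with np.errstate(invalid="ignore", divide="ignore"):
        objective = np.where(c > 0, np.log(c) + beta * V[None, :], -np.inf)
    V = objective.max(axis=1)         # Bellman operator: maximise over the control

policy = grid[objective.argmax(axis=1)]      # numerical policy k'(k)
closed_form = alpha * beta * grid ** alpha   # exact policy for this special case
```

The iteration converges because the Bellman operator is a contraction with modulus `beta`; the remaining gap between `policy` and `closed_form` is grid-discretisation error.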