Title: Game Intelligence

Abstract: These lectures provide a practical introduction to learning game strategy with function approximation architectures, covering two main approaches: evolution (including co-evolution) and temporal difference learning. The lectures will show how the selected input features and the function approximation architecture can have a critical impact on what is learned, as well as how the approximator is interfaced to the game (e.g. as a value estimator or as an action selector). Multi-layer perceptrons are a common choice of function approximator, but we will also study N-Tuple systems and interpolated table-based approximators, which have recently shown great potential to learn quickly and effectively. Each method will be demonstrated with reference to simple fragments of software, illustrating how the learning algorithm is connected with the game and with the function approximation architecture. Example games will include Othello and a simplified car racing problem.

Over the last few years Monte Carlo Tree Search (MCTS) methods have made enormous advances in Computer Go, and there is much ongoing research on how MCTS can be applied to a range of other games. The lectures will therefore also include an introduction to MCTS, since it offers an important alternative to the techniques mentioned above.
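To give a flavour of how a learning algorithm, a game, and a function approximator fit together, the following is a minimal sketch, not the lecture software itself. It assumes a plain linear value function over hand-chosen features and a toy random-walk task standing in for a game; the feature encoding, learning rate, and discount factor are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the lecture code): TD(0) learning of a
# state-value function with a linear function approximator.
import random

ALPHA = 0.01   # learning rate (assumed)
GAMMA = 0.9    # discount factor (assumed)

def features(state):
    """Map a game state to a feature vector; here just a toy encoding."""
    return [1.0, float(state)]          # bias term plus raw state value

def value(weights, state):
    """Linear value estimate: dot product of weights and features."""
    return sum(w * f for w, f in zip(weights, features(state)))

def td0_update(weights, state, reward, next_state, terminal):
    """One TD(0) step: move V(state) toward reward + gamma * V(next_state)."""
    target = reward + (0.0 if terminal else GAMMA * value(weights, next_state))
    delta = target - value(weights, state)
    return [w + ALPHA * delta * f for w, f in zip(weights, features(state))]

# Toy "game": a random walk on states 0..10, with reward 1 at the right end.
weights = [0.0, 0.0]
for episode in range(1000):
    state = 5
    while True:
        next_state = state + random.choice([-1, 1])
        terminal = next_state in (0, 10)
        reward = 1.0 if next_state == 10 else 0.0
        weights = td0_update(weights, state, reward, next_state, terminal)
        if terminal:
            break
        state = next_state

print("learned value of state 5:", value(weights, 5))
```

In the lectures the same interface idea carries over to richer approximators such as multi-layer perceptrons or N-Tuple systems: only the `features` and `value` components change, while the way the learner is wired to the game stays the same.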