We propose novel ways of solving Reinforcement Learning tasks (that is, stochastic optimal control tasks) by hybridising Evolutionary Algorithms with methods based on value functions. We call our approach Population-Based Reinforcement Learning. The key idea, drawn from Evolutionary Computation, is that parallel interacting search processes (in this case Reinforcement Learning or Dynamic Programming algorithms) can aid each other and produce better results in less time than the same number of search processes running independently. This is a new and general direction in RL research, and it is complementary to other directions, since population-based search can be combined with most existing methods. We briefly compare our approach to related ones.
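To make the key idea concrete, here is a minimal sketch (our own illustration, not the paper's algorithm): a population of tabular Q-learners on a hypothetical 5-state chain MDP. After each generation, the worst-scoring learner copies the best learner's value function, so the parallel searches interact instead of running independently. All environment details and hyperparameters below are illustrative assumptions.

```python
import random

# Hypothetical toy chain MDP: states 0..4, actions 0 (left) / 1 (right),
# reward 1 for reaching the goal state 4. Not taken from the paper.
N_STATES, N_ACTIONS, GOAL = 5, 2, 4

def env_step(s, a):
    s2 = min(s + 1, GOAL) if a == 1 else max(s - 1, 0)
    return s2, float(s2 == GOAL)

def episode(Q, eps=0.05, alpha=0.5, gamma=0.9, max_steps=20):
    """Run one epsilon-greedy Q-learning episode; return total reward."""
    s, total = 0, 0.0
    for _ in range(max_steps):
        if random.random() < eps:
            a = random.randrange(N_ACTIONS)
        else:
            a = max(range(N_ACTIONS), key=lambda a: Q[s][a])
        s2, r = env_step(s, a)
        # Standard Q-learning update.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        total += r
        s = s2
        if s == GOAL:
            break
    return total

random.seed(0)
POP = 4
# Optimistic initialisation (Q = 1.0) encourages systematic exploration.
population = [[[1.0] * N_ACTIONS for _ in range(N_STATES)] for _ in range(POP)]

for generation in range(10):
    # Each population member learns independently for a few episodes...
    scores = [sum(episode(Q) for _ in range(20)) for Q in population]
    # ...then the worst member copies the best member's value function:
    # a simple evolutionary "exploit" step that lets searches aid each other.
    best, worst = scores.index(max(scores)), scores.index(min(scores))
    population[worst] = [row[:] for row in population[best]]

# The greedy policy of the best member should head right along the chain.
best_Q = population[scores.index(max(scores))]
greedy = [max(range(N_ACTIONS), key=lambda a: best_Q[s][a]) for s in range(GOAL)]
print(greedy)
```

Real population-based RL methods use richer interactions than this copy-the-best step (e.g. recombining or perturbing learners), but the sketch shows the basic mechanism: periodic information exchange between otherwise independent value-function learners.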