The reader should need no convincing that GBML is a very diverse and active area. Although much integration with mainstream machine learning has taken place in the last ten years, more is needed. The use of multi-objective EAs in GBML is spreading. Integration with ensembles is natural given the population-based nature of EAs, but it is only just beginning. Other areas which need attention are memetics, meta-learning, hyperheuristics and Estimation of Distribution Algorithms. In addition to further integration with other areas, the constituent areas of GBML need more interaction with each other.
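To illustrate why the fit with ensembles is natural, a minimal sketch follows: rather than keeping only the single fittest individual, the final evolved population is combined by majority vote. The sketch assumes each evolved individual can be treated as a callable classifier; the names and the toy population are illustrative, not drawn from any particular GBML system.

```python
from collections import Counter
from typing import Callable, Sequence

# Assumed interface: an evolved individual maps a feature vector to a class label.
Classifier = Callable[[Sequence[float]], int]

def ensemble_predict(population: Sequence[Classifier],
                     x: Sequence[float]) -> int:
    """Combine the whole evolved population by unweighted majority vote,
    instead of discarding all but the fittest individual."""
    votes = Counter(clf(x) for clf in population)
    return votes.most_common(1)[0][0]

# Toy usage: three threshold 'rules' standing in for evolved individuals.
population = [
    lambda x: int(x[0] > 0.5),
    lambda x: int(x[1] > 0.3),
    lambda x: int(x[0] + x[1] > 0.8),
]
print(ensemble_predict(population, [0.6, 0.2]))  # majority vote of the three rules
```

In practice one might weight votes by fitness or select a diverse subset of the population, but even this unweighted form shows that the ensemble comes essentially for free from the evolutionary run.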
Two persistent difficulties for GBML are worth highlighting. First, run-time speed remains an issue, as EAs are much slower than most other methods. While this sometimes matters little (e.g. in off-line learning with modest datasets), at other times it is critical (e.g. in stream mining). Various methods to speed up GBML exist (see e.g. §12.1.3) and more research is warranted, but this may simply remain a weakness. The second difficulty is theory. EA theory is notoriously difficult, and when coupled with other processes it becomes even less tractable. Nonetheless, substantial progress has been made in the past ten years, most notably with LCS.
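As one illustration of the kind of speed-up referred to above, a common and simple trick is to evaluate fitness on a random subsample (or rotating window) of the training data rather than on the full set. The sketch below is generic: `individual`, `accuracy`, `X` and `y` are placeholders for whatever representation and evaluation function a given GBML system actually uses.

```python
import random

def subsampled_fitness(individual, X, y, accuracy, fraction=0.1, rng=random):
    """Estimate fitness on a random fraction of the training data.

    Drawing a fresh subsample each generation cuts the cost of fitness
    evaluation roughly in proportion to `fraction`; the extra noise is
    usually tolerable because selection averages over many generations.
    """
    n = max(1, int(len(X) * fraction))
    idx = rng.sample(range(len(X)), n)
    return accuracy(individual, [X[i] for i in idx], [y[i] for i in idx])
```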
Other active research directions will no doubt include meta-learning such as the evolution of bias (e.g. selection of representation), evolving heuristics and learning rules for specific problem classes, and other forms of self-adaptation. In the area of data preparation, Freitas (§12.2.1) argues that attribute construction is a promising area for GBML and that filter methods for feature selection are faster than wrappers and deserve more GBML research; the sketch below makes this distinction concrete. Finally, many specialised learning problems (not to mention specific applications) remain little explored, or entirely unexplored, with GBML, including ranking, semi-supervised learning, transductive learning, inductive transfer, learning to learn, stream mining, and no doubt others which have not yet been formulated.
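The filter-versus-wrapper distinction can be sketched as follows: a GA evolves a bit mask over the features, and each mask is scored with a cheap filter criterion, here the mean absolute correlation of the selected features with the class, rather than by training and cross-validating a learner as a wrapper would. The criterion and the mask encoding are illustrative assumptions, not a specific method from the literature.

```python
def pearson(a, b):
    """Plain Pearson correlation; returns 0.0 for constant inputs."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (z - mb) for x, z in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((z - mb) ** 2 for z in b)
    return 0.0 if va == 0 or vb == 0 else cov / (va * vb) ** 0.5

def filter_fitness(mask, X, y):
    """Filter-style fitness for a GA-evolved feature-subset bit mask:
    mean absolute correlation of each selected feature with the class.
    No learner is trained or cross-validated, which is why filter
    evaluation is far cheaper than wrapper evaluation."""
    selected = [j for j, bit in enumerate(mask) if bit]
    if not selected:
        return 0.0
    return sum(abs(pearson([row[j] for row in X], y))
               for j in selected) / len(selected)
```

A wrapper would replace `filter_fitness` with a full train-and-validate cycle per candidate mask, which is why it is typically orders of magnitude slower on the same GA.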