The representation used by a learning algorithm introduces a bias which is more or less well-suited to any given learning problem. It is well known that, averaged across all possible problems, no algorithm outperforms any other. Accordingly, the traditional approach in machine learning is to choose an appropriate representation using domain-specific knowledge, and this representation is then used exclusively during learning. To reduce reliance on domain knowledge and its correct application, it would be desirable for the learning algorithm to select its own representation for the problem. We investigate this with XCS, a Michigan-style Learning Classifier System. We begin with an analysis of two representations from the literature: hyperplanes and hyperspheres. We then apply XCS with each representation in turn to two Boolean functions, the well-known multiplexer function and a function defined by hyperspheres, and confirm that planes are better suited to the multiplexer and spheres to the sphere-based function. Finally, we allow both representations to compete within XCS, which learns the most appropriate representation for the problem thanks to the pressure against overlapping rules supplied by its niche GA. The result is an ecology in which the representations are species.
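The two kinds of Boolean test function mentioned above can be sketched as follows. This is an illustrative sketch only, not the paper's implementation; in particular, interpreting a Boolean hypersphere as the set of inputs within some Hamming distance of a centre string is an assumption made here for concreteness.

```python
def multiplexer(bits, k=2):
    """k-address-bit Boolean multiplexer: the first k bits form an
    address that selects one of the following 2**k data bits."""
    address = int("".join(map(str, bits[:k])), 2)
    return bits[k + address]

def sphere_matches(center, radius, bits):
    """Hypothetical hypersphere rule in Boolean space: the rule matches
    any input within Hamming distance `radius` of `center`."""
    return sum(c != b for c, b in zip(center, bits)) <= radius

# 6-multiplexer: address bits (1, 0) = 2, so data bit at index 2+2 is output
print(multiplexer((1, 0, 0, 0, 1, 0)))  # -> 1

# Sphere centred at 000000 with radius 1: matches 000100 but not 001100
print(sphere_matches((0, 0, 0, 0, 0, 0), 1, (0, 0, 0, 1, 0, 0)))  # -> True
print(sphere_matches((0, 0, 0, 0, 0, 0), 1, (0, 0, 1, 1, 0, 0)))  # -> False
```

Under this reading, a hyperplane-style (ternary) condition carves the input space along coordinate axes, which aligns naturally with the address/data structure of the multiplexer, while a sphere condition groups inputs by proximity to a prototype, which aligns with sphere-defined targets.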