From mobile devices to data centres, energy usage in computing continues to rise and now accounts for a significant share of global energy consumption. Increasing the energy efficiency of computation is a major concern in electronic system engineering and high on the research agenda worldwide. While hardware can be designed to save a modest amount of energy, the potential for savings is far greater at the higher levels of abstraction in the system stack. The greatest savings are expected from energy-consumption-aware software: although energy is consumed by the hardware executing computations, control over the computation ultimately lies with the software, algorithms and data, i.e. the applications running on the hardware. Experts from Intel expect that software taking full control of the energy-saving features provided by hardware can save three to five times as much energy as conventional software achieves. Moreover, algorithm selection is critically important: not only must the algorithm be the most suitable for solving the problem, it must also be a good fit for the hardware.

The challenge of energy-efficient computing therefore requires understanding the entire system stack, from algorithms and data down to the computational hardware. Over the last decades, however, software engineering has moved away from the operation of the hardware through the introduction of several layers of abstraction. While these have many benefits, including portability, increased programmer productivity, and software reuse across hardware platforms, the clear drawback is that many software engineers are now "blissfully unaware" of how algorithms and data, and their respective encodings, influence the energy consumption of a computation when executed on hardware.