Visual-based navigation has been the subject of extensive research in the field of mobile robotics. In this paper we present a topological map-building and localization algorithm that uses wide-angle scenes. Global-appearance descriptors are used to represent the visual information efficiently. First, we build a topological graph that represents the navigation environment. Each node of the graph corresponds to a different position within the area and is composed of a collection of images that covers the complete field of view. We use the information provided by a camera mounted on the mobile robot as it travels along routes between the nodes of the graph; from this stored visual information, we estimate the relative position of each node. Once the map is built, we propose a localization system that is able to estimate the position of the mobile robot not only at the nodes but also at intermediate positions, using the visual information. The approach has been evaluated and shows good performance in real indoor scenarios under realistic illumination conditions.
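The map and localization scheme described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: class and method names are hypothetical, each node simply stores a list of global-appearance descriptor vectors (here, plain feature vectors), and localization is reduced to a nearest-descriptor search over the nodes.

```python
import math

class TopologicalMap:
    """Minimal sketch of a topological map whose nodes each hold a
    collection of global-appearance descriptors of wide-angle views.
    All names and the descriptor format are illustrative assumptions."""

    def __init__(self):
        # node_id -> list of descriptor vectors captured at that position
        self.nodes = {}

    def add_node(self, node_id, descriptors):
        """Register a map position with its collection of view descriptors."""
        self.nodes[node_id] = [list(map(float, d)) for d in descriptors]

    def localize(self, query):
        """Return the node whose stored descriptor is nearest (Euclidean
        distance) to the query descriptor, plus that distance."""
        best_node, best_dist = None, math.inf
        for node_id, descs in self.nodes.items():
            for d in descs:
                dist = math.dist(d, query)
                if dist < best_dist:
                    best_node, best_dist = node_id, dist
        return best_node, best_dist
```

In the paper's system, the distance to the nearest nodes could additionally be used to interpolate a pose at intermediate positions between nodes, rather than snapping to the closest node alone.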