Recognition of visual attributes in images allows an image's information content to be expressed textually, which benefits web search and image archiving, especially since visual attributes transcend language barriers. Classifiers are traditionally trained on manually segmented images, which are expensive and time-consuming to produce. The authors propose a method that learns semantic colour terms from the raw, noisy, unsegmented results of web image searches. They use probabilistic graphical models in the continuous domain, both for weakly supervised learning and for segmenting novel images. Experiments show that the authors' method outperforms the current state of the art in colour naming when trained on such noisy, weakly labelled data.
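To make the weakly supervised setting concrete, the following is a minimal sketch (not the authors' actual model) of learning a colour term from image-level labels only: each web search result for, say, "red" yields a bag of pixels in which only an unknown subset is actually red. A simple continuous-domain latent-variable model for this is a foreground Gaussian over colour mixed with a uniform background, fitted by EM with per-pixel relevance as the latent variable. All names and the synthetic data below are illustrative assumptions.

```python
import math
import random

# Illustrative sketch of weakly supervised colour-term learning (assumed
# model, not the paper's). Training pixels for a term like "red" come from
# whole web images: some pixels show the colour (foreground), many do not
# (background). We fit a diagonal Gaussian over RGB for the foreground,
# mixed with a uniform background on [0,1]^3, via EM.

UNIFORM = 1.0  # density of the uniform background distribution on [0,1]^3

def gauss(x, mu, var):
    """Diagonal-covariance Gaussian density in 3-D colour space."""
    p = 1.0
    for xi, mi, vi in zip(x, mu, var):
        p *= math.exp(-(xi - mi) ** 2 / (2 * vi)) / math.sqrt(2 * math.pi * vi)
    return p

def fit_colour_term(pixels, iters=30):
    """EM for a foreground-Gaussian + uniform-background mixture."""
    mu = [0.5, 0.5, 0.5]   # initial foreground mean
    var = [0.1, 0.1, 0.1]  # initial per-channel variance
    w = 0.5                # prior probability a pixel belongs to the term
    for _ in range(iters):
        # E-step: responsibility of the foreground model for each pixel
        r = []
        for x in pixels:
            fg = w * gauss(x, mu, var)
            r.append(fg / (fg + (1 - w) * UNIFORM))
        # M-step: re-estimate mean, variance, and mixing weight
        total = sum(r)
        mu = [sum(ri * x[d] for ri, x in zip(r, pixels)) / total
              for d in range(3)]
        var = [max(1e-4, sum(ri * (x[d] - mu[d]) ** 2
                             for ri, x in zip(r, pixels)) / total)
               for d in range(3)]
        w = total / len(pixels)
    return mu, var, w

# Synthetic weakly labelled data: 40% genuinely "red" pixels plus 60%
# random background pixels, mimicking a noisy web search result.
random.seed(0)
pixels = [(random.gauss(0.9, 0.05), random.gauss(0.1, 0.05),
           random.gauss(0.1, 0.05)) for _ in range(400)]
pixels += [(random.random(), random.random(), random.random())
           for _ in range(600)]
mu, var, w = fit_colour_term(pixels)
```

Despite never being told which pixels are red, EM concentrates the Gaussian on the tight red cluster, which is the flavour of inference that lets noisy, unsegmented training images substitute for hand-segmented ones.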