For much of our history, subjective assessments of light made with the unaided visual system were sufficient for survival and risk mitigation. However, the demand for better information to support decision-making has driven the development of measurement tools and the codification of agreed scales. Today the candela, the unit of luminous intensity of a light source, is defined as one of the seven base units of the International System of Units (SI).
Long before the definition of the candela, scales for brightness of light sources were in use, with evidence of astronomers making note of the relative brightness of stars as long ago as 100 BC [1]. Without any technology for recording that light, however, progress to establish universal scales relied on human observers making subjective judgements against references such as ‘twilight’ and much later against standard lamps such as that defined by the British Metropolitan Gas Act of 1860.
The transfer of the scale for source brightness from the human visual system to an objective basis is therefore relatively recent, driven largely by the availability of new methods of measurement and the commercial needs of electric lighting manufacturers. These two drivers arose in the late 19th century, although it was to be some time before the potential of the first was fully realised.