Professor Dr. Martin Reiser


Invited Talk

Affiliation: Institute for Media Innovation, Nanyang Technological University

Country: Singapore

Title: What is Color?



We need uniform color across our many devices and viewing conditions, such as monitors, projection theaters and reflection prints. Unfortunately, even though the basic technologies exist, we are far from this state of affairs. This survey talk explains why. We will introduce the classical theory of vision: trichromaticity and linear mixing. Based on these theories, the Commission Internationale de l'Éclairage (CIE) defined modern colorimetry in 1931. This theory was hugely successful and underlies our sensors, measurement devices, monitors, video walls and projectors.

But when we talk about color, we don't use the XYZ coordinates of colorimetry; we speak about the perceptual quantities hue, saturation and lightness. Munsell defined a color atlas of equally spaced chips in those coordinates. We will show how the CIELAB color space, defined by the CIE in 1976, bridges colorimetry's XYZ and the perceptual coordinates.

But CIELAB still does not say what color people see under different viewing conditions. Many, partly nonlinear, effects of lateral adaptation must be considered to answer this question. The CIE responded with an appearance model, CIECAM02, in 2002. Based on colorimetry and CIECAM, the International Color Consortium (ICC) defined an operating-system-independent way of specifying color spaces through ICC profiles and their connection via a Profile Connection Space. We will explain this technology and also show how poorly it is adopted and implemented today.

Extended Abstract
Snapchat users share 8100 photos per second, Facebook users 4500. Billions of photos and millions of videos are shown on screens, prints or beamers, and hardly any two colors are the same. The situation seems hardly better than the old joke about what NTSC means, namely "Never Twice the Same Color". And this despite 150 years of color science and 13 years of efforts by the International Color Consortium to:
"… create an open, vendor-neutral color management system which would function transparently across all operating systems and software packages." [Wikipedia]. This talk surveys the laws of color and in the end highlights the difficulty of the task undertaken by the ICC and the reasons why ubiquitous consistent color is hard to achieve.

Color is the brain's interpretation of a light stimulus of the eyes. Photons with wavelengths between 380 and 750 nm are visible. Photons themselves have no color; the power spectrum of those photons characterizes a color stimulus, and color is the brain's interpretation of that stimulus. The classical theory of color is trichromaticity, first proposed by George Palmer (1777): three primary colors, additively mixed, can match all visible colors. Hermann Grassmann formulated the additive mixing theory (1853). Ewald Hering (1892) proposed an alternative to trichromaticity, called the Color Opponent Theory, which was, however, shown to be merely a variant of trichromaticity with sum and difference signals. Finally, Johannes von Kries gave an explanation of chromatic adaptation (1905), the amazing "white balance" of the human visual system responsible for the phenomenon of color constancy.
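Von Kries's idea can be sketched in a few lines: each cone class is rescaled independently by the ratio of the two white points. This is a minimal illustration; the LMS numbers below are hypothetical, not measured cone responses.

```python
# Minimal sketch of von Kries chromatic adaptation: each cone (LMS)
# response is scaled by the ratio of the destination and source white
# points. All numeric values below are illustrative placeholders.

def von_kries_adapt(lms, lms_white_src, lms_white_dst):
    """Scale each cone response by the ratio of the two white points."""
    return tuple(c * (wd / ws)
                 for c, ws, wd in zip(lms, lms_white_src, lms_white_dst))

# A sample seen under a warm illuminant, re-expressed for daylight.
sample         = (0.40, 0.35, 0.20)   # hypothetical L, M, S responses
white_tungsten = (1.10, 1.00, 0.35)   # hypothetical warm white point
white_daylight = (0.95, 1.00, 1.09)   # hypothetical daylight white point

adapted = von_kries_adapt(sample, white_tungsten, white_daylight)
```

The diagonal (per-channel) scaling is the whole model; its simplicity is why it is still the core of modern chromatic-adaptation transforms.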

Based on trichromaticity, we can measure color: standardize three primaries, red, green and blue, and perform a color-matching experiment with the sample to be measured. The intensities r, g and b of those primaries then characterize the color. In 1931, the CIE (Commission Internationale de l'Éclairage) standardized colorimetry and defined a Standard Observer. Using Grassmann's laws, an instrument can be built that performs a color-matching experiment without human observers. The CIE also defined a linear transform of the R, G, B coefficients, calling the transformed values X, Y and Z.
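Because of Grassmann linearity, the RGB-to-XYZ step is just a fixed 3×3 matrix multiply. A minimal sketch follows; note it uses the matrix standardized much later for linear sRGB primaries under D65, not the original 1931 primaries, purely to illustrate the linearity.

```python
# RGB -> XYZ is a linear map (Grassmann's laws). The matrix below is the
# one for linear sRGB primaries under D65 illumination; the 1931 CIE RGB
# matrix differs numerically, but the principle is identical.

SRGB_TO_XYZ = [
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
]

def rgb_to_xyz(r, g, b):
    """Apply the fixed 3x3 transform to linear RGB intensities."""
    return tuple(row[0] * r + row[1] * g + row[2] * b for row in SRGB_TO_XYZ)

# White (1, 1, 1) maps to the D65 white point, approx (0.9505, 1.0, 1.0890).
X, Y, Z = rgb_to_xyz(1.0, 1.0, 1.0)
```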

The classical color theory is enormously successful and serves as the basis for all measurement, capture and display devices. The print process, too, is crucially influenced by colorimetric principles. So, it seems, we scientifically understand color at last. But do we? How do we talk about color? We all understand hue (red, green, blue, yellow…), colorfulness or saturation (vivid, drab, pastel…) and lightness (bright, dark). Artists have a long tradition of understanding color in these terms. Albert Henry Munsell, an artist himself, put this on a scientific basis by constructing an atlas of color chips that are visually equally distant, i.e. the perceived differences in saturation, hue and lightness are constant along these three dimensions. He proposed a color notation based on this color system in 1905. Is there a connection between XYZ and this perceptual color space? In 1948, Richard S. Hunter proposed the Lab color space, with variables L (lightness), a (red-green axis) and b (blue-yellow axis). The Hunter color space is a Hering opponent space. The CIE defined CIE L*a*b*, or CIELAB, in 1976. The XYZ solid is transformed into a nonlinear solid in L, a, b space. The transformation is based on these principles:

  • Sum and difference values of XYZ construct the opponent space
  • The Munsell relation between lightness and luminance makes it homogeneous, i.e. equal distances between the points of two colors correspond to equal perceptual differences
  • The von Kries transform to a given white point adjusts colors properly for that illuminant.

The CIELAB space remedies deficiencies of XYZ: it allows for easy definitions of hue, saturation and lightness and, most importantly, the distance between two points is a meaningful measure of visual difference. That distance is called deltaE and is widely used in the graphics industry. Now we have closed the circle from colorimetry to a perceptual description of color. Do we understand color now? In particular, do two people seeing a patch of color with the same L*a*b* coordinates under different viewing conditions see the same color? The answer is NO.
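The XYZ-to-CIELAB transform and the deltaE distance can be written out directly from the standard CIE formulas. The sketch below assumes a D65 white point; as a rough rule of thumb, a deltaE around 1 corresponds to a just-noticeable difference.

```python
import math

# XYZ -> CIELAB per the CIE definition, plus the 1976 deltaE distance.
# (Xn, Yn, Zn) is the reference white point; D65 values are used here.

D65 = (0.9505, 1.0000, 1.0890)

def _f(t):
    # Cube root with a linear toe below (6/29)^3, per the CIE definition.
    d = 6 / 29
    return t ** (1 / 3) if t > d ** 3 else t / (3 * d ** 2) + 4 / 29

def xyz_to_lab(X, Y, Z, white=D65):
    Xn, Yn, Zn = white
    fx, fy, fz = _f(X / Xn), _f(Y / Yn), _f(Z / Zn)
    L = 116 * fy - 16            # lightness
    a = 500 * (fx - fy)          # red-green opponent axis
    b = 200 * (fy - fz)          # blue-yellow opponent axis
    return L, a, b

def delta_e_76(lab1, lab2):
    """Euclidean distance in L*a*b* space (the original 1976 deltaE)."""
    return math.dist(lab1, lab2)

white = xyz_to_lab(*D65)                              # -> (100.0, 0.0, 0.0)
gray  = xyz_to_lab(0.9505 * 0.2, 0.2, 1.0890 * 0.2)   # neutral 20% gray
```

Note how the opponent structure (a, b as differences) and the Munsell-style lightness compression (the cube root) implement the principles listed above.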

To answer that last question, we must understand and quantify many nonlinear and local effects of the visual system. We will show examples of several of these effects. Most importantly, lateral adaptation means that color perception is strongly influenced by the colors surrounding the sample patch and by the overall brightness of the viewing condition. A variety of such effects have been quantified and are built into CIECAM02, a color appearance model published by the CIE in 2002. The parameters of CIECAM02 are

  • the stimulus (2° sample)
  • the color of the background (10° sample)
  • the brightness of the surround field (average, dim, dark)
  • and the color coordinates of the white point of the background

Given these inputs, CIECAM yields the perceptual coordinates: lightness, chroma, hue, brightness, colorfulness and saturation.
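The three named surround conditions correspond to fixed parameter sets in the CIECAM02 specification, which a full appearance model combines with the stimulus, background and white-point inputs listed above. A minimal lookup sketch:

```python
# The CIECAM02 surround conditions map to fixed parameters: F (degree of
# adaptation factor), c (impact of surround) and Nc (chromatic induction
# factor). Values are those given in the CIECAM02 specification.

CIECAM02_SURROUNDS = {
    "average": {"F": 1.0, "c": 0.69,  "Nc": 1.0},
    "dim":     {"F": 0.9, "c": 0.59,  "Nc": 0.9},
    "dark":    {"F": 0.8, "c": 0.525, "Nc": 0.8},
}

def surround_parameters(surround):
    """Look up the fixed CIECAM02 parameters for a named surround."""
    return CIECAM02_SURROUNDS[surround]
```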
Now we are close to answering the key question: what do I see, and what do you see, if the observers have different viewing conditions? But it is obvious that getting the answer is far from simple. And this brings us to the problem raised at the beginning: how to get uniform color across many devices and many viewing conditions, such as a monitor, projection in a dark room, viewing a reflection print, etc. The ICC, the International Color Consortium, was founded in 1993 by eight vendors to define an operating-system-independent way of defining color spaces and transforming between them. An ICC profile is a table defining primaries and transformations. Each device must be characterized by such a profile. To avoid an N-squared problem, profiles are connected through a common Profile Connection Space (PCS). Thus, to go from a camera output to a display monitor, color is first interpreted in the PCS and then re-formulated for the monitor. The PCS must make assumptions about a standard viewing condition, and colors are re-rendered in the PCS. Obviously, this re-rendering is not without its problems, especially if the output device has greater capability (e.g. a larger gamut) than the PCS.
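The structural point, that each device needs only one transform into the PCS and one out of it, rather than N-squared pairwise transforms, can be sketched as a simple function composition. The device transforms here are hypothetical placeholders, not real ICC profile math.

```python
# Sketch of the Profile Connection Space idea: any device-to-device
# conversion is "device A -> PCS -> device B", so N devices need 2N
# transforms instead of N*(N-1) pairwise ones. The transforms below are
# hypothetical placeholders standing in for real profile lookups.

def camera_to_pcs(rgb):
    # Hypothetical characterization of one camera (placeholder scaling).
    return tuple(0.9 * c for c in rgb)

def pcs_to_monitor(xyz):
    # Hypothetical inverse characterization of one monitor, clipped to
    # the displayable range to mimic gamut limits.
    return tuple(min(1.0, c / 0.8) for c in xyz)

def convert(pixel, to_pcs, from_pcs):
    """Connect any pair of devices through the common PCS."""
    return from_pcs(to_pcs(pixel))

out = convert((0.5, 0.4, 0.3), camera_to_pcs, pcs_to_monitor)
```

Adding a new device means writing only its two PCS transforms; every existing device can then exchange color with it through composition.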

Today, ICC color management is available in the standard operating systems. But profiles are not readily available, and their creation through colorimetric measurements is difficult and requires expensive hardware. Software often implements color management poorly. Consumer print shops do not use color management at all. Automatic profiling and transparent use of those profiles is nowhere in sight. To end with an analogy, I think color is in a similarly bad shape as the field of fonts was 20 years ago. Unfortunately, it will still take years to get closer to ubiquitously uniform color.


Martin Reiser studied electrical engineering at the Swiss Federal Institute of Technology (ETH). He started his career at the IBM Zürich Research Laboratory, where he conducted his doctoral research in large-scale numerical simulations of field-effect transistors. His thesis was awarded the Silver Medal of the ETH.

In 1972, he moved to the IBM Thomas J. Watson Research Center in Yorktown Heights, N.Y., where he started work in the area of performance evaluation of computer and communication systems. His most important scientific contribution is Mean Value Analysis of queuing networks, which won the IEEE Koji Kobayashi Award for Computers and Communications in 1991.

    In 1979, Martin Reiser returned to his native Switzerland to lead the Communication and Computer Science department of the IBM Zürich Research Laboratory. Under his leadership, the IBM Token-Ring was developed, the Digital Signal Processor invented and high-speed data modems realized and transferred into successful IBM products. For these innovative accomplishments, he won five IBM Outstanding Innovation Awards.

In 1986, he was promoted to the position of Director of the IBM Zürich Research Laboratory, where about 200 scientists worked on projects in computer networks, physics and technology.
A highlight of his career was the Nobel Prizes in Physics of 1986 and 1987, won under his leadership by Rohrer, Binnig, Müller and Bednorz.

At the beginning of the 1990s, Martin Reiser joined Professor Wirth in working on the object-oriented Oberon system. Reiser was promoted to IBM Executive and held a seat on IBM's Corporate Technical Committee, with responsibility for the technical strategy in communications.

In 1997, Martin Reiser was appointed Director of the newly founded Institute for Media Communication (IMK) at GMD, the German National Research Center for Information Technology. This gave him the chance to realize the dream of bringing together technology and the arts. During his term, IMK invented the Responsive Workbench, realized exhibitions in the Art Museum Bonn, invented the idea of the Virtual Audio Environment and realized a virtual-reality production of Beethoven's Fidelio, now on display in the Beethovenhaus Bonn. In 1998, he was appointed Honorary Professor at the University of Cologne. In 2000, the GMD was merged with the Fraunhofer-Gesellschaft, Germany's leading organization for applied research, and Reiser became Director of the Fraunhofer Institute for Media Communication.

In 2007, Martin Reiser was on sabbatical at Nanyang Technological University (NTU) and the University of Vienna. In 2008, he joined NTU as Founding Director of the Institute for Media Innovation, where he continues his leadership role in the world of Interactive Digital Media.

Martin Reiser is a member of the Swiss Academy of Engineering Sciences and a Fellow of the IEEE. He holds an honorary doctoral degree from the Technical University of Moscow.