Thanks for sticking with us—this is the sixth and final post about the NCCA Color Project experiment we conducted at METALCON. In the previous five posts, we presented our analyses of the 28 observers’ ratings to see how discerning and consistent they were. We concluded that human observers see color differences differently; some see a lot of difference, some just a little. This was not unexpected. Finally, it’s time to look at the observed color differences plotted against the machine readings for color difference.
Let’s quickly review the previous two posts on NCCA’s “The Color Project”:
- In Part Three, we showed that about 20% of our observers fell into an “extremes” category (i.e., they were either far less critical or far more critical than the group); but the majority—80%—of the observers were more or less in agreement.
- In Part Four, we concluded that most of the people, most of the time, were not fooled by the identical-pair panels. Only 9% of the time were there notable color differences declared, and half of the observers saw no difference at all. If you expect to see a color difference, by golly, you will!
Parts One and Two of this series of posts on NCCA’s “The Color Project” discussed why we needed to run a visual assessment experiment and how we structured the study. You may recall that we created 54 panel pairs, and within this set there were 15 repeats (i.e., pairs that were shown to the observers—unbeknownst to them—a second time to see how closely they would rate the pairs), as well as 8 pairs of identical panels (i.e., take a panel, cut it in half, tape the halves together, and call it a color difference pair). I also mentioned the tedium of collecting data for 13 solid hours. And lastly, I teased you with the promise of revealing data here in Part Three. So, without further ado, let’s dive in. But first, let’s discuss the visual observations. We’ll talk color data later.
In the last post, Part One, we left off with two facts: We depend on a numerical description of color and color difference rather than judging a sample vs. a standard visually; and NCCA began to investigate ΔE2000 to determine how well it might work in the coil industry.
Let’s start Part Two with a short discussion on ΔE2000. It is way more than the usual ΔE with a little “2000” as a subscript. (If only it were that easy.) Our current ΔE is a straightforward root-sum-of-squares (Euclidean distance) calculation, as shown here:
ΔE<sub>Hunter</sub> = [(L₂ − L₁)² + (a₂ − a₁)² + (b₂ − b₁)²]<sup>1/2</sup>
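The formula above is just the straight-line distance between two points in Lab space, so it is a few lines of code. Here is a minimal sketch (the function name and the sample readings are illustrative, not from the study):

```python
import math

def delta_e_hunter(lab1, lab2):
    """Euclidean (root-sum-of-squares) color difference between two
    Hunter Lab readings, each given as an (L, a, b) tuple."""
    L1, a1, b1 = lab1
    L2, a2, b2 = lab2
    return math.sqrt((L2 - L1) ** 2 + (a2 - a1) ** 2 + (b2 - b1) ** 2)

# Hypothetical standard and sample panel readings
standard = (52.0, 3.1, 10.4)
sample = (51.2, 3.5, 11.0)
print(round(delta_e_hunter(standard, sample), 3))  # → 1.077
```

Because every term is weighted equally, a one-unit shift in lightness counts exactly as much as a one-unit shift in a or b; that equal weighting is one of the things ΔE2000 revisits.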