The percentage is based on how many people have taken the test, so if it suddenly becomes popular and a lot more runs are completed, the percentile can shift.
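The way a percentile drifts as more runs arrive can be sketched in a few lines. This is a hypothetical illustration, not the site's actual code; it assumes the site simply ranks your boundary hue against all recorded runs:

```python
# Hypothetical sketch: rank one boundary hue against all recorded runs.
# A burst of new runs changes the ranking even though your hue didn't move.
import bisect

def bluer_than_percent(boundaries, my_boundary):
    """Percent of recorded boundary hues strictly below my_boundary
    (a higher hue means the person places the boundary further into blue)."""
    boundaries = sorted(boundaries)
    idx = bisect.bisect_left(boundaries, my_boundary)
    return 100 * idx / len(boundaries)

early_runs = [168, 170, 172, 175]
print(bluer_than_percent(early_runs, 176))      # 100.0

# A popularity surge adds runs with bluer boundaries:
later_runs = early_runs + [178, 180, 182, 185]
print(bluer_than_percent(later_runs, 176))      # 50.0
```

Same boundary hue, very different percentile, purely because the population of completed runs changed.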
Definitely different depending on your screens. I tried it on my desktop monitor, which is fairly decently calibrated, and I got 192 (quite high - bluer than 98% of pop. at the time). It was basically impossible not to notice any hint of green tinge in the background. On my phone, I got 171 (bluer than 68% pop. at the time). I took the tests back to back.
I moved the window between my two desktop monitors just now, and my second monitor is not only older tech, but it's also not color-calibrated at all (or calibrated poorly). A lot of the greener-looking colors on my more accurate monitor looked sky blue on the other monitor.
Edit: eh... I just took it on my phone again and got 192 this time lol. It's probably because the colors close to the boundary don't fit into blue or green for me; I'd rather be able to select both/none. Being forced to choose one kinda makes it subjective to how I'm feeling at the time. But I know that's not really the point of this test.
Shoot, I got significantly different results from two consecutive tests on the same device. I suspect the sequence of colors affects perception, too.
In my first run, I perceived greater differences between the sequential colors and I got a couple that were in the extremes, and got around 65% (I don't recall the hue number). The second run, many colors were only a little different from their predecessors, and I didn't get any really obviously blue or green - they were all subtle variations of turquoise, and I scored at 76% (177).
Yup, maybe because of computer graphics; I tend to consider Cyan, Turquoise, and Teal as rough synonyms (or at least really similar to each other). Usually, when there's almost as much green as blue, I call it "Cyan".
Looking at the definitions, they aren't the same colors, but they're still all different shades of green and blue (I don't personally distinguish them well, so I call them by the same name, like people here calling them Green or Blue).
My rainbow wheel be like: Red - Orange - Yellow - Green - Cyan - Blue - Purple - Magenta - Red
Like the pixels on your screen, which are RGB (Red, Green & Blue), and the ink in inkjet printers, which is CMYK (Cyan, Magenta, Yellow, blacK).
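The additive RGB vs. subtractive CMYK relationship mentioned above can be shown with the standard naive conversion (a sketch only; real printer profiles are far more involved):

```python
# Naive RGB -> CMYK conversion: ink removes light, so the conversion
# inverts the channels and factors out the shared black (K) component.
def rgb_to_cmyk(r, g, b):
    """r, g, b in 0..255 -> (c, m, y, k), each in 0..1."""
    if (r, g, b) == (0, 0, 0):
        return 0.0, 0.0, 0.0, 1.0  # pure black: only the K ink
    r, g, b = r / 255, g / 255, b / 255
    k = 1 - max(r, g, b)
    c = (1 - r - k) / (1 - k)
    m = (1 - g - k) / (1 - k)
    y = (1 - b - k) / (1 - k)
    return c, m, y, k

print(rgb_to_cmyk(0, 255, 255))  # pure cyan -> (1.0, 0.0, 0.0, 0.0)
```

Fittingly for this thread, cyan is a primary in print but a secondary (green + blue) on screen.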
I took the test, and I'd find it interesting to retake it with the brightness/lightness of the shades changed. For light shades, when in doubt, I choose blue, but it's possible I wouldn't do the same for dark shades.
Yeah, I think screen calibration matters a lot too; on my PC the last three colors looked identical to me, because my screen is a bit lousy.
So I found that I pretty reliably get 185ish if I don't think about it and just click. What seems to happen is the test gives you a clearly blue or clearly green color, then throws you a color just over the line on the opposite side, then keeps alternating sides with colors ever closer to the line. My picks tend to go A, B, A, B, A, B pretty consistently, which leads to the same outcome. These aren't intentional choices; it feels like the shock of the variance from the previous color makes the new one seem "more green" or "more blue" than it really is. I feel like they should have a sort of palette (unintentional pun) cleanser between colors to give a true test.
Your boundary is at hue 176, bluer than 75% of the population. For you, turquoise is green.
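To see what a reported boundary hue actually looks like, you can convert an HSL hue to RGB with Python's standard `colorsys` module (full saturation and 50% lightness are assumed here purely for illustration):

```python
# Convert an HSL hue in degrees to an (r, g, b) triple in 0..255.
# colorsys works in HLS order (hue, lightness, saturation), all in 0..1.
import colorsys

def hue_to_rgb255(hue_deg, lightness=0.5, saturation=1.0):
    r, g, b = colorsys.hls_to_rgb(hue_deg / 360, lightness, saturation)
    return round(r * 255), round(g * 255), round(b * 255)

print(hue_to_rgb255(176))  # the hue-176 boundary color -> (0, 255, 238)
print(hue_to_rgb255(120))  # pure green, for comparison -> (0, 255, 0)
```

Hue 176 sits just shy of pure cyan (180), which is why so many commenters here find it sits on the fence.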
I distinguish what I call in French "bleu turquoise" and "vert turquoise" (turquoise blue and turquoise green). In my vocabulary, that's where the boundary between the blues and the greens lies.
Both my runs were higher than the average, but the second run fell about where I would probably put the line if I placed it manually (182). I think it depends on the samples you get (along with your device, of course). Now let's do one where we divide things into blue, blue-green, and green; I would think the lines would be a lot more similar for most people, but who knows.
I got 174. If I remember correctly, how you categorize colors in your mind depends on your native language (or just the language used), and some languages even split the hues on this gradient into more basic color terms (like in Russian: зелёный [zʲɪˈlʲɵnɨj], голубой [ɡəɫʊˈboj] and синий [ˈsʲinʲɪj]).
I think the boundary between the basic color terms for green and blue in Polish (zielony /ʑɛˈlɔ.nɨ/ and niebieski /ɲɛˈbjɛs.ki/) is shifted more to the right, and that's why I got 174. But I wonder whether, if I repeated the test in my native Polish, the results would differ (shifting even further to the right).
Edit: I manually changed the strings on the website from English to Polish, making my mind "think in Polish". The result was 179, so I think that theory checks out.
i feel like this needs like 5x as many data points before giving a result. Also, at some point the only correct answer for me would be "neither", because the middle point is just Cyan to me, which isn't blue and definitely isn't green, just like how orange isn't yellow but definitely isn't red.
I think that the difficulty in deciding which to pick when the truth feels to be "neither" is an intended effect of the test. It would be interesting to see the results of a test that allowed for "neither" though
i don't think that's the case, the point of the website seems to quite clearly be seeing what shades people consider either colour, and if a shade is "neither" then that's the answer you want to track rather than polluting your data with forced false answers.