mahsa kazemzadeh
5 min read · Jan 24, 2022
The structure and limitations of the eyes and their impact on design

Have you ever thought about how visual perception affects your design? Let’s talk about eyes.

It might seem far-fetched that a designer would analyze how the eye works when designing.

The rest of this article addresses this issue.

Peripheral Vision Affects Our Emotions, And Our Emotions Affect Our Choices.

What is your most valuable communication channel with your monitor screen? The eyes.

You start making decisions by watching what is displayed on your screen. All your choices begin with looking, whether you leave the page, press the buy button, or scroll.

The brain receives a great portion of its input from the eyes in order to comprehend the environment. Accordingly, vision influences all our other senses.

The brain’s evaluation of vision is not as simple as a camera in your hand. The central and peripheral parts of vision are evaluated differently by the brain. The largely autonomous perception of peripheral vision traces back mostly to the amygdala, a nucleus within the brain that is part of the limbic system, is responsible for our basic emotions and decision-making, and controls attention.

In simple terms, the data sent to the amygdala heavily influences what we are going to do.

Don Norman divides the processing and perception in the human brain into three levels: visceral, behavioral, and reflective.

The lowest level of information analysis by the brain is the visceral level, which is simple and fast, and peripheral-vision perception lives at this level. At the visceral level, you get an overview of the environment. As a result, if you want to evoke a good feeling at this level of perception — the one that occurs at first glance — you need to know where in the visual field to show what.

Peripheral Vision Is Critical For Understanding The Environment, And Central Vision Is For Understanding The Objects.

In an experiment, Adam Larson and Lester Loschky (2009) showed people photos of common places, such as a kitchen or a living room. They divided the photos into two groups: in the first group, the margin of the photo was covered, and in the other, the center was covered. The images were displayed for a very short time, and a gray filter was intentionally applied to make comprehension harder. Then people were asked to describe what they had seen. Larson and Loschky found that if the center of a photo was covered, people could still tell what they were looking at, but in photos that showed only the center, they could not tell whether they were looking at a kitchen or a living room. They repeated the experiment with other images and concluded that central vision is especially critical for recognizing objects, while peripheral vision is used to understand the scene holistically.
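The masking idea behind this experiment can be sketched in a few lines. This is my own illustration (not the authors’ code), using NumPy to gray out either the center or the margin of an image array:

```python
import numpy as np

def mask_image(img, keep="center", radius_frac=0.3):
    """Return a copy of `img` with either the margin or the center grayed out.

    keep="center" -> only a central disk stays visible (margin covered)
    keep="margin" -> only the margin stays visible (center covered)
    """
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    dist = np.hypot(yy - cy, xx - cx)          # distance of each pixel from center
    central = dist <= radius_frac * min(h, w)  # boolean mask of the central disk
    out = np.full_like(img, 128)               # uniform gray background
    visible = central if keep == "center" else ~central
    out[visible] = img[visible]
    return out

# Synthetic 100x100 grayscale "scene" in place of a real photo
scene = (np.arange(100 * 100).reshape(100, 100) % 256).astype(np.uint8)
center_only = mask_image(scene, keep="center")   # central vision condition
margin_only = mask_image(scene, keep="margin")   # peripheral vision condition
```

In the actual study the two conditions were shown very briefly to different groups; here the point is only how the two stimulus types differ.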

So, for our users to understand the environment properly and to focus well on important elements, we need to address the user’s central and peripheral vision concurrently and deliberately.

For a more detailed understanding of this experiment, I highly suggest watching the following YouTube video.

Peripheral Vision Is Analyzed 2 Times Faster Than Central Vision

Let’s go back about 300,000 years. When an early human was shaving his spear, he had to pay attention to his surroundings so he could notice if a lion was approaching. Faster peripheral-vision processing was therefore a competitive advantage. As a result of this natural-selection pressure, people who had a better view of their environment survived and passed their genes on to us.

Emotional processing of our peripheral vision is faster than that of central vision, and the reason is evolutionary: we inherited it from our ancestors.

In an experiment by Dimitri Bayle (2009), scary images were placed in people’s peripheral and central vision separately, and he measured how long it took the amygdala to respond to the image. When a scary image was placed in a person’s central vision, the amygdala took 140–190 ms to respond, but when it was placed in the peripheral vision, the reaction time was about 80 ms. This study showed that the amygdala responds to peripheral vision almost twice as fast as to central vision.

Why does this matter?

If you want your user to focus on something on your page, you should not put anything in its surroundings: an element in the peripheral vision is processed about twice as fast as text analyzed in the central part of the vision, so the user cannot stay focused.

Conversely, if you want to distract users, moving something at the sides of the screen (e.g. a window that appears at the bottom of the screen) can steal the user’s gaze. In this way, the user is compelled to look at it at least for a moment. This is exactly what site advertising does to us.

Peripheral Vision Is Less Sensitive To Colors

Peripheral vision is mostly served by rod cells, which are insensitive to color. Color sensitivity comes from cone cells, which are concentrated in a part of the retina called the fovea (the region that receives central vision).

How does this help us in design?

When choosing a color for an element in the user’s peripheral vision, we can play with color luminance to attract the user’s attention so that the eye can analyze it better. Among hues, yellow appears the most luminous and indigo the least.
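One way to put numbers on this is the WCAG relative-luminance formula for sRGB colors. A minimal sketch follows; the function names are my own, and the RGB values for yellow and indigo are the common CSS values, an assumption on my part:

```python
# WCAG 2.x relative luminance for an sRGB color (0-255 channels).

def linearize(c: int) -> float:
    """Convert one gamma-encoded sRGB channel to linear light."""
    c = c / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(r: int, g: int, b: int) -> float:
    """Weighted sum of linearized channels; 0 = black, 1 = white."""
    return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b)

yellow = relative_luminance(255, 255, 0)   # ~0.93, close to white
indigo = relative_luminance(75, 0, 130)    # ~0.03, close to black
```

The green channel dominates the weighting, which is why yellow (red + green) scores near white while indigo barely registers — exactly the contrast a designer can exploit in the periphery.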


Written by mahsa kazemzadeh

Hi, I'm Mahsa, and I love learning about human beings and their senses. I'm a product designer, and I'm trying to learn more. :)
