by Gemma Holloway on 15th October 2012
In the famous words of James Cromwell's character, Dr. Alfred Lanning, in I, Robot:
“Why is it that when robots are stored in an empty space they will group together rather than stand alone? … How do we explain this? Random pieces of code? Or is it something else? When does a perceptual schematic become consciousness? When does the difference engine become the search for truth? When does the personality simulation become the bitter mote of a soul?”
Whilst it concerns far more human-like robots than exist today, this quote got me thinking about the comparison between the behaviour of robots and humans, and how the search engine robots of today are bound by the same thought processes and behaviour patterns as we are.
Ever wonder whether search engine optimisation is really just about ticking the right boxes? Do we follow a set of guidelines simply to satisfy the robots? Or should we look at robots as just another being trying to make its way in everyday life?
So, to answer these questions, I'm going to look at the behavioural patterns of robots in relation to the psychological theories we as humans are bound by.
Social psychological theories suggest an individual’s behaviour is determined by those around them. This particular branch of psychology seemed the obvious choice to begin with due to the recent emphasis placed upon social signals in search engine rankings.
Think back to being at school: you wanted to be part of the “in crowd”, right? To be part of what everyone else was doing? In the same way that humans form groups and communities on social networks to discuss particular topics (often by sharing links), search engines join these groups in their own unique way by “talking” about the same sites (displaying them in SERPs). They, too, want to be part of the in-crowd, and they join in their own way.
Psychological theory suggests that people value themselves as others value them; therefore, being popular is important. And how does someone determine popularity? By being part of the group. People quite often do things to please other members of a community in a bid to gain popularity, a concept commonly known as peer pressure. Robots place sites with stronger social signals higher up the search engine rankings. Could this be classed as a form of peer pressure? I would conclude that when a number of users discuss and share links associated with a particular site, robots feel obliged to display that site in their SERPs to please the rest of the community (the users); consequently, they too succumb to peer pressure.
This would suggest that a community can influence robots in the same way they can humans. Therefore, when designing a social media campaign it is important to encourage members of a community to actively engage, i.e. share links, consequently making the robots want to be part of it too.
An alternative social explanation for the importance robots place on social signals is conformity. Think back to school again: when the teacher asked a question and you panicked because you didn't know the answer, you'd agree with anyone who did, wouldn't you? Asch simulated this situation by showing participants a set of lines, one noticeably shorter than the others, and asking them to compare the lengths. He discovered that participants would give the obviously wrong answer because everyone else did. This is the basis of conformity theory: humans reach decisions based on what others say. Search engine robots also arguably react in accordance with this theory, by ranking sites higher if they have more interaction surrounding them (what people say about them). That is, they are influenced by what others are saying about sites, and favour sites because users are putting emphasis on them.
So how can we use this to our advantage? Robots, like humans, react to communities in two different ways: they either take part to fit in with others, or are pressured into taking part by the community. Either way, we need to continue building large social communities around our brands, with plenty of interaction from community members.
Let's make the robots want to be part of our gang, and if they don't, we'll make them!
Social Exchange Theory
This is a slightly different angle on social psychology, and moves away from the focus on social signals. Social exchange theory looks at the relationship between two parties from a cost vs. benefit stance. It suggests that once the costs of being in a relationship begin to outweigh the benefits for one party, that party is likely to opt out of the relationship. As with humans, this theory can be applied to search engines.
Google, for example, places excellent user experience high on its agenda for achieving strong search rankings. It could be said that this is because, if a user visits a site from Google and has a bad user experience as a result, they are likely to place the blame with the search engine for directing them to the site in the first place (cost). At the opposite end of the scale, if a visited site provides a good user experience, the user is likely to credit Google (benefit). Therefore, when determining how high to rank a site, it is likely that Google weighs up the costs of directing users to your site against the benefits it will gain.
Likewise, this theory could explain the talk of Google favouring sites with Google Plus profiles: the benefit to Google is potentially drawing more users into its social network, and thus into its products.
This also ties in with the idea of self-serving bias, which suggests people are more likely to favour information that portrays them in a better light. Google will choose the sites that are going to make it look good.
For example, a site may have relatively strong social signals coming from Google Plus (benefit); however, the site itself may offer an extremely bad user experience, resulting in a high bounce rate (cost). Google will therefore place the site further down the rankings, because the cost of its relationship with the site outweighs the benefits.
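The cost vs. benefit weighing described above can be sketched as a toy model. This is purely an illustration of the social exchange analogy, not Google's actual algorithm: the scores, the two example pages and the simple benefit-minus-cost formula are all invented for demonstration.

```python
# Toy illustration (NOT Google's actual ranking algorithm): weigh a
# hypothetical "benefit" (social signals) against a "cost" (bad UX,
# proxied here by bounce rate), per the social exchange analogy.
def exchange_score(social_signals: float, bounce_rate: float) -> float:
    benefit = social_signals  # e.g. normalised volume of shares/mentions
    cost = bounce_rate        # fraction of visitors who bounce straight back
    return benefit - cost

# Two invented pages: one with strong signals but bad UX, one more balanced.
pages = {
    "strong-signals-bad-ux": exchange_score(social_signals=0.9, bounce_rate=0.8),
    "modest-signals-good-ux": exchange_score(social_signals=0.5, bounce_rate=0.1),
}

# The page whose benefits outweigh its costs ranks higher.
ranking = sorted(pages, key=pages.get, reverse=True)
print(ranking)  # ['modest-signals-good-ux', 'strong-signals-bad-ux']
```

Despite its stronger social signals, the first page ranks lower because its cost (a high bounce rate) cancels out the benefit, which is exactly the trade-off the theory predicts.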
It is important to consider your relationship with Google when chasing rankings: what can you do to benefit them? How can you reduce the cost to them?
When creating a website, from an SEO perspective there are always a number of boxes we aim to 'tick': meta data, H1 headings and so on. On the surface this could come across as a purely mechanical process to portray relevancy, but do these things really make a site more relevant? No.
Attention theories suggest that humans pay attention to information which is more readily available, a point also transferable to search engine robots. Robots pay more attention when the previously mentioned boxes are ticked because it means the information is more readily available.
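To make the "readily available information" point concrete, here is a minimal sketch of how a crawler might pull out exactly those ticked boxes (title, meta description, H1) from a page, using Python's standard `html.parser` module. The class name `SignalExtractor` and the sample page are invented for illustration; real search engine crawlers are far more sophisticated.

```python
from html.parser import HTMLParser

# Hypothetical sketch: collect the most "readily available" on-page
# signals -- <title>, the description meta tag and <h1> -- from a page.
class SignalExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.signals = {}
        self._current = None  # tag whose text we are currently capturing

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name") == "description":
            self.signals["meta_description"] = attrs.get("content", "")
        elif tag in ("title", "h1"):
            self._current = tag

    def handle_data(self, data):
        if self._current:
            self.signals[self._current] = data.strip()

    def handle_endtag(self, tag):
        if tag == self._current:
            self._current = None

# Invented sample page with all the boxes ticked.
page = """<html><head>
<title>Blue Widgets | Acme</title>
<meta name="description" content="Hand-made blue widgets.">
</head><body><h1>Blue Widgets</h1><p>...</p></body></html>"""

parser = SignalExtractor()
parser.feed(page)
print(parser.signals)
```

When the boxes are ticked, the relevant information is sitting in predictable places and a parser this simple can find it; when they are not, the robot has to work much harder, which is the attention-theory point in miniature.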
Rational vs. Irrational
I previously believed that robots operated using a rational decision-making process, making assumptions about which websites users were seeking based on how many of the 'boxes' were ticked. The human decision-making process, on the other hand, is a very irrational one, subject to many biases determined by a large number of factors. This means that even when a decision has been made, it is often revisited on a regular basis to check that the conclusion is still correct. It was this that made me realise that perhaps robots aren't so rational after all.
Search engine robots visit sites on a regular basis, reassessing the information and the rankings of sites – to ensure they reached the right conclusion (just like humans). Additionally, as we discovered above, they are subject to some of the psychological laws which also govern human thought processes, suggesting that perhaps robot decision making isn’t as rational as first perceived.
So, should we be treating the robots as people?
There has been a growing view in the industry recently that SEO should be user-focused as opposed to search-engine-focused. As a robot's purpose is to deliver what the user is looking for, it would appear that the two go hand in hand: robots adopt our behaviour patterns to simulate the behaviour of the user, so satisfying the user also satisfies the robot.
However, as robots become more sophisticated at mimicking human behaviour, it seems valid to suggest that they will continue to adopt more of our psychological processes and behaviour patterns. So, by considering which areas of psychology the robots have not yet adopted, we may be able to predict their behaviour in the future.
Will robots become susceptible to the theory of persuasion? Will we be able to manipulate them by using loaded words, as in advertising? Or do robots already succumb to their own variation of this through the use of keywords? What do you think?