SEO Testing: In Defense of Informal Experiments and Consensus Views
Is SEO an Art or a Science? No.
The art-versus-science question comes up so often because it applies in so many situations, yet most such questions defy binary answers. The world is often inexact and hard to quantify, and the world of SEO (search engine optimization) is no exception. Those who would apply only the most rigid scientific methods, excluding all other (less exacting) means of data collection in order to understand ‘how Google works’, may be blinding themselves to a broad range of ranking factors, essentially trading practical gains for procedural perfection.
Disclaimer: While the study of search engine ranking logic and how best to optimize a website (aka SEO) is an intense interest of mine (and a revenue-driven one as well), it is not my profession, nor do I consider myself an expert. The algorithms I have worked with professionally throughout my career are not those of Google, but rather those used by banks, institutional brokers, asset managers and hedge funds to automate the trading of stock, options and futures orders in the global financial markets.
Market Analyst (not Scientist)
This art versus science question could certainly be asked within the finance and trading space. That may seem odd, given that market analysis is such a data-intensive and seemingly quantifiable activity.
But the reason we cannot exclusively apply ‘hard science’ in the markets is because market movements are the result of so many major and minor factors, some disparate and others interrelated. There is simply too much going on to isolate a factor and draw conclusive results through repeatable experiments. Single-variable testing is not really practical or possible in the equity and derivatives markets.
Add to the complexity challenge above the fact that the financial markets are ultimately driven by human or crowd behavior. Taken together, it’s clear that interpreting and predicting market movements — consistently making winning trades — cannot be done using only data and conclusions derived from strict scientific testing.
Doing effective SEO consistently is no less challenging. Given the complexity and ever-changing nature of search algorithms, ranking high in the SERPs — consistently winning at SEO — cannot be fully achieved by applying only the data derived from rigid scientific test methods. Single-variable testing is possible in some cases, but other methods, data, tools and observational inputs are essential in digital marketing.
It is also worth emphasizing that just because a hypothesis defies strict scientific testing does not mean there is no benefit to systematic data collection and analysis. Leveraging accurate data and leading-edge tools, and applying rigorous analysis and logic, is essential to success in either of the above disciplines. But doing effective market analysis or SEO over time requires a holistic approach.
Hard to Boil the Sea of SEO
There are suspected ranking factors that can be isolated and tested using strict scientific methods but there are many that cannot. The consensus view puts the number of Google ranking factors at over 200; some suggest that the interplay of these creates distinct composite factors, and the actual number is effectively 500+.
With so many simultaneous and interrelated factors at play, and the data noise resulting from live (not lab) conditions, single-variable testing can only reveal a fraction of what an SEO practitioner needs to do the job. More broadly, the traditional scientific method as it is generally understood (seeking to resolve a testable hypothesis) is not enough.
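The noise problem can be sketched with a toy simulation in Python (purely illustrative; the score model, effect size and noise level are assumptions, not measurements of anything Google actually does):

```python
import random

def simulate_rank_test(effect=2.0, noise_sd=20.0, n=30, seed=0):
    """Toy single-variable test: each page's 'ranking score' is the
    tested factor's true effect plus Gaussian noise standing in for
    the ~200 other factors that cannot be held constant under live
    (not lab) conditions. Returns the observed mean difference
    between the treated and control groups."""
    rng = random.Random(seed)
    control = [rng.gauss(0.0, noise_sd) for _ in range(n)]
    treated = [effect + rng.gauss(0.0, noise_sd) for _ in range(n)]
    return sum(treated) / n - sum(control) / n

# With heavy noise, the observed difference can badly misstate the
# true effect (2.0 here), or even flip its sign, from run to run.
print([round(simulate_rank_test(seed=s), 1) for s in range(5)])
```

When the noise contributed by uncontrolled factors dwarfs the effect being tested, even much larger samples struggle to separate signal from chance, which is the practical situation described above.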
I would submit that there are three primary reasons why the scientific method alone is inadequate as the sole foundation of an SEO strategy.
The Algorithm is a Moving Target
The first involves the rapid rate of algorithm changes and the staggered fashion in which they are deployed. By most estimates Google pushes out changes of some sort at least weekly, making any test results potentially stale soon after they are produced.
Also, the oft-seen variance in behavior sometimes attributed to rolling deployments (by region? IP range? something else?) means that there is never just one current ‘version’ of Google to test against. At any given moment one person’s search (or test) outcome might be the result of materially different algo logic than the next person.
There is just no way to know whether the few Google servers a test happens to touch are representative of the Google stack that is more broadly deployed.
Search Gets Personal
Which brings us to the second reason…personalized search (the user’s search history and what Google does with it).
Personalized search was introduced in 2005; at this point the vast majority of Google searchers globally are being served results that are different both from one another and from the ‘clean’ SERP. This creates significant variability across the searcher population, potentially reducing or nullifying the practical value of results derived from isolated testing (depending on the factor in question).
Testing on data derived from sanitized (non-personalized) experiments surely does have value. But we should not delude ourselves into thinking that the majority of users see precisely what our SERP trackers see.
Content and Context On The Rise
Lastly, and perhaps most significantly, Google increasingly depends on artificial intelligence agents, and topical and semantic factors to better judge the relative quality of site content, among other things.
As exact-match keywords and backlink counting give way to more fluid combinations of entity authority, page relevance, user engagement and social signals — all being ‘assessed’ by a learning, evolving machine — the usefulness of isolated, single-variable test results (where they are possible at all) will inevitably diminish.
Lab Dabbling and Happy Accidents
So, why bother with such testing? Because the information it provides can definitely help. More data is usually better. It is a tool in the toolbox which can be useful if applied correctly.
But it’s just one tool. And anyone who claims that any one method or single input alone is the best or only way to “know” what makes a site rank on Google is deluding themselves or misleading others (or both).
A key point to consider here is that effective experimentation need not be limited to single-variable testing and the so-called scientific method. Many great “scientific” discoveries didn’t start with a theory, a question or a problem. They arose from intellectual flights, laboratory dabbling and sometimes even straight up accidents.
“Right Way” Versus Results Way
Another concept from the finance and trading industry worth considering here is opportunity cost: the benefit or gain an investor misses out on by choosing one alternative over another. Those who disqualify data simply because it was collected through supposedly non-scientific methods potentially short-change themselves (and their clients).
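As a trivial worked example (the figures below are hypothetical), opportunity cost is simply the return of the best foregone alternative minus the return actually realized:

```python
def opportunity_cost(chosen_return, alternative_returns):
    """Opportunity cost: the gain from the best foregone alternative
    minus the gain actually realized by the chosen option."""
    return max(alternative_returns) - chosen_return

# Hypothetical: picking a 5% strategy when an 8% one was available
# leaves roughly 3 percentage points of return on the table.
print(opportunity_cost(0.05, [0.03, 0.08, 0.02]))  # ≈ 0.03
```

The analogy in SEO: discarding a usable (if imperfect) data source is itself a choice, and it carries the cost of whatever gains that data would have enabled.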
Purists who ignore a large number of factors simply because those factors cannot be “scientifically” tested are at a serious disadvantage. Most SEO practitioners seem to agree; they incorporate into their workflows the suspected factors that defy conclusive, single-variable testing. They experiment, observe, document results and then iterate as needed.
As to the original question, is SEO a science? It appears to my (non-professional) eyes that many characteristics of the search engine ecosystem are ultimately not testable using strictly scientific methods alone. So, perhaps SEO is not a science, even if certain self-described experts like to say the word “science” a lot.
The Art of SEO
Unfortunately, many people seem to instead approach the task of SEO more like artists, waiting for inspiration (i.e. the latest rumors or ‘expert’ claims) and feeling their way through the process. To be clear, this is also a bad way to approach one’s ranking efforts.
SEO should not be considered an art either. However, creativity and intuition can definitely play a role, and to completely ignore the collective knowledge pool and the stream of information regularly produced by the SEO and online marketing community is foolish. A ship’s captain knows the tides and uses maps, but never stops looking to the horizon for changing weather conditions.
SEO is…Not One Thing
The results from single-variable testing of those factors that do lend themselves to it, and the data derived from other ‘formal’ scientific methods, are certainly useful. The value of other systematic data collection and analysis, correlation studies and performance tracking is also undeniable. Finally, the consensus views of thought leaders, algo update ‘field reports’ and first-person observations provide essential context for the rest.
The correct approach, then, appears to be a blend of systematic and scientific processes, inductive reasoning based on stats and soft data, and observation-based field testing (see what sticks). How these are best weighted and combined into a scalable and reliable process is, of course, a whole other debate.
But, taken together, this suggests SEO is best performed neither by artists nor by scientists, but by artisans. SEO is a craft and, as such, practitioners should aspire to be skilled craftspeople who have mastered their trade both by learning from others and by doing. They grow and advance by using the tools, data and best practices available to them. And they create great works by applying these with care and attention to detail (mindful that each site and SERP is unique).