Those who seek the Holy Grail of information, accurate data, search high and low for the perfect source or system that will yield the most straightforward, accurate, and marketable data sets. If at first we cannot find good data, we simply test it, trash it, change our methodology for obtaining it, and run the gamut all over again. What we tend not to concentrate on, however, is the importance of testing data frequently to make sure it is accurate and ready to go.
The skillful trade we'll call A/B testing is something of an art, an internal competition for bragging rights, if you will. It pits two known adversaries, David and Goliath, against each other in an epic battle to verify that the data you collect will plug into "x" holes and render "y" results, and that your best marketing pitch or data structure is intact and operational. Yes, the art in marketing science offers hope for the flowchart reader, especially if you consider these eight rules of A/B testing before engaging your internal opponent.
Hypothesize
Ever wanted to start an argument inside your own head during the formation stages of your A/B testing campaign? You can, simply by stating a testable observation and then using proven marketing metrics to either refute or confirm it. For example, you could assume that infographics placed on your homepage will increase click-through rates. This statement acts as the basis of your test and serves as your measuring stick as you batter away at the hypothesis. Every A/B test should begin with a well-conceived hypothetical statement.
Establish A Singular Variable
Most hypothetical statements, including those made in marketing lingo, revolve around one challenged variable. Since the focus of your test is challenging that variable, every other element needs to remain untouched during testing. For example, if you are challenging that same infographic placement, you'll need to leave the surrounding content, links, and every other page element alone, since they don't factor into your original line of questioning. Remember, if you want to include secondary variables, you will need to form separate hypothetical statements and establish a different variable to challenge in each.
Form Success Measurement
Since you cannot test a hypothesis against itself, you need something 'bar setting': a clear success measurement by which you'll gauge the overall validity of the original statement. There should be one honest way to win this self-declared competition, so your A/B testing variable needs a grading system. In the case of the infographic argument, you could tie the success measurement to your best-converting items on the page, e.g. content, links, or sales graphics. This gives your hypothetical statement something concrete to be graded high or low against.
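The grading system can be made concrete in a few lines of code. Here is a minimal sketch, where the function name and all tallies are hypothetical, that uses conversion rate as the success metric and compares the challenger against the bar set by the control:

```python
def conversion_rate(conversions, visits):
    """Success metric: the fraction of visits that converted."""
    if visits == 0:
        return 0.0
    return conversions / visits

# Hypothetical tallies for the infographic test.
control_rate = conversion_rate(conversions=520, visits=10_000)  # page without the infographic
variant_rate = conversion_rate(conversions=610, visits=10_000)  # page with the infographic

# The hypothesis "wins" only if the challenger clears the bar set by the control.
hypothesis_supported = variant_rate > control_rate
```

Whatever metric you pick, defining it up front like this keeps both sides of the duel graded by the same yardstick.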
Significance Through Numerical Measurements
Another important factor in successful A/B testing is the volume at which you test your hypothesis, whether measured in impressions, clicks, or responses. The numbers should be large enough to support quantified comparisons and should never fall below what is necessary for the results to be reliable. You could run your test against 10,000 visits to count clicks on your infographic, for example, and use the resulting ratios to evaluate the hypothesis. The smaller your test size, the harder accurate measurement becomes, which defeats the purpose of testing in the first place. Gauge your overall test size against your daily or monthly traffic volumes.
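One standard way to check whether a difference at this volume is significant, rather than noise, is a two-proportion z-test. This is a general statistical sketch, not a method the article prescribes, and the click counts are invented; only the 10,000-visit figure comes from the example above:

```python
import math

def two_proportion_z_test(clicks_a, visits_a, clicks_b, visits_b):
    """Return (z, p_value) for the difference between two click-through rates."""
    p_a = clicks_a / visits_a
    p_b = clicks_b / visits_b
    pooled = (clicks_a + clicks_b) / (visits_a + visits_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visits_a + 1 / visits_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via math.erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# 10,000 visits per arm; the click counts are hypothetical.
z, p = two_proportion_z_test(clicks_a=520, visits_a=10_000,
                             clicks_b=610, visits_b=10_000)
significant = p < 0.05  # conventional threshold
```

With a small test size, the standard error in the denominator balloons and the same raw difference stops being significant, which is exactly why volume matters.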
To Split, Or Not To Split
Some find it more fruitful to send an entire swarm of traffic to one specific control group rather than splitting 50/50 between the isolated control metric and the hypothetical challenger. Whichever suits your test should be your final choice, since your top-performing item across the website is already your chosen measuring stick for success. You can also break the traffic into uneven percentages such as 60/40 or 80/20. Personally, I find an accurate test means exposing one resource, gauging the results, and then retesting with the same method against the other group. Your ultimate goal is to dethrone your champion through a witty hypothetical statement; keep the testing field as fair as possible and let the results naturally choose the actual champ.
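Whatever ratio you settle on, the split has to be applied consistently so that a returning visitor always lands in the same group. A common way to do that, sketched here with function and variable names of my own invention, is to hash a stable visitor ID into a bucket:

```python
import hashlib

def assign_group(visitor_id, variant_share=0.5):
    """Deterministically place a visitor in group 'A' (control) or 'B' (challenger).

    variant_share is the fraction of traffic sent to the challenger:
    0.5 for a 50/50 split, 0.2 for an 80/20 split, and so on.
    """
    digest = hashlib.md5(visitor_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 10_000  # stable bucket in [0, 10000)
    return "B" if bucket < variant_share * 10_000 else "A"

# An 80/20 split: the same visitor always gets the same answer,
# so repeat visits don't contaminate the groups.
groups = [assign_group(f"visitor-{i}", variant_share=0.2) for i in range(10_000)]
```

The hash makes the assignment both roughly proportional to the chosen split and repeatable, which keeps the duel fair across sessions.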
Randomly Selected Groups
Fairly selecting the A/B test groups allows more reliable measurements to be taken. Although not strictly necessary to prove your hypothesis, selecting subjects at random always makes the testing procedure more interesting and may open doors for future tests. Never choose based on bias or by throwing a dart at your wall; choose by the first letters of names, pick numbers in Excel between 1 and 79, and so on. This levels the competition and doesn't pull an exact favorite out of the hat to dethrone your hypothesis. You have the right to be creative here; use it to pick your own randomization technique for test group selection.
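If you'd rather draw a literal random sample than hand-roll a rule like first letters of names, the standard library can do it without bias. A small sketch, in which the subject list, group size, and seed are all illustrative (the 79 merely echoes the Excel example above):

```python
import random

# 79 candidate subjects, echoing the pick-a-number-between-1-and-79 idea.
subjects = [f"subscriber-{n}" for n in range(1, 80)]

rng = random.Random(42)                   # fixed seed so the draw is reproducible
test_group = rng.sample(subjects, k=20)   # 20 subjects chosen without bias
control_group = [s for s in subjects if s not in test_group]
```

Fixing the seed lets you document and replay exactly which subjects landed in which group, which pays off later when you write up the results.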
Don't Over-Test
It's fine to test variations, yet over-testing your hypothesis can produce inaccurate data or muddled test results altogether. The more tightly you scope your testing routines, including narrowing the number of actual physical tests that can be performed, the more manageable your overall procedure becomes. Common sense will stop you from making most critical testing errors, but you should also stay focused on the single data set you are actually challenging. Oversaturating yourself with tests can lead to undesirable results when the time comes to measure your tallies against the behemoth you're calling the 'standard.'
Accurately Documenting Results
Fear not, paper pushers: software can ease the load of documenting the final tallies of your A/B testing campaign, although even with software assistance, this step is still neglected in some companies. One excellent way to document your results is to write a quick blog post covering your hypothetical statement, the subjects used to run your test, the response volume you set as the bar for the test, and the final results. End the post with potential remedies or suggested follow-up tests, or open the floor for debate to collect outside insight into your hypothesis.
Final Words Of Wit
There is little we can call concrete in today's marketing world, especially when variances in campaigns, along with the methods for tracking them, keep us on our toes most of the day. Great marketing champs will always run A/B testing at every major level, including search engine PPC advertisements, because the lowest cost and highest ROI are the goal of every professional across the world. When testing, keep these eight tips in your back pocket and challenge yourself to a marketing duel, pitting two of your best-converting campaigns against each other. Form your hypothesis, bust out your test subjects and control groups, and see if you can improve what you believed was already perfect.