Thursday 29 March 2012

Marketing's belief in social: Out of step?

A survey from Pulse Point Group and The Economist Intelligence Unit has just dropped across my desk, looking at how effective social media is perceived to be among 329 respondents from US and Canadian companies.

Overall, the results point to a strong faith in social. I say faith because, unfortunately, there's not much evidence in the survey that anybody can actually prove social is tremendously effective.

These respondents have a faith so strong that, to be honest, I'm suspicious of a self-selecting panel; people who believe strongly in social are just the sort to volunteer themselves for the survey...

One chart jumped out as badly in need of having its conclusion turned around, to see how it looks under the opposite interpretation.


This chart illustrates the leading current and expected future advocates of social, by department. It's led by marketing. No surprises there.

C-Suite is senior management for us Brits, by the way.

The title of that chart puts a very strong interpretation on the data. HR, Operations and Finance are miles behind marketing as advocates for social, and the headline poses the question:

"Is there a problem with Finance and HR?"

I'm going to look at it the other way.

"Is it a problem, that marketing is so far ahead of the rest of the company in its belief in social effectiveness?"

Asking if there's a problem with HR and Finance implies that Marketing is correct and that the other departments just haven't caught up yet.

Look at it this way.

Marketing's belief in social is more than twice as strong as senior management's belief.

Marketing's belief in social is ten times stronger than the belief of those whose job it is to understand how much money the company makes and where that money comes from.

At the very least, if marketing are right about social, then they've dramatically failed to persuade finance that they're right.

Wednesday 28 March 2012

Adblock penetration has doubled in three years

Three years ago, I did some back-of-a-fag-packet maths to estimate the penetration of Adblock amongst UK internet users. It's not easy to get data on, which surprises me because it's potentially pretty important. Yes, you don't pay for internet ads that are never served to users, but Adblock has the potential to severely limit the reach of an online campaign. You'd think at least the SEO community would want to know how many people run Adblock, because those people don't see paid search ads, which makes SEO all the more powerful.

For the uninitiated, Adblock sits in your internet browser - usually Firefox or Chrome - and blocks the requests that are used to serve advertising. You see a website as normal, just with no ads on it, and the mechanics are clever enough that it doesn't leave big white spaces on the page where the advertising would normally be. The site's regular content is flowed into the gaps left by the removed ads.

If you ask me, I think it's brilliant. Shhhhhh, don't tell anyone.

Three years ago, my best guess based on Firefox users running Adblock was a total UK internet user penetration of around 2.2%. Not really enough to worry about back then, but what about in 2012?

Since 2009, Firefox and Chrome have continued to grow, to the point where, between them, they account for very nearly 50% of browser usage.


Both of these browsers can run a variety of blocking software and so, potentially, we're looking at a lot more people avoiding adverts than we were three years ago.

This is where the maths gets a bit vague, but stay with me.

It's hard to get figures for how many users run blocking software, but Mozilla do share daily active user numbers for the Firefox add-on.


Globally, we've got 15m daily active users of Adblock on Firefox.

And in mid-2010, we had 120m users of Firefox itself. These are the most recent stats I can find, unfortunately. If we project the trend forward to 2012, we should be at about 150m active Firefox users by now.

Globally, that would give a penetration within the Firefox user base of around 10%.

Let's assume Chrome is about the same and is also running at around 10% penetration. We're going to have to assume that one, because I can't find any stats beyond "1,000,000+" users for Adblock on Chrome, via the Chrome web store.

We saw earlier that Firefox and Chrome together account for 50% of browsers. I'm going to make a further assumption that virtually nobody using Internet Explorer has an ad-blocking plugin installed. Plugins do exist, but if you're using Explorer, then you're almost certainly not the sort of user who's found out about Adblock. I'll leave Safari and Opera to one side too, in the interests of our estimate being deliberately aimed towards the low end. Adding to the low-end nature of this guess are a variety of other plugins, which do a similar job to Adblock and which I haven't included.

So, 50% of browsers are Firefox or Chrome, and 10% of those users have Adblock installed = 5% total penetration. Ish.
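
If you want to check my working, here's the whole guesstimate written out as a few lines of Python. Every input is just one of the rough figures quoted above, so treat the output accordingly.

    # Back-of-a-fag-packet Adblock penetration estimate.
    # All inputs are the rough figures quoted above - guesses, not data.

    adblock_firefox_users = 15e6  # Mozilla's daily active users for the add-on
    firefox_users_2012 = 150e6    # 120m in mid-2010, trend projected forward

    # Penetration within the Firefox base: ~10%
    firefox_penetration = adblock_firefox_users / firefox_users_2012

    # Assume Chrome matches Firefox, and that IE, Safari and Opera users
    # block nothing at all (deliberately low-end assumptions)
    ff_chrome_browser_share = 0.5  # Firefox + Chrome = ~50% of browser usage

    total_penetration = ff_chrome_browser_share * firefox_penetration
    print("Estimated penetration: {:.0%}".format(total_penetration))  # ~5%. Ish.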

Using a similar method to 2009, we've got a more than doubling of usage, from 2.2% then to 5% now. It's still not so big that you'd panic, but it's starting to become a significant minority of internet users who don't see any of the paid-for advertising that brands throw at the web.

See you back here in another three years for more fag packet maths and the conclusion that it's broken the 10% mark?

Monday 26 March 2012

It's never been about the data

A lot of people are getting very excited about data again. Journalists in particular seem to think they've spotted a new source of stories and are jumping on the 'Data Journalism' bandwagon.

The latest article to drop onto my Twitter feed - and the one that's prompted this post - is 'Data is the new black', accompanied by the now obligatory infographic that's not an infographic.

This latest surge of interest in (Big) Data is slightly different to the ones we've seen before. In the past, we've heard huge promises for what data analysis could deliver and then, very often, nothing at all was delivered. To pick one example from the marketing world, 'Project Apollo' was rather expensive and never really made much progress. Data analysis projects often got bogged down in the data assembly phase (they still do), without managers ever seeing much beyond PowerPoint decks that prophesied the arrival of data nirvana. Data nirvana being permanently around six months away, once we've sorted out a few teething problems. And could we have £40k for another database analyst to fix those teething problems, please?

This time around, data is delivering some output. Recently, The Guardian had a very pretty interactive illustration of poverty rankings across UK regions that would have been hard for a newspaper to put together even a couple of years ago. Tools like Google Fusion Tables and Tableau are making that data assembly phase more accessible and making it quicker to throw output at an audience. It looks like we might be getting somewhere.


The Guardian's work shows exactly where we're getting to, though, and it's not quite the brave new world that some have promised. When you complete a major piece of analysis, you very often prove the answer that you were expecting in the first place. This isn't just true of social science; it works for classical scientific research too.

Think about what a scientist does, away from the media spotlight of a genuine breakthrough:
Is this a cure for the common cold? No. Is this? No. What about this? No. This one? Still no. It's not that we should stop looking, but you can be 99% sure what the answer's going to be before you start.

The same happens with social science data like economic statistics and population demographics. When you examine them, largely, you prove what you already suspect. The Guardian's proved that the North of England is more deprived than the South. We knew that.

Examples of genuine revelations from marketing databases are hard to find and those that do surface are often dubious. The legendary nappies and beer example (diapers and beer if you're American) states that database analysts working for a major retailer noticed nappies and beer were often sold together. The story goes that young male parents often buy nappies on a Friday night and pick up a pack of beer at the same time, so cross-marketing these two products is extremely effective. Take your pick on which retailer came up with it - Wal-Mart, Tesco, ASDA - it's not actually true.

What data does let you do is make a case more strongly. Data analysis helps us to move the foundations of a discussion from opinion to fact, so that the discussion can move on to what we do about those facts. In marketing, if there's a widely held suspicion that a piece of advertising doesn't work, then it almost certainly doesn't, but very often it's not until you prove it that the offending campaign will finally be pruned from the schedule.

It's never been about the data; it's about the question. Data can provide a stronger answer to a question than opinion alone, and so if you ask the right questions, it will help to make a stronger argument. What it will never do is proffer insights of its own accord, and it will rarely shock you with its conclusions. Those looking for epiphanies from analysis of Big Data are likely to be disappointed.

Tuesday 20 March 2012

Wallpapering a new home

What address did you enter to get here? Now look at the address bar...

That's right, Wallpapering Fog has a new home! No more blogspot, we're a shiny new .co.uk. Watch those Google rankings climb! Or maybe not.

For the record, Wallpapering Fog's new home is www.wallpaperingfog.co.uk. The old blogspot address and RSS feeds will still work though.

Thanks are also due to HomeBiss, where I found a really weird little hack which gets rid of the ugly navbar at the top of blogspot blogs. It seems more like an Easter egg than a hack, so we'll have to see how long it stays gone, but for now I'm one happy blogger.

You can't fix a bad product with marketing. Not even if you're Disney.

Disney's latest effort, 'John Carter', looks set to be one of the biggest box office flops ever, losing in the region of $200m. Who'd have thought that a plot about an American Civil War veteran, transported to Mars to fight in another (presumably more Martian) war, wouldn't hang together as a film? Unbelievable.



Once the film was complete, Disney must have realised that they'd got a problem. Rough cuts of films are tested to see if audiences like them, so that they can be tweaked into a better final product. If feedback from that early research looked anything like the Rotten Tomatoes reviews of the film, then Disney will have known pretty early on that they had a failure on their hands.

Take this contribution from the Guardian's Peter Bradshaw.


"I felt as if someone had dragged me into the kitchen of my local Greggs, and was baking my head into the centre of a colossal cube of white bread. "

Like Private Garlick in Good Morning Vietnam, "I have no idea what that means, Sir, but it sounds pretty negative to me."

Depending on which source you believe, Disney has spent somewhere between $50m and $100m marketing John Carter, almost certainly based on - at best - lukewarm pre-test results.

This never, ever, works. It's a golden rule of marketing that you cannot persuade a lot of people to buy a bad product by spending more on advertising. The first person who falls for it will tell all his mates that you lied and you've just wasted $100m.

In the end, this is a sign of whether a company really believes in research and is willing to follow through on the consequences of what it knows. Any company that throws this amount of marketing money at a bad product, at its core, doesn't want to believe the research that it commissions. Disney hasn't just wasted $100m on marketing; it's wasted a smaller amount finding out ahead of release whether anybody likes its films, too.

Friday 16 March 2012

You can't measure the ROI of Social Media. Stop trying.

I could have written this post under the title "How to measure the ROI of Social Media." There are a lot of pieces scattered across the internet already with that title, but it seems to be the done thing to write a 'How To' and then not actually explain how to do it at all. On Wallpapering Fog though, we try to avoid misleading headlines - if we didn't, the site's traffic figures would probably be higher.

If you'd like to read some posts that claim to tell you how it's done, try Google. Or Twitter. There's even a book.

So, this post will not explain how to measure social ROI.

What it will do is explain why a marketing analyst, a person whose job it is to measure the ROI of all sorts of media, says it can almost never be done. To clear up any confusion, that analyst would be me.

Let's start with what ROI means, because there have been some quite determined efforts to corrupt the term and it really does mean something specific. ROI stands for "Return On Investment" and that means money. If an activity has a positive ROI, then it makes more money for a company than it costs to run. Easy, right?

Unfortunately, this means that cost per follower, cost per share, cost per like, cost per view and any other easy-to-generate social metrics you might care to name are not a measurement of ROI. "Return" in "Return on Investment" means financial return; it means £ or $ and that's all it ever means. I'm not saying cost per follower is irrelevant (we'll come back to that another time) but it is not ROI. Saying that it is will very likely wind up the finance department of whichever company you're talking to, even if you manage to sneak it past the marketing director.
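
To nail the definition down, here's the whole calculation in a few lines of Python. The figures are invented purely for illustration; it's the shape of the sum that matters.

    # ROI is money out versus money in - nothing else.
    # Both figures below are invented, purely to illustrate the sum.

    campaign_cost = 100000.0       # content, staff time, agency fees
    incremental_profit = 150000.0  # profit from sales that would NOT have
                                   # happened without the campaign - this is
                                   # the bit that's nearly impossible to prove

    roi = (incremental_profit - campaign_cost) / campaign_cost
    print("ROI: {:.0%}".format(roi))  # 50% - the campaign paid for itself

    # Note: followers, likes and shares never appear in this calculation.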

To prove that Social Media makes more money than it costs, we're going to need to do two things: work out what it costs, and then show that in the end we sold enough extra product as a direct result of the activity to pay for the campaign. These must be sales that would not have happened without the social media campaign.

The first bit's easy, if a little unorthodox vs. traditional media channels. The cost is the cost of any content you need to create or buy, plus the time of the staff who keep the social presence running, plus any fees you have to pay agencies. That might be quite a bit of money; viral (and effective) doesn't necessarily mean cheap. You might need to pay for some boffins and a load of tech, like Mercedes did recently to make cars disappear...



OK, we've spent some money and made a noise in a corner of the internet. Now we come to the hard bit: did a social presence persuade people to buy more of our product?

Let's park the idea that our social efforts may not have sold anything at all and just assume that they did persuade people to buy. I don't want to get into a debate about whether social works or not as it doesn't really matter for this post. Social could be tremendously effective and it still wouldn't be measurable.

As an example, we'll take an imaginary company - make it a car manufacturer - that's taken the plunge into social, with a Twitter account and a Facebook page for their brand. I've picked a car manufacturer partly because measuring any marketing impact on car sales is pretty difficult, which means social ROI measurement has a definite challenge on its hands.

It's difficult to measure the impact of marketing on car sales because people usually take several months to decide which car to buy, so we're trying to attribute a sale now to some marketing activity that happened three months ago. If the buyer has seen loads of marketing messages over the past three months, then which one do you choose?

To measure how much any activity has increased sales, we look for uplifts. These might be big uplifts that are easy to measure - like half-price promotions, which put a big spike in sales - or they might be smaller uplifts that need econometric models to find. As long as the uplifts are there (essentially, if advertising achieves anything...) then we've got a chance of measuring them and working out what an activity did. It really helps if the uplifts happen at roughly the same time as the marketing campaign, which again, is why cars are difficult.
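
For the curious, 'econometric models' in this context usually means something along the lines of the regression sketched below: sales regressed against media activity, with the media variable 'adstocked' so that each week's advertising carries over into the following weeks. It's a toy sketch on invented data - the decay rate, the coefficients and the data itself are all made up - but it shows the mechanics of hunting for an uplift.

    import numpy as np
    import statsmodels.api as sm

    # Toy data: two years of weekly sales with a baseline, some noise, and
    # an uplift driven by (decayed) media spend. All numbers are invented.
    np.random.seed(0)
    weeks = 104
    media_spend = np.random.uniform(0, 100, weeks)  # e.g. weekly spend in £k

    # Adstock: carry a share of each week's media effect into the next week,
    # to mimic people making their minds up slowly (the car-buying problem)
    def adstock(x, decay=0.8):
        out = np.zeros_like(x)
        for t in range(len(x)):
            out[t] = x[t] + (decay * out[t - 1] if t > 0 else 0.0)
        return out

    media_adstocked = adstock(media_spend)
    sales = 1000 + 2.0 * media_adstocked + np.random.normal(0, 50, weeks)

    # Regress sales on the adstocked media variable; if the model is well
    # specified, the media coefficient is the uplift we're looking for
    X = sm.add_constant(media_adstocked)
    model = sm.OLS(sales, X).fit()
    print(model.params)  # intercept ~1000, media coefficient ~2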

A thought experiment with social media, though, says we're going to be in trouble straight away.
How might social media work? And if it's working, what would the uplifts look like?

To begin with, social media could sell our product to completely new customers who have never thought of buying us before.

Which is tricky. If these customers don't consider our product right now, then what are they doing following the brand on Facebook? Most people who follow us and consume our content are probably already customers. This moves the measurement from 'can I see new customers being won?' to 'can I see existing customers not leaving?'

Now we're not trying to measure a spike in sales; we're trying to measure something not happening - existing customers not leaving. We've got no chance.

If social media was a fantastically effective way to prevent customers from leaving, we wouldn't see a spike in sales that coincided with activity on the Facebook page; we'd see a customer base that was a little less likely to defect to a competitor. We'd have almost no chance of measuring that.

Then there's a second problem, which puts the final nail in the social measurement coffin. If social media works, then it's reasonable to expect that it will work slowly. We've already said for this example that people decide slowly which car to buy. Social persuades with a steady drip of content, and so it's slowly changing that decision (if it works - I'm not an evangelist for social, I'm just pointing out the measurement difficulties...)

Crucially for social measurement, there is no change in sales that you can point at; no sudden step upwards and no short-term spikes. If social was hugely effective (please do note the 'if') then you'd see sales start to trend slowly upwards some time after you started to invest in it. You wouldn't be able to blame that upward trend definitively on your Facebook efforts, because it could have been caused by many different things. Life's never quite as simple as beginning a Facebook page in January and then, six months later, being able to point at a mysterious upward trend in sales that starts from that date.

All of which is why I don't claim to be able to measure the ROI of social media and am deeply suspicious of those who say they can. Especially after having read quite a few pieces titled "how to measure the ROI of social media".

That's not to say there's carte blanche to go social crazy with the client's marketing budget. There are good metrics and a framework for social that give you the best chance of making a good investment, and once we park ROI, we can have a sensible conversation about what they might be. Maybe next post...