Monday, 15 October 2012

A clash of broadcasting worlds

Did you watch Felix Baumgartner's record-breaking jump yesterday? It was amazing.


Watching live on the Red Bull-branded website at www.redbullstratos.com, I thought we could be seeing a new era in live broadcasting. After all, if you own the content, then why get somebody else to broadcast it for you? Run your own station, for as long as you need it, on the web.

In an era of live streaming, why would an event like Neil Armstrong's moon landing still be broadcast on TV?

There is a very good reason.

Red Bull's stream (via YouTube) attracted an audience of eight million. That would be very impressive if it was a UK audience, but it isn't. It's a global audience.

Eight million globally is, frankly, a bit crap. It's a YouTube record, but that's not the point. ITV were showing an hour-long Coronation Street special at the same time as Felix was hoping he'd packed his 'chute properly, and that did 6.25m viewers, just in the UK.

So why do you want a regular TV broadcaster for your content? Simple. Audience reach. It's the same reason advertisers want TV ads, even if they've truly bought into social media.

I'm assuming one of two things happened with Red Bull Stratos. Either Google bunged Red Bull some fairly serious cash for the exclusive rights to live stream via YouTube, or regular broadcasters just weren't interested, because they couldn't be given a predictable prime-time slot for the jump. "Sometime in the next week if the weather's OK" doesn't really work for an ITV scheduler.

In one (fabulous) event, we've got the best and worst of new and old media. Only old media could have got that footage in front of its true potential audience. But TV is too inflexible to make the scheduling work for an event as unpredictable as the Stratos project.

In the UK at least, it's a shame we didn't have a dedicated digital TV channel that could be activated at short notice and then trailed on a major network. Press the red button to watch a nutter jump from space. The technology is there and it worked really well during the Olympics. For a moon-landing-type event (Mars landing?) in 2012, I'm betting that's what would happen.

Unfortunately, the broadcaster with that capability is the BBC. With Red Bull logos everywhere? Never going to happen.

We're not quite living in the future yet. If you missed the footage yesterday because you didn't have one eye on Twitter, then here's the Austrian with the big cojones in all his high-altitude parachutey glory. Enjoy.


Friday, 5 October 2012

Nobody needs hourly reports. I now understand why they want them.

We build a lot of business dashboards at MediaCom, to track advertising performance, what your competitors are up to, your latest sales figures, that sort of thing.

I'm a firm believer that nobody needs to see those kinds of figures daily, let alone more frequently than that. You can't learn very much when your reports come in that often. There's a good chance you'll focus on a misleading figure that's not part of a general trend, and what little you can learn, you can't react to. If one of your products is flying out the door, great! What are you going to do about that this afternoon?

Other than feel really good about it.

Which is what I discovered this week.

I set up my first ad campaign this week, for an online shop selling paragliding t-shirts, mugs and gifts (pump those SEO terms...), and I've instantly become obsessed with the traffic stats. Now that there's money at stake, it's even worse than monitoring Google Analytics for this blog.





So I get it. I want daily reporting too. If we can automate it, you can have it (because whatever the benefits, manual daily reporting is still a bloody silly idea.)

You still can't do anything useful with it. You're still going to need weekly and monthly summaries to understand what's actually going on. But very frequent reports are a huge motivator - they remind you that what you're doing is actually out there, in the world, and people are buying it. And that's important.

I didn't understand that until, in a very small way, I turned into an advertiser. It's been a great reminder that before you argue with what a client wants, you should walk in their shoes.

Friday, 14 September 2012

Planning a Big Data holiday

Friday analogy time. Bear with me...

Imagine you're planning a holiday. Or rather, you're deliberately not planning a holiday. You know you want a break from work but you're not sure where you'd like to go, or what you'd like to do.


Imagine also, that you're a marketer who's got very over-excited about the concept of Big Data.

So here's what you do.

You buy some really big suitcases and pack all the clothes you own into them. After all, you don't know if you're going to the beach or the Arctic yet, so you'd better pack everything from Speedos to ski gear.

Will there be accommodation when you get there? We don't know yet. Best put in a tent. Or two, one for summer and one for winter.

And off to the airport, to investigate flights!

In the airport, you realise you also need lots of toiletries and medical supplies for your trip and another suitcase to store them in. You buy most of the stock in Boots and store it in your new suitcase, because you still don't know where you're going or what you might need. The mosquito repellent bottle leaks everywhere, and DEET is nasty, smelly stuff, so you have to buy lots of things twice.

You pick a flight and head off on holiday. The flight was expensive, because you bought the ticket at the airport, rather than in advance. You'll be paying off your excess baggage fees for the next ten years.

Your hotel at the other end is expensive too because you didn't arrange a cheap deal before you left.

Finally, despite your hotel room being stuffed full of suitcases that you didn't need to bring and your bank balance having taken a hammering, you have a really fantastic holiday. A job well done.

You've probably guessed where I'm going with this one, but (not) planning a holiday like that is pretty much the approach we're taking when we say "let's assemble loads of data and wonderful things will happen."

They might. If you don't run out of money along the way.

But if you decide where you want to go first and then build what you need to get there, you'll build something faster, better, more useful and for a hell of a lot less money.

Big Data is not an end in itself, it's a means to an end. If you don't know where you're going yet, then stop, work that out, and then go looking for what you'll need to get there.

Monday, 30 July 2012

Does the marketing industry bury bad news?

This article turned up on Adage last week. It's a proper, well thought out, scientific piece of marketing research, with an extremely important conclusion.

So why haven't you seen it anywhere?

Well, unfortunately, it strongly suggests that most of the clicks we see on display advertising may be just noise. Accidents. Slips of the mouse. There aren't that many clicks, you see - click rates on display ads are around 0.02% to 0.04% - and with a click rate that low, a lot of them could easily be flukes.

Read the Adage article. It's important.


You didn't read it, did you?

OK, quick summary. The authors ran blank display ads and got click through rates on them that were significantly better than industry benchmarks for branded display ads that don't carry a call to action.

Stop trying to think up reasons why that might happen which wouldn't cast any doubt on the effectiveness of many display campaigns (which is exactly what most commenters on the Adage article immediately did). The authors even covered the possibility that people might click a blank space out of curiosity, by asking those who clicked why they clicked. Like I said, it's a proper, well thought out, scientific study.

Adage ran the story, which is great. Adage do like a bit of controversy. As far as I can tell, the industry has buried it. If it had definitively proved that display ads made significant numbers of people go and search for products on Google, you can bet your life you'd have heard about it - it would be all over Twitter. Or if it had proved that Facebook advertising had a massive ROI? We've all seen those studies (and they're not very scientific...)

If we're going to take marketing measurement seriously, we need to accept that sometimes ads will be shown not to work as hard as we hoped and the studies that return those results shouldn't be buried without a trace. The authors here are also very careful not to be entirely negative. They're most interested in the fact that we use clicks to tune display placements, but those clicks look largely random. They don't try to make the step to any ROI implications.

Going back to item #3 on last week's top ten, we're going to see this result again. Third time around, maybe the industry might decide it can't just be ignored.

(Adage article originally found via @AdContrarian)

Tuesday, 17 July 2012

Ten rules of marketing analysis

It's been a while since we had a top ten on Wallpapering Fog. Number one on this list came up (again) today, so let's have Wallpapering Fog's top ten rules of marketing analysis.
  1. If you think you've discovered a radical, unexpected, new result that nobody's ever noticed before, your data is wrong.

  2. More complicated analysis can help you measure your marketing much more accurately. But if simple analysis can't find any impact at all from a marketing campaign, then there probably wasn't one.

  3. Nobody ever abandons a campaign that doesn't work, the first time that you prove it doesn't work. Three is the magic number.

  4. ROI means return on investment and it's measured in money. Not clicks, likes, web traffic or re-tweets.

  5. If you're not selling ice-cream, then the weather isn't responsible for your 50% year on year sales decline. Even Noah needed food and clothes.

  6. Never trust a piece of research that was funded by a media owner.

  7. Ten thousand respondents is plenty. A million is very rarely necessary - it just takes much longer to open the spreadsheet. You only need a spoonful of soup to know what the whole bowl tastes like.

  8. That means the BARB TV ratings panel is fine. Leave it alone, online people.

  9. When forecasting next year's sales, assume that your new adverts aren't any better than your old adverts. I'm sorry if that's depressing, but it's almost always true.

  10. The world is never changing so fast that you can't learn something from the past couple of years. People's basic motivations haven't changed since the dark ages.

Monday, 25 June 2012

Joe Hart officially named Twitter's man of the match.

England vs. Italy, 24th June 2012...

88,142 tweets mentioning "England"...

Analysed for positive or negative sentiment and then used to rate each player's performance.

The result? Joe Hart was England's man of the match based on tweets that mentioned player names. Ashley Young was, erm, less good.

Instead of the usual static infographic, here's a Tableau dashboard! Don't forget to click on the different pages across the top. Go here for overall England ratings, player scores and interactive player performance over time.



A few interesting bits that popped out for me...
  • Rooney's performance was nowhere near his pre-match expectation (check his time-line)

  • We all got progressively more depressed about England as the game went on. Have a look at sentiment over time and compare the pre-game level with the decline over the next two hours.

  • We were happy to make half time and greeted the second half with a big COME ON ENGLAND! Then went back to getting steadily more depressed again.

  • Cole's been harshly treated for that penalty miss. He scores a low rating due to the large volume of negatives as England exit on penalties.

  • Nobody tweets about poor old Lescott! That probably means that, as a centre back, you're getting the job done. I thought he had a good game.

If you want to see some methodology, it's the same as I did for England vs. Sweden.

Monday, 18 June 2012

Rating England vs. Sweden using Twitter

If you follow me on Twitter (why would you not? Don't answer that) you'll know I've been playing with R a lot recently. First attempts at pulling data from Twitter resulted in a word cloud I quite liked, but which an ex-colleague dubbed the "mullet of the internet". Thanks Mark.

This time, I've pointed R at Euro 2012. Specifically, I set R running from half an hour before kick off in the Group D England vs. Sweden game - 19.30 last Friday - with instructions to pull every tweet it could that contained the word "England".

The results? 78,045 England related tweets (excluding re-tweets), running from 19.30 to 21.15.
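
If you want to build a similar database yourself, the pull looks roughly like this. It's a sketch rather than my exact script - the credentials, batch size and polling loop are placeholders - but the twitteR calls are the ones doing the work.

library(twitteR)

# Authentication details are placeholders - substitute your own Twitter app credentials
setup_twitter_oauth("CONSUMER_KEY", "CONSUMER_SECRET", "ACCESS_TOKEN", "ACCESS_SECRET")

all_tweets <- data.frame()

for (i in 1:21) {                              # roughly 105 minutes of polling
  batch <- searchTwitter("England", n = 1500)  # grab the latest England tweets
  all_tweets <- rbind(all_tweets, twListToDF(batch))
  Sys.sleep(300)                               # wait five minutes, then poll again
}

# Drop re-tweets and de-duplicate on tweet id
all_tweets <- all_tweets[!all_tweets$isRetweet, ]
all_tweets <- all_tweets[!duplicated(all_tweets$id), ]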

Let's see what we got. Grouping up the tweets into 5 minute intervals, here's overall volume.


We're averaging just under 2,300 tweets every 5 minutes. That's got to be enough to do something interesting with!
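
The binning itself only takes a couple of lines of R. A sketch, assuming the tweets are sitting in a data frame called all_tweets with the 'created' timestamp column that twitteR returns:

# Cut each tweet's timestamp into a five-minute slot, then count tweets per slot
all_tweets$slot <- cut(all_tweets$created, breaks = "5 mins")

volume <- as.data.frame(table(all_tweets$slot))
names(volume) <- c("slot", "tweets")

mean(volume$tweets)   # average tweets per five-minute slot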

It's a bit easier to read if you colour the first and second half in red, with pre- and post-game and half time in grey.



OK, so lots of tweets then. One of the cool things we can do with them is to split the tweets by sentiment: positive, negative or neutral. An example of a strong positive from the database would be:

"Well done and very proud of you. England may not have the most talented players but they played with guts, passion and heart #England" @ozzy_kopite

And negative (no points for grammar here either):

"Now lets watch england lose bcoz they use caroll!!! N the game will b bored!!! #damn" @Anomoshie

The sentiment algorithm isn't perfect, so we're not going to push it too hard. I'm dumping any data about the strength of sentiment: tweets are either positive, negative or neutral, and that's it.
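
For the curious, the classification step looks something like this. It's a sketch using the 'sentiment' package mentioned in the Tools note at the bottom - the bayes algorithm and the BEST_FIT column are the package defaults as I understand them, so check your own version:

library(sentiment)

# Classify each tweet's polarity and keep only the best-fit label,
# throwing away the strength scores
polarity <- classify_polarity(all_tweets$text, algorithm = "bayes")
all_tweets$sentiment <- polarity[, "BEST_FIT"]   # "positive", "negative" or "neutral"

table(all_tweets$sentiment)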

If you'd like to know what kit I used to do all of this, please see the bottom of the post. I'm assuming most readers just want to jump to results, so here we go.

Keep the five-minute time-slots and divide the number of positive tweets by the number of negative, to get a view of how cheerful Twitter was feeling about England during the game.
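
In R, that's one more small step on top of the binned data. A sketch (it assumes every slot has at least one negative tweet, which was true here):

# Count positive and negative tweets in each five-minute slot, then take the ratio
pos <- table(all_tweets$slot[all_tweets$sentiment == "positive"])
neg <- table(all_tweets$slot[all_tweets$sentiment == "negative"])

mood <- data.frame(slot = names(pos), ratio = as.numeric(pos) / as.numeric(neg))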


On average, there are 2.8 times as many positive tweets as negative. That will partly be down to the settings on the sentiment algorithm though, and it's the movements we're really interested in.

Twitter was very positive in the lead up to kick off, but that didn't last long. Twenty minutes in, the balance of positive over negative had dropped from 4.1 to 2.2 as Sweden failed to roll over and let England hammer them. Then Carroll scored the opener...

In the second half, we can see a trough all the way down to 2.0 as Sweden take the lead and then a positive swing via England goals from Walcott and Welbeck. The game ends on a positive / negative sentiment value of 2.9. Well played lads.

Come to think of it, well played which lads? We've got loads of mentions of the players in this database too, so let's see who Twitter thinks had a good game.

Height of the bars is positive / negative sentiment and width is volume of tweets (some players, like Lescott, generate really low volumes, so don't take their rating too seriously). I've restricted the database to tweets sent during the first or second half. If you were slating Carroll before the game, we're not interested in your opinion here!
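
The player ratings follow the same pattern. This is a simplified sketch - the 'period' flag for when each tweet was sent and the straight text match on surnames are assumptions for illustration, not the exact method:

players <- c("Carroll", "Welbeck", "Gerrard", "Hart", "Walcott",
             "Johnson", "Lescott")   # ...plus the rest of the XI

# Keep tweets sent while the ball was in play ('period' is an assumed column)
in_play <- all_tweets[all_tweets$period %in% c("First half", "Second half"), ]

rate_player <- function(name) {
  mentions <- in_play[grepl(name, in_play$text, ignore.case = TRUE), ]
  data.frame(player = name,
             volume = nrow(mentions),
             rating = sum(mentions$sentiment == "positive") /
                      sum(mentions$sentiment == "negative"))
}

ratings <- do.call(rbind, lapply(players, rate_player))
ratings[order(-ratings$rating), ]   # height = rating, width = volume in the chart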


Carroll comes out man of the match, both in terms of sentiment and volume of tweets. There's a definite break between the players who did best - Carroll, Welbeck, Gerrard, Hart and Walcott - and everyone else. The overall England rating never goes negative (below 1), and none of the players' ratings do either, although Johnson tries hardest, which may be a reflection of his own goal.

Finally, let's see how the player ratings fluctuated during the game. Sentiment on top. Volume of tweets below. This doesn't work so well for players with low numbers of mentions in tweets but you can see it works for Andy Carroll. That huge volume spike is his goal.


One more; here's Gerrard. Game of two halves for the Liverpool midfielder and his rating dropped significantly after half time.



Want to see another player? Here they are - knock yourself out. If you select "False" it will show totals for tweets that either don't mention a player, or mention more than one. The chart is a bit squashed below to fit in with the Wallpapering Fog template. For bigger, go here.



Tools:

Tweet database pulled using R, RStudio and the twitteR package. Sentiment analysis using the R 'sentiment' package. Cleaned up a little in Excel and then all the charts are Tableau.