Thursday, 26 January 2012

Rigging the Scottish Independence vote

Yesterday, Alex Salmond released his preferred wording for the question that will decide whether Scotland should remain as part of the UK.

"Do you agree that Scotland should be an independent country?"

And immediately, a cross-section of the research community cried foul.

Anyone who works in marketing research will be pretty familiar with the best ways to rig a survey question; it's how PR companies get exciting-sounding press releases to plant their client's name in the newspapers. In a previous job, we "proved" that British women would swap a shopping spree for a night of passion and bagged the Daily Star front page under the headline, "Sex? I'd rather go shopping." Was it true? Frankly, who cares? Probably not, but it was a PR fluff piece that got us loads of free publicity and the survey was designed to produce exactly that kind of answer.

One of the more subtle ways to rig a survey is to ask people to agree with something. When in doubt, respondents have a tendency to agree with a statement, particularly if it's a complicated concept that they don't understand, or if they don't really care either way (which is handy if you're trying to rig a PR survey). It's called acquiescence bias, and Wikipedia explains the issue well, with a few examples.

The Guardian today gives an example of how acquiescence bias could easily create a 9% swing in the response to a positively worded question. That's a lot, and it could easily decide a tight vote.
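To see how much a swing that size matters, here's a minimal sketch in Python with entirely made-up numbers: assume a true 48/52 split against independence and a slice of unsure respondents who simply agree with whatever they're asked.

```python
import random

random.seed(1)

# Made-up assumptions: 48% genuinely favour independence, and 15% of respondents
# are unsure and tend to acquiesce - they say yes to an "agree" question,
# but split 50/50 on a neutrally worded one.
TRUE_YES = 0.48
UNSURE_SHARE = 0.15

def respond(agree_wording: bool) -> bool:
    """Return True if this simulated respondent answers 'Yes'."""
    if random.random() < UNSURE_SHARE:
        return True if agree_wording else random.random() < 0.5
    return random.random() < TRUE_YES

n = 100_000
for label, flag in [("'Do you agree...?' wording", True), ("Neutral wording", False)]:
    yes_share = sum(respond(flag) for _ in range(n)) / n
    print(f"{label}: {yes_share:.1%} Yes")
# With these invented numbers, the "agree" wording turns a losing ~48% into a winning ~56%.
```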

The first thing I'd want to do with the question above is to get rid of the word "agree", which loads it towards a "yes" response.

"Should Scotland should be an independent country?"

Yes, or no? That's much better.

Better still would be not to demand a yes/no response at all. There's still an issue with "Should Scotland..." because you could ask:

"Should Scotland be an independent country?"

Or... 

"Should Scotland remain part of the UK?"

You're not asking for agreement, but there's still an element of potential bias. Actually, this time the option to "remain" is likely unfair, as it invites respondents to stick with the status quo, which they will have a tendency to do when in doubt.

You could argue that this is pedantry (fun though, isn't it? And if nothing else, you know how to rig a survey now) but for me, it's very important. The one thing that you don't want from a referendum is the possibility that the answer is ambiguous and can be challenged. Alex Salmond's preferred question undoubtedly can be. 

For the same reason, politicians shouldn't be allowed to avoid questions about what they will do, for example if the vote is very close, by saying "it's hypothetical". Yes it is, but it's also very, very important, and we need to know up front what we'll do in that situation, rather than arguing over the meaning of the result afterwards.

The best way to ask about independence is an option that won't sit well with politicians at all, because it isn't a yes or no question.

Which of these would you prefer for Scotland?

1. To be a country within the UK

2. To be an independent country that is not part of the UK

And if you're going to be really thorough, rotate the answers so that remaining in the UK only appears in the top slot half of the time.
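As a minimal sketch of what that rotation might look like in practice (the survey mechanics here are hypothetical), alternate the order so each option leads for half of the respondents:

```python
OPTIONS = [
    "To be a country within the UK",
    "To be an independent country that is not part of the UK",
]

def ballot_order(respondent_id: int) -> list[str]:
    """Rotate the answer order so each option appears in the top slot half of the time."""
    if respondent_id % 2 == 0:
        return OPTIONS
    return list(reversed(OPTIONS))

for i in range(4):
    print(i, ballot_order(i))
```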


Don't get me started on the suggestion of third options and "Devo-max". What are you going to do if they come out with 33% of the vote each? Have a bloody great row, that's what. Which is exactly where we're headed.

And finally... if you really want to know how to rig a survey, ask the experts.


Monday, 16 January 2012

Are you ready for Real Time Planning?

This is a reproduction of an article I've written for this month's Admap. They've chosen to title it 'Track the data on the dashboard', which I think rather misses the point but there you go. On Wallpapering Fog, I choose the headlines.


Real-time planning is a tactical tool that, through analysis of customer behavioural data, enables the short-term refinement of communications strategy, explains Neil Charles of MediaCom.


Real-time planning is one of those marketing terms that has a danger of meaning different things to different people, so I'd like to start off with a brief definition. For me, real-time planning means adapting marketing schedules on the fly, in reaction to new data about how customers behave.


The challenge that this type of adaptable marketing presents is to process new data and then react quickly enough to take advantage of opportunities as they are identified. However, too often, marketers expect data on its own to be enough and that deep insights will reveal themselves if only we can bring different data sources together. Analysts have known for a long time that this is rarely the case, but large quantities of consumer data are seductive. Surely we could build a more efficient, more flexible media schedule if we had more up-to-date tracking of consumer behaviour?

Inevitably, the data that has provoked this new marketing philosophy flows from the web. We have faster access to more granular data than ever before, both in terms of marketing response through clicks and traffic tracking, and also the ability to ask questions of large online research panels cheaply, and to see the results in a very short period of time.

In practical terms, the web will largely be the focus for the outputs from real-time planning too. Traditional media - where the creative process and buying deadlines are longer - lend themselves much less readily to the type of quick schedule change which allows us to take advantage of new data. This online focus should put real-time planning in context for marketers as an exciting new possibility, but one which must never be allowed to compromise an overall campaign. The Internet Advertising Bureau and PricewaterhouseCoopers put UK internet spend at £4 billion in 2010, accounting for 25% of all advertising spend. So while we may have the ability to monitor consumer behaviour (on the web at least) in almost real-time, only a part of the marketing budget is as agile as the response data that we can monitor. Of course, TV or press schedules can be adjusted, but once a commitment to TV has been made, barring disaster, the ads will run largely as planned.

Crucially, most ads should run largely as planned. We often preach the benefits of consistency in advertising and of seeing a brand campaign through, for its full benefits to be felt. Real-time planning doesn't replace the normal planning process, but is about tactical adjustments to a campaign that has been well planned in advance. If our understanding of new data is allowed to constantly re-shape a brand's proposition then we risk compromising our ability to put across a consistent message to consumers.

So, with real-time planning in context as a tactical, rather than strategic tool, and one that is based on very recent data about our customers, what do we need to do to make it work?

It is easy to generate and to track extremely large volumes of customer data. Over the past few years, dashboard software has become cheap and capable, and for a small IT investment, marketers can easily bring together their sales information every week, their own brands' and competitors' advertising spends, response data from off-line direct marketing channels and web tracking from a count of homepage visitors, right down to the number of clicks on individual Google keywords. We can also incorporate brand mentions and sentiment from social networks, track PR coverage both online and offline and conduct quick consumer research polls.

Collecting this data and visualising it in the hope that it will provide insight and lead to greater marketing efficiencies usually results in disappointment. Large volumes of data, without analysis, are more of a hindrance than a help and, unfortunately, analytical insights very rarely jump off the page from a single chart.

Even where a relationship is obvious - such as when the number of brand term searches is charted against TV investment - what do we do with this information? It's not enough to know that TV is driving additional customers to search for us on Google. We need to know whether this means we should increase the TV budget, attempt to convert more of the online interest that TV is shown to be generating, do both, or possibly neither. After all, the current schedule appears to be working!

Rather than tracking large volumes of data and hoping to generate insights from them that will lead to more efficient marketing, the data that we choose to track should flow from analysis work that has already been completed. We need analysts to identify, from the vast quantity of available information, variables which are useful, show how they can be used and then hand that information to marketers.

A recent client example concerned a business which had no concrete data on overall sales volumes in its market, but many variables that might indicate whether they were rising or falling. Sales in the client's business were rising and they wanted to know whether - as some believed internally - this was bucking the market trend, or following it. The answer would have significant implications for advertising, since if the overall market wasn't getting stronger, then the most likely candidate to have caused the extra sales was a recent increase in marketing spend.

Large volumes of data were available that might provide insight, from a set of total market sales estimates that may or may not have been reliable, through to the number of Google searches for various brand and product terms, and government economic data on the health of related sectors. These sources contradicted each other and tracking alone raised many more questions than it answered.

A long-term econometric study into the drivers of sales had recently been completed, which identified a few key Google search terms that accurately mirrored market trends. This prior analysis flagged up data that was worth tracking and which could answer the question: No, marketing response didn't appear to have changed, and yes, increasing sales were being led by a market recovery.
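As a rough illustration of that kind of prior analysis (not the client work itself - the data and column names below are invented), you might correlate each candidate search series against the market sales estimates to see which terms are worth tracking:

```python
import pandas as pd

# Invented weekly data: noisy market sales estimates plus three candidate search terms.
df = pd.DataFrame({
    "market_sales":  [100, 104, 103, 110, 115, 112, 120, 125],
    "search_term_a": [50, 52, 51, 55, 58, 56, 60, 63],    # tracks the market
    "search_term_b": [30, 29, 35, 31, 30, 34, 29, 33],    # mostly noise
    "search_term_c": [80, 78, 81, 84, 90, 87, 93, 97],    # also tracks the market
})

# Correlate each search series with the sales estimates. A real econometric study
# would also control for seasonality, price and the brand's own marketing activity.
correlations = (df.corr()["market_sales"]
                  .drop("market_sales")
                  .sort_values(ascending=False))
print(correlations)
```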

The key point here is that the data we track to aid our marketing efforts, and which we aim to use to refine campaigns on the fly, should already have an identified purpose at the point when we decide to track it. Data that we do not yet understand in detail doesn't allow us to plan in real-time; it raises questions, which first need to be answered. Answering those questions is an analysis process that can take from a few weeks to several months.

Together with data which is already well understood, a second ingredient is needed for real-time planning to work. We need to know beforehand what our likely reaction will be to a change in the data.

Marketing dashboards, metrics and tracking should be like the petrol gauge or the speedometer on a car. When they change, we already know why and so we already know what to do about it. When the petrol gauge gets too low, we stop and fill up, to avoid an embarrassing call for a tow from the side of the road.

A lot of information about your car isn't displayed on the dashboard. Not because it isn't useful at all, but because it isn't useful minute-by-minute and would be a distraction from driving. This sort of information - on engine efficiency for example - is checked annually when the car is serviced. Marketing analysis should work the same way, meaning that we track what we already understand and can respond to, and ask mechanics (our planners and analysts) to react to more complex data once or twice a year. What the analysts discover might increase the scope of real-time planning as different data becomes well understood. To stretch the car dashboard analogy, we might gain new warning lights on the dashboard, but we are unlikely to start visualising large quantities of new data.

Even without prior analysis, there is a sub-set of data that is always useful and, realistically, this is where a lot of brands already do 'real-time planning', whether it is labelled as that or not. Based on direct response data from clicks or phone calls, under-performing press insertions, search keywords and display placements can be pruned from a schedule in real-time without any need for further analysis. Their budget will be allocated to ads with a better response rate, and so all we need to know is that there is a better ad where we could be spending the money instead. It doesn't matter what the true return on investment of a display ad is - only that we can move budget from an under-performing ad to a stronger one that generates more clicks for the same money.
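A minimal sketch of that kind of pruning, with invented response figures: rank placements on clicks per pound and shift budget from the weakest to the strongest, without ever needing to know the true ROI.

```python
# Invented figures: weekly spend (£) and clicks by placement.
placements = {
    "display_site_a": {"spend": 1000, "clicks": 400},
    "display_site_b": {"spend": 1000, "clicks": 150},
    "press_insert_c": {"spend": 2000, "clicks": 220},
    "search_brand":   {"spend": 1500, "clicks": 900},
}

def clicks_per_pound(stats: dict) -> float:
    return stats["clicks"] / stats["spend"]

ranked = sorted(placements.items(), key=lambda kv: clicks_per_pound(kv[1]), reverse=True)
best_name, best = ranked[0]
worst_name, worst = ranked[-1]

print(f"Move budget from {worst_name} ({clicks_per_pound(worst):.2f} clicks/£) "
      f"to {best_name} ({clicks_per_pound(best):.2f} clicks/£)")
```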

I should point out here that I'm absolutely not arguing against collecting marketing data. We have incredible quantities of information at our disposal to track and better understand consumers and we should keep them, because we don't always know what will be useful until later. This article is about reacting to that data in real-time, and day-to-day, those volumes of data become a hindrance rather than a help. Once we focus only on the data that we genuinely understand, a lot of available data - from follower counts to web traffic - becomes surplus to requirements until somebody can work out why it's useful and what it means when it changes.

Even a measure of total sales or footfall to a store is of dubious value to a marketer who doesn't know what impact the brand's marketing activity will have on the metric. A drop in sales presents two immediate possibilities - spend more on marketing in order to restore sales to where they were previously, or cut marketing in response to a worsening business environment. The data only becomes useful, and real-time planning only becomes possible, if econometric models or other in-depth response analyses are already in place. Then it is possible to estimate what marketing can achieve, given the data that we're tracking, and to decide on the best course of action.

In summary, I would argue that real-time planning is a tactical, rather than a strategic tool. It creates efficiencies in smaller parts of an over-arching marketing strategy and allows us to quickly remove inefficient parts of the marketing mix, or to take advantage of short-term opportunities. It also allows us to increase the amount of marketing investment when that money is shown to be working harder than usual. The overall marketing plan, though, should be driven by longer term in-depth insight work and certainly shouldn't be compromised by trying to make too many short-term tactical gains.

To make real-time planning work, we need data and we need to have done some prior analysis. Monitoring data series that start a debate when they change can be helpful, but it doesn't allow us to make rapid changes to a marketing schedule. An upfront investment in statistical modelling, so that we fully understand the data that we monitor, allows us to predict the likely outcomes of making a change to the marketing schedule.
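For example, here's a sketch (with invented coefficients) of what that upfront modelling buys you: fitted response numbers that turn "sales have dipped" into a predicted outcome for a specific schedule change, rather than a debate.

```python
# Invented outputs from a prior econometric model:
# each extra £1k of search spend is worth ~3 weekly sales, each extra £1k of TV ~1.2,
# and the model estimates the market itself is currently running 5% below trend.
SALES_PER_K_SEARCH = 3.0
SALES_PER_K_TV = 1.2
MARKET_EFFECT = -0.05

def predicted_sales_change(extra_search_k: float, extra_tv_k: float, base_sales: float) -> float:
    """Predicted change in weekly sales from a schedule tweak, given the fitted coefficients."""
    media_effect = extra_search_k * SALES_PER_K_SEARCH + extra_tv_k * SALES_PER_K_TV
    return media_effect + base_sales * MARKET_EFFECT

# Will moving an extra £10k into search offset the market dip on 2,000 weekly sales?
print(predicted_sales_change(extra_search_k=10, extra_tv_k=0, base_sales=2000))  # -> -70.0, so no
```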

Real-time planning is about investing in analysis and preparing for situations that could be faced in the future, and if you haven't done that prior analysis, then you're not ready for real-time planning.

As a small illustration of these principles in action, Brilliant Media has a retail client where analysis has revealed that strong online sales can be generated by up-weighting search activity against a competitor's television schedule. The competitor TV activity is largely predictable and the benefits of diverting the online interest that it generates have been proven. As a result, competitor TV schedules are closely tracked and search terms up-weighted to take advantage of the spikes in search volumes that they generate. This adaptable schedule has real benefits in terms of additional sales and has arisen as a result of a piece of investigative analysis that identified data that was worth tracking and could be responded to very rapidly.
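This isn't Brilliant Media's actual system, but the mechanism is simple enough to sketch: watch the competitor's TV ratings (invented numbers below, with a threshold that would come from the prior analysis) and flag the days when search should be up-weighted.

```python
# Invented daily competitor TV ratings (TVRs); the threshold comes from prior analysis.
competitor_tvrs = [5, 4, 6, 30, 45, 38, 7, 5, 28, 6]
UPWEIGHT_THRESHOLD = 20

for day, tvrs in enumerate(competitor_tvrs):
    if tvrs >= UPWEIGHT_THRESHOLD:
        print(f"Day {day}: competitor TV spike ({tvrs} TVRs) - up-weight search bids")
```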

In the end, I would argue that real-time planning is something of a contradiction in terms. We shouldn't attempt to plan in real-time; we plan and we analyse, so that we can react in real-time.

Reproduced with permission of Admap, the world’s primary source of strategies for effective advertising, marketing and research. To subscribe visit www.warc.com/admap. © Copyright Admap.

Wednesday, 11 January 2012

A legal sticking point for Sponsored Stories?

I'm a keen photographer as well as a data monkey and Facebook's development of Sponsored Stories, together with some comments on this Register article recently, have got me thinking.

If you take a photograph of a person and want to use it commercially, then you need a model release. It's a form, signed by the person who is in the photograph, that gives you permission to use their image and is an important protection. Without model releases, somebody could snap a picture of you in the street - maybe looking a bit tubby post-Christmas - and use it to advertise their diet plan in the national papers. Happy with that? Thought not. Model releases make sure that you get a say in whether pictures of you can be used commercially.

If you decide to try to make a bit of cash from your own back catalogue of photos by adding them to a commercial stock library like iStockPhoto, then you'll find that they're very fussy about photos of recognisable people. You need a signed model release or they won't accept the picture.

Here's a Facebook Sponsored Story.


It's an advert and it's identified as one. Starbucks have paid to have it appear (more frequently) in news feeds. If you ran exactly that image in a newspaper or magazine, then you'd need a signed release from Jessica Gronski allowing you to use her image.

You could make the argument that Facebook has so many users, this is impractical, but that argument always sounds to me like "we're breaking the law on such a massive scale that it doesn't count any more".

You could argue that by uploading a picture to your profile, you've given Facebook implicit permission to use it. I don't believe that I have, but what if you're in my profile picture too? Or if my profile picture happens to be a picture of somebody else?

Model releases are a real nuisance for photographers, but they're required to abide by the rules and I can't see how Facebook using somebody's profile picture to advertise Starbucks is any different. It will be interesting to see if we get a test case in the near future, of somebody (probably a stroppy photographer) demanding payment from Facebook for commercial use of their image.

Friday, 6 January 2012

The risks of social media

We hear a lot of talk about the benefits of a social media presence, but we speak less often about the downside. However, like every marketing activity, social has the potential to sell a product or to put across a message, and it also carries an intrinsic risk.

The risk on a traditional TV campaign is different to social. Unless a brand is being deliberately controversial, it's unlikely that you'll badly offend anybody with your thirty seconds of exposure and so the risk is in the budget that you commit to TV. You spend millions buying airtime and give up another slice of your budget to film the ad and you risk that it doesn't work; that it doesn't prompt people to go out and actually buy the product. We tend to think that this risk is pretty low and as long as your ad reaches some minimum quality, it will at least work a little, just by getting the brand name in front of people.

So what about social media risk? You don't usually commit such large marketing budgets to social and so the risk seems low. You might as well have a social presence - what is there to lose?

On a risk vs. reward basis, there's potentially a great deal to lose.

Ed Miliband's having a bad afternoon on Twitter. Somebody (and a staffer in his office seems to be taking the blame rather than the man himself) tweeted this in response to the death of Bob Holness today.




An odd typo, especially as it follows hot on the heels of Labour MP Diane Abbott (@HackneyAbbott) tweeting that "white people love playing divide and rule" and starting a racism row.

So, a substantial down-side risk of social media. The risk is heightened by the lack of oversight and care in a tweet, compared to a traditional ad that comes with an invoice attached. There's no way Diane Abbott's comment would have made it into an election leaflet to her constituency for example - or you'd certainly hope not - as a proof-reader would pick up on it and she'd choose her words much more carefully.

In the agency recently, we've been revamping our approach to social and one of the key questions for me is: should you do social at all? The answer is not always a simple yes. If you're not going to invest serious effort, or are going to put junior staff in charge of the accounts, then social media is mostly a potential risk, rather than a potential benefit.

That risk is tough to assess and is similar to what disaster planners call a "high impact, low probability event".

The exec you put in charge of the Twitter account probably won't screw up and post something that could be construed as racist. Greenpeace probably won't decide that you're a corporate bad guy and start bombarding your Facebook page. Probably nobody will tell a customer with a legitimate complaint that they're a pain in the neck. Low probability.

But if they do, they'll be doing it in public... High impact.

Taking risks is fine if there's a potential benefit that outweighs the risk, which brings us back to Ed Miliband. I can't understand why he has a Twitter presence at all. @Ed_Miliband tweets once every couple of days, giving the bland Labour Party line on usually fairly dull topics. For me, there is no way that the @Ed_Miliband account can be doing a great deal of good for Miliband or for Labour. It's just there.

Which means it just sits there, generating minimal benefits and waiting for something to go wrong, like it has today. The account is a risk and that's all it is.

If your social presence is a "me too", or a vanity project, or in the hands of a junior exec because what could possibly go wrong, then it might be time to think about whether you should have one at all. Maybe the only thing it's doing is sitting there, quietly, until the low probability event happens that makes you wish it could just go back to being ignored.

Monday, 19 December 2011

We need to talk about infographics

I blame Wordle. It's not fair to blame Wordle, but marketing's obsession with infographics got well out of hand at some point and I think it was around the same time that pretty word clouds came within reach of every bored account manager with a slide to fill.


I actually like Wordle a lot, so on second thoughts, maybe we should blame the infographic's leap to fame on marketing's discovery of Wordle rather than the tool itself, which has been around since 2008. A quick check of Insights for Search shows that the world went infographic mad a bit later than that, starting in 2009-2010.

Google searches for "infographic"



That growth line really isn't slowing down, is it?

In case you hadn't guessed yet, this post is developing into a rant about infographics. To be more precise, infographics as they are used in marketing. I'm a little concerned though, that I may have just used an infographic to illustrate my point. Except that I'm pretty sure that's not what the image above is. It's a chart. If you're being pretentious, it's a data visualisation.

Data visualisations have a purpose; they exist to communicate data more effectively than text could. But are they the same thing as infographics? Maybe 'infographic' is just short-hand for 'data visualisation'?

Here's what Wikipedia thinks an infographic is...

"Information graphics or infographics are graphic visual representations of information, data or knowledge. These graphics present complex information quickly and clearly, such as in signs, maps, journalism, technical writing, and education. With an information graphic, computer scientists, mathematicians, and statisticians develop and communicate concepts using a single symbol to process information."

I really like the idea of the infographic as a sign. It differentiates it from a data visualisation and gives it a purpose. You could list all of the exits on this roundabout, with a paragraph for each that describes where they go, but you'd cause an accident as people tried, at 40mph, to read what you'd written. Infographics - by the Wikipedia definition - are useful to communicate a lot of information at a glance.



Let's be honest though, marketing infographics don't often look like that road sign. In marketing and journalism at least, the Wikipedia entry is out of date and the infographics we see today are a very long way from doing that job.

Let's pick on the IAB. That's not really a fair thing to do either, but they've been known to publish infographics and yesterday, they published this one.


Now that's a marketing infographic! Much more like it. Lots of charts all blended together into one big image and pretty typical of what you can see shared far and wide on Twitter every day.

It's quite pretty.

Probably took quite some time to draw.

My question is why draw it? Other than being link-bait for all those Google searches, what is an image like this for? Maybe it's art? Then it wouldn't need to be for anything, but I don't think it's art.

It needs a purpose, but I can't work out what its purpose is. For the time it takes to make one of these, it needs to be better at communicating than writing a short article and illustrating it with charts, but it just isn't. Infographics like this let you throw unstructured data at your audience (in a pretty format) in the hope that they'll draw some insights of their own from it. You're hoping that your audience will do the analysis for you.

In all honesty, I think this is what's been responsible for the infographic explosion. Facts are easy to get your hands on. They're everywhere. Original insight and analysis is hard and that makes the infographic perfect blog fodder. Take any topical subject, plus a bit of Googling for some relevant numbers, plus a few hours in Adobe Illustrator and you've got yourself some high grade link-bait.

Contrast the infographic above, with this lovely piece of data visualisation from The Guardian.


The Guardian's work makes a vast number of tweets more comprehensible than they would have been if you'd just thrown raw data about them at the viewer. The infographic on the other hand, throws large amounts of almost-raw data at the viewer.

Comparing these two is undoubtedly unfair, as one is the product of a hell of a lot more work, but what's important for me is the scope of what they're trying to achieve. Only one attempts to aid your understanding of a subject. And only that one is worth your time and effort to decipher.

Feedback on this article from The Register, which recently started dabbling in infographics (the modern 'throw a lot of data at the page and see what sticks' versions), makes me think that I'm not alone, but maybe you like the current direction? Have you seen any marketing infographics that you still find useful and refer back to? That did a better job than simply writing an article would have? I'd love to hear about them. Somebody's generating all those Google searches too, so if you're a sucker for an infographic, what is it about them that's so appealing?

Right now, the word infographic to me means "presenting data in a pretty format that makes it difficult to use" and until they go back to following that Wikipedia definition, I'll be steering well clear.

Tuesday, 8 November 2011

The other reason for Google+

I'll start this one with an admission; I like and use a lot of Google's products. I've got an Android phone, have been singing the praises of Google+, Google's my default search engine and GMail is fantastic.

Like US antitrust regulators though, I'm starting to wonder if Google might have too much power. If Microsoft had a case to answer in the way that Explorer was bundled with Windows, then wouldn't Google have similar issues with the increasing integration between its products?

Google has a large suite of products, despite recently closing down Labs, and many of them are tied very closely to its search engine.




Search Google for any term that could reasonably return a map and you'll get a map included in the results. A Google map, naturally.

That's fair enough; I was looking for Leeds and Google fetched me a map of Leeds. Maps can probably be included in a legitimate list of the things I might have wanted. Unless you really want to get picky (and if you're Streetmap, then you probably do), all Google's really doing is returning a graphical result rather than a text-based one.

The trouble with this type of justification is that you can push it into almost any sphere that the web touches. And the web now touches virtually every part of our lives. If I want to do anything, then you can say that I'm looking to do it. Which means I'm searching for it. Which means it's a legitimate product for Google to develop and cross-promote from its search engine.

This is exactly the argument that Eric Schmidt is pushing with regulators in the US.


"[W]hat is crucial to understand is that universal search results are not separate 'products and services' from Google.

Rather, the incorporation of thematic and conventional results in universal search reflects Google’s effort to connect users to the information that is most responsive to their queries.

Because of this, the question of whether we 'favor' our 'products and services' is based on an inaccurate premise.

These universal search results are our search service — they are not some separate 'Google content' that can be 'favored'."
(Eric Schmidt. Quote borrowed from The Register)


Google's search market share in the UK is over 90% according to Hitwise. That's a hell of a lot of potential for abuse of a dominant market position. I'm not saying that Google is abusing its position - the legal work on my house move is costing quite enough - but if everything Google builds can be integrated into search because it's all one product, then where do you stop? Taking a broad definition of the term, virtually everything starts with a search.

If I want to know about a location, then I need a map; Google has maps, so Google directs me to its own maps.

If I need a flight, Google has a new flights product, so I can be sent there rather than to Skyscanner.

If I'd like to call someone, Google has phones and voice and video chat.

It's difficult to think of any service-based category that Google couldn't decide to enter, develop its own product, cross-promote it from search and use that same Eric Schmidt argument as a justification. Google Legal? Google Estate Agents? Music? News? Why not? You're searching for information and content.

As much as I like Google+, I missed one of its primary benefits to Google the first time around, and it only became clear when the black product bar arrived that now sits on top of just about all Google products.


By closely integrating their product offer - essentially by making it all one product - Google are playing the same game that Microsoft tried to play with Internet Explorer. Compare Schmidt's argument above, with this Microsoft justification for bundling Internet Explorer with Windows.

"Microsoft stated that the merging of Microsoft Windows and Internet Explorer was the result of innovation and competition, that the two were now the same product and were inextricably linked together and that consumers were now getting all the benefits of IE for free"
(Wikipedia link)

Sound familiar?

Microsoft ended up in a compromise with regulators that was likely a much better outcome for them than if they'd just stubbornly refused to un-bundle Explorer from Windows.

Google seem to be playing the same strategy: Integrate your products so closely that you can argue they're not actually separate products at all. In that context, Google+ needs to be a window onto everything that Google does. It also explains why you'd ruthlessly kill off Labs, which might otherwise be cited as hosting examples of off-shoot products that have nothing to do with search.

Google won't come out of an antitrust investigation completely unscathed, but it seems to be a good strategy to ape Microsoft and try to avoid a ruling that's heavily weighted against what they want to do as a business.

For their part, US regulators need to recognise Google+ for what it is: not just an aggressive move into social, but a very clever defensive move to counter a future antitrust ruling.

Friday, 4 November 2011

Book Review: Moneyball

Continuing Wallpapering Fog's series of occasional book reviews, we have Moneyball, a book about baseball statistics and a team called the Oakland A's that I picked up on the recommendation of @AdContrarian.

I know very little about baseball, other than that I watched a game on TV in a hotel room in New York once and found it quite dull. Before reading Moneyball, I'd never heard of the Oakland A's, but none of that matters. Moneyball is a stunning piece of work.


Baseball is the background for a story about how to change a business. The huge number of games that get played and the set-play nature of those games make baseball a statistical goldmine, where a few amateur analysts had noticed that a lot of long-standing, established wisdom about the game was wrong.

One team - the Oakland A's - take this knowledge, which was freely available to anybody with an interest, and set about building a team based on what they can prove about the tactics and the types of players that win games. They turn on its head the idea that the team with the highest-paid players will always come out on top.

Michael Lewis works potentially dry statistics into a fabulous narrative, interspersed with the life stories of Oakland's oddball players, who don't look like athletes and would be rejected on any traditional evaluation of whether they're suited to the game. Overweight, old, injured and with bizarre throwing actions, they're mainstream baseball's rejects, but they've got stats that say they can hit...
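The book's central statistical point - that on-base percentage is a much better predictor of run-scoring than the traditional batting average - is easy to check for yourself on team-level data. The numbers below are invented purely to show the shape of the test; don't read anything into them.

```python
import statistics

# Invented team-season rows, purely to show the shape of the test.
teams = [
    {"obp": 0.360, "avg": 0.270, "runs": 820},
    {"obp": 0.345, "avg": 0.275, "runs": 790},
    {"obp": 0.330, "avg": 0.268, "runs": 740},
    {"obp": 0.320, "avg": 0.272, "runs": 700},
    {"obp": 0.310, "avg": 0.260, "runs": 660},
]

runs = [t["runs"] for t in teams]
for metric in ("obp", "avg"):
    values = [t[metric] for t in teams]
    r = statistics.correlation(values, runs)  # Pearson correlation, Python 3.10+
    print(f"{metric} vs runs scored: r = {r:.2f}")
```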

As I read Moneyball, comparable problems from business, marketing and other sports kept jumping out. You find that you break from the page to wonder if the management at Stoke City FC have read it, or to curse (having worked for a year at EMI) that more people in the music industry haven't. You can't help wondering how many of our own established marketing practices are wrong and which ones we could prove definitely are. Baseball's brimming with statistics and yet the task of breaking established ways of doing things is incredibly hard, even when the evidence is staring you in the face. Marketing's a black art at the best of times, where it's much harder to produce the battering ram of hard stats that at least point the right way.

For me, as a statistician, Moneyball inspires by showing just what can be achieved by dispassionate analysis, and is daunting in its illustration of just how hard you have to work to make the changes you've proved need to be made. Baseball went thirty years before anybody with money and control of a team paid attention to the hard evidence that established tactics, and the usual metrics used to value players, actually harmed your chances of winning.

Once one team picked up this knowledge and started to apply it (via a General Manager who didn't care who he needed to fire, intimidate or cajole to get his way), years went by before other managers started to ask how they were being consistently out-performed by a team with only a third of their player budget.

In short, read Moneyball. You don't need to know anything about baseball (though a little understanding of a few key terms, like base stealing, helps) and I promise by the time you've finished it, you'll want to make changes to the way that you work. It's the best non-fiction book I've read this year, by miles.