
Erickson Research Blog


Why Big Data, By Itself, Doesn’t Create Big Insights

It seems like you can’t go a day without hearing something about big data. Much digital ink has been spilled on all sides of how big data could affect the research industry, from the breathlessly optimistic to predictions of the end of the profession.

I am very optimistic.

Here’s why.

It’s easy to see why people get so excited about the prospects for big data in marketing. If you believe all of the hype, it’s easy to convince yourself that with enough data, the answers to every marketing problem would be within your reach. The problem is that having enough data is a necessary but not a sufficient condition for generating insight.

What else do you need?

Judgment and strategic thinking.

These happen to be things that people are far better at than computers.

Dr. Michael Wu wrote a great piece in TechCrunch about the big data fallacy. In it, he systematically destroys the notion that a lot of data means a lot of insight. His point is that there is far more data than there is information, and far more information than there is insight. Further, he argues that more data and more information don’t necessarily result in more insight.

I think the reason boils down to this:

It is the analyst — not the analysis — that assigns meaning.

That’s because there can’t be any meaning without context.  Context requires a broad understanding of how the data fits into the issue and how the issue fits into the world.

The skills needed to make those connections?

Judgment and strategic thinking.

To me, this means marketers need skilled analysts to work with big data to generate the insights that will benefit the business.  This critical link in the chain isn’t going to be automated anytime soon.  In fact, one of the big worries marketers have is where to find all the data scientists to actually do the work.

It seems to me that there are plenty of people working in research today with all the requisite skills. The research industry just needs to change its mindset from strictly thinking about primary research to thinking about using data, wherever it comes from.

As Tom Anderson has pointed out, and I agree, sifting through volumes of data to understand patterns and identify potentially lucrative groups in the market sounds a lot like segmentation.
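For readers who want to see what that kind of pattern-finding looks like in practice, here is a minimal sketch: clustering hypothetical customers on two behavioral variables to surface candidate segments. The fabricated data, the two variables, and the choice of k-means are all illustrative assumptions on my part; a real segmentation would use much richer inputs and far more care.

```python
# Illustrative sketch only: the data and variables below are made up,
# and k-means is just one of many ways to approach segmentation.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Fabricated behavioral data: three loose groups of customers
spend = np.concatenate([
    rng.normal(200, 50, 100),    # light spenders
    rng.normal(800, 150, 100),   # mid-tier
    rng.normal(2000, 400, 100),  # heavy spenders
])
frequency = np.concatenate([
    rng.normal(2, 1, 100),       # purchases per year
    rng.normal(6, 2, 100),
    rng.normal(15, 4, 100),
])
X = np.column_stack([spend, frequency])

# Standardize so both variables contribute comparably
X_scaled = (X - X.mean(axis=0)) / X.std(axis=0)

# Ask for three segments; in practice you would compare several values of k
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_scaled)

# Profile each candidate segment: size, average spend, average frequency
for label in range(3):
    members = X[model.labels_ == label]
    print(f"Segment {label}: n={len(members)}, "
          f"avg spend=${members[:, 0].mean():,.0f}, "
          f"avg purchases/yr={members[:, 1].mean():.1f}")
```

The point of the sketch is the workflow, not the algorithm: the clustering only produces groups, and it still takes an analyst with context to decide whether any of them are meaningful or lucrative.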

Researchers certainly have the skills and the experience to be key players in big data. Whether it is in the data manipulation and analytics or in connecting the dots to create insights from the data, there is opportunity for anyone who chooses to seize it.

So… which side do you fall on: optimist or pessimist? I’d love to hear your thoughts.

Avoid the Instant Analysis Trap

The web delivers an overwhelming torrent of information all day, every day.

Just about every online survey system boasts its ability to deliver real-time reporting.

We can watch events “live” on 24-hour cable news while the talking heads explain what it all means as it’s happening.

We’ve come to expect that because we can get data instantly, we should be able to create meaning from it instantly. There’s a problem with this cultural expectation of instant analysis, though.

It just doesn’t work very well.

Divining meaning without time for reflection and exploration works in only the most straightforward situations. This is fine if we are concerned only with what is happening. Understanding “what” doesn’t usually require much beyond looking at the data stream and drawing a conclusion. Unfortunately, “what” isn’t usually the interesting or valuable question.

The interesting and valuable questions are “why?” and “now what?”.

Trying to answer those questions in “real time” results in, at best, a superficial understanding of the situation and recommendations for action that may or may not be useful. Working from that level of analysis is like following a map that includes only the biggest land features and roads. It will probably get you there, though much less efficiently than if you had the complete picture.

At worst, it completely misses less obvious relationships in the data that change the interpretation.  This is like following the wrong map.  In that case, you’re better off without the map at all.

Luckily, the solution is simple.

Build time into project schedules to allow for the necessary review and discussion of the data.

I’m not suggesting that project timelines be dragged out to include endless discussion and meetings.  In many ways, a “death by committee” approach is worse than working from a superficial analysis.

There is certainly a middle ground, which for me means finding an additional few days to a week at the back end of a project. That is enough time to make significant improvements to the final product while still providing timely information to the client. A better thought-out and vetted set of conclusions and recommendations means the client team can spend its time on focused action instead of running around in circles trying to figure out which of the recommendations are good. In the end, this saves far more time than it adds to the project timeline.

A recent project reminded me of this.

When it became clear that we would be delivering some bad news to the client, I wanted to make sure we communicated the findings clearly and in a way that helped them take action. So I pushed our team to develop the first version of the report well in advance of the client’s due date, and we used the “extra” time to circulate and discuss the findings and recommendations with the wider project team.

Those discussions helped us refine the findings and recommendations to make them clearer and more actionable. The additional perspectives also produced at least one new recommendation that the core team hadn’t thought of, but that was an excellent response to the research findings.

That first version was absolutely not just a draft that needed work; most people would have considered it a good work product. All of its key findings remained in the final deliverable. One additional key finding was added, but it was really just the result of splitting an existing finding in two. Most of the initial recommendations remained as well, refined and made more actionable.

By getting away from the trap of providing instant analysis and moving on, we were able to make that good report better.

Who benefits from this “extra” work?

Everyone benefits.

Our team gets the satisfaction of digging deeper and producing a better product, the client team gets insight that is more robust and includes more focused recommendations, and our company gets the reputation boost of delivering superior work.

So, the next time the client asks if you can deliver that report sooner, resist the temptation to provide instant analysis.  Instead, take the time to apply some additional critical thinking and give them what they want – not what they’re asking for.

What about you?  How do you handle the pressure from clients to deliver instant analysis? Let me know!