Why companies should generate their own data

This is it. You’ve aligned calendars and will have all the right people in the same room. It’s the moment when they either decide to give you the resources to turn your innovative idea into reality or send you back to the drawing board. How will you persuade them to do the former?

At most companies, the natural tendency is to marshal as much data as possible: Find the analyst reports that show market trends, build a detailed spreadsheet promising a juicy return on corporate investment, create a dense PowerPoint document demonstrating that you’ve really done your homework.

Assembling and interpreting data is fine. Please do it. But it’s hard to make a purely analytical case for a highly innovative idea because data only shows what has happened, not what might happen.

If you really want to make the case for an innovative idea, then you need to go one step further. Don’t just gather data; generate it. Strengthen your case and bolster your confidence – or expose flaws early on – by running an experiment that investigates one or a handful of the key uncertainties that would need to be resolved in order for your idea to succeed.

That may sound daunting if you haven’t tried it. And, you may well ask, how do you do it when you lack a dedicated team and budget? Fortunately, there’s a fairly systematic way to go about it.

Start by identifying what you imagine will be the biggest question on the minds of the people to whom you will pitch your idea. That might be whether customers will really be willing to use – and purchase – your proposed offering. Or it might be whether the idea is technologically feasible. Or maybe your potential investors will be concerned that some operational detail could stand in the way of success.

Once you’ve identified the issue that has the greatest potential to kill your deal, find a quick and inexpensive way to investigate it. The key here is to simulate the conditions you’re trying to test.

For example, for several years Turner Broadcasting System, a division of Time Warner, had been playing with the idea of connecting the first advertisement in a commercial break to the last scene in a television program or movie. Imagine a scene of a child landing in a puddle of mud followed by a commercial for laundry detergent. Academic research showed this contextual connection had real impact, raising the possibility that Turner could charge a highly profitable premium to match the right advertiser to the right commercial slot. But would the system it used to match content to advertising be too expensive to make the service profitable? And what if there just weren’t enough scenes in Turner’s library of movies and TV programs that could serve as effective contexts for its advertisers? How could the project team find out?

Instead of speculating, Turner put a team of summer interns in a room for a few weeks, had them watch movies and television shows, and asked them to count the number of points of context in a select group of categories. Then Turner brought the results to a handful of advertisers, who enthusiastically supported the idea.

Imagine how these experiments changed the meeting. Without them, the team would have presented a conceptual plan full of glaring unknowns. But with these data in hand, they offered evidence that the idea was feasible and that potential advertisers were interested. Perhaps not surprisingly, Turner ended up launching the idea, named TVinContext, in 2008 to significant industry acclaim.

Working out how to generate data to test out an idea at its earliest stages requires some creativity. For instance, Innosight advised a mobile device company that was considering a new service that would serve up customized content to consumers based on their mood and location. Would anyone want that? Would they pay for it?

To find out, we had to find a low-cost way to simulate the offering and some way to test people’s interest in something that didn’t exist yet. First we found third-party designers on eLance.com and worked with them to develop mock-ups of what the interface might look like and to create a two-minute animated video describing how the service would work.

How could we tell whether the idea resonated with customers? Of course we could show them the mock-ups and videos and ask them if they liked or didn’t like the idea. But that really wouldn’t tell us whether they liked it enough to use it, let alone pay for it. So we asked customers at the end of the presentation if they wanted to be the first to participate in a beta test of the idea. All they had to do was give us their credit card number, and we would charge them $5 once the test started. We didn’t actually plan to charge the consumers. Instead, we wanted to know how many were interested enough in the service to part with sensitive data. When a significant number of customers were willing to give us their credit card details, we knew we were headed in the right direction.

One of the most valuable things these kinds of experiments can do is illuminate serious flaws in your idea before you make the mistake of investing serious resources in it. The results from one concrete demonstration are worth reams and reams of historical market data.

For instance, an education company had an idea for recruiting teachers that at first seemed really promising. Schools and applicants have long complained that paper résumés aren’t very good indicators of teaching ability and interpersonal skills. What if, the company wondered, we created a service that allowed schools to review short video clips created by prospective teachers showing them in action? Both teachers and schools loved the concept – on paper.

But then the education company tried to convince real teachers to create real videos. It advertised the service at a handful of teacher-training colleges and in online forums. No one expressed any interest. The company even began offering $100 for people to sign up. Still, no one expressed any interest.

It turned out that once the opportunity became less abstract and more real, prospective teachers clammed up. They loved the concept of selling themselves through video, but in reality they worried about how they would come across.

Notice how all of these examples involved some kind of prototype. As online tools improve and 3-D printing becomes increasingly affordable and accessible, it’s becoming easier to bring an idea to life without substantial investment. For example, a company that manufactures insulin pumps for people who suffer from Type 1 diabetes knew that customers didn’t love the physical designs of current pumps. The company was curious to find out how patients would react to pumps of different sizes and shapes. It worked with a small design shop in Rhode Island to develop a series of physical prototypes that brought the look, feel and weight of the imagined devices to life. The company then asked insulin-pump customers to pick up and play with the prototypes and compare them side-by-side with current offerings. This approach enabled the company to solicit critical feedback before it invested millions of dollars in more comprehensive design work.

None of the experiments described here required an inordinate amount of money or time. And yet they all quickly generated critical data that helped innovators strengthen – or, in the case of the education company, discard – their ideas. When it comes to making your case persuasive, one carefully prepared experiment is worth a thousand pages of historical data. Certainly that’s well worth a little extra effort.

(Scott Anthony is the managing partner of Innosight. He is the author, most recently, of The First Mile: A Launch Manual for Getting Great Ideas Into the Market.)

© 2014 Harvard Business School Publishing Corp. Distributed by The New York Times Syndicate