ISSUE: A major health care insurer was using direct mail to solicit potential customers for its new specialized insurance programs. After merging in-house and purchased databases, the company settled on a ZIP code and income-based decile (1=highest income; 10=lowest income) method for targeting direct mail. Targeting the higher income ZIP codes, the company mailed over two million DM pieces over the course of two and a half years. Initial direct mailings were less than satisfactory, with under a 2% response rate among all confirmed recipients.
APPROACH: One key to this analysis was examining not just the direct mail responses, but also people who were already subscribers of the programs. Examination of demographic and lifestyle variables among current subscribers revealed robust clusters: the self-employed with average incomes, people needing temporary or additional coverage for a family member (e.g., an elderly parent), and lower-income people. Direct mail responses tended to come from lower-income ZIP codes—precisely the groups not being targeted in the direct mail campaigns.
Using a combination of regression techniques, the ZIP code deciles were “re-balanced” to emphasize the higher-likelihood respondents to direct mail for the specialty insurance programs. A simple set of rules based on the regression equation yielded high priority targets for direct mail.
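The re-balancing step can be sketched as follows. The score coefficients and the ZIP-level fields (income decile, share self-employed, share needing temporary coverage) are illustrative assumptions, not the insurer's actual regression fit:

```python
# Sketch of re-balancing ZIP-code deciles with a fitted response score.
# Coefficients and field names are illustrative assumptions only.

def response_score(income_decile, pct_self_employed, pct_temp_coverage):
    """Linear score from a (hypothetical) regression on historical responses.
    Note income_decile runs 1=highest income to 10=lowest."""
    return 0.04 * income_decile + 0.9 * pct_self_employed + 0.6 * pct_temp_coverage

def rebalance_deciles(zips):
    """Re-rank ZIP codes into new deciles by predicted response likelihood.
    `zips` is a list of (zip_code, income_decile, pct_self_employed,
    pct_temp_coverage); returns {zip_code: new_decile}, 1 = highest priority."""
    ranked = sorted(zips, key=lambda z: response_score(*z[1:]), reverse=True)
    n = len(ranked)
    return {z[0]: (i * 10) // n + 1 for i, z in enumerate(ranked)}
```

Under this scoring, a lower-income ZIP with many self-employed residents outranks a high-income ZIP, matching the subscriber clusters found in the analysis.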
RESULT: A test direct mail campaign to 300K households—200K fewer than the company’s typical DM campaigns—based on the new deciles and rules yielded surprising outcomes: response to the direct mail campaign increased by 300%, and actual subscriptions more than doubled. In addition, the smaller, more targeted mailing saved the company money. A subsequent larger campaign had similar, though slightly more modest, results.
ISSUE: A major luxury retailer that relies heavily on catalog sales in addition to in-store sales was looking to better understand its customers and improve its targeting of catalog mailings. Over the years, catalog mailing lists had grown organically through in-store sales and credit card purchases, as well as website requests and partner retailers. Each copy of the twice-yearly catalog cost over $1 to print; combined with mailing and processing costs, the program added up to over $10M annually. With such a significant cost, the company needed to better understand and target its expensive and limited catalogs to those customers most likely to be repeat buyers—ongoing, long-term customers.
APPROACH: Starting with the natural segments of loyalty card vs. non-loyalty card, additional segments were derived from demographic and purchase history data using correlation and cluster analysis methods.
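A minimal version of the cluster-analysis step might look like the following. The two features (annual spend in $K, orders per year) and the sample data are assumptions for illustration, not the retailer's actual inputs:

```python
# Minimal k-means sketch for deriving customer segments from two
# illustrative features (annual spend in $K, orders per year).

def kmeans(points, k, iters=20):
    # Seed centers with the first k points (deterministic for this sketch).
    centers = [tuple(p) for p in points[:k]]
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centers[c])))
            clusters[nearest].append(p)
        # Update step: each center moves to the mean of its cluster.
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = tuple(sum(dim) / len(cl) for dim in zip(*cl))
    return centers, clusters

# Two obvious segments: low-spend occasional buyers vs. high-spend regulars.
sample = [(1, 1), (10, 10), (1.2, 0.9), (0.9, 1.1), (10.1, 9.9), (9.8, 10.2)]
centers, clusters = kmeans(sample, 2)
```

In practice the derived segments would then be cross-tabulated against the loyalty card vs. non-loyalty card split described above.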
Using a logistic regression approach to predict return buyers from catalog recipients, three pieces of customer information could predict over 70% of return buyers: 1) loyalty card membership; 2) distance from a retail outlet; and 3) total number of orders by the customer. Even more compelling was the finding that the model predicted over 90% of customers who would not return as buyers to the retailer.
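The three-variable model can be sketched as a scoring function. The coefficients below are illustrative assumptions standing in for the retailer's actual fit:

```python
import math

# Illustrative coefficients for the three-variable logistic model.
COEF = {"intercept": -2.0,        # baseline log-odds of returning
        "loyalty_card": 1.8,      # 1 if a loyalty card member, else 0
        "distance_miles": -0.05,  # distance from the nearest retail outlet
        "total_orders": 0.4}      # total number of orders to date

def p_return(loyalty_card, distance_miles, total_orders):
    """Predicted probability that a catalog recipient becomes a repeat buyer."""
    z = (COEF["intercept"]
         + COEF["loyalty_card"] * loyalty_card
         + COEF["distance_miles"] * distance_miles
         + COEF["total_orders"] * total_orders)
    return 1.0 / (1.0 + math.exp(-z))
```

A loyal, nearby, multi-order customer scores far above a distant one-time buyer without a card, which is the separation the 70%/90% findings rest on.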
RESULT: With the new information, the luxury retailer assigned each customer a letter grade (“A”, “B”, “C”, “D”, or “F”, the last of which was dropped from mailing lists) and re-targeted its catalog mailings. The retailer could now budget and target mailings based on the new grades, since each of the remaining letter grades accounted for approximately one-quarter of mailing list customers. As an added benefit, the company eliminated from its catalog mailing list non-loyalty card customers and one-time buyers, saving over $2M of its annual catalog budget.
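One way to implement the grading is by score quartile, which reproduces the roughly one-quarter-per-grade split. Treating a zero score (non-loyalty one-time buyers) as an automatic “F” is an illustrative assumption:

```python
# Sketch of the letter-grade assignment by quartile of predicted
# return-buyer score; the zero-score "F" rule is an assumption.

def assign_grades(scores):
    """Map {customer_id: score} to grades A-D by score quartile,
    with F for customers flagged for removal from the mailing list."""
    graded = {cid: "F" for cid, s in scores.items() if s <= 0}
    ranked = sorted((cid for cid, s in scores.items() if s > 0),
                    key=lambda cid: scores[cid], reverse=True)
    n = len(ranked)
    letters = "ABCD"
    for i, cid in enumerate(ranked):
        graded[cid] = letters[min(i * 4 // n, 3)]
    return graded
```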
ISSUE: A large technology company was deploying a new CRM software solution to its sales teams. Training and deployment of sales team users on the tool were done in waves of anywhere from 200 to 500 users. Training was required for all sales team members and lasted a week. As with any major corporate software deployment, many users had problems installing, configuring, running, and getting training on the newly deployed CRM tool. At times the in-house help and training center would be suddenly deluged with incoming calls; at other times it was quiet, with very few calls. As a result, the company wanted a better handle on when call activity was likely to spike so it could staff for the additional call traffic appropriately.
APPROACH: A number of pieces of information were used to develop a rough timeline of key events in deploying a sales team on the CRM tool: 1) the software license start dates; 2) training dates and schedules for teams; 3) first log-in dates of team members; and 4) help center call dates. As the data were lined up and examined, some compelling patterns emerged.
Regression (and ultimately lag regression) on the key variables and dates yielded a simple solution: for every 100 users added, the call center could expect about 50 additional calls approximately three weeks after the start of training. With 500 users being added, 250 calls could be expected during the third week from the start of training.
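The lag relationship can be sketched as follows. The weekly counts are fabricated for illustration, but the least-squares fit recovers the roughly 0.5-calls-per-user, three-week-lag rule described above:

```python
# Sketch of the lag regression: calls in week t+3 vs. users trained in
# week t. Weekly counts below are fabricated for illustration.
users_trained = [200, 0, 0, 500, 0, 0, 300]                # new users per week
calls = [10, 12, 11, 110, 9, 13, 260, 8, 10, 160]          # help-center calls per week

LAG = 3  # weeks between training start and the call spike

# Pair each training wave with the call volume LAG weeks later, then
# fit calls-per-user by least squares through the origin.
pairs = [(u, calls[t + LAG]) for t, u in enumerate(users_trained) if u > 0]
slope = sum(u * c for u, c in pairs) / sum(u * u for u, _ in pairs)

def expected_calls(new_users):
    """Extra calls to staff for, LAG weeks after training starts."""
    return slope * new_users
```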
RESULT: The in-house call center could adequately staff and cover the spikes in traffic. Prior to adopting the solution, average call center problem resolution was 90% during regular times, but substantially lower during heavy traffic times. After determining the appropriate call center staffing during post-training intervals, the call center re-established its average 90% problem resolution—whether during a spike in call activity or relatively quiet periods.
ISSUE: A major property insurer had embarked on a statewide advertising campaign including TV, print, direct mail, and web promotion totaling over $10M. The television advertising component contained a number of specific “triggers” (dedicated phone numbers) that would indicate that a specific television spot was the source of a customer inquiry into insurance programs. Six months into the campaign, far fewer phone calls were logged to the trigger phone numbers than expected. Looking only at the television ad hits and subsequent sign-up and revenue totals was not encouraging: the company had spent $6M to generate less than $2M of anticipated revenue. The company needed to decide whether it was worth purchasing additional TV advertising time based on the results to date.
APPROACH: Key to this analysis was the multi-source nature of the advertising, inquiry, and revenue data. Looking only at the TV spot triggers was misleading. The insurer had not only TV, but print, website, and other sources to consider. Moreover, understanding how customers research and respond to their advertisements was critical as well.
Simple mean testing and other methods applied to inquiries, subscriptions, and revenue in the post-airing intervals, as well as to year-over-year results, showed remarkable patterns. Comparing the weeks directly before and after each airing revealed significant “channel migration”—that is, people viewed a televised spot but responded via a different media channel. As an example, the company’s website received a 900% increase in traffic during the week following televised ads compared with the week before. Incoming calls to Yellow Pages-listed telephone numbers increased significantly as well.
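A minimal sketch of the week-before/week-after comparison follows. The weekly inquiry counts per channel are fabricated to mirror the reported 900% website lift, not the insurer's actual data:

```python
# Sketch of the pre/post comparison behind the channel-migration finding.
# Weekly inquiry counts per channel are fabricated for illustration.
before = {"tv_trigger_line": 40, "website": 1_000, "yellow_pages_line": 120}
after = {"tv_trigger_line": 55, "website": 10_000, "yellow_pages_line": 300}

def pct_change(channel):
    """Percent change in inquiries, week after airing vs. week before."""
    return 100.0 * (after[channel] - before[channel]) / before[channel]

# Channels whose post-airing lift dwarfs the tracked TV trigger line:
migrated = [c for c in before if pct_change(c) > pct_change("tv_trigger_line")]
```

In a full analysis these week-over-week differences would be tested for significance rather than just compared as percentages.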
RESULT: Revenue generated from the campaign alone was $27M. While some revenue and subscription increases could be directly attributed to specific sources or triggers from the advertising campaign, the vast majority of revenue ($21M) migrated to media channels (website, general telephone number) other than the triggers listed in the TV spots. As a result, the company continued its campaign, generating similar results over the following six months.