Posted by infotrellislauren on Monday, Dec 15, 2014 @ 1:34 PM
© Phil Date | Dreamstime Stock Photos

There’s a metaphor I like to use about public washrooms. Have you ever been in one where the toilet flushes automatically, the soap dispenses automatically, and the water turns on and off automatically, but the dryer is manual? You stick your hands under it expecting it to be automatic too, and nothing happens, and it feels jarring and weird. That’s what’s going to happen to digital customer experiences and marketing best practices.

Let me elaborate.

Say it’s around the second week of December. Like many people at this time of year, I’m still working on my Christmas shopping. I open an email from a large bookstore chain I happen to have a loyalty card with – one of the few cards I actually use and carry around with me, and whose promotional emails I tolerate. In the email is an offer that says “Got friends around the world? Check out with this coupon and we’ll ship to three different locations for free when you spend more than $100!”

For me, I’d be thinking: “Holy smokes, that’s perfect!! I have lots of friends around the world! I would love to be able to ship to three different places in one purchase! That’s so convenient!”

That might not be something that would excite you, but that’s exactly why I got this email and you didn’t – even though I’m not aware of the targeting. It’s tailored specifically to me because the retailer knows this is an extremely relevant offer that will motivate me to make a large purchase.

So I click to get the coupon and it takes me to a “gift suggestion” page. And somehow, it’s only showing me gifts and books that my friends and family would like. It’s got science humour books, nerdy video-game-related books, and it even suggests a book with big glossy pictures of cars for the two people on my list of ten loved ones who really dig cars. Me personally, I don’t like cars that much – but this isn’t a list tailored to me anymore; it’s tailored to the people I most care about and would likely spend the most money on a gift for.

So here I am, sitting at my computer and thinking “WOW that is perfect for this person I care about, this one here is perfect for THAT person I care about, look at this I’m going to get all my shopping done in one afternoon,” and before I know it I have $250 of things in my basket.

It’s like a next-best-offer section, but super intelligently suggested.

How do they do that? They match my customer profile to my social media profiles – and they don’t just profile me, they determine which of my friends I pay the most attention to and then profile those friends too. My activity and relationships on Facebook, Twitter, LinkedIn and other sites tell them which friends I value most. They can then match those friends to their own internal customer records and present me with the “next best offer” that best fits each friend, based on that friend’s purchase history – without ever revealing that history to me. And if a friend has no customer profile with the bookstore, their social profiles still supply plenty of data about their likes and interests, enough to build a comprehensive picture of the kinds of books and other items they’d enjoy as gifts.
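
The matching logic described above can be sketched in a few lines. This is purely illustrative – no retailer’s actual recommendation engine is described in this post, and all names and data below are invented – but it shows the basic idea of scoring catalog items against a friend’s inferred interests:

```python
# Hypothetical sketch: rank gift suggestions by the overlap between a
# friend's interest tags (inferred from their social profiles) and each
# catalog item's tags. Names and data are invented for illustration.

def suggest_gifts(friend_interests, catalog, top_n=2):
    """Score each catalog item by how many of the friend's interests it matches."""
    scored = []
    for item in catalog:
        overlap = len(friend_interests & item["tags"])
        if overlap:
            scored.append((overlap, item["title"]))
    scored.sort(reverse=True)  # highest overlap first
    return [title for _, title in scored[:top_n]]

catalog = [
    {"title": "Glossy Cars of the Century", "tags": {"cars", "photography"}},
    {"title": "The Science of Jokes",       "tags": {"science", "humour"}},
    {"title": "Pixel Quest Companion",      "tags": {"video games", "nerdy"}},
]

# A friend profiled as a car enthusiast gets the car book suggested.
print(suggest_gifts({"cars", "travel"}, catalog))
```

A real system would weight interests by relationship strength and recency rather than treating all tags equally, but the shape of the problem – match a profile to a catalog, rank, surface the top few – stays the same.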

Now, this is the point where you think: “That sounds kind of creepy.”

Yes! Extremely creepy!

But useful to the consumer.

Which is why you would label the suggestions “What’s hot right now!” The shopper can only assume the rest of the world has the same taste as all their friends, which isn’t that big of a stretch if they have a wide variety of relationships with a wide variety of people. By knowing when to make the personalization obvious and when to be more subtle about it, you reduce the chance of making your customers uncomfortable.

Ultimately, the consumer benefits: they get the things they want without wading through irrelevant products, and the coupons or incentives they’re offered are always relevant or useful. It’s about making life easier for people – answering the question, “If you could use magic to make shopping better in ways you don’t believe are actually possible, what would you change or improve?”

Now of course, it’s arguable that improving your marketing relevance is less about making it easier for people and more about making it easier to target consumers to spend more money. It really ought to be both, ideally. When the consumer benefits, the seller benefits – the idea being that if you give people a better experience, they reward you with loyalty.

So yes, your end goal is money – you are a business – but at the same time, you differentiate yourself from other businesses by recognizing that every person is unique, and giving them “special treatment” by using technology capable of instantly customizing the experience.

Eventually the majority of companies will be capable of never, ever sending you something that is irrelevant to you. And once that’s the norm, when a company does send you something irrelevant, your reaction is likely to be, “Wow, company, get it together.” To return to the bathroom metaphor, you’ve gone from three automated interactions to one unexpectedly manual one. Before, you’d never have noticed the dissonance, because you’d never been given a different experience. But once the ball gets rolling and you’ve gotten used to it, the experiences you once thought of as normal will seem bizarrely outmoded and stand out.

So the next time you’re pushing through a crowded mall or clicking through an online catalog trying to figure out what the heck to get for the people on your Christmas list, stop to imagine a better world in which the retailer has suggestions for you that are genuinely helpful and designed for you and the people you want to see smile this holiday season.

Big Data technology is actually making this kind of automated and sophisticated microsegmentation possible. Maybe you’ll even see it in action this time next year.

InfoTrellis was founded by a team of three architects who have been shaping the Information Management market space since 1999. At InfoTrellis we develop cutting-edge technologies designed to tackle the new challenges facing the modern data-driven company: next-generation products for targeted marketing, highly customized and individualized loyalty programs, deeply detailed competitive analysis, and enriched, automatically updating customer profiles. To learn more about InfoTrellis, visit our website at www.infotrellis.com or contact us directly at info@infotrellis.com.

Topics: Big Data Customer Relationship Management Marketing Retail segmentation


Posted by infotrellislauren on Monday, Nov 24, 2014 @ 10:00 AM

“Recent research by McKinsey and the Massachusetts Institute of Technology shows that companies that inject big data and analytics into their operations outperform their peers by 5% in productivity and 6% in profitability. Our experience suggests that for retail and CPG companies, the upside is at least as great, if not greater.”

Peter Breuer, director of McKinsey & Co.’s retail practice in Germany

With November half over and 2015 starting to peek at us over the horizon, we decided it was time to take a look at a few examples of what retailers have been using big data for in 2014. Here are three examples of use cases for Big Data in retail that have emerged in the last year, followed by a few InfoTrellis predictions about what will happen next in the new year when it comes to the evolution of how companies are implementing their Big Data strategies.

Macy’s

Personalized Marketing

 

The main goal for Macy’s CEO, Terry Lundgren, is to offer a more localized, personalized and smarter retail customer experience across all channels. Among other things, Macy’s uses Big Data to create customer-centric assortments. They analyse a large number of different data points – out-of-stock rates, price promotions, sell-through rates and so on – and combine these with SKU data for a product at a given location and time, as well as customer data, in order to optimize local assortments for the individual customer segments in those locations.

In addition, Macy’s gathers, and of course analyses, a vast amount of customer data, ranging from visit frequency and sales to style preferences and online and offline personal motivations. They use this data to create a personalized customer experience, including customized incentives at checkout. What’s more, they can now send hyper-targeted direct mailings to their customers, including 500,000 unique versions of a single mailing. The results are compelling: with Big Data analytics, Macy’s e-commerce division alone has seen growth of over 10%, alongside overall annual revenue growth of 4%.

(Macy’s Is Changing The Shopping Experience With Big Data Analytics)

Personalizing the user experience is a ubiquitous use case for big data, so it’s exciting to see a retailer actually implementing the technology and proving out the value of this marketing strategy. Four percent growth is nothing to scoff at; it represents millions of dollars of profit they didn’t have before. For companies that still believe they can accurately segment hundreds of thousands of customers with fewer than a hundred profile archetypes, this is undeniable proof that they need to get on the bandwagon – unless they want those millions to come out of their share of the customer’s spending.

What’s more, this is a front-and-center application of big data that is highly visible to the shopper. Whether or not shoppers can articulate the difference in quality of experience between a retailer that uses it and one that doesn’t, it’s a difference they can intuitively feel, and they will react by rewarding one store with loyalty over the others.

Once companies start pulling social data and combining it with internal customer data, their targeting and micro-segmentation capabilities will enable even more uniquely tailored marketing and customer experiences. So long as companies remember that the purpose of this data-collection is to minimize friction and irrelevant messaging for their customers and never to manipulate them or milk them for money like a mindless herd, the consumer stands only to benefit from the evolution of this practice.

For this reason, my prediction is that this will be a big differentiator in the coming years as the companies that experiment with it first (i.e. the early adopters) get better and better, making the gap increasingly noticeable to the end consumer. There will be a scramble by the companies that lagged behind to try to catch up, and this will represent a big shift for the retail industry’s established best practices in much the same way the idea of the loyalty program and the digital storefront did.

LUSH

Supply Chain Efficiencies

 

LUSH Fresh Handmade Cosmetics used big data technology in 2014 to drive in-store profitability and deliver savings of over £1 million in stock loss. Working with many datasets – retail data within EPOS systems, supply chain and stock management, payroll and timesheet systems for staff management – LUSH sought and implemented a technology platform that could be used by employees at every level throughout the business to provide access to relevant sales, stock, store and staff information to improve performance.

The BI tool is deployed across the entire LUSH organisation. The retail accounts team uses their new platform to dig into the numbers behind the sales in each shop, to keep an eye on ledgers, petty cash and every other aspect of running the business financially.

The technology is used in all the stores, so shop managers and employees on the shop floor have access to hourly updates. The big data technology is also used in LUSH’s manufacturing department for stock management, allowing the team to coordinate orders between the factory and the shops and keep an eye on stock positions around the country. Gathering and analysing big data has also given LUSH a clearer view of its sales and stock, which has led to improvements in forecasting and in sourcing key ingredients from suppliers.
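
The article doesn’t describe LUSH’s actual algorithms, so as a purely illustrative sketch, here is one of the simplest forms the forecasting it mentions could take: a trailing moving average of weekly sales used to set a reorder quantity for an ingredient.

```python
# Illustrative only – a generic demand forecast, not LUSH's real system.

def moving_average_forecast(weekly_sales, window=4):
    """Forecast next week's demand as the mean of the last `window` weeks."""
    recent = weekly_sales[-window:]
    return sum(recent) / len(recent)

def reorder_quantity(weekly_sales, on_hand, safety_stock=10):
    """Order enough to cover forecast demand plus a safety buffer."""
    needed = moving_average_forecast(weekly_sales) + safety_stock - on_hand
    return max(0, round(needed))

sales = [120, 135, 128, 140, 150, 145]   # units of one ingredient sold per week
print(moving_average_forecast(sales))     # 140.75
print(reorder_quantity(sales, on_hand=60))  # 91
```

Even a model this naive beats guesswork once it is recomputed hourly against live sales data, which is the kind of responsiveness the LUSH example is really about.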

(LUSH Fresh Handmade Cosmetics saves £1 million with user-driven BI)

Big Data doesn’t just have to mean customer-obsessed data like social media or purchase data, and LUSH has demonstrated excellently how better managing and understanding the “big data” generated by business processes can lead to the increased efficiencies that save companies real money. The above quote doesn’t go into a great amount of detail about this, but LUSH is also apparently giving their in-store staff the power to make decisions about the physical customer experience around displays and promotions using insights gained from this data.

The big advantage here is the immediacy of the data and its accessibility across the organization. Giving in-store employees both the ability and the direction to make swift data-driven changes means real-time responsiveness – key to capitalizing on interesting correlations, like how shopping behavior changes in certain weather, or to adjusting a marketing strategy on the fly when a regional tweak can boost sales.

The more data they have and the closer they get to real-time, automated analysis, the better companies like LUSH get at managing their backend operations. This isn’t something that the customer will see as clearly as a customized marketing experience, but when a store always magically has the kinds of things they’re hoping to buy and then a few other things they didn’t realize they wanted until they laid eyes on them, repeat visits are likely just for the convenience of the inventory. Customers don’t much care about whether a store loses money on excess, unwanted inventory, but they’re unknowingly benefiting just as much as the company is when predictive analytics can prevent that unfortunate occurrence.

Going forward, these kinds of clever algorithms stand to get better and better as companies pull in data from outside the organization, using social media to understand demand and competition, with more granular breakdowns by region and even by individual store location. I don’t anticipate that the rush to get in on this use of Big Data technology will be quite as dramatic as the push to invest in its marketing applications over the next few years, but it’s a tangible, valuable use case that will appeal to the more pragmatic of executives. We will likely see slow but steady growth in the number of companies implementing Big Data technology to improve their BI behind the scenes.

Starbucks

Maximized Profitability of Store Locations

Understanding the pools of information pouring into the databases of Starbucks has become a major focus at the international brew chain, even though its high-profile CEO doesn’t much care for it. “Howard [Schultz] doesn’t care about data. He has absolutely no head for data,” said Joe LaCugna, director of analytics and business intelligence at Starbucks during a session at the Big Data Retail Forum in Chicago.

A full quarter of Starbucks transactions are made via its popular loyalty cards, and that results in “huge amounts” of data, Mr. LaCugna said, but the company isn’t sure what to do with it all yet. The same goes for social media data, he said. Starbucks has a team that analyzes social data, but, “We haven’t figured out what exactly to do with it yet,” he said. It’s a common refrain among brands, and among many of the speakers and attendees at the conference.

 (At Starbucks, Data Pours In. But What to Do With It? – Published in 2013)

In 2007 and 2008, Starbucks’ CEO Howard Schultz was forced to come out of retirement to close hundreds of stores and rethink the company’s strategic growth plan. This time around, Starbucks took a more disciplined, data-driven approach to store openings and used mapping software to easily analyze massive amounts of data about planned store openings.

The software analyzed location-based data and demographics to determine the best places to open Starbucks stores without hurting sales at other Starbucks locations. The software is also helping to determine where the next 1,500-plus stores should be placed, not only to help the company expand but also to drive revenue for new store developments.

 (How Big Data Helps Chains Like Starbucks Pick Store Locations — An (Unsung) Key To Retail Success – Published in 2014)

Starbucks is an interesting case because it’s collecting all the right data to implement something like what Macy’s is doing, but doesn’t seem particularly motivated to try it just yet. Indeed, a year ago they admitted they didn’t have a clear strategy for using the data generated by their loyalty program.

This year, they’ve put forward a clear use case that benefits them in an area that is very important to the franchise: store location. For retailers, this has always been an absolutely essential part of decision-making when it comes to growing their profits and customer base year over year. It makes sense that Starbucks, having lost so much money on bad store-location calls in the past, would choose to use their data to determine where they can gain the most long-term growth and profit when planning new locations.

This is a great example of a use for Big Data that isn’t as intuitive as the customer-experience-related ones. It’s the kind of use case that speaks to CEOs like Schultz – nothing fancy or weird, just better quality and higher quantities of information being used to answer a question they would have been asking anyway as part of their overall strategy. It goes to show how powerful Big Data can be when you use it to approach an established goal from a new direction.

It does, however, raise a few eyebrows. Starbucks is considered an industry leader in many regards, but the reluctance to use their glut of data for more than just location planning could be a serious miscalculation by the chain. My prediction is that one of their competitors will figure it out sooner than they do and offer a data-driven, social-media-integrated loyalty program that tailors its messaging and rewards at a clearly higher level of sophistication, and when this happens Starbucks will either swiftly release an imitation or else find themselves with a significantly reduced grip on their core market.

Still, that’s not to say that it’s a bad idea to be using their data in the way they are now, and I expect other retailers will adopt the methodology in 2015 – many of them already are.

What to Expect in 2015

We found that 62 percent of retailers report that the use of information (including big data) and analytics is creating a competitive advantage for their organizations, compared with 63 percent of cross-industry respondents.

To compete in a consumer-empowered economy, it is increasingly clear that retailers must leverage their information assets to gain a comprehensive understanding of markets, customers, products, distribution locations, competitors, employees and more. In this industry deep dive, we examine industry-specific challenges, as well as provide our top-level recommendations for retail organizations.

IBM

 

The number of retailers initiating Big Data projects jumped sharply in 2014 compared to 2013. Adoption rates are increasing steadily; the industry will soon reach a tipping point, and then the arms race will begin.

Although there has been plenty of hype around Big Data this year, the explosion of in-earnest technology implementations has yet to begin. 2015 looks like it will either be the beginning of this tipping point or will be the year where one or two huge successes will hit the news and initiate the rush to follow suit.

Core to these major successes will be the ability to source both internal and external data for these analytics. The next-best-offer that can build on data about you from your Twitter and Facebook profile, sharpening its understanding of your wants and needs, will have a clear advantage over algorithms that only use past purchase data. The loyalty program that understands what kind of incentives motivate you using the same kind of connected information across multiple platforms will similarly benefit.

We expect to see early adopters achieve great success by leveraging the in-depth understanding gained from social media to target their customers or loyalty program members as individuals. We also expect that the results (and market-share gains) of those early adopters will kickstart similar projects at a larger number of retailers.

We also anticipate an increasing number of implementations of new micro-segmentation models. Retailers using classic segmentation approaches (one classic model, for example, uses just 66 segments and updates their characteristics only once per year) will start considering big-data-enabled dynamic segmentation models with a much larger number of segments, updated potentially weekly, to better reflect the dynamic nature of today’s competitive environment.
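
To make the contrast concrete, here is a minimal sketch (not any vendor’s actual model, and with invented data) of one dynamic micro-segmentation scheme: score each customer on recency, frequency and monetary value, bucket each score into quintiles computed from the current data, and join the buckets into a segment code. Re-running it weekly lets segments shift with behaviour instead of staying fixed for a year.

```python
# Hedged sketch of dynamic RFM micro-segmentation; all data is invented.

def quintile(value, sorted_values):
    """Return 1-5 depending on which fifth of the distribution `value` falls in."""
    n = len(sorted_values)
    rank = sum(1 for v in sorted_values if v <= value)
    return min(5, 1 + (rank - 1) * 5 // n)

def rfm_segments(customers):
    """Assign each customer a three-digit R-F-M code derived from current data."""
    r = sorted(c["days_since_purchase"] for c in customers)
    f = sorted(c["orders"] for c in customers)
    m = sorted(c["spend"] for c in customers)
    segments = {}
    for c in customers:
        recency = 6 - quintile(c["days_since_purchase"], r)  # more recent = higher
        freq = quintile(c["orders"], f)
        money = quintile(c["spend"], m)
        segments[c["id"]] = f"{recency}{freq}{money}"
    return segments

customers = [
    {"id": "a", "days_since_purchase": 3,  "orders": 12, "spend": 900},
    {"id": "b", "days_since_purchase": 40, "orders": 2,  "spend": 80},
    {"id": "c", "days_since_purchase": 10, "orders": 6,  "spend": 300},
    {"id": "d", "days_since_purchase": 90, "orders": 1,  "spend": 40},
    {"id": "e", "days_since_purchase": 20, "orders": 4,  "spend": 150},
]

print(rfm_segments(customers))  # "555" = recent, frequent, high-spending
```

Five buckets on each of three dimensions already yields up to 125 possible segments – roughly double the 66 in the classic model – and the bucket boundaries move every time the job re-runs on fresh data.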

All in all, 2015 looks like it will be a very exciting time for retailers.

Topics: allsight Big Data Big Data Analytics bigdata Customer ConnectId Data Lake for Retail Retail


Posted by infotrellislauren on Wednesday, Jan 29, 2014 @ 11:29 AM

This is an abridged 4-page summary of the full 25-page whitepaper, which can be found in its entirety at http://infotrellis.wpengine.com/insight.php

 

Why Should I Change My Loyalty Program?

 

Hotel industry loyalty programs are failing to promote true loyalty.

 

Airlines and hotel chains – widely regarded as the masters of the loyalty program – are faring no better than the rest of the business world in terms of actual customer loyalty. “According to our survey of 4,000 travelers, hotel loyalty program members are not loyal to their preferred brand and loyalty programs drive undesirable brand-switching behavior.” (Deloitte)

Despite the industry-wide investment in rewards programs, the impact on sales numbers has room for improvement. “Travelers spend as much as 50 percent of their spend with non-preferred brands and 65 percent of high frequency travelers report having stayed with two or more brands in the past six months.” (Deloitte)

Hotel loyalty programs aren’t delivering ideal results. “The best-case scenario is that hotel loyalty programs as they are constituted today have either little or no impact on travelers’ purchase decisions, and, worst case, these programs drive undesirable brand-switching behavior.” (Deloitte)

There is a glut of identical loyalty programs with no meaningful differentiation.

 

Hotel Loyalty Program Memberships in 2012 reached an approximate total of 223,550,000. (COLLOQUY) Nothing is stopping those customers from subscribing to every hotel loyalty program that appears before them and, indeed, many do. “Our research found that approximately 45 percent of hotel travelers, and 80 percent of high frequency hotel travelers, hold two or more loyalty cards. Of the high frequency travelers, 41.6 percent are members of four or more loyalty programs.” (Deloitte)

In particular, the notion of collecting points towards a reward is no longer unique or especially motivating. “Accumulating reward points towards a free night’s stay was meaningful at one time—before the landscape became saturated with loyalty programs and consumers’ kitchen counters were littered with account numbers and point-summary statements.” (Deloitte)

 

 

Customers are expressing a desire for better experiences, not better prices.

 

In the US hotel industry in just 2013, 6% of hotel customers switched preferred hotels due to an inferior customer experience. (Accenture) That 6% represents billions of dollars of lost revenue.

50% of customers who switched could have been retained simply by being made to feel more appreciated. (Accenture) Customers who would forsake a better price for a better experience are not a gentle-hearted minority: 31.7% of mobile-savvy customers surveyed by Aimia – many of them from a generation traditional wisdom considers fickle and price-motivated – fell into the category of “Experience-Seekers,” who “value the best experience, not just the price.” More customers fell into this category than any other. (Aimia)

This is no small trend – all referenced studies made a positive link between better customer experience and higher brand loyalty. “Past customer experience trumps loyalty programs. High frequency travelers rated past experience as being the most important attribute to their overall hotel experience.” (Deloitte)

 

Billions of dollars in unclaimed loyalty and wallet share are waiting to be captured.

 

“Genuine loyalty drives share of wallet, migrates customer behavior, and, ultimately, enhances shareholder value.” (Deloitte) There is undeniable potential for revenue increase for those companies who can secure the loyalty of their competitor’s customers.

24,590,500 hotel loyalty program subscribers are highly likely to switch and do not feel compelled by today’s loyalty program models. The total unaffiliated, at-risk wallet share from hotel loyalty program members without strong attachments to any one brand is approximately $20 billion per year (see appendix). In other words, $20 billion of annual hotel spend is up in the air, not yet “claimed” by loyalty to any one hotel chain.
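
The appendix itself isn’t reproduced in this summary, but the two headline figures imply an average at-risk spend per member that is easy to sanity-check:

```python
# Quick check of the quoted figures (derived here, not taken from the appendix).

at_risk_members = 24_590_500
annual_at_risk_spend = 20_000_000_000  # USD per year, from the summary above

implied_spend_per_member = annual_at_risk_spend / at_risk_members
print(round(implied_spend_per_member))  # roughly $813 per member per year
```

At a few hotel nights a year per traveler, roughly $813 of annual at-risk spend per member is a plausible order of magnitude, which is all this back-of-the-envelope check is meant to show.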

 

 

How Should I Change My Loyalty Program?

 

Differentiate from your competition by personalizing and customizing interactions.

 

If a hotel wants their loyalty program to be memorable and unique, this represents an opportunity to get ahead of the competition. “Even in industries such as hotels only 36% of customers acknowledge receiving a tailored experience.” (Accenture)

“To build affinity and loyal customers, hotels should consider reinventing what their customers overwhelmingly consider to be uninspired loyalty programs that lack personal and customized experiences.” (Deloitte)

It is the superior customer experience that will help to secure the loyalty of the younger generation, too. “Empowered by technology and influenced by social media, [members of the new generation of travelers] make informed travel decisions and are likely to give their attention to hotels with personalized, differentiated loyalty programs.” (Deloitte)

 

Leverage a deep understanding of the customer by adopting a more data-driven approach.

 

Traditional wisdom and common assumptions about customer segmentation are not effective in understanding the modern customer. “Many businesses fail to utilize valuable consumer data collected at enrollment and point of purchase to differentiate their loyalty programs across customer segments.” (Deloitte)

Organizations are increasingly turning to data mining to drive better business decisions and customer experiences. “More and more companies are seeing the value of offering loyalty programs and – more importantly – the value of tracking, reporting, and drawing actionable insights from customer data.” (COLLOQUY) This data is absolutely essential for providing each customer with the ideal experience at every point of contact with the company.

“Mining [customer] data will likely produce a rich understanding of discrete customer segments with distinct service preferences. These data-driven insights can be used to determine which customers’ brand loyalty is critical to build and maintain.” (Deloitte)

The loyalty program revolution will happen; the early adopters will profit the most.

 

“We expect the entire loyalty industry to grow, on average, in the years to come. But those companies that study the data […] will be the ones to finish first in terms of growth, and will make the most of the economic comeback.” (COLLOQUY) This next step in loyalty program evolution, says the research, is all but inevitable. Those companies that hesitate and lag behind will likely find themselves losing their “at-risk” loyalty program members to more data-driven programs that deliver a highly customized and ultimately more impressive customer experience.

 

For the full article, which includes the full appendix, please download the unabridged version of this whitepaper. http://infotrellis.wpengine.com/insight.php

 

Bibliography

Deloitte: A Restoration in Hotel Loyalty: Developing a blueprint for reinventing loyalty programs

Deloitte: Rising above the Clouds: Charting a course for renewed airline consumer loyalty

Bulking Up: The 2013 COLLOQUY Loyalty Census

Accenture 2013 Global Consumer Pulse Survey

Aimia: Showrooming and the Rise of the Mobile-Assisted Shopper

Topics: Big Data Customer ConnectId Customer Loyalty hospitality hotel industry loyalty loyalty programs Retail social media whitepaper


Posted by infotrellislauren on Thursday, Nov 21, 2013 @ 8:50 AM
Topics: Big Data canadian retailers infographic Retail Social Cue social media


Posted by infotrellislauren on Monday, May 6, 2013 @ 11:18 AM

The availability of Big Data is changing the way companies interact with the people who make up their customer base, and changing it rapidly. Some of these changes we’ve seen in embryonic form for many years, as CRM systems try to better collect and share information about customers and web analytics provide new tools for measuring marketing metrics. Organizations that wanted to target women over thirty learned to place ads in magazines whose readerships reflected that audience. Toy companies learned to book ad space on the TV channels with the most colorful cartoons. The bright minds behind political campaigns learned to identify swing voters and target the publications they read, the channels they watched, and the radio stations they listened to. We constantly surge forward in the levels of sophistication we can apply to targeting the people we want to receive our message.

 

The problem with trying to do this with high levels of accuracy has always been us. Our brains aren’t capable of quickly sorting through huge amounts of information about people and then using little bits of knowledge to accurately categorize them. It just isn’t possible – not in any reasonable amount of time. The moment computers get involved, though, the process becomes a lot more feasible. First, customer segmentation becomes possible: with what limited information a company can collect on their suspects, prospects and customers, they can group them up and market more effectively by tailoring their message and their offering to the general characteristics of each group. This isn’t too bad a model for business-to-business, but when the end consumer comes into play and insists on being an individual, things get more complicated.

 

Our traditional understanding of the customer has always been incredibly limited by either quantity or quality. Hundreds of years ago a merchant might know his or her customers with intimate detail through personal interaction – and some small business owners still do. They could craft custom sales offers on the spot simply by knowing the customer well. “Hey, George, I haven’t seen you in a few weeks. New baby must really be sucking up your time. Hey, you know what I bet you need. Some good coffee. You look tired. Tell you what, I just got some new stock in of this really good coffee, strong delicious stuff. Let me throw in a little sample of it free with your usual order.”


That’s the most powerful kind of marketing, and what I would call “data-driven marketing” – it just so happens that all that intimate customer data is stored inside our shopkeep’s brain, not in a database somewhere. The problem with this scenario is that the shopkeep can only remember this level of information about so many people. With twenty or so regulars, that’s not a problem – as many as a hundred, if the shopkeep is sharp. The more customers he has to remember, though, the less detail and intimacy he’s able to retain, and the harder it is to treat each one as a friend and accurately anticipate their needs and desires. You either have to compromise on the quantity of the data (remember only a few people in high detail) or on the quality of the data (remember lots of people without any meaningful detail).


These days, when a single organization may have millions – even billions – of individual customers, companies have no choice but to lean towards quantity. They’ve started to grasp at that ‘personal touch’ they once had in their humble roots as a Mr. Hooper-esque friend and advisor, but it isn’t easy. Even with computers to collect all this data, often the best they can do is divide their ten million customers into primitive groupings based on a high-level categorization like age or income bracket – which we’re starting to recognize aren’t very meaningful classifiers for targeting marketing messages. Marketing departments often don’t have the manpower to go beyond that very simple segmentation, though, because at the end of the day a human, not a computer, has to decide how to divide up these groups and define the various markets. And as we’ve established, the capacity of the human brain is incredibly limited.
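The “primitive grouping” described above can be sketched in a few lines. Everything here is invented for illustration – hypothetical customers bucketed purely by age bracket – and the point is how crude the result is: two people with identical interests land in different buckets.

```python
# A minimal sketch of demographic-only segmentation. All names, ages,
# and purchases are hypothetical, for illustration only.

CUSTOMERS = [
    {"name": "Alice", "age": 24, "last_purchase": "sci-fi novel"},
    {"name": "Bob",   "age": 37, "last_purchase": "car magazine"},
    {"name": "Carol", "age": 41, "last_purchase": "sci-fi novel"},
    {"name": "Dave",  "age": 29, "last_purchase": "coffee sampler"},
]

def age_segment(customer):
    """Assign a coarse demographic segment from age alone."""
    return "18-29" if customer["age"] < 30 else "30+"

segments = {}
for c in CUSTOMERS:
    segments.setdefault(age_segment(c), []).append(c["name"])

# Alice and Dave share a bucket, Bob and Carol share the other,
# even though Alice and Carol buy exactly the same kind of book.
```

A real system would substitute income, region, or whatever high-level attribute the marketing team settled on; the blind spot stays the same.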


Computers, however, are getting smarter. With the steady advance of Machine Learning and Natural Language Processing technologies, our wonderful little robot assistants are becoming more and more adept at identifying significant patterns without our direct intervention and helping us see pathways for smarter, more targeted marketing and sales efforts. Some of what is being accomplished with a handful of very clever algorithms and well-built platforms is beyond impressive – it’s stuff that seems pulled right out of a science fiction novel. Walmart figured out that people buy more Pop-Tarts when they know a hurricane is coming, and took advantage of this to drive dramatic sales boosts by having the right product in the right place at the right time. Target can use innocuous purchase data to deduce pregnancy. MIT has put together an analytics tool that can supposedly determine a person’s sexual orientation.
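A toy version of the Walmart anecdote above: given fabricated transaction logs, a simple “lift” calculation flags that Pop-Tarts purchases are unusually likely when a hurricane warning is active. Real pattern-mining systems do this across thousands of products at once, but the core arithmetic is just conditional probability against a baseline.

```python
# Fabricated transaction log: did a hurricane warning coincide with
# a Pop-Tarts purchase? All data here is invented for illustration.
transactions = [
    {"hurricane_warning": True,  "bought_poptarts": True},
    {"hurricane_warning": True,  "bought_poptarts": True},
    {"hurricane_warning": True,  "bought_poptarts": False},
    {"hurricane_warning": False, "bought_poptarts": True},
    {"hurricane_warning": False, "bought_poptarts": False},
    {"hurricane_warning": False, "bought_poptarts": False},
    {"hurricane_warning": False, "bought_poptarts": False},
    {"hurricane_warning": False, "bought_poptarts": False},
]

def prob(rows, key):
    """Fraction of rows where the given boolean field is True."""
    return sum(r[key] for r in rows) / len(rows)

# Lift = P(pop tarts | hurricane) / P(pop tarts): a ratio well above 1
# flags an association worth acting on (stock the shelves before the storm).
hurricane_rows = [r for r in transactions if r["hurricane_warning"]]
lift = prob(hurricane_rows, "bought_poptarts") / prob(transactions, "bought_poptarts")
```

A lift well above 1.0 (here roughly 1.8) is the signal to get the right product to the right place at the right time.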


So, having grappled in the last few decades with the sheer immensity of the number of customers they need to remember (never mind picking out the important information about each one), organizations at last have the technology to deal with all of this data. With a “brain” capable of handling the three Vs (volume, variety, velocity), they can start working their way back to that cheerful, familiar shopkeep status. The sooner they can say, “Hey George, been a while since you were last at Walmart – how’s the baby? Bet you’d like some coffee. How about a personalized coupon for half off this new coffee from your favorite brand, sent right to your phone?” the better.


Most sales and marketing people would call this practice “microsegmentation”, but the more I dive into the motivation behind using Big Data for sales and marketing, the more I think the term misses the point. “Microsegmentation” just sounds like nitpicking over customer segmentation for the sole reason that we can. Technology gets smaller and more compact, so obviously standard segmentation will go the way of the SD card and get all micro on us, because that’s just how it goes.


What we seem to be forgetting is that the language we use shapes the way we think, and this language completely overlooks the point of what we’re trying to do. The end goal isn’t to make our defined markets smaller and more specific. The end goal is to get back to a point where we interact with our customers like valued individuals again. It isn’t segmentation for segmentation’s sake – there’s a reason for doing all this, and the reason is stronger customer relationships, deeper brand loyalty, more effective customer service, and more trustworthy and accurate recommendations to the customer. We want to group our customers in ways that give them what they actually want from us – not what we assume they probably want based on something arbitrary like what year they were born or whether they use the public washroom door with the pants or the door with the dress.


For that reason, I’m rejecting the term “microsegmentation” for what we’re trying to do with Big Data analytics. Instead, I’m calling it “anthrosegmentation” (from the Greek “anthropos”, meaning “human”): the principle of highly tailored sales and marketing campaigns with the ultimate goal of treating customers like individual human beings rather than faceless members of a crowd. Anthrosegmentation is about using technology to offer highly customized, individualized interactions with customers and patients, rather than painting them all with the same brush.
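A minimal sketch of what anthrosegmentation could look like in code, with invented customers and a hypothetical product-to-interest mapping: people are grouped by what their purchase history says they care about, not by what year they were born.

```python
# Behaviour-driven grouping: segment customers by the interests their
# purchases reveal. All names, products, and the CATEGORY mapping are
# hypothetical, for illustration only.
from collections import Counter

purchase_history = {
    "Alice": ["sci-fi novel", "physics humour", "sci-fi novel"],
    "Bob":   ["car photo book", "car magazine"],
    "Carol": ["sci-fi novel", "video game art book"],
    "Dave":  ["car magazine", "car photo book"],
}

CATEGORY = {  # hypothetical product-to-interest mapping
    "sci-fi novel": "nerd culture",
    "physics humour": "nerd culture",
    "video game art book": "nerd culture",
    "car photo book": "cars",
    "car magazine": "cars",
}

def dominant_interest(items):
    """The interest a customer's purchases most often fall into."""
    return Counter(CATEGORY[i] for i in items).most_common(1)[0][0]

segments = {}
for name, items in purchase_history.items():
    segments.setdefault(dominant_interest(items), []).append(name)
```

The fictional Alice and Carol end up in the same segment because they buy the same kinds of things, regardless of any demographic attribute.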


Anthrosegmentation is the confident and excited reply to that basic demand of every patient, customer, and citizen: treat me like a human being. This is why Big Data is a big deal to retailers, governments, financial institutions, and every company from corner store to corporation – they can finally get back to a highly personalized and tailored customer experience. We’re moving away from clunky, outdated modes of thought that were stunted by the limitations of our data and our technology. As those limitations are rapidly overcome, we need to remember that keeping up with technology doesn’t just mean inventing new words – it means inventing new ways of thinking about what we’re doing, and never losing sight of why we’re doing it.


Big Data presents a multitude of opportunities for improving and innovating around how we do business, and customer segmentation is just a subsection of that opportunity. For an overview of some of the exciting use cases we’ve seen so far, stay tuned for our upcoming article.


InfoTrellis is a premier consulting and product development company in the information management industry. With our deep heritage in Master Data Management, we bring rigorous data quality best practices to our Big Data products and solutions. Visit our website or contact us directly to learn more about our Social Cue™ and Human Profile™ Big Data solutions or to schedule a product demo.



Posted by Khurram on Tuesday, Jul 31, 2012 @ 12:03 PM

Why upgrade?
Clients ask me all the time: why? Why do we need to upgrade? We are happy with the way the software is working. If it ain’t broke, don’t fix it!

This is what I tell them…
There are a variety of reasons why one might choose to upgrade. Usually, it is our business needs that are evolving, and we want to take advantage of the new features in the product. Sometimes it is not the business but our technical needs that are evolving and pushing us towards an upgrade. Then again, there are times when we upgrade not so much because of change but because the current version of the product has reached the end of its life cycle and will no longer be supported (see below).

Version – End of Service Date
  • IBM Initiate Master Data Service v6.x – December 31, 2009
  • IBM Initiate Master Data Service v7.0, v7.2 – December 31, 2010
  • IBM Initiate Master Data Service v7.5 – June 30, 2011
  • IBM Initiate Master Data Service v8.x; IBM Initiate Address Verification Interface v8.x – September 30, 2013
  • IBM Initiate Master Data Service v9.0, v9.2; IBM Initiate Address Verification Interface v9.0, v9.2 – September 30, 2014

Change for the sake of change is not necessarily a value-driven MDM strategy. While it is true that the MDM strategy in many organizations paves the way for other initiatives, it is also true that in order to derive the most value out of any initiative, we should have a holistic view of all changes so that a synergistic approach can be taken to decide which changes should be implemented. Each change should add to the overall synergy of the solution, not take away from it.

Let’s Initiate®
As with most software, there are no groundbreaking changes from one version to the next, but when we look at the breadth of changes across multiple versions, a strong case can be made for upgrading to the latest. Let’s look at a few upgrade scenarios that outline some of the major changes between the different releases.

Version and key features

10.0

  • MDM Name and Packaging: Initiate v10 is part of the IBM InfoSphere MDM suite of products. The combined packaging provides easier methods to move from one platform to the other.
  • BPM Express: configure workflow-based solutions that support data stewardship and data governance applications.
  • Automated Weight Management: the Workbench guides the process of determining if the weights are appropriate for the data set, based on guidelines and rules developed by IBM’s data scientists.
  • Workbench Simplification: modifications to weight generation, algorithm configuration, and threshold calculation functionality have been made to reduce time and simplify hub configuration and deployment.
  • GNR Name Variants Integration (embedded in v10): same as 9.7, but GNR is now embedded in v10.
  • MDM Application Toolkit: formerly Initiate Composer, the MDM Application Toolkit is a library of MDM application building blocks (business components or widgets) that make MDM capabilities available to end users. It helps development teams, customers, and partners accelerate the development of MDM-powered applications.
  • Event Notification: by using event notification, you can expose the changes made in the MDS to external applications or workflow systems (such as BPM Express v7.5, available with MDM v10).
  • Linkage Search: the Inspector tool now allows users to search for entities using a variety of different criteria (similar to task searches). This new functionality enables data stewards to inspect entities that have been autolinked or manually linked to verify the quality of the linkages.
  • Algorithm functions for Chinese names: the 10.0 release introduces the CHINESE standardization, comparison, and bucket generation functions to support searching, matching, and linking by the Master Data Engine.
  • Relationship Linker Performance: the batch relationship linker (RelLinker) process has been modified to improve scalability and performance.

9.7

  • IBM Initiate Provider Direct (not part of the standard edition): a web-based application that enhances IBM Initiate Provider by offering organization-wide access to data about physicians, care organizations, nurses, and other care providers, and supports more collaborative interaction between these provider groups.
  • Flexible Search: an additional search capability built into the Master Data Engine that is independent of the heritage search capability. Multiple query operation types are supported, for example wildcards, Boolean queries, inexact queries, range queries, etc.
  • GNR – Global Name Recognition (separate license): provides a list of global name variants. Name variants can be used by the Master Data Engine during candidate selection to provide better matches.

9.5

  • Performance Log Manager: the ability to monitor system performance is vital for alerting operations staff to potential issues or providing clues to resolving existing ones. The Performance Log Manager can easily capture MDS information during a given interval and output the results in a web-based report.
  • Multi-threaded Entity Manager: the entity manager has become a multi-threaded process for increased overall efficiency of the entity management process.
  • International Data Accuracy: enhancements to algorithm functions have been made to increase MDS accuracy in comparing and linking international names, addresses, and phone numbers – specifically for name parsing, custom phonetics, bucketing on partial strings, and date ranges. These enhancements can also provide better accuracy for local data.
  • Initiate Composer: a unified development environment used to quickly build lightweight but robust applications to display, access, and manipulate the data managed by IBM Initiate® Master Data Service.

9.2

  • Interceptor tool: enhancements to the Interceptor tool enable speedier upgrade and maintenance of the Master Data Engine by recording interactions executed on one Engine and replaying those interactions on other Engines.
  • Entity Management by Priority: customers can set the priority of records that enter the entity management queue (e.g. data from real-time systems gets higher priority than data from batch systems).
  • Initiate Composer: a unified development environment used to quickly build lightweight but robust applications to display, access, and manipulate the data managed by IBM Initiate® Master Data Service.
  • Compensating Transactions: compensating transactions via the MemUnput and MemUndo interactions enable the rollback of a member insert or update. These interactions are available for the Java and .NET SDKs.

9.0

  • Advanced issue management: allows customers to implement and manage custom data issues, also called implementation-defined tasks (IDTs) or custom tasks.
  • Initiate Enterprise SOA Toolkit: the 9.0 release introduces the Initiate Enterprise SOA Toolkit, which provides a Java object API and Web Services interface to the Initiate Master Data Service.
  • AES encryption and IPv6 support: to increase security, the Initiate Master Data Service now supports the Advanced Encryption Standard (AES) and Internet Protocol (IP) v6.

Upgrade Recommendations

  • We recommend that all clients on version 8.x or earlier upgrade to version 10, the most recent version. While a targeted upgrade to version 9.5 or 9.7 is possible, moving to version 10 gives a longer span before the next upgrade is necessary.
  • Since version 10 is part of the IBM InfoSphere MDM suite of products, it provides certain advantages when moving between platforms. Almost 50% of the effort can be reused when migrating from one platform to another.
  • All components listed under the Initiate MDS platform must be upgraded during any major or minor upgrade process, as per IBM’s recommendation.
  • Special consideration needs to be given to custom processes or code, e.g.:
    • Engine Callouts: provide the ability to intercept and modify existing Engine behavior either pre- or post-interaction.
    • Custom Search Forms: customized search screens in the data steward tools (Inspector and/or EnterpriseViewer) need to have their customizations moved to the upgraded solution.
    • API code (Java or Web Services): ensure that existing functionality is unaffected.
  • To minimize downtime, we recommend doing as much of the upgrade in parallel as possible; however, some downtime will still be inevitable.

Are we ready (to upgrade)?
We may not realize it, but staying with the current solution and upgrading to the latest are two distinct decisions, not one. Either decision will have a lasting impact on the vision and the mission of the organization. Regardless of the decision, the approach we take should not only be vetted by industry experts but ideally be created with their help. The right experts can help you validate that even if the decision is to stay with the current solution, the solution will not adversely impact the organization, and that it remains in line with the organization’s vision and mission. The right experts will have the Initiative needed to relate, evaluate, locate, link, and identify subjects for the organization.

Decision…
At the end of the day, it really comes down to what we need today and what we might need tomorrow. Regardless of the multitude of new features, the question remains: are they enough to warrant an upgrade? Are we making substantial strides towards our goals? Are the new features relevant to our current and/or future business needs? Should we upgrade even if there isn’t a lot of value today? The decision rests solely with you.

However, regardless of the decision, we should keep this principle in mind:

“Change when change isn’t absolutely necessary gives us the luxury to plan, procure, and implement not only what we need today but also what we will need tomorrow. On the other hand, change when change is absolutely necessary forces us to put a band-aid on the issue and just fix the problem(s) at hand.”

Who am I, and why am I saying this?
I am a Sr. Initiate Consultant at InfoTrellis with a long history in Initiate MDS. I started working on the Initiate MDS platform before it became IBM Initiate MDS, and I have seen the product grow from early versions with a limited feature set to the very mature and robust product it is today. I have worked with a number of healthcare and other clients over the years. On almost all of my projects where Initiate was the client’s first MDM product, the client was hesitant when first starting to work with it. As time went by, the (proverbial) light bulb would go on and the client would start to “get” the potential of what could be. It was often hard to quantify every single iota of value before the project was implemented; however, in my experience, there was seldom a client that did not derive more value from the implementation than what was initially targeted in the project scope.

Today, I am not directly connected to IBM, but I am still very much involved in the MDM industry, and I remain a strong and vocal supporter of the Initiate platform and the related services that developed during my tenure with Initiate and then IBM. These days I am working with a dynamic and smart group of MDM specialists at InfoTrellis to help organizations realize their true destination as they travel on their MDM journey. (more details, later…)


