At 3:30am one March morning in 2011 in California, Tony Prophet, senior vice president for operations at Hewlett-Packard, was awakened and told that an earthquake and tsunami had struck Japan, a vital supplier of parts and equipment for major industries like computers, electronics and automobiles. Soon after, Prophet set up a virtual situation room so that managers in Japan, Taiwan and the United States could rescue HP’s global supply chain, which at $65 billion per year feeds a huge manufacturing engine churning out two personal computers a second, two printers a second and one data-center computer every 15 seconds. HP managed, but not everyone fared as well: a General Motors truck plant in Louisiana temporarily shut down for lack of Japanese-made parts, and Toyota and Sony suspended production at several plants. Korean shipbuilders, European cell phone companies and U.S. solar panel makers were hit, too.
Such acts of God, with their attendant glitches and losses, are often unpredictable. But even the predictable patterns in the global economy have long gone unpredicted. Companies, especially small ones, lack insight into consumer whims in China, price levels in Brazil, the quality of suppliers in Malaysia, or demand trends in Europe. They have been driving in the fog, a risky endeavor. But now a crystal ball is within reach: Big Data.
By 2020, there will be 44 times more data in the world than there was in 2009. Big Data is not only numbers in Excel files – it is a continuous stream of information that can be leveraged to predict the future. And it can be put to work efficiently: in the Internet of Everything, Big Data flows from one smart machine to another.
Big Data can help fix supply chains before they break. Manufacturers that collect street-level data on buying habits and social media chatter, and connect it with their internal data on operations, manufacturing, finance and logistics, can mitigate the infamous “bullwhip effect”, in which changes in customer behavior magnify across the global supply chain all the way up to raw material purchases. The bullwhip effect has long left companies with obsolete inventories when demand unexpectedly falls, and lost sales when demand suddenly spikes. In a recent study of 4,689 public U.S. companies from 1974 to 2008, two-thirds had felt bullwhip effects, typically due to overoptimistic sales forecasts; each quarter, upstream orders exceeded downstream demand by $20 million. When the Eurozone crisis expanded, Chinese exporters to Europe got stuck with excess inventories, went out of business and sent workers back to their villages.
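The bullwhip dynamic is easy to see in a toy model: if every tier of a supply chain extrapolates the latest trend in the orders it receives, a small retail-level blip swells into wild swings upstream. The sketch below is purely illustrative; the forecasting rule and all the numbers are assumptions, not figures from the study cited above.

```python
def simulate_bullwhip(consumer_demand, tiers=3, trend_weight=1.2):
    """Propagate orders up a supply chain of `tiers` levels.

    Each tier forecasts next-period demand by extrapolating the latest
    change in the orders it receives (a naive trend forecast, weighted
    by `trend_weight`) and orders that amount. Returns one list of
    orders per tier, consumer-facing tier first.
    """
    levels = []
    downstream = list(consumer_demand)
    for _ in range(tiers):
        orders = [downstream[0]]
        for prev, cur in zip(downstream, downstream[1:]):
            # Extrapolate the latest change; never order a negative amount.
            orders.append(max(0.0, cur + trend_weight * (cur - prev)))
        levels.append(orders)
        downstream = orders  # this tier's orders become the next tier's demand
    return levels

demand = [100, 100, 120, 100, 100, 100]  # one small retail-level spike
for i, orders in enumerate(simulate_bullwhip(demand), start=1):
    print(f"tier {i}: order swing = {max(orders) - min(orders):.0f} units")
```

A 20-unit consumer blip already produces a far larger swing three tiers upstream, which is exactly why connecting street-level demand data directly to every tier dampens the whip.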
In the era of Friedman’s Flat World (yes, I am writing a book on globalization and think we need an updated one, as even Facebook and Twitter never made it into his), companies always struggled to find reliable, high-quality and low-cost suppliers in East Asia and Eastern Europe. Now, platforms separate the wheat from the chaff. For example, ImportGenius uses 100 million ocean freight records, previously sitting idle in government databanks, to help U.S. manufacturers and retail chains see which suppliers are used by such stalwarts as JC Penney and Boeing. New York firm Panjiva holds detailed records on 10 million suppliers in 190 countries to help clients understand what their competitors are buying and which suppliers they’re using. In short, companies can now see not only their own supply chain, but everyone else’s, and make better choices.
Another Flat World challenge for companies was getting pricing right in foreign markets. Now, Big Data gives companies a bird’s-eye view of competitors’ prices in their target markets in real time – and lets them adjust their own prices in real time to drive demand. For example, the price-monitoring platform Dynamite Data crawls more than 1,300 merchants and 11 million buy pages around the world, generating real-time price data on countless verticals, such as cars in China and widescreen HD monitors in the United States. San Francisco startup Premise uses an army of part-time workers to take daily pictures of the prices of thousands of food and other consumer products that are not online but in open-air markets and supermarkets in the developing world, such as in India. Brought online, these data not only help companies price their products for the bottom of the pyramid; they also alert governments, still stuck on backward-looking monthly consumer price indices, in real time to impending inflationary spikes and food crises.
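To make the repricing idea concrete, here is a minimal, hypothetical sketch of a rule-based repricer fed by competitor prices of the kind a monitoring platform might supply. The function name, pricing rule and figures are all illustrative assumptions, not Dynamite Data’s actual method.

```python
def reprice(our_cost, competitor_prices, undercut=0.02, min_margin=0.10):
    """Price just below the cheapest competitor, but never below
    cost plus a minimum margin."""
    floor = our_cost * (1 + min_margin)   # lowest acceptable price
    if not competitor_prices:
        return round(floor, 2)            # no market signal: hold the floor
    target = min(competitor_prices) * (1 - undercut)
    return round(max(floor, target), 2)

# e.g. our cost is $200; rivals list at $259.99, $249.00 and $244.50
print(reprice(200.0, [259.99, 249.00, 244.50]))  # undercuts the cheapest rival
print(reprice(200.0, [210.00]))                  # the margin floor kicks in
```

In practice such a rule would run every time the price crawl refreshes, which is what “adjusting prices in real time” amounts to operationally.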
Big Data also makes it easier to find new markets. Most companies, even large corporations, still treat countries as the key units in their business and marketing decisions, even though there are huge variations within countries – compare the glitzy cities of Eastern China with those of the western provinces. According to McKinsey, by 2025, 600 cities will drive 65 percent of global economic growth. But now the lens can be even sharper, zooming into “micromarkets”: regional pockets of growth that correspond not to cities or ZIP codes, but to areas like the easternmost stretch of the Mexico-U.S. border or the northwest of South Africa. Big Data will rename future markets – and they won’t be called China, Brazil or France.
Big Data also empowers small business: whereas small businesses once needed to invest in costly hardware and software, now they can use cloud computing, open-source software, and software as a service to access and leverage large-scale data. Small businesses that sell online can also pull data from the Google Analytics API to establish relationships between their website traffic and real-world, on-the-ground sales and shipments.
For the first time, all parts of doing global business are visible and transparent to companies of any size: any company can access data on suppliers, their credibility and their prices; competitors and their prices; customers and their preferences; and potential new markets and product needs. This lowers the oldest and most elusive obstacles to globalization – companies’ “information costs” of figuring out foreign demand patterns and prices, and “search costs” of finding customers and partners.
Of course, even the greatest data does not sing on its own: its usefulness for business insight hinges on how it is analyzed. Companies are already at war over the rare world-class statisticians who know what Big Data can do, can identify relevant data and wade through the complexity, and, critically, understand the business well enough to ask the right questions and answer them in the right way. In the battle of infonomics, the gap will widen not between data haves and have-nots, but between analyst haves and have-nots.
Granted, for specific projects with clear questions, analytical horsepower can be rented: the 50-employee online car dealer Carvana used crowdsourcing firm Kaggle to offer $1,000-$5,000 prizes, attracting a hundred data modelers to estimate the likelihood that particular cars found at auctions would turn out to be lemons.
A thornier challenge is governments’ growing interest in curbing companies’ access to data and its transfer across borders. Cross-border data flows are a big deal: they are key for companies that manufacture and export, which need digital support services – such as logistics, retail distribution, finance or professional services – and digital goods at competitive prices, and those services depend on secure and efficient access to data. The transatlantic data flow arena is already highly contested: Europeans, incensed by the Snowden scandal, have tightened consumer data privacy rules, which complicates U.S. companies’ customer service in Europe – right when companies are hoarding information on prospective customers as a key source of competitive advantage. As advocates of free data flows go head-to-head with proponents of data localization, lawsuits and angry government action follow.
Another fundamental problem for transporting data is that some emerging markets, such as Brazil, China, Indonesia, Nigeria, Malaysia and Vietnam, have passed or are planning laws that force companies to place servers locally as a condition for market access. Usually pitched as a job-creation program for locals to man server hubs, this too complicates doing business: establishing an extra server center in Brazil just to meet such a law could cost a company as much as $40 million. According to a recent analysis by the European think-tank ECIPE, this is also self-defeating: server localization laws could cost 0.2-1.7 percent of national GDP. Equally problematic, many companies have servers in one country, customers in another, and operate out of a third.
Yet there is no clarity in international law on data protectionism, or on which country can control or stop the data traveling between these three points. Policymakers need to urgently focus on this issue – it is inherently international, and at its worst, it can balkanize the global digital economy before it starts.