
Big Data Analytics


Edge computing in industrial environments offers the promise of getting the right device data in near real-time to drive better decisions and perhaps even control industrial processes.  For this to work, the edge devices, their embedded software, the edge servers, the gateways, and the cloud infrastructure must all be up and running correctly, all the time.


The Industrial Network Edge

The industrial network “edge” (where computing occurs) can extend to industrial devices, machines, controllers, and sensors. Edge computing and analytics are increasingly being located close to the machines and data sources. As the digitization of industrial systems proceeds, analysis, decision-making, and control are increasingly being physically distributed among edge devices, edge servers, the network, the cloud, and connected systems, as appropriate. These functions will end up where it makes most sense, making it essential for today’s automation assets to be designed to leverage the edge.

Edge computing supports IT/OT convergence by bridging these two areas of the architecture. This is particularly obvious as edge devices evolve beyond their traditional role of serving field data to upper level networks and emerge as an integral part of the industrial internet architecture. Today, the IT organization owns more and more of the architecture and standards associated with the industrial internet, including both clouds and networks.

With edge computing and analytics, data is processed near the source: in sensors, controllers, machines, gateways, etc. These systems may not send all data back to the cloud; instead, the data can be filtered and integrated locally to inform machine behavior. The edge systems may decide what gets sent, where it gets sent, and when. Placing intelligence at the edge helps address problems often encountered in industrial settings such as oil rigs, mines, chemical plants, and factories: limited bandwidth, the need for low latency, and the perception that mission-critical data must be kept on site to protect IP.
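The "decide what gets sent, where, and when" role of an edge node can be sketched in a few lines. The following is a hypothetical illustration, not any vendor's implementation: routine readings stay local, out-of-range values are forwarded immediately as alerts, and the rest leaves the plant only as periodic summaries. The thresholds, window size, and message shapes are invented for the example.

```python
from collections import deque
from statistics import mean

class EdgeFilter:
    """Toy edge node: keeps routine readings local, forwards only
    alerts and periodic summaries upstream (hypothetical sketch)."""

    def __init__(self, low, high, window=10):
        self.low, self.high = low, high
        self.window = deque(maxlen=window)

    def ingest(self, value):
        """Return the message (if any) to send to the cloud."""
        self.window.append(value)
        if not (self.low <= value <= self.high):
            # Out-of-range readings are forwarded immediately.
            return {"type": "alert", "value": value}
        if len(self.window) == self.window.maxlen:
            # Otherwise only an occasional summary leaves the plant.
            summary = {"type": "summary", "mean": mean(self.window)}
            self.window.clear()
            return summary
        return None  # retained locally, nothing sent
```

In this sketch, bandwidth use drops from one message per reading to one summary per window, while anomalies still reach the cloud without delay.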

As manufacturers implement solutions that connect their machines, equipment, and production systems to the digital enterprise, end users in both process and discrete manufacturing plants would like to see real-time intelligence at the edge. In today’s connected factories and plants, edge computing will provide the foundation for the next generation of smart connected devices and the digital enterprise. These intelligent edge devices can aggregate and analyze sensor and other data and stream information to support predictive analytics platforms.

Hybrid Approaches to Edge Computing

Hybrid approaches utilizing edge computing and the cloud will provide end users in process and discrete plants with actionable information to support real-time business decisions, asset monitoring, data analytics, process alarming, and process control, as well as machine learning. Increasingly, the computational capabilities from both edge and cloud computing are migrating into the gateways and edge devices for IIoT networks.

Not surprisingly, many end users expect to perform data analytics at the edge. If industry is to move to ecosystems of smart connected machines and production systems, the first step is to create a digital environment that securely connects factories and plants using intelligent devices. These devices can access, capture, aggregate, and analyze data at the production process and provide actionable information that enables operations, maintenance, and plant and product engineering and support groups to optimize how products are designed, manufactured, and supported.

Factors Driving Connectivity at the Edge

Operational, asset management, and reliability issues will drive end users to deploy edge computing.  However, for edge computing and devices for machines, equipment, and production systems to continue to proliferate, cybersecurity concerns must first be addressed. While edge devices can connect factory ecosystems, products and equipment in the field, and even manufacturing supply chains, these devices and connections must first be made secure and reliable.

Smart manufacturing and edge computing with information-enabled operations offer virtually infinite potential to improve business performance. Companies will be able to use data that has long been stranded inside machines and processes to quickly identify production inefficiencies; compare product quality against manufacturing conditions; and pinpoint potential safety, production, or environmental issues. Remote management of this edge infrastructure will immediately connect operators with off-site experts to avoid, or quickly troubleshoot and resolve, downtime events.

Finally, edge and cloud computing architectures will accelerate IT and OT convergence. As a result, IT and OT professionals who previously only oversaw their own individual systems are learning about the counterpart technologies. IT professionals need the skills to transfer their experience of enterprise network convergence and ubiquitous use of Internet Protocol into manufacturing applications. OT professionals need the skills to migrate from yesterday’s islands of automation to today’s plant-wide, information-centric edge and cloud architectures. 

Reprinted with permission; the original blog was posted here. You may also visit here for more such insights on the digital transformation of industries.

About ARC Advisory Group: Founded in 1986, ARC Advisory Group is a Boston-based technology research and advisory firm for industry and infrastructure.



About the Author:

Craig Resnick

Vice President, Consulting

Craig is the primary analyst for many of ARC’s automation supplier and financial services clients. Craig’s focus areas include production management, OEE, HMI software, automation platforms, and embedded systems.

If you follow Cricket, you must already be aware that curating a pitch to your advantage can help you win a Cricket game. Similarly, creating a Hadoop-based big data pitch can help your enterprise get its data act together.


Creating a centralized data lake with a modern Hadoop-based big data architecture is like preparing a Cricket pitch on which, depending on your business requirement, you can run advanced analytics or data science solutions. I see this process as similar to curating a Cricket pitch to play the format of your choice - T20 or a One-Day game.


The gap that bleeds enterprises


Often, after you contact a bank just once about opening an account, you still get multiple calls from sales representatives who have no clue that you are already in touch with their colleagues. This is a glaring example of how the lack of a unified, centralized system can create a negative customer experience.


From getting bill payment reminders even after you have paid, to your frequented airline having no information about your preferences or flying status, examples of how information gaps bleed enterprises are plenty.


Most businesses do not have consolidated data to help them understand their customers and address their pain points. Even with a large amount of data readily available, thanks to social media and IoT, enterprises fail to tap potential business opportunities and steer them to their benefit.


Most organizations operate in silos with the lowest level of maturity where one department doesn’t know about another. Lack of technical skills, higher costs, and time stand in the way of bringing it all together, which deters most enterprises from harnessing the real promise of big data.


Making uninformed decisions without a unified view of the data (a unified logical data model) and its insights is a concern for many large-scale organizations. This is crucial, and enterprises need to act on it immediately. Drawing a reference from Cricket, it is like a blindfolded batsman not giving himself the best chance of making the right contact with the ball to execute a good hit.


The good news


It’s time for enterprises to stop being blindfolded. Big data experts are making technology accelerators available to enable enterprises to achieve a ‘single source of truth.’ These accelerators transform traditional systems like Teradata, Netezza, Oracle, and Ab Initio ETL workloads to a Hadoop-based big data architecture on which enterprises can:


  • Perform real-time streaming analytics with full Data360 coverage

  • Apply data science techniques (machine learning, deep learning, AI, etc.) to derive valuable insights for informed decision making

  • Consume and visualize data effectively for BI and reporting purposes
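To make the streaming-analytics item above concrete: engines such as Spark Structured Streaming process an unbounded event stream as micro-batches and maintain running aggregates across them. The toy version below mimics that pattern in plain Python; it is an illustration, not a representation of Impetus's accelerators, and the event data and batch size are invented.

```python
from collections import defaultdict

def micro_batch_stream(events, batch_size=3):
    """Group an event stream into micro-batches, the way streaming
    engines chop unbounded input into small units of work."""
    batch = []
    for event in events:
        batch.append(event)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch  # flush the final partial batch

def running_counts(batches):
    """Maintain running per-key counts across micro-batches,
    emitting the updated state after each batch."""
    counts = defaultdict(int)
    for batch in batches:
        for key, _value in batch:
            counts[key] += 1
        yield dict(counts)
```

Each yielded dictionary is the analytics "result so far", which is what a dashboard or alerting rule would consume downstream.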


With Hadoop-based big data architecture, enterprises will have the power to understand their customers, streamline their business, optimize their product, and enhance their brand.


At Impetus Technologies, we are addressing this gap that exists across organizations, particularly the larger ones who are far from having a ‘Single Source of Truth’ - a unified view of data to consume all information across the enterprise.


We partner with Fortune 100 customers, creating powerful and intelligent enterprises through deep data awareness, data intelligence, and data analytics.



About the Author:


Yogesh Golwalkar is an Indian cricketer who has played for India-A, in the IPL, for Middlesex CCC in the UK, and in first-class Cricket (Irani Trophy, Ranji Trophy, etc.) for several years. He also has extensive work experience in banking (India and the UK) and in the IT sector.

At Impetus Technologies, Yogesh is responsible for Customer Success and Business Growth in the APAC region.

Yogesh holds a computer science degree and an MBA from the University of Bradford School of Management UK.

A well-designed and well-executed asset management strategy can lead to significantly better reliability and lower maintenance cost.  Such a strategy requires accurate and up-to-date asset and maintenance information to set priorities right. It also includes the maximization of condition-based and predictive maintenance while keeping reactive maintenance to a minimum.

There are many approaches to predictive maintenance.  To get started with predictive maintenance in process plants, it’s necessary to be able to monitor a large number of tags and learn from historical data across the full plant.  Precognize, a provider of predictive maintenance solutions for the process industries, offers such a solution.  Its technology combines machine learning (to determine normal and abnormal behavior) with a combined asset and process model containing information about cause-and-effect relationships among variables.  The technology is designed to require minimal effort from the plant operator to build.  This combination minimizes false positives.

Carmel Olefins, part of the Bazan Group, recently reported on its use of the solution in a polyolefins plant and how this helped the company address environmental issues.  The company found that the tool helps its operations team detect operational issues in the “darkest corners” of the plant, where “it hurts the most.”  Looking back, the solution could have detected, and thus prevented, a valve issue before the valve failure resulted in negative publicity due to its environmental impact.  The software can also provide advance warning of any new valve issues.  Since it began using the software, the company gets only a few alerts per day and has not experienced many false positives.

Effective Asset Management Strategies

According to industry analysts, world-class oil and gas producers have 95 to 98 percent availability (even in older facilities), while their maintenance cost is 30 percent less than average.  This compares to 85 percent availability for the average producer and less than 75 percent for a poor performer.  Also, the safety incident rate is 30 percent lower among leaders than among average performers.  Apparently, these companies make the right decisions on savings and investments.  For example, the value of their small-modification project portfolio can be up to 50 percent higher than that of average operators.

Furthermore, those operators apply some form of asset maintenance excellence, including a regular review of their policies, and nurture a culture that continuously eliminates all sources of losses. Seventy percent of the maintenance jobs in the leaders group are preventive in nature. These companies optimize preventive and condition-based maintenance of their critical equipment, while minimizing additional maintenance for less-critical systems.  Another finding was that these companies have good planning and scheduling processes that optimize the use of their maintenance resources and tool allocation, and support flawless process execution. Work that needs to be executed during stops is particularly well planned, and re-planned at well-defined intervals before executing the plan.

The numbers may have shifted a little over time and may be a little different for downstream processing, but the features of this benchmark in maintenance strategy and execution remain valid. Whatever the methodology used - reliability-centered maintenance (RCM), operational excellence (OpX), or world-class manufacturing (possibly in combination with an asset management standard such as ISO 55000) - the point is that an asset management strategy is required to set priorities and devise a fit-for-purpose approach adapted to each priority.

Asset and Maintenance Data Availability, Quality, and Integrity Required

To decide upon asset management priorities, good-quality data are required that, ideally, are easy to access.  This includes statistics on the number of maintenance interventions, root causes of equipment failures, cost of repair, and overall equipment efficiency (OEE) broken down into unwanted stops, planned stops, throughput reductions, and quality losses. It’s also very important to know the cost of production losses related to unwanted stops. This will inform the operator how much effort and resources it is spending, where to focus, and which goals to define.  A quantified business case could also help decide where to best focus efforts and optimize for return on investment. Even if the quality, availability, and integrity of asset information are not optimal, an operator should engage in improving and optimizing maintenance. However, the operator should be aware that the preliminary effort will be higher and the time to results longer.

Fit-for-purpose Asset Management Approaches

Once the owner-operator knows its asset priorities, it can prioritize the maintenance efforts accordingly, an approach also referred to as “fit-for-purpose.”  Since most equipment fails randomly, preventive maintenance based on regular inspections, replacements, and maintenance implies a higher workload and cost than reactive maintenance, and does not necessarily improve reliability significantly.

For less-critical equipment - a lawn mower, for example - a run-to-failure approach (reactive maintenance) may be most economical.  For more critical equipment, workload and cost can be reduced while preserving reliability by using condition-based maintenance based on instrumented monitoring of a few key variables and attributes of the equipment.  Only when the trends indicate that maintenance is necessary will it be planned and executed.  Often this leads to longer average times between maintenance interventions, but the approach can also determine whether maintenance intervals need to be shortened to avoid damage or impact on the process.  This can be useful for slowly degrading equipment performance, such as the clogging of filters and fouling of heat exchangers, which depend more on the processing conditions than on elapsed time.  Condition-based maintenance, because it reflects the actual state of the equipment, increases reliability and reduces cost. In these examples, condition-based maintenance includes a simple evaluation by maintenance personnel.  In the case of predictive maintenance, the evaluation of the equipment condition and the prediction of a potential failure or issue is made by mathematical methods and algorithms, also called predictive asset analytics.
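For the slowly degrading cases just mentioned (clogging filters, fouling heat exchangers), the simplest condition-based trigger is a trend extrapolation. The hypothetical sketch below fits a straight line to daily condition readings (say, pressure drop across a filter) and estimates how many days remain before a maintenance threshold is crossed; a real system would use proper regression with confidence bounds, and the data and threshold here are invented.

```python
def days_until_threshold(history, threshold):
    """Least-squares line through daily condition readings,
    extrapolated to estimate days until `threshold` is crossed.
    Returns None if the metric is not degrading (toy sketch)."""
    n = len(history)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(history) / n
    # Ordinary least-squares slope of reading vs. day index.
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history))
             / sum((x - x_mean) ** 2 for x in xs))
    if slope <= 0:
        return None  # stable or improving: no maintenance predicted
    return (threshold - history[-1]) / slope
```

When the estimate drops below the planning horizon, the maintenance job is scheduled; otherwise the equipment keeps running, which is exactly the workload saving condition-based maintenance promises.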

Choosing an Asset Analytics Approach

Many types of analytics are available that can be applied to industrial processes to improve process understanding by studying past and current behavior.  Technologies are becoming increasingly capable of predicting behavior and prescribing options for process management.  Analytics can be applied to assets, quality, the process, the supply chain, and combinations of those.  Some solutions focus on a single variable and/or on a few variables of a specific type of equipment and can detect anomalies very early; in fact, early enough to be able to maintain or repair the equipment before an unplanned shutdown occurs with unnecessary damage or loss of containment.  Some warning signals are easy for humans to detect.  Examples include a gradual increase in power consumption and vibration of a pump that maintains a constant flow. But some weak signals are hidden in the variability of the data and not visible to humans.  Software, however, can detect such impending breakdowns weeks or months in advance.

With Retrospective Predictive Analytics, the Valve Issue Leading to the Flare Was Detectable in Advance

In some cases, deviations are only relevant in combination with deviation of other variables.  In continuous or batch processing, relationships between variables are very common and find their origin in the physics and chemistry governing the process.  Many can be modeled accurately using scientific and engineering approaches, as is done in process simulation.   Simulations can be made for steady-state or transient conditions and can be of great value in process design and analysis.  Used in combination with historical data, these tools can help explain relationships that led to unexpected events. Owner-operators should design process management measures to avoid these.  These tools, when used online, could support better operating decisions.  However, detecting a small deviation that could lead to important consequences would likely require high-fidelity simulations, which are extremely costly and resource-consuming to build.  So, for discovery and predictions, another approach is needed.

Where to Start

When starting to apply asset analytics, the task of setting up monitoring of critical equipment can be daunting, not to mention actually monitoring the information and setting up predictive analytics.  This is particularly true for large processing plants, such as in refining, petrochemicals, steel, or paper production, with thousands of tags and measurements, possibly augmented with IoT sensors. In such cases, an approach is needed that can monitor all this equipment and detect the most important deviations while creating few false positives.  Ideally, this would require a relatively small investment in terms of resources.  For any analysis to give meaningful results, the quality and integrity of the operational data used must be adequate.  In some cases, data must be screened and cleaned.

Precognize’s Predictive Maintenance Solution

Precognize provides a solution precisely for the situations described in the previous paragraph.  As a starting point, the technology uses about a year of historical data, sampled at one-minute intervals, from the operational databases available at every processing plant. Using an unsupervised process, the software determines regions of usual behavior for each tag in relation to the others, to create what Precognize calls the baseline machine learning model.  This is multivariate in nature; that is, a variation in a tag value is normal or abnormal only in relation to the values of other tags.  To avoid false positives, the solution feeds the content of this model to a second model representing cause-and-effect relationships.  This increases the precision in distinguishing normality from abnormality.  Operations know-how is required during the software configuration phase to help specify relationships between equipment properties and process variables.  These relationships have a direction, from cause to effect, or can be bi-directional.
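The two-model idea described above can be illustrated with a toy sketch. This is not Precognize's actual algorithm: here the "baseline" is just a per-tag 3-sigma band learned from history, and the cause-and-effect model is a hand-specified influence map used to suppress flags that are already explained by a flagged upstream tag. All tag names, data, and thresholds are invented.

```python
from statistics import mean, stdev

def learn_baseline(history):
    """Learn a band of 'usual behavior' per tag from historical
    values (univariate 3-sigma stand-in for a learned model)."""
    baseline = {}
    for tag, values in history.items():
        m, s = mean(values), stdev(values)
        baseline[tag] = (m - 3 * s, m + 3 * s)
    return baseline

def detect(baseline, snapshot, influence):
    """Flag tags outside their band, then keep only suspected root
    causes: flagged tags with no flagged upstream influencer.
    `influence` maps tag -> list of tags that can cause it to move,
    which is what cuts cascading false positives."""
    flagged = {t for t, v in snapshot.items()
               if not baseline[t][0] <= v <= baseline[t][1]}
    return {t for t in flagged
            if not any(u in flagged for u in influence.get(t, []))}
```

With a cause-and-effect map in place, one real upset produces one root-cause alert instead of a flood of downstream anomalies, mirroring the "few alerts per day" outcome reported later in this post.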

Predictive Analytics - Construction of the Asset and Process Model and the Resulting Influence Graph for a Valve

The information is internally converted into graphs with well-specified mathematical properties, which serve as inputs for the graph analysis engine.  Based on the know-how from operations, the engine can score the relevance of a detected abnormality (taking its evolution into account), propose a root cause, and display a suggestion for intervention.  The user can further fine-tune the model, for example by manually specifying a root cause.  The solution runs in the Precognize cloud or the client’s private cloud, on a virtual or physical machine with 32 GB RAM and 12 cores.  Users access the system using their standard browsers.

Addressing Environmental Concerns with Predictive Maintenance

Carmel Olefins reported recently at ARC’s European Industry Forum about its experience with the software. The company, part of the Bazan Group, produces polyolefins - low-density polyethylene (LDPE) and polypropylene (PP) - in the bay of Haifa, Israel, a densely populated area.  Currently, Israel does not have regulations prohibiting flaring, and Carmel Olefins uses flaring to safely discharge light hydrocarbons in the case of over-pressure.  On September 1st, 2017, a huge flare started burning 40 tons of hydrocarbons per hour, and within five minutes the company was in the news for all the wrong reasons. Half an hour later, a faulty valve was identified as the cause of the incident.  The company decided to “go digital” and apply predictive analytics to reduce, and if possible avoid, such incidents.  It selected Precognize software because it can cover the full plant and be implemented quickly.  Moreover, the company found Precognize responsive to its demands, and the software is used by BASF, a reliable reference.

When Temperature Rises Without the Valve Opening, an Anomaly Is Detected with Predictive Analytics

A post-event analysis of the flaring showed that Precognize could have detected (and thus helped prevent) the valve issue three weeks before the incident.  After implementing the software, Carmel Olefins gets only a few alerts per day, with few false positives, which helps preserve maintenance efficiency.  The company recommends starting “where it hurts the most,” that is, where risks and impacts are most severe, and making the production manager, together with the unit’s shift supervisor, responsible for the outcome.  The challenge is to change the mindset and attitude of production personnel from “it is broken, let’s fix it,” to “there is something strange here, let’s investigate it.”

A real-time issue provided the opportunity to inculcate new habits. In this case, the temperature of the contents of a pipe, which depends on a steam valve opening, rose without the valve actuator moving.  Precognize detected the problem, which was not complex to diagnose: the valve did not close properly despite its setpoint, and the leaking steam caused the temperature rise.  The valve could be repaired before an incident or further damage occurred.
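Once diagnosed, this failure mode reads as a simple rule: heat keeps arriving while the valve is commanded shut. As a hypothetical rule-of-thumb check (the thresholds and signal names are invented, and a real system would learn this relationship rather than hard-code it):

```python
def check_steam_valve(setpoint_open_pct, temp_readings, rise_limit=2.0):
    """Flag a suspected leaking steam valve: the valve is commanded
    (nearly) shut, yet the downstream temperature keeps rising.
    Illustrative thresholds only."""
    temp_rise = temp_readings[-1] - temp_readings[0]
    if setpoint_open_pct < 5.0 and temp_rise > rise_limit:
        return "suspect leaking valve: temperature rising while commanded shut"
    return None  # behavior consistent with the valve setpoint
```

The value of a learned multivariate model over such a hand-written rule is that it discovers relationships like this one across thousands of tags without anyone having to anticipate each failure mode.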

Carmel Olefins recommends using predictive analytics to put a spotlight on the “darkest corners” of the process that remain invisible through human observation of trends.  At Carmel, full transparency is applied in the sense that analytics results are visible throughout the ranks, including upper management.  However, only production personnel and shift managers act on the signals from the software.


ARC recommends elevating asset management to the strategic level because of the importance of its impact on enterprise KPIs such as bottom line earnings, environmental footprint, and process safety.   This strategy will provide the governance and the management system, enabling a maintenance strategy.  The latter depends on the quality of asset information.  A maintenance strategy should focus on high-priority equipment and processes, maximizing condition-based and predictive maintenance and minimizing preventative maintenance.  Predictive analytics are a key technology enabler for predictive maintenance.  However, the operational data used to build models to detect anomalies must be of adequate quality.

The Precognize predictive maintenance solution was designed to help owner-operators reliably detect operational issues in large process plants with a large number of tags, with limited effort required from the plant operator during the configuration phase. It’s essential to involve operations personnel and transform their habits from break-and-fix to investigating alerts and preventing incidents and damage.  The latter may require management attention.


Reprinted with permission; the original blog was posted here. You may also visit here for more such insights on the digital transformation of cities and industry.




About the Author:

Valentijn de Leeuw

Vice President


Valentijn's responsibilities include research and consulting in the process industries, with a focus on clients in Europe, the Middle East, and Africa. Valentijn has extensive experience in best management practices in process industries.  His experience includes knowledge of unit processes, simulation and modeling, and business practices utilizing application software designed for manufacturing operations.  He also has experience in aligning organizations, strategy, business processes and technical architectures.  At ARC, Valentijn's responsibilities include research and consulting in process industries.  His technology focus is on manufacturing operations management, performance management, knowledge management, and the role of the knowledge worker in manufacturing.  Valentijn is the focal point for the ARC Benchmarking Consortium in EMEA. 

At last week’s OTC 2018 conference, I stopped at the Phoenix Contact booth for an update on its offerings. It has introduced some interesting products around industrial networking, connectivity, and electrical protection. What caught my attention, however, was the PLCnext controller, which is its ‘open’ PLC product line for the industrial edge.

Data science tools at the industrial edge?

The attention grabber was that its implementation of IEC 61131-3 allows the inclusion of languages not traditionally associated with machine or process automation. Instead, the user can implement logic from third-party tools like MATLAB, Eclipse, or Visual Studio - the tools of data science.

This is significant because PLCs sit where the manufacturing assets are and form the primary path for most industrial data. In 2016, ARC wrote about the importance of this type of technology. At the time, we noted that edge/fog computing would be key to preventing a data deluge through a cloud-based system, delivering feedback locally and in near real-time, and addressing the concerns of customers who are reticent about sending data to cloud-based platforms.  Most edge platforms focus on the network edge: smart routers and switches. Certainly, there are plenty of other products (like industrial PCs) that could compete or coexist within a cloud/fog scheme, but few are as intimate with sensors as PLCs and DCSs. We are now seeing a battle shaping up between classic IT companies and OT companies.  The good news is that there is room for both.

Battle for the Industrial Edge

Young Engineers and IT Will Lead the Charge

The push for this type of functionality at the control level will come from young engineers and the IT side of the business. Data analysts are unlikely to use an industrial controller for their work at the edge because of the lack of a modern installed base.  They are also less likely to know about the offerings that do exist. Industrial controllers, especially process controllers, have relied on less powerful processors (fewer transistors means fewer failures) to increase availability, and the available computing power is totally dedicated to control of the manufacturing process. This may well hinder the implementation of sophisticated controller-resident analytics in process control; or perhaps initiatives like Open Process Automation will prove that the process markets want the benefit of more powerful, flexible, and modern processors in the field to leverage analytic opportunities.  Discrete manufacturers are likely to have a wider set of devices from which to choose and are more likely to implement them.

End Users Need an Edge-to-Cloud Integration Strategy

It is quite apparent that technology allows far more architectural flexibility than ever before.  Computing power is generally available in excess at all layers of the manufacturing process.  ARC recommends that end users not hesitate to pursue IT/OT/ET collaboration (and has done so for almost a decade). This facilitates the harmonizing of data access and visualization requirements, as well as the rationalization of where it’s best for the data to reside. This collaboration needs to come from the top, and capital investment needs to be viewed more holistically.

Companies should already have a formalized vision for edge-to-cloud integration.  Due to the way technology is advancing, plans shouldn’t be restricted to today’s technology. Rather, the vision should be flexible enough to host functionality where it makes sense.  If the technology doesn’t currently exist to meet the vision, it will soon enough.

Reprinted with permission; the original blog was posted here. You may also visit here for more such insights on the digital transformation of industry.




About the Author:

Mark Sen Gupta

Director of Research


Mark leads ARC's coverage of process automation, process safety, SCADA, terminal automation, and automation supplier services. He is also part of the IIoT Team.



NASSCOM staff writer in conversation with Dr. Prashant Pradhan, CTO, IBM, India-South Asia. 


Speaking to a CTO is always tricky. They work with very complex ideas, and there’s always the listener’s apprehension about whether every idea will be comprehended in its entirety. The man himself was crystal clear in his thoughts and explained the technology so lucidly that our worries proved unfounded.


1. On the importance of building a cloud strategy for data.

Cloud technology started off as “cheap compute on rent,” and arguably it has not entirely shed that tag over time. Needless to say, the technology has evolved well beyond that, and he explained to us the significance of a “Cloud Native” architecture - as opposed to focusing on where the infrastructure sits (public/private). Its pillars:


  • Virtualized, “software defined” infrastructure – providing elasticity and agility
  • Microservices – enabling continuous development and delivery through loosely-coupled applications
  • Containers – to streamline dependency management, packaging and isolation
  • DevOps – to tightly integrate development and deployment/operations as more elements become programmable


A data strategy – built without a Cloud Native architecture – makes it very difficult to put data “in service of” the business, as the insights extracted from the data drive user journeys and business process flows.


Built on top of a Cloud Native base is the rest of the Enterprise Architecture:

  • An agile data platform.
  • The Intelligence Layer (AI and analytics) which works on the data.
  • The engagement layer – typically digital engagement – which delivers “journeys” for various stakeholders such as customers, employees or partners.


The overall architecture is akin to an iceberg – with the engagement layer (what users see) being the “tip”, and the “heavy lifting” happening in the architectural foundation below the surface.


The data layer is in a “closed loop” with the rest of the architecture. For example, increased digital engagement with customers leads to additional data sets being captured – revealing newer insights. Based on these insights, the engagement becomes deeper, more personalized, scales to more offerings, and so on. Once the architecture is well-designed, the cycle-time from data to insight to action reduces drastically – from months to weeks.


Enterprises that miss this holistic outlook, often face a lot of friction in different stages of their journey. In fact, their choice of Cloud Service Provider should be shaped based on alignment to this architectural model.


Incumbents have the great advantage of having access to a treasure trove of data, but their cycle time often gets in the way. Digital-native / “born on cloud” companies are often able to address that challenge because they have designed to this robust four-layered framework; where they may lack is in access to substantial amounts of data, or a large customer base.


The benefits of the two can be married with the right architecture, where incumbents are equally well-positioned to deliver new-age tech via Cloud and agile processes, ultimately delivering great value to their business.


2. On why Enterprise CEOs need to look at Data Strategy.

CEOs realize the importance of data, which they are custodians of, and how it can be made to work. We see that most of our large Cloud project discussions now start with the CEO, a clear departure from the past. They primarily look at the RoI, which can be in terms of reduced cost to serve, or revenue influenced by deeper insights and turned around quickly via the digital front office.


Data security and privacy are equally paramount CEO conversations now – and can no longer be an afterthought.


3. Leadership Mantra.

Interestingly, he segregated the mantra into two parts: customer-facing and internal-facing.


For the former (externally), it’s about shifting the narrative from “digital disruption” to “digital dominance”. The moment we say ‘disruption’ there’s a certain defensiveness which creeps in. In reality, we have to now move to a position where we can use cutting-edge technology to dominate in the marketplace.


This is particularly true for incumbents who are challenged by digital disruption. It cannot be about protecting their past anymore. Incumbents have access to a very large client base, wide distribution networks, data and strategic capital assets – how can all these be leveraged through digital technologies to help them dominate their industry? This is the future pivot. The role of the CIO has also changed; the CIO now has the additional responsibility of being a business enabler as well.


Internally, the mantra is about skilling on a continuous basis, which is akin to “muscle memory,” he says. You pick up where you left off previously. This is an imperative because the new-age skills have to be applied in the interest of the client.


4. Re-skilling Initiative.

IBM’s Your Learning is a digital and cognitive platform.  Powered by Watson and Cloud, it provides employees across the enterprise with a personalized portal to access internal and external learning across various modalities – face to face, video, audio, text based, and a combination of these.  


It facilitates Discovery not only by its ability to respond to a learner’s search request but also, by proactively suggesting appropriate learning based on the learner’s profile, job function, and its cognitive learning from past searches.  Your Learning can pull out preferred modalities (be they video or books) for the learner to begin his or her Exploration. 


Exploration is further reinforced by smoothing the enrolment process for a formal workshop or an informal mentoring or job-shadowing program, guiding the learner through the process. It encourages Immersion by presenting search results of vetted programs with credible, accredited, award-winning organizations – expanding an already rich IBM portfolio of learning programs. It also captures the hours spent as a metric the employee and his/her manager can track in the future. Finally, it supports Adoption and reinforces learning as a journey, which doesn’t end with the completion of a workshop but rather is applied on the job through reminders and additional resources weeks or months after a class.

Your Learning also leverages the power of social through its feedback mechanism.  It captures feedback from past learners offering them the opportunity to review the learning and to share its impact.  This is cited as among the most influential reasons for participants’ commitment to learning.  By leveraging technology, the power of social is magnified, so instead of depending on word of mouth, the feedback is captured for posterity in Your Learning.  Constructive feedback also enables a continuous improvement of the learning offerings out there.  A bonus is that it can sometimes help the organization identify potential facilitators among business leader participants.  When business leaders are brought in to teach, their own learning is further enhanced. The learning journey does not end with adoption; rather, it loops back to a higher level of Discovery with the added insight of learners who have been there, done that.  


For partners, they have various programs, like the IBM Partnerworld Program, which empowers Business Partners with the tools and resources to help transform clients into industry leaders. It also provides them with regular training in new-age technologies like AI, IoT, and Blockchain, and access to our senior leaders and research labs to learn and co-create innovation.


Similarly, IBM Global Entrepreneur Program (GEP) offers startups various tools to build their business. 


Want to read the other interviews in the series? See them under Leader Talk.

This blog is authored by Piyush Chowhan, VP and CIO, Arvind Lifestyle Brands. He is a speaker at the Nasscom Big Data and Analytics Summit.


Even as AI technologies move into common use, many enterprise decision makers remain baffled about what the different technologies actually do and how they can be integrated into their businesses. AI is real, and the early implementers are already tasting success, but adoption has predominantly been successful in digital-native companies, while other large organizations are still doing POCs to understand the use cases. There are inherent challenges that any large organization will face while democratizing AI, and it is important to draw up a clear-cut strategy that helps them adopt it and get the full benefit. This article puts forward a simple, high-level construct for evolving a framework/roadmap for the adoption of AI in your organization.


Identify the opportunity for AI – It’s quite common for CXOs to get lost in the noise around AI when trying to identify the correct use cases. A simple framework for identifying clear AI use cases:


  • Identify - Is the task or application data-driven? – AI works only when tasks are data-driven, for example, finding out how customers will perceive a new product launch.
  • Assess - Is the data available? – It’s no fun applying AI to problems for which data is not readily available. For example, if the data is not available in a data lake or warehouse, applying AI would not make sense, since the results may not be effective. IoT-based solutions, which generate data continuously, tend to be good candidates for AI.
  • Measure - Is the problem to be solved at scale? – The application of AI is relevant when the scale of the problem is large. For example, if a small team of 4-5 analysts is looking at sales data and creating a forecast, it may not be very effective to apply AI; the better starting point is to understand how that forecast is used and solve that problem.


The simple three-point assessment above will help identify appropriate use cases for applying AI in the business.
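As a rough illustration, the three-point check can even be expressed as a tiny screening function. The function name, criteria labels, and verdict strings below are purely illustrative, not a standard tool:

```python
def assess_ai_use_case(data_driven, data_available, at_scale):
    """Screen a candidate AI use case against the three-point checklist.

    Each argument is a yes/no answer to one question:
    data_driven    - is the task or application data-driven?
    data_available - is the data readily accessible (e.g. in a data lake)?
    at_scale       - is the problem big enough to justify AI?
    """
    checks = {
        "Identify (data-driven)": data_driven,
        "Assess (data available)": data_available,
        "Measure (at scale)": at_scale,
    }
    failed = [name for name, passed in checks.items() if not passed]
    if not failed:
        return "Good candidate for AI"
    return "Reconsider: fails " + ", ".join(failed)

print(assess_ai_use_case(True, False, True))
# → Reconsider: fails Assess (data available)
```

In practice each answer would itself come from a conversation with the business, but forcing a yes/no per criterion keeps the screening honest.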


Build vs Buy – This is a question organizations always wrestle with, and it is quite relevant for AI. There may not be a straight answer either way, so a middle path needs to be adopted. AI solutions have three core elements: sensing, analyzing, and proposing. Sensing is about creating a large IoT/big data platform that curates and stores large amounts of relevant data. In most real-life AI solutions, this data needs to be analyzed at scale and in real time. This may require large-scale implementation of speech, NLP, or vision recognition technologies, which are easier to buy than to build, as these are likely to become platform services in the days to come. The real differentiation lies in proposing, where the use case needs to be enterprise-specific. The real benefits for the organization lie in how the sensed and analyzed data is used, so it would be wise for enterprises to build in-house capability to apply that data to the correct AI use cases. Use freely available open-source software to develop solutions quickly: Google, Microsoft, Facebook, Amazon, and Yahoo have all released open-source machine learning or deep learning libraries.
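Before buying anything, it is worth seeing how small the "analyze" element can start. The sketch below fits a straight-line trend to a series and projects it forward using nothing but the standard library; the data and function names are illustrative, and a real deployment would reach for a proper library once this baseline proves useful:

```python
def fit_trend(values):
    """Least-squares slope and intercept for equally spaced observations."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    var_x = sum((x - mean_x) ** 2 for x in xs)
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values)) / var_x
    return slope, mean_y - slope * mean_x

def forecast(values, steps):
    """Project the fitted trend `steps` periods past the last observation."""
    slope, intercept = fit_trend(values)
    n = len(values)
    return [intercept + slope * (n + i) for i in range(steps)]

# Monthly sales with a steady upward trend (illustrative numbers)
print(forecast([100, 110, 120, 130], 2))  # → [140.0, 150.0]
```

The point of the middle path is exactly this: buy commodity capabilities, but keep the judgment of what to model, and why, in-house.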



Pre-requisites for starting AI – It is essential to look at applying AI only after properly modernizing the necessary technology landscape. A few pointers:

  1. There should be a service-based architecture available for easy access to the various data elements to be used by the AI engine.
  2. The data should be clean and accessible in a data lake / non-relational database.
  3. The organization should have built basic capability in the IT and business teams to understand AI and machine learning algorithms.
  4. Build the necessary highly scalable infrastructure, on-premises or in the cloud, for AI solutions. Look at GPU/TPU infrastructure in the cloud for the best fit.
  5. Ensure that data security and protection are covered before you venture into large-scale AI. It would be disastrous to try to rein in large-scale solutions if basic security policies are not in place.
  6. The organization should have adopted agile ways of working, and there should be enough appreciation of design thinking and modern ways of working in the organization.
  7. The enterprise should also work in small agile projects of short duration, so early success or failure of an implementation can be assessed and course-corrected.


There is no perfect algorithm available off the shelf; one needs to be built for each enterprise and its use case. Hence, don’t look for a perfect solution on day one. Begin the AI journey and take small steps to evolve your AI solution to reap the real benefits.


In the coming years, artificial intelligence will change the way we interact with our colleagues, family and the world around us. We will expand our capabilities and understanding of the way we interact with others. AI will drive growth for the companies that embrace this new change. In this new AI era, we will be able to automate processes that will allow our associates to embrace new challenges while freeing them from time-consuming repeatable tasks. By bringing together AI and the world of digital, we can connect and expand the capabilities of entire industries to push human knowledge to previously unknown heights.


Happy AI – Piyush Chowhan


The Nasscom Big Data and Analytics Summit will touch on all the above and much more in detail.


About the Author


Piyush Chowhan

VP and CIO, Arvind Lifestyle Brands

As the CIO for Arvind Brands, he is responsible for IT strategy and the execution of technology for all of its brands’ businesses.

He possesses strong domain knowledge in retail, e-commerce and supply chain management, having worked for global retailers like Walmart, Target, Circuit City, Tesco and Best Buy. He has set up and managed competency centers/teams for retail and supply chain as shared service or captive units.

Piyush Chowhan has strong expertise in Data Analytics, Business Intelligence, Customer Relationship Management and IT business strategy. With various publications under his name, Piyush is also proficient in Program Management, P&L Management, Business Consulting, PMO setup, and ERP Implementation.


He has been featured in many IT-related events and has published his views in magazines like ETCIO and AIM.

Piyush Kumar Chowhan has an MBA in Finance and Operations from Xavier Institute of Management (XIMB), Bhubaneswar. 


“You can have data without information, but you cannot have information without data.”

Plato once said that those who tell the stories rule society. In a world that is getting increasingly data-driven, this statement holds especially true. It is only when we apply the art of storytelling to the science of data that we create visualizations that deliver business value.

In 2016, Forbes called data storytelling the “essential data science skill everyone needs.” Why? Simply because stories, since time immemorial, have been the most effective tools to transmit experiences. It is only when we use a narrative that we develop context and insight into a situation and gain the capability to interpret it. While businesses have zealously jumped on the data bandwagon and go about mining gigantic volumes of data with ‘potential value’, the fact remains that the true value of this data is only created when it helps businesses unlock insights and translate them into business actions.

In a 2009 interview, Google’s Chief Economist Dr. Hal R. Varian stated, "The ability to take data—to be able to understand it, to process it, to extract value from it, to visualize it, to communicate it—that’s going to be a hugely important skill in the next decades." Jump to 2018, and we realize the importance of this astute statement.

What is Data Storytelling?

Often data storytelling is confused with visually appealing data visualizations presented in the form of charts and graphs.

What data visualization essentially does is present complex information in a format that is easier to understand, answering the ‘what is the data telling you’ part of the equation. However, what visualizations do not answer comprehensively is ‘why’ the data is telling you what it is. Delving into this ‘why’ demands context. It needs someone to interpret the results and articulate the insights and the business opportunities that lie ensconced within that data. What data needs is a narrative, one that explains what is happening with the data and purposefully draws attention to specific insights to create a data story that can drive change and inspire action.

To put it quite simply, data storytelling is the narrative that gives more meaning to data visualizations. Data storytelling is becoming a vital component in Business Intelligence tools since data is only getting bigger and more complex each day. With a sea of data at our disposal, having the capability to tell a compelling story becomes more important than ever before.

James Richardson, research director at Gartner, stated, “The ways in which organizations deliver business analytics insights are evolving, notably in the rising use of what is called data storytelling…This trend is an extension of the now-dominant self-service model of business intelligence (BI), combining exploration data visualization with narrative techniques to deliver insights in a way that engages decision-makers in a compelling and easily embraced form.”

Data storytelling explores and explains the changes in data over a period of time. It is usually linked together using a series of visualizations through a narrative flow.
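In its simplest form, that narrative flow can even be generated mechanically: find the most significant change in a series and phrase it as a sentence. The sketch below is a toy illustration of the idea; the function names, data, and wording template are hypothetical:

```python
def biggest_change(series):
    """Return (label, delta) for the largest period-over-period change.

    series is a list of (label, value) pairs in chronological order.
    """
    deltas = [
        (series[i][0], series[i][1] - series[i - 1][1])
        for i in range(1, len(series))
    ]
    return max(deltas, key=lambda d: abs(d[1]))

def headline(series, metric="revenue"):
    """Turn the most significant change into a one-line narrative."""
    label, delta = biggest_change(series)
    direction = "rose" if delta > 0 else "fell"
    return f"{metric.capitalize()} {direction} by {abs(delta)} in {label}."

sales = [("Jan", 100), ("Feb", 120), ("Mar", 90)]
print(headline(sales))  # → Revenue fell by 30 in Mar.
```

A real data story would layer the "why" on top of this "what", but the exercise shows how narrative and data can be bound together programmatically rather than bolted on afterwards.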

How data storytelling impacts an organization

In 1854 during the cholera outbreak in London, Dr. John Snow managed to convince the city council to take action based on the data story he had created. Snow spoke to the local residents and presented his findings, his data story, to the council stating that the cause of the illness was contaminated water supply. His data story contained the data points of location, time, volume, significance and proportion. And like any story, it had a plot and a hero.

A data story can broaden the understanding of data analysis, helping people understand the story and relate better to it. The world is moving towards self-service: there is a rise of BI tools that allow people across business domains to explore and access data on their own, and the number of people generating insights is expanding beyond data scientists. It has become evident that unless an insight is understood, it will not be able to communicate any change. Data storytelling is becoming vital to business because it explains what’s happening with the data and why a particular insight is important, and engages an audience by combining the right visuals with the narrative and the data to influence change.

How can you be a great data storyteller?

Data storytelling has become a critical skill set for data scientists. As data gets more complex, storytelling brings more simplicity – that is what makes it a vital business intelligence tool. Some of the basic skills needed to be a great data storyteller can be highlighted as follows:

  • There is no one-size-fits-all in data storytelling – data storytellers have to be able to tailor stories to the sensibilities and level of understanding of their audience while fulfilling the business need
  • Understand the business problem, without which the data scientist will not be able to add context to the story
  • Be able to field all questions generated as a result of the story
  • Identify and use the right data to create a credible story
  • Present the data story in an engaging and visually appealing format with a tight narrative

To end this blog, let’s dive into another story.

Ignaz Semmelweis, a mid-19th-century obstetrician, discovered that hand washing could save countless lives. But he failed to communicate his findings effectively to a skeptical medical community. Not only were his findings ignored and his life-saving idea rejected, but he was also discredited by the community.

As we step further into a data-driven world, some of the most incredible insights will face a similar end if they are not molded into successful data stories that deliver transformative insights.

Thank you for reading! Follow Me on LinkedIn or Twitter

ARC Advisory Group recently collaborated with John Fryer from Stratus Technologies on an article published in Pumps and Systems Magazine to determine whether real-time data could be applied as a disruptive technology to more effectively manage water and wastewater infrastructure. Due to tight budgets and a lack of broad public recognition of the problems, needed improvements have been deferred for years or, in some cases, even decades.

“Rip & Replace” Approaches Not Feasible

The problem isn’t limited to prominent public cases. Studies have revealed that water losses between source and destination are as high as 46 percent in some cases. Losses of this magnitude are clearly unsustainable over the long term. Yet a wholesale “rip and replace” of the existing water and wastewater systems is not feasible.

How can cities and towns protect the quality and availability of their public water and wastewater systems within their significant budget and resource constraints? Increased use of real-time data analytics is playing a key role in answering this question, a role that will only grow in scope and importance.

Using data to manage water and wastewater systems is not a new concept. Public works professionals have long relied on test data from water samples and other manually collected data to monitor their product and the efficiency of their distribution systems. But this data is limited and retrospective. Results only provide a snapshot of what happened at a particular moment in the past. And the data is rarely analyzed in aggregate, missing the opportunity to identify subtle trends that could provide early warning of developing problems.

Real-Time Data, A New Paradigm

Installing sensors at critical control points linked to data aggregation and analytics systems enables continuous monitoring, measurement, and analysis of a wide range of parameters, from water quality, to flow rates, to equipment performance, to deliver insights in near real time. The advantages are significant.

Consider the water loss problem. By placing sensors at key distribution points to monitor and analyze flow data, operators can accurately pinpoint problem areas and focus their scarce resources at those sections requiring repair or upgrades. If a new leak develops, operators can be alerted to the flow problem in seconds, allowing faster response to minimize loss and the risk of an outage. Just as significant, analyzing data from across the water or wastewater infrastructure over time provides insights that help municipalities make more informed long-term capital planning decisions.
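The arithmetic behind such an alert is straightforward: compare metered inflow and outflow for each segment and flag losses above a tolerance. A simplified sketch of the idea; segment names, units, and the 5% threshold are illustrative, not taken from any particular utility:

```python
def detect_leaks(readings, threshold=0.05):
    """Flag distribution segments whose loss fraction exceeds the threshold.

    readings maps a segment name to (inflow, outflow) in the same units.
    Returns a list of (segment, loss_fraction) alerts.
    """
    alerts = []
    for segment, (inflow, outflow) in readings.items():
        loss = (inflow - outflow) / inflow
        if loss > threshold:
            alerts.append((segment, round(loss, 3)))
    return alerts

# Illustrative sensor readings: one segment healthy, one leaking badly
flows = {"Main-North": (1000.0, 985.0), "Main-South": (1000.0, 760.0)}
print(detect_leaks(flows))  # → [('Main-South', 0.24)]
```

A production system would of course account for legitimate offtakes, sensor error, and timing skew between meters, but the core comparison is this simple.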

High-Value Applications

Real-time data analytics can transform the management of water and wastewater systems. But it also increases the need to protect the data. Ensuring continuous, uninterrupted data availability is a critical success factor for tapping the full potential of real-time analytics for high-value applications.

Real-Time Data Represents a Disruptive Technology to Help Manage Water/Wastewater Infrastructure

Safety and Compliance

Real-time, continuous monitoring and analytics give public works professionals the ability to identify and respond to quality issues proactively to help protect public safety. This data also provides a rich historical record to support compliance documentation. Any interruption in the flow of this data, however, could lead to operational issues that might affect supply, pressure, water quality, or other critical performance issues. If data is lost, this gap could lead to a regulatory compliance violation, resulting in a fine.

Predictive Maintenance

Continuous monitoring and analytics take asset performance management (APM) to new heights. Instead of waiting until pumps or valves fail, sensors gather data on vibration and other subtle performance variations and feed it into analytics engines.  By detecting early signs of problems, unscheduled downtime can be avoided. With lead time on replacements often stretching to weeks, knowing in advance when a piece of equipment requires overhaul or is nearing end-of-life is crucial to avoiding a process interruption.
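One of the simplest forms this detection takes is comparing a rolling average of vibration readings against a learned healthy baseline. The sketch below illustrates the pattern; the window size, tolerance, and baseline values are illustrative rather than vendor defaults:

```python
from collections import deque

class VibrationMonitor:
    """Flag a pump when its rolling-average vibration drifts above baseline."""

    def __init__(self, baseline, window=5, tolerance=0.2):
        self.baseline = baseline      # healthy-state vibration level
        self.tolerance = tolerance    # allowed fractional drift above baseline
        self.readings = deque(maxlen=window)

    def add(self, value):
        """Record a reading; return True if the rolling average is abnormal."""
        self.readings.append(value)
        avg = sum(self.readings) / len(self.readings)
        return avg > self.baseline * (1 + self.tolerance)

monitor = VibrationMonitor(baseline=1.0, window=3)
for reading in [1.0, 1.1, 1.0, 1.5, 1.6]:
    alarm = monitor.add(reading)
print(alarm)  # → True (rolling average ~1.37 exceeds the 1.2 limit)
```

Real analytics engines use far richer features (frequency spectra, temperature, pressure correlations), but the principle of learning normal behavior and alerting on drift is the same.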

Analytics also provide insights that enable municipalities to repair or replace only those assets that require it, thus optimizing the use of financial resources. Any interruption in this data flow, however, could effectively block operators from knowing the condition of key components, leading to possible unscheduled downtime and/or increased remediation costs.

Remote Monitoring

Remote monitoring allows fewer people to monitor many more assets, resulting in labor savings that can quickly show a return on investment for these projects. This is key, especially as older employees begin to retire and finding qualified talent to replace them can be challenging.

In addition, the ability to monitor systems using devices that people already have, such as smartphones and tablets, avoids the need for municipalities to have to purchase dedicated devices for remote monitoring. Uninterrupted remote availability is essential in these settings, as it ensures that systems can be continuously monitored, even without on-site staff.

Water Conservation

An example of the opportunity of real-time data analytics is the potential to integrate weather data, including temperature and precipitation trends, into plant management analytics. This can provide predictive insights to dramatically improve water allocation and wastewater processing. These insights enable a more proactive approach to declaring or lifting bans on lawn watering or filling pools, or agricultural water distribution. Improving conservation efforts enables municipalities to avoid costly expansions of capacity. Uninterrupted data makes it possible.

Laying the Groundwork

As more public works professionals recognize how real-time data analytics delivers value in real-world applications, more municipalities will take their first steps on the journey. Certainly, developments, such as the Smart City Initiative, are encouraging urban centers to adopt new, intelligent monitoring and automation technologies to improve both the efficiency and safety of public services. As data analytics become more mainstream in public works, these capabilities will eventually migrate down to mid-size and even smaller communities.

As public works and municipal leaders plan their real-time data analytics road map, it is important to make the right investments now to ensure the greatest payback. Investing in data systems that provide the high availability required for continuous monitoring and analytics is critical.

Equally important is making sure any new data infrastructure is simple to operate and serviceable, given the limited IT resources typical of many public works departments. The right decisions today will position public works departments to reap the benefits of intelligent water and wastewater systems tomorrow.

“Reprinted with permission, original blog was posted here”. You may also visit here for more such insights on the digital transformation of industry.

About ARC Advisory Group: Founded in 1986, ARC Advisory Group is a Boston-based leading technology research and advisory firm for industry and infrastructure.

 For further information or to provide feedback on this article, please contact


About the Author:

Craig Resnick

Vice President, Consulting


Craig is the primary analyst for many of ARC’s automation supplier and financial services clients. Craig’s focus areas include production management, OEE, HMI software, automation platforms, and embedded systems.



ARC has been covering the multiphase flow market for several years, though its coverage has mainly focused on physical multiphase flow metering (MPFM) solutions, which are designed to measure fluids with three or more non-homogeneous phases, such as oil, gas (wet and dry), condensate, and water. Several major multiphase flow regimes are recognized when describing flow encountered in oil and gas wells: bubble flow, plug flow, slug flow, stratified (smooth) flow, wavy stratified flow, and annular mist flow. MPFMs have been around for over two decades, and adoption has been slow and, at times, spotty, due to reliability, accuracy, cost, and maintenance-related factors. MPFM units are multi-sensorial and typically rely on multiple different physical sensors as well as some computational capability to help reduce the amount of uncertainty in measurements. This blog covers the tangential topic of virtual flow metering.

Virtual Flow Metering Solutions – Complementary or Competitive?

As part of its ongoing research, ARC also indirectly covers the market for virtual flow metering solutions, as they are often seen as a complementary solution to an MPFM and, increasingly, a competitive one. Operators are increasingly looking to virtual flow metering as the performance of these solutions improves, as does the performance of MPFM units themselves. ARC is always interested in learning about and promoting solutions that can help owner-operators, independent E&P firms, oilfield service providers and others lower costs, improve operational performance, and increase collaboration on oilfield projects.

Analytics are Empowering Virtual Flow Metering for the Better

Arundo Analytics, a software company enabling advanced analytics in heavy industry, and ABB, a global supplier of control and automation technologies, have collaborated to create the first cloud-based virtual multiphase flow meters for the offshore oil and gas industry. This solution will be part of the fully integrated ABB Ability™ portfolio for the oil and gas industry.

“Our customers are demanding lower purchase, installation, and ongoing support costs in their operations. Using the scalable Arundo software to combine physical models with the latest in data science and machine learning, we are able to bring a number of innovative, cloud-based data-driven applications to the oil & gas industry,” said Espen Storkaas, ABB Group Vice President for Offshore Oil & Gas Digital.

Strong Partnerships Provide Real Business Value

For over fifty years, ABB has leveraged extensive physical modeling and simulation experience to deliver analytical insights to the offshore oil & gas industry; this includes modeling flows of individual phases of various intermingled fluids in a single stream. Typically, such flows are measured with expensive multiphase flow meters (MPFMs), which can account for a significant portion of a facility’s capital expense.

The cloud-to-cloud solution will provide connectivity between ABB Ability™ and Arundo’s Composer and Fabric software in order to offer a significantly more affordable and reliable option for oil & gas operators. This virtual flow meter provides analytics as a service to help facilities gain real-time data to understand the constituent properties of any given stream of produced fluids. According to Storkaas, “This collaboration will give the industry more transparency into their operations while also supporting with the need to find cost efficient, reliable solutions.”
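Conceptually, a virtual flow meter combines a physics-based estimate with a data-driven correction learned from reference measurements. The sketch below uses a simplified orifice-style equation and a constant bias correction purely for illustration; all names, constants, and sample numbers are hypothetical, and real solutions such as ABB's and Arundo's are far more sophisticated:

```python
import math

def physics_estimate(dp, density, k=0.62):
    """Simplified orifice-style flow estimate from differential pressure."""
    return k * math.sqrt(2 * dp / density)

def learn_correction(samples):
    """Average residual between reference meter readings and the physics
    model, standing in for the machine-learning correction layer."""
    residuals = [ref - physics_estimate(dp, rho) for dp, rho, ref in samples]
    return sum(residuals) / len(residuals)

def virtual_meter(dp, density, correction):
    """Hybrid estimate: physics model plus learned data-driven correction."""
    return physics_estimate(dp, density) + correction

# (differential pressure, density, reference flow) from an MPFM calibration run
reference = [(5000.0, 900.0, 2.2), (7000.0, 900.0, 2.6)]
bias = learn_correction(reference)
print(round(virtual_meter(6000.0, 900.0, bias), 2))
```

The appeal of the approach is that once the correction is learned, the "meter" is just software reading existing pressure and temperature sensors, which is why it can undercut a physical MPFM on cost.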

“Arundo’s software is purpose-built for taking desktop-based analytical models into live, online environments in just minutes. This collaboration will give the industry more transparency into their operations while also supporting with the need to find cost efficient, reliable solutions,” said Mogens Mathiesen, Co-founder and Commercial Lead at Arundo.

In a separate move, ABB has unveiled a new global collaborative operations centre staffed with industry experts and equipped with enhanced digital technologies. The new centre will enable customers in the oil, gas and chemicals (OGC) industries to improve operations and maintenance using the services of experts in digital technologies and operations. Based in Norway, the centre will offer information insights to help enhance customers’ profitability and productivity.

ARC believes that there is tremendous value realized when individual companies bring together their unique strengths to develop solutions and/or services that can help upstream oil & gas companies thrive in the “lower for longer” market that is the new normal.

Virtual Flow Metering: Tor Jakob Ramsøy, CEO and Founder of Arundo Analytics; Per Erik Holsten, Global Head, Oil, Gas and Chemicals, ABB; and Borghild Lunde, Vice-President, Oil, Gas and Chemicals, ABB Norway


Reprinted with permission; the original blog was posted here. You may also visit here for more such insights on the digital transformation of industry.

About ARC Advisory Group: Founded in 1986, ARC Advisory Group is a Boston-based leading technology research and advisory firm for industry and infrastructure.

For further information or to provide feedback on this article, please contact

 About the Author:

Tim Shea

Senior Analyst

As a senior analyst at ARC, Tim's research primarily focuses on upstream oil & gas automation as well as Digital Oilfield technologies.

Tim's focus areas include upstream oil and gas operational activities in support of the Digital Oilfield including multiphase flow metering, oilfield operations management systems, artificial lift optimization, leak detection systems, drilling optimization, compressor and turbine monitoring & controls, and general field devices such as radar and ultrasonic level measurement devices, and pressure transmitters, among others.

It’s that time of the year again. As we welcome 2018, a lot of conversations are happening in the business world speculating about the technologies that will trend high and dominate in the coming year. There’s always something new in store for us every year.

As someone passionately involved in the AI, IoT, Data Analytics space, I am fortunate to be working with some of the brightest minds and forward-thinking companies around the world. Based on my conversations with them, here are some of the promising trends of data analytics which I think will shape the future of analytics in 2018 –

Augmented analytics will become popular

Augmented analytics uses machine learning and NLP to automate data preparation and present data in a simplified manner. The coming year will see augmented analytics used widely to help human intelligence move beyond opinion and bias and reach better outcomes.

Augmented analytics allows data scientists, citizen data scientists, and the IT community in general to focus on strategic issues, make better business decisions, and improve service offerings, pricing, and other aspects of the business.

AI and Machine learning methods will be more widely adopted in the industry

AI and machine learning will play a major role in the coming years, as these technologies help simplify our lives and work processes. The explosive growth of big data has pushed most businesses toward real-time approaches to various business issues, including customer experience. The close relationship between big data analytics and different forms of artificial intelligence, such as predictive analytics, machine learning, and deep learning, will help organizations drastically improve their customer experiences.

Another technique, AutoML, is fast gaining popularity. Simply put, AutoML automates the workflow of developing machine learning models using deep-learning and statistical techniques. It is being widely adopted in the industry today because it helps democratize AI tools, allowing business users to develop machine learning models without a deep programming background, and it reduces the time data scientists spend creating models. The coming years will see more AutoML packages created for commercial use and the integration of AutoML within larger machine learning platforms.
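As a minimal illustration of the model-selection loop at the heart of AutoML (a toy sketch, not any particular AutoML package), one can try several candidate forecasters and automatically keep the one with the lowest validation error:

```python
# Toy AutoML-style loop: search a space of candidate models (here, moving-average
# forecasters with different window sizes) and select the best by validation error.
# Real AutoML frameworks search far richer spaces of models and hyperparameters.

def moving_average_forecast(history, window):
    """Forecast the next value as the mean of the last `window` observations."""
    return sum(history[-window:]) / window

def auto_select_window(series, candidate_windows, holdout=5):
    """Pick the window size that minimizes squared error on a holdout set."""
    best_window, best_error = None, float("inf")
    for w in candidate_windows:
        error = 0.0
        for i in range(len(series) - holdout, len(series)):
            pred = moving_average_forecast(series[:i], w)
            error += (series[i] - pred) ** 2
        if error < best_error:
            best_window, best_error = w, error
    return best_window

# Hypothetical time series; the loop does the "model development" automatically.
series = [10, 12, 11, 13, 12, 14, 13, 15, 14, 16, 15, 17]
best = auto_select_window(series, candidate_windows=[2, 3, 4, 6])
print("selected window:", best)
```

The same pattern, scaled up to many model families and hyperparameters, is what lets business users get a working model without hand-tuning each candidate.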

The growing adoption of Big Data for meaningful outputs

The last few years have seen technological progress that has resulted in cheaper computers with more processing power than ever before. The price of storage devices has also dropped drastically. With companies like Amazon, Microsoft, and many others offering storage, servers, and apps on the cloud at very reasonable rates, it is possible to process huge volumes of data from various sources to derive meaningful insights that help businesses grow. Businesses can now use data to accurately predict the needs of their customers. This data may arrive in various formats: text, image, and video. Big data is predicted to be the most powerful technology for answering the many consumer-centric questions companies are grappling with today.

Edge Computing and Cloud will see explosive growth

Edge computing is a distributed information technology (IT) architecture in which computing services are moved closer to the source of data, at the periphery of the network. Edge computing helps overcome connectivity and latency challenges because the distance traveled by the data is reduced. It has become popular due to the increase in mobile computing, the decrease in the cost of computer hardware, and a drastic increase in the use of IoT-enabled devices. Many leading companies, such as Cisco and HPE, are looking to leverage this technology.

Many smart devices, such as drones, wearables, and autonomous vehicles, will benefit from edge computing because they require real-time response and processing to work efficiently. Though there is speculation that edge computing will eventually replace cloud computing, cloud computing remains useful in many areas, for example, centralized storage and big data analytics applications that are not as sensitive to response time. I feel that in many cases the two technologies will have a symbiotic relationship.
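A minimal sketch of the edge-side pattern described above (process locally, forward only what matters), with hypothetical reading values:

```python
# Edge-side filtering sketch: the device summarizes readings locally and forwards
# only anomalous ones to the cloud, cutting bandwidth and latency. The data and
# threshold are illustrative, not taken from any real deployment.
from statistics import mean, stdev

def filter_at_edge(readings, z_threshold=2.0):
    """Return only readings that deviate strongly from the local baseline."""
    mu, sigma = mean(readings), stdev(readings)
    if sigma == 0:
        return []  # no variation, nothing worth forwarding
    return [r for r in readings if abs(r - mu) / sigma > z_threshold]

# A burst of sensor readings buffered on the edge device:
readings = [20.1, 20.3, 19.9, 20.2, 35.7, 20.0, 20.1, 19.8]
to_send = filter_at_edge(readings)
print("forwarding to cloud:", to_send)  # only the 35.7 spike is forwarded
```

The cloud still gets the signal it needs (the spike), while the routine readings stay on site, which is exactly the bandwidth and latency trade-off the paragraph above describes.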

Predictive Analytics will be used for solving difficult problems

Even though predictive analytics is not new and has been used in the industry for quite some time, more and more organizations are turning to this technology to increase their bottom line and competitive advantage.

Owing to the availability of interactive software, many organizations and business analysts are using predictive analytics to precisely predict future behaviors to improve the organization’s profitability. Predictive analytics is finding major uses in the areas of fraud detection, reducing market risks, optimizing marketing campaigns, and improving business operations.
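A minimal sketch of the predictive idea behind these tools: fit a least-squares trend to historical figures (the revenue numbers here are hypothetical) and project the next period.

```python
# Least-squares trend fit as a toy predictive-analytics example. The monthly
# revenue figures are made up for illustration.

def fit_line(xs, ys):
    """Ordinary least-squares fit; returns (slope, intercept)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
            / sum((x - mean_x) ** 2 for x in xs)
    return slope, mean_y - slope * mean_x

months = [1, 2, 3, 4, 5, 6]
revenue = [100, 108, 115, 124, 131, 140]  # hypothetical figures
slope, intercept = fit_line(months, revenue)
forecast_month_7 = slope * 7 + intercept
print(f"projected month-7 revenue: {forecast_month_7:.1f}")
```

Production systems layer far more sophisticated models on top, but the workflow is the same: learn a pattern from history, then use it to anticipate what comes next.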

I feel that 2018 will see a convergence between big data and other technologies. There will be a big surge in the application of machine learning in day-to-day life. Artificial intelligence, IoT, and a cloud-first strategy for big data analytics will be more widely adopted, allowing these smart technological capabilities to be incorporated into various enterprise solutions and services so that people and organizations can leverage their benefits. I am excited to be a part of this fascinating evolution.


Thank you for reading! Follow Me on LinkedIn or Twitter

Today, data volumes are growing exponentially, with data coming from various sources: sensor data from the Internet of Things, log files, social media files such as audio/video, call center logs, and all of an organization's internal data.


Organizations that harness this data and exploit it to their advantage are surviving competition, even from nontraditional players.


Big data has become the foundation for digital transformation.


Though the big data opportunity is growing rapidly, the top two big data challenges that organizations face are determining how to get value out of big data and defining a big data strategy.


Unless you acquire, store, and retain internal organizational data along with all the external data from call logs, audio/video files, customer surveys, etc., there is little chance of applying analytics on top of it.


Here are the top five use cases businesses are deploying to gain a competitive advantage.


  1. Customer 360-degree view: Many companies have had trouble integrating large volumes of customer data across various databases and warehouse systems, and are not completely sure which key metrics to use for profiling customers. Hence, creating a customer 360-degree view becomes the foundation for customer analytics: it captures all customer interactions, which can then be used for further analysis.


  2. Fraud detection and prevention: Financial crimes, fraudulent claims, and data breaches are among the most common challenges faced by organizations across industries. Thanks to big data analytics and machine learning, today’s fraud prevention systems are much better at detecting criminal activity and preventing false positives. With the help of big data platforms, banks can store all of their historical data, which enables better fraud detection.


  3. Recommendation engines: In this digital age, every business is attempting hyper-personalization, using recommendation engines to present the right offer at the right time. Organizations that haven't taken advantage of their big data in this way may lose customers to competitors or miss out on upsell and cross-sell opportunities.


  4. Sentiment analysis: Today, it is important to understand consumer emotions as they interact with your business and to use that understanding to improve customer satisfaction. Big data and social media channels together help analyze customer sentiment, giving organizations a clear picture of what they need to do to outperform their competitors. Disney, Nestlé, and Toyota are spending heavily on keeping their customers happy.


  5. Predictive and preventive maintenance: With Internet of Things and sensor technology, data is captured from machines, equipment, and devices in real time. This data is used to predict failures up front, reducing unplanned downtime and maintenance costs. Companies like GE are using digital twins in their wind farms to drive down the cost of electricity.
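The predictive-maintenance use case above can be sketched minimally: watch a rolling average of a vibration signal and raise an alert when it drifts past a limit, so maintenance can be scheduled before the machine actually fails. The readings and threshold here are hypothetical.

```python
# Toy predictive-maintenance monitor: track a rolling mean of sensor vibration
# and flag the readings where it exceeds a fixed limit. Real systems use learned
# models and multiple signals; this only illustrates the shape of the idea.
from collections import deque

def maintenance_alerts(vibration_stream, window=3, limit=5.0):
    """Return indices where the rolling mean of vibration exceeds the limit."""
    recent = deque(maxlen=window)
    alerts = []
    for i, v in enumerate(vibration_stream):
        recent.append(v)
        if len(recent) == window and sum(recent) / window > limit:
            alerts.append(i)
    return alerts

# Hypothetical vibration readings; the machine degrades toward the end.
stream = [4.1, 4.0, 4.2, 4.3, 4.1, 5.8, 6.2, 6.5]
print("alert at readings:", maintenance_alerts(stream))
```

The rolling window smooths out one-off spikes, so an alert only fires when the signal drifts persistently, which is the behavior that typically precedes equipment failure.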


Big data is nothing new today, and companies are building data lakes that can store and retain many years' worth of history.


There are many more use cases, but which others can you think of that drive the success of an organization?

If you’re a sports buff, you may have already heard the story about how big data and SAP helped Germany win the 2014 World Cup. Many wisely said that it was data off the pitch as much as the players on it that stole the show for the four-time winners!


This is the stuff legends are made of. In October 2013, the German Football Association (DFB) and SAP began collaborating to develop a “Match Insights” software system for the German national team. The software would be used in preparation for and during the tournament (and has been used ever since).

The beauty of this software lay in its simplicity. During the World Cup, the German team analyzed the data (captured by video cameras placed around the pitch) and turned it into information viewable on tablet or mobile devices to help improve team performance and gain a deeper insight into its rivals.

The biggest resultant improvement was the team’s speed of passing. When Germany reached the World Cup semi-finals in 2010, the team had an average ball possession time of 3.4 seconds. After using SAP Match Insights, based on the supplier’s HANA technology, it was able to reduce that time to 1.1 seconds.

Other data captured included players’ speed and distance travelled, positioning and number of touches.
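As a rough illustration of how a possession metric like the one cited above might be computed from event data (toy timestamps, not the actual DFB/SAP pipeline):

```python
# Compute average possession time per touch from (receive, release) event pairs.
# The timestamps below are invented for illustration.

def average_possession(touches):
    """touches: list of (receive_time, release_time) pairs in seconds."""
    durations = [release - receive for receive, release in touches]
    return sum(durations) / len(durations)

touches = [(0.0, 1.2), (5.4, 6.3), (9.0, 10.2), (14.5, 15.8), (20.0, 21.0)]
print(f"average possession: {average_possession(touches):.2f} s")
```

The real system derives such events from camera tracking across the whole pitch, but the underlying metric is this simple: time on the ball per touch, averaged over the match.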

Boosting performance with data

Today, almost every major professional sports team has an analytics department or an analytics expert on staff. Articles suggest that teams often scan scouts' clipboard notes, convert the resulting PDFs to Excel, and hand those files over to top-notch data developers. Another set of young, talented mathematicians then crunches the numbers that scouts and general managers use to help determine which players fit their club best.

No doubt, analytics are the present and future of professional sports. Any team that does not apply them to the fullest is at a competitive disadvantage!

Also trending is the popularity of data-driven decision-making in sports among fans. Fans across the world are consuming more analytical content than ever – and loving it! There are now entire websites dedicated to the research and analysis of sports statistics and how they relate to predicting performance.

Sports analytics is the new name of the game!

The year 2018 will be the year of the Marketing math factory. Companies that win will adopt Marketing math & build common sense on top of it. One of the biggest changes the algorithmic approach brings for both businesses and consumers is a rich new level of interactivity. Algorithms & math will allow marketers to make meaning out of this data & make it actionable. The Mad Men of advertising will soon be running a math factory.


The customer experience for many legacy companies is often secondhand or thirdhand. Legacy companies go through distributors and therefore have only second-hand insight about consumers, while new-age businesses have real-time interaction through their mobile apps or websites available for defining their customer strategy. Dr Ram Charan calls these new-age companies Math houses. Their knowledge about customers is so rich that Uber had to deal with questions about privacy after some press reports about a data insights tool it used called “God view”.


The world is changing in one significant way: companies are willing to share information and have started to access data that is available publicly. Open data—public information and shared data from private sources—can help create $3 trillion a year of value, according to McKinsey.


And new-age companies do this very well; in fact, we can call it the notion of “profitable data sharing”. They do not hesitate to share data across partners to ensure their customers get a kick-ass solution, and they use partnerships through APIs very effectively. Twilio, for instance, provides a service that allows partners to send and receive voice and SMS communications. When a customer receives an SMS message telling them that their Uber driver has arrived, it is powered by the Twilio API.

Also, Uber uses Google Maps, SendGrid (e-mails), and Braintree (payments) to make it ridiculously easy for consumers to interact with them. Airbnb, too, uses SendGrid, Twilio, and Braintree. Uber allows itself to be embedded into the OpenTable and United Airlines apps. The OpenTable app has a “Ride With Uber” feature, making it easy to book an Uber ride with a single click within the OpenTable app; this also appears on the screen when paying your check using the OpenTable app.


They share data through APIs. There are over 14,441 APIs offered by firms today, according to


As a term, “API” has been around for a while, but the modern, Web-connected version gathered steam in the early 2000s thanks to Amazon’s Store API, which allowed any Web property to have an Amazon presence on its site. Facebook and Twitter soon followed suit with their own open API strategies. A Web publisher that integrated with Facebook’s or Twitter’s login APIs began to get richer insights about its customers, in turn allowing it to sell better products and services to its advertising customers.

 You don't see legacy companies doing this - HUL sharing APIs with Philips or Saffola sharing APIs with FitBit? Or, HDFC Bank sharing APIs with the Future Group?


The algorithmic & API-based business will be the business of the future, and it will change marketing as we know it. Gartner has predicted that by 2020, algorithms will participate in 5% of all economic transactions. The real wild card and potential game changer, though, is the Internet of Things (IoT). Over the next decade, as the IoT becomes a reality, almost everything powered by electricity will become web-connected with a digital exhaust. It is estimated that there will be up to 50 billion connected devices in the world by 2020 (about 10 billion exist today).


And yet, despite this invasion of Algorithms in the marketing world, about 61 percent of consumers are less likely to make future purchases following less-than-satisfactory personalized experiences, according to a study conducted by Forrester Consulting. About 91 percent of marketers are prioritizing personalization but only 16 percent have the ability to capture customer intent and deliver real-time, behavior-based marketing across channels.

Pointing to this disconnect and the need for brands to contextualize each customer engagement, the study highlights that while 66 percent of marketers rate their efforts at personalization as ‘very good’ or ‘excellent’, just 31 percent of consumers report companies are consistently delivering personalized, cross-channel experiences.


Because consumers are sharing so much personal data with brands, they expect value in return – in the form of personalization & rewards. And they want to see it translate into a superior customer experience for themselves. While most marketers seek to improve personalized customer experiences from this customer data, their strategies are immature and their marketing efforts are falling short in this regard.


To move to a new era, marketers must set up math factories. Marketers need to visualize a math factory like a manufacturing plant with operators and automated assembly lines. Leads come in like raw materials & finally reach the ‘distillery’ area – where the “segment of one” nurture system, the remarketing engines, and dynamic content platforms refine them into finished products: customers. For this, CMOs will have to reimagine their teams, bringing left- and right-brained people together into one team. True value-add always happens at the intersection points; CMOs can make the math factory work if their creative people can connect with their math geeks & make the customer magic happen!


You can read more about data-led marketing here: Cequity | Latest thinking in Analytical Marketing

Businesses these days face increasing complexity and volatility along with the availability of vast amounts of data. Despite the gloom-and-doom mood prevailing in the IT sector, a large number of companies are turning to advanced analytics to manage and understand data and gain actionable business insights.


The data and analytics market in India is undergoing a significant change. Indian companies have evinced keen interest in analytics and placed it as their numero uno priority for 2017. “The average commitment of top management in Indian companies towards the analytics is by far much higher than the global average,” said Beatriz Sanz Saiz, Global Advisory Analytics Leader, EY. India currently ranks third in terms of analytics adoption worldwide.


Not just private organizations: the analytics wave has also been embraced by government agencies and non-profit organizations. The Comptroller and Auditor General (CAG) drafted a big data management policy last year for Indian audit and accounts departments to increase the use of data analysis in different functions.


The Indian analytics market is currently pegged at US$2 billion and is growing at a healthy CAGR of 26%. The market is expected to reach US$16 billion by 2025, and Nasscom anticipates the big data industry in India will occupy 32% of the global market by then. Retail and consumer, BFSI, manufacturing, and healthcare are the key contributors to the analytics market in the country.


While the picture looks quite promising, Indian companies feel that lack of skills is a major impediment in their analytics initiatives. There are currently 50,000 open positions for analytics in India. Moreover, analytics is yet to become a formal field to be taught in technology institutes as part of their full-time courses.


In view of this, Nasscom has proposed that engineering institutes upgrade their curricula to include big data and data analytics. On the same note, a Centre of Excellence was initiated to drive research in the analytics space and strengthen the ecosystem. The country contributes 12% of all analytics and data science job openings globally. With a large pool of people with mathematics and programming backgrounds, India could become the largest hub for big data and analytics skills.


Data analytics is becoming an integral part of organizations' business strategies, and the trend is expected to spread to more industries in the times ahead. The rise in data stemming from growing smartphone penetration is pushing organizations to delve deeply into consumer minds through machine learning techniques. Companies are now focusing on Internet of Things (IoT) data integration and management to customize their product and service offerings. Indian IT giants such as Infosys, TCS, Wipro, and Tech Mahindra have already invested significant money to step up their automation and AI platforms. The trend is not limited to large multinationals: one website cites US$210 invested by start-up firms across 20 deals in 2016.


Data analytics is slowly becoming embedded in a majority of organizations in India. The adoption is gradual, but the time is not far off when analytics will be seen as a key business differentiator in this highly competitive space. Organizations applying analytics to data are twice as likely to outperform their peers. As India undergoes rapid digital transformation, it will create myriad opportunities for companies globally and could metamorphose into the world's largest hub for analytics.

Big data analytics in the healthcare industry is a combination of clinical innovation and technology. Healthcare businesses face difficulty managing large amounts of data in soft- or hard-copy formats, so the current trend of big data analytics favours such organizations. This technique supports a wide range of healthcare operations to improve services and manage problems in the healthcare sector.


Big data analytics vendors in the USA and worldwide need to implement big data strategies well to generate actionable insights, organize their future vision, improve outcomes, and reduce time to value. This approach helps provide insightful information to healthcare enterprises regarding their management, planning, and measurements.



Benefits of data analytics in the following areas of the healthcare industry:

Data analytics helps improve care, save patients' lives, and lower the cost of the health facilities offered.


Public Health:

By analyzing disease types, patterns, and other factors about disease, public health issues can be addressed with an analytics approach. Large amounts of data help determine requirements and services, and help predict and prevent future problems for the benefit of the population.


Electronic Medical Record or EMR: 

An EMR contains both structured and unstructured standard medical data. Evaluating EMR data with an analytics approach helps predict a patient's problems and provide effective care.


Patient Profile Analytics:

Patient profile analytics is used to identify individuals who could benefit from a proactive approach, which may include lifestyle changes.


Fraud Analysis: 

This data analytics approach helps identify and analyze large numbers of claim requests to reduce fraud.


Safety Monitoring:

This data analytics approach helps with safety monitoring and can identify hazards before they occur. It can also be used to analyze huge volumes of fast-moving data in hospitals in real time.

Big data also helps reduce administrative costs, maintain better care coordination across hospital systems, and enhance the way clinicians make decisions about their patients.

Why Is Data Analytics Used in Healthcare?

Data analysis techniques can capture, process, distribute, and manage data in a form that makes it easy to extract information. With data analytics techniques, vast amounts of patient-related health data can be analyzed to gain a better understanding of outcomes, which may then be applied at the point of care to deliver better facilities.

Data analytics provides value for healthcare organizations ranging from individual physicians to large healthcare systems. The technique can be used:

  • To find specific individual and population health issues
  • To enable fraud management in an effective way
  • For determining affordable ways to diagnose and treat patients
  • To bring out effective R&D methods for the drugs and devices
  • To evaluate new treatments introduced to the market


The most important purpose of using big data analytics technologies in the healthcare industry is giving the best treatment with the right care at the proper time by the best provider, resulting in a cost- and time-effective solution.


If you are looking for big data solutions in the USA, USM Business Systems can help.

USM Business Systems is a leading big data and business intelligence company in the USA. USM’s end-to-end big data services include building data management strategy, implementing data warehouses, and building analytics and visualization frameworks. USM also provides services on data analytics tools in the USA such as Hadoop, MongoDB, Talend, and Pentaho.