Categories
erwin Expert Blog

Managing Emerging Technology Disruption with Enterprise Architecture

Emerging technology has always played an important role in business transformation. In the race to collect and analyze data, provide superior customer experiences, and manage resources, new technologies always interest IT and business leaders.

KPMG’s The Changing Landscape of Disruptive Technologies found that today’s businesses are showing the most interest in emerging technology like the Internet of Things (IoT), artificial intelligence (AI) and robotics. Other emerging technologies that are making headlines include natural language processing (NLP) and blockchain.

In many cases, emerging technologies such as these are not fully embedded into business environments. Before they enter production, organizations need to test and pilot their projects to help answer some important questions:

  • How do these technologies disrupt?
  • How do they provide value?

Enterprise Architecture’s Role in Managing Emerging Technology

Pilot projects that take a small number of incremental steps, with small funding increases along the way, help provide answers to these questions. If the pilot proves successful, it’s then up to the enterprise architecture team to explore what it takes to integrate these technologies into the IT environment.

This is the point where new technologies go from “emerging technologies” to becoming another solution in the stack the organization relies on to create the business outcomes it’s seeking.

One of the easiest, quickest ways to try to pilot and put new technologies into production is to use cloud-based services. All of the major public cloud platform providers have AI and machine learning capabilities.

Integrating new technologies based in the cloud will change the way the enterprise architecture team models the IT environment, but that’s actually a good thing.

Modeling can help organizations understand the complex integrations that bring cloud services into the organization, and help them better understand the service level agreements (SLAs), security requirements and contracts with cloud partners.

When done right, enterprise architecture modeling also will help the organization better understand the value of emerging technology and even cloud migrations that increasingly accompany them. Once again, modeling helps answer important questions, such as:

  • Does the model demonstrate the benefits that the business expects from the cloud?
  • Do the benefits remain even if some legacy apps and infrastructure need to remain on premises?
  • What type of savings do you see if you can’t consolidate enough to close an entire data center?
  • How does the risk change?

Many of the emerging technologies garnering attention today are on their way to becoming a standard part of the technology stack. But just as the web came before mobility, and mobility came before AI, other technologies will soon follow in their footsteps.

To most efficiently evaluate these technologies and decide if they are right for the business, organizations need to provide visibility to both their enterprise architecture and business process teams so everyone understands how their environment and outcomes will change.

When the enterprise architecture and business process teams use a common platform and model the same data, their results will be more accurate and their collaboration seamless. This will cut significant time off the process of piloting, deploying and seeing results.

Outcomes like more profitable products and better customer experiences are the ultimate business goals. Getting there first is important, but only if everything runs smoothly on the customer side. The disruption of new technologies should take place behind the scenes, after all.

And that’s where investing in pilot programs and enterprise architecture modeling demonstrates value as you put emerging technology to work.



Data Modeling in a Jargon-filled World – Internet of Things (IoT)

In the first post of this blog series, we focused on jargon related to the “volume” aspect of Big Data and its impact on data modeling and data-driven organizations. In this post, we’ll focus on “velocity,” the second of Big Data’s “three Vs.”

In particular, we’re going to explore the Internet of Things (IoT), the constellation of web-connected devices, vehicles, buildings and related sensors and software. It’s a great time for this discussion too, as IoT devices are proliferating at a dizzying pace in both number and variety.

Though IoT devices typically generate small “chunks” of data, they often do so at a rapid pace, hence the term “velocity.” Some of these devices generate data from multiple sensors for each time increment. For example, we recently worked with a utility that embedded sensors in each transformer in its electric network and then generated readings every 4 seconds for voltage, oil pressure and ambient temperature, among others.

While the transformer example is just one of many, we can quickly see two key issues that arise when IoT devices are generating data at high velocity. First, organizations need to be able to process this data at high speed. Second, organizations need a strategy to manage and integrate this never-ending data stream. Even small chunks of data will accumulate into large volumes if they arrive fast enough, which is why it’s so important for businesses to have a strong data management platform.
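To make that velocity concrete, here’s a quick back-of-the-envelope calculation based on the transformer example above. The three-sensor count matches that example, but the fleet size of 10,000 transformers is an illustrative assumption, not a figure from the utility:

```python
# Back-of-the-envelope arithmetic for the transformer example:
# three readings (voltage, oil pressure, ambient temperature) every 4 seconds.
sensors_per_transformer = 3
seconds_per_day = 24 * 60 * 60
readings_per_transformer_per_day = sensors_per_transformer * (seconds_per_day // 4)

print(readings_per_transformer_per_day)  # 64800 readings per transformer per day

# Across an illustrative fleet of 10,000 transformers:
print(readings_per_transformer_per_day * 10_000)  # 648,000,000 readings per day
```

Even at a few bytes per reading, numbers like these explain why a deliberate storage and integration strategy has to come before the devices go live.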

It’s worth noting that the idea of managing readings from network-connected devices is not new. In industries like utilities, petroleum and manufacturing, organizations have used SCADA systems for years, both to receive data from instrumented devices to help control processes and to provide graphical representations and some limited reporting.

More recently, many utilities have introduced smart meters in their electricity, gas and/or water networks to make the collection of meter data easier and more efficient, as well as to make the information more readily available to customers and other stakeholders.

For example, you may have seen an energy usage dashboard provided by your local electric utility, allowing customers to view graphs depicting their electricity consumption by month, day or hour, enabling each customer to make informed decisions about overall energy use.

Seems simple and useful, but have you stopped to think about the volume of data underlying this feature? Even if your utility only presents information on an hourly basis, if you consider that it’s helpful to see trends over time and you assume that a utility with 1.5 million customers decides to keep these individual hourly readings for 13 months for each customer, then we’re already talking about over 14 billion individual readings for this simple example (1.5 million customers x 13 months x over 30 days/month x 24 hours/day).
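The arithmetic behind that 14-billion figure is worth sketching out, since it shows how quickly “simple” hourly readings add up:

```python
# Hourly smart-meter readings retained for 13 months, per the example above.
customers = 1_500_000
months_retained = 13
days_per_month = 30   # the text says "over 30", so this undercounts slightly
hours_per_day = 24

total_readings = customers * months_retained * days_per_month * hours_per_day
print(f"{total_readings:,}")  # 14,040,000,000 -- over 14 billion readings
```

And that’s for a single hourly metric per customer; add more metrics or finer granularity and the total multiplies accordingly.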

Now consider the earlier example I mentioned of each transformer in an electrical grid with sensors generating multiple readings every 4 seconds. You can get a sense of the cumulative volume impact of even very small chunks of data arriving at high speed.

With experts estimating the IoT will consist of almost 50 billion devices by 2020, businesses across every industry must prepare to deal with IoT data.

But I have good news: IoT data is generally very simple and easy to model. Each connected device typically sends one or more data streams, each carrying a value for the type of reading and the time at which it occurred. Historically, large volumes of simple sensor data like this were best stored in time-series databases like the very popular PI System from OSIsoft.
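That shape, a device identifier, a reading type, a value and a timestamp, can be sketched in a few lines. The field names below are illustrative, not the schema of the PI System or any particular product:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class SensorReading:
    """One data point in an IoT stream: what was measured, where and when."""
    device_id: str   # e.g. a transformer identifier (hypothetical naming)
    metric: str      # e.g. "voltage", "oil_pressure", "ambient_temp"
    value: float
    timestamp: datetime

# A single reading from the transformer example:
reading = SensorReading(
    device_id="transformer-042",
    metric="voltage",
    value=13_800.0,
    timestamp=datetime(2017, 6, 1, 12, 0, 4, tzinfo=timezone.utc),
)
```

The simplicity of each record is exactly why the challenge is volume and velocity rather than structure.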

While this continues to be true for many applications, alternative architectures, such as storing the raw sensor readings in a data lake, are also being successfully implemented. Even so, organizations need to carefully weigh the pros and cons of home-grown infrastructure versus time-tested, industrial-grade solutions like the PI System.

Regardless of how raw IoT data is stored once captured, the real value of IoT for most organizations is only realized when IoT data is “contextualized,” meaning it is modeled in the context of the broader organization.

The value of modeled data eclipses that of “edge analytics” (where a reading is inspected by a software program while in flight from the sensor, typically to see whether it falls within an expected range, and is either acted upon if required or simply allowed to pass through) or of simple reporting like the energy usage dashboard example.

It is straightforward to represent a reading of a particular type from a particular sensor or device in a data model or process model. It starts to get interesting when we take it to the next step and incorporate entities into the data model to represent expected ranges for readings under various conditions, as well as how the devices relate to one another.
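As a minimal sketch of what those “expected range” entities might look like, the hypothetical code below attaches a valid band to each reading type and flags anything outside it. All names and thresholds here are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class ExpectedRange:
    """Valid band for one reading type; values outside it warrant follow-up."""
    metric: str
    low: float
    high: float

    def contains(self, value: float) -> bool:
        return self.low <= value <= self.high

# Illustrative thresholds, not real engineering limits:
ranges = {"oil_pressure": ExpectedRange("oil_pressure", 5.0, 12.0)}

def flag_anomalies(readings):
    """Yield (metric, value) pairs that fall outside their expected range."""
    for metric, value in readings:
        r = ranges.get(metric)
        if r is not None and not r.contains(value):
            yield metric, value

print(list(flag_anomalies([("oil_pressure", 4.2), ("oil_pressure", 8.0)])))
# -> [('oil_pressure', 4.2)]
```

In a full model, those ranges would vary by operating condition and be joined to entities describing how devices connect, which is what enables the kind of proactive intervention described next.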

If the utility in the transformer example has modeled that IoT data well, it might be able to prevent a developing problem with a transformer and also possibly identify alternate electricity paths to isolate the problem before it has an impact on network stability and customer service.

Hopefully this overview of IoT in the utility industry helps you see how your organization can incorporate high-velocity IoT data to become more data-driven and therefore more successful in achieving larger corporate objectives.

Subscribe and join us next time for Data Modeling in a Jargon-filled World – NoSQL/NewSQL.



Data-Driven Business Transformation: the Data Foundation

In light of data’s prominence in modern business, organizations need to ensure they have a strong data foundation in place.

The ascent of data’s value has been as steep as it is staggering. In 2016, it was suggested that more data would be created in 2017 than in the previous 5000 years of humanity.

But what’s even more shocking is that the peak may not even be in sight.

To put its value into context, the five most valuable businesses in the world all deal in data (Alphabet/Google, Amazon, Apple, Facebook and Microsoft). It’s even overtaken oil as the world’s most valuable resource.

Yet, even with data’s value being as high as it is, there’s still a long way to go. Many businesses are still getting to grips with data storage, management and analysis.

Fortune 1000 companies, for example, could earn another $65 million in net income with access to just 10 percent more of their data (from Data-Driven Business Transformation, 2017).

We’re already witnessing the beginnings of this increased potential across various industries. Data-driven businesses such as Airbnb, Uber and Netflix are all dominating, disrupting and revolutionizing their respective sectors.

Interestingly, although they provide very different services for the consumer, the organizations themselves all identify as data companies. This simple change in perception and outlook stresses the importance of data to their business models. For them, data analysis isn’t just an arm of the business… It’s the core.

Data foundation

The dominating data-driven businesses use data to influence almost everything: how decisions are made, how processes could be improved, and where the business should focus its innovation efforts.

However, simply establishing that your business could (and should) be getting more out of data doesn’t necessarily mean you’re ready to reap the rewards.

In fact, a pre-emptive dive into a data strategy could actually slow your digital transformation efforts down. Hurried software investments made in response to disruption can lead to teething problems in your strategy’s adoption, as well as shelfware, wasting both time and money.

Additionally, oversights in the strategy’s implementation will stifle the very potential effectiveness you’re hoping to benefit from.

Therefore, when deciding to bolster your data efforts, a great place to start is to consider the ‘three Vs’.

The three Vs

The three Vs of data are volume, variety and velocity. Volume references the amount of data; variety, its different sources; and velocity, the speed at which it must be processed.

When you’re ready to start focusing on the business outcomes that you hope data will provide, you can also stretch those three Vs to five. The five Vs include the aforementioned three and add veracity (confidence in the data’s accuracy) and value, but for now we’ll stick to three.

As discussed, the total amount of data in the world is staggering. But the total data available to any one business can be huge in its own right (depending on the extent of your data strategy).

Unsurprisingly, vast volumes of data come from a vast number of potential sources, and dedicated tools are needed to process them. Even then, the sources are often disparate, and very unlikely to offer worthwhile insight in a vacuum.

This is why it’s so important to have an assured data foundation upon which to build a data platform.

A solid data foundation

The Any2 approach is a strategy for housing, sorting and analyzing data that aims to be the very foundation on which you build your data strategy.

Shorthand for “Any Data, Anywhere,” the Any2 approach can help clean up the disparate noise and let businesses drill down on, and effectively analyze, the data to yield more reliable and informative results.

It’s especially important today, as data sources are becoming increasingly unstructured, and so more difficult to manage.

Big data for example, can consist of click stream data, Internet of Things data, machine data and social media data. The sources need to be rationalized and correlated so they can be analyzed more effectively.

When it comes to actioning an Any2 approach, a fluid relationship between the various data initiatives involved is essential. Those initiatives are Data Modeling, Enterprise Architecture, Business Process and Data Governance.

It also requires collaboration, both between the aforementioned initiatives and with the wider business, to ensure everybody is working toward the same goal.

With a solid data foundation platform in place, your business can begin realizing data’s potential for itself. You also ensure you’re not left behind as new disruptors enter the market and your competition continues to evolve.

For more data advice and best practices, follow us on Twitter and LinkedIn to stay up to date with the blog.

For a deeper dive into best practices for data, its benefits, and its applications, get the FREE whitepaper below.

Data-Driven Business Transformation