
For Pharmaceutical Companies, Data Governance Shouldn’t Be a Hard Pill to Swallow

Data governance is a critical piece of the data management puzzle in the pharmaceutical industry.

Pharmaceutical and life sciences companies face many of the same digital transformation pressures as other industries, such as the financial services and healthcare sectors we have explored previously.

In response, they are turning to technologies like advanced analytics platforms and cloud-based resources to help better inform their decision-making and create new efficiencies and better processes.

Among the conditions that set digital transformation in pharmaceuticals and life sciences apart from other sectors are the regulatory environment and the high incidence of mergers and acquisitions (M&A).


Protecting sensitive data in these industries is a matter of survival, both because of the potential penalties for failing to comply with any number of industry and government regulations and because of the near-priceless value of research and development (R&D) data.

The high costs and huge potential of R&D are among the driving factors of M&A activity in the pharmaceutical and life sciences space. With roughly $156 billion in M&A deals in healthcare in the first quarter of 2018 alone – many involving drug companies – the market is the hottest it’s been in more than a decade. Much of the M&A activity is being driven by companies looking to buy competitors, acquire R&D, and offset losses from expiring drug patents.

 

[GET THE FREE E-BOOK]: APPLICATION PORTFOLIO MANAGEMENT FOR MERGERS & ACQUISITIONS IN THE FINANCIAL SERVICES SECTOR

 

With M&A activity comes the challenge of integrating two formerly separate companies into one. That means integrating technology platforms, business processes, and, of course, the data each organization brings to the deal.

Data Integrity for Risk Management and More

As in virtually every other industry, data is quickly becoming one of the most valuable assets within pharmaceutical and life sciences companies. In its 2018 Global Life Sciences Outlook, Deloitte speaks to the importance of “data integrity,” which it defines as data that is complete, consistent and accurate throughout the data lifecycle.

Data integrity helps manage risk in pharmaceutical and life sciences by making it easier to comply with a complex web of regulations that touch many different parts of these organizations, from finance to the supply chain and beyond. Linking these cross-functional teams to data they can trust eases the burden of compliance by supplying team members with what many industries now refer to as “a single version of truth” – which is to say, data with integrity.

Data integrity also helps deliver insights for important initiatives in the pharmaceutical and life sciences industries like value-based pricing and market access.

Developing data integrity and taking advantage of it to reduce risk and identify opportunities in pharmaceuticals and life sciences isn’t possible without a holistic approach to data governance that permeates every part of these companies, including business processes and enterprise architecture.


Data Governance in the Pharmaceutical Industry Maximizes Value

Data governance gives businesses the visibility they need to understand where their data is, where it came from, its value, its quality and how it can be used by people and software applications. This type of understanding of your data is, of course, essential to compliance. In fact, according to a 2017 survey by erwin, Inc. and UBM, 60 percent of organizations said compliance is driving their data governance initiatives.

Using data governance in the pharmaceutical industry helps organizations contemplating M&A, not only by helping them understand the data they are acquiring, but also by informing decisions around complex IT infrastructures and applications that need to be integrated. Decisions about application rationalization and business processes are easier to make when they are viewed through the lens of a pervasive data governance strategy.

Data governance in the pharmaceutical industry can be leveraged to hone data integrity and move toward what Deloitte refers to as end-to-end evidence management (E2E), which unifies the data in pharmaceuticals and life sciences from R&D to clinical trials and through commercialization.

Once implemented, Deloitte predicts E2E will help organizations maximize the value of their data by:

  • Providing a better understanding of emerging risks
  • Enabling collaboration with health systems, patient advocacy groups, and other constituents
  • Streamlining the development of new therapies
  • Driving down costs

If that list of benefits sounds familiar, it’s because it matches up nicely with the goals of digital transformation at many organizations – more efficient processes, better collaboration, improved visibility and better cost management. And it’s all built on a foundation of data and data governance.

To learn more, download our free whitepaper on the Regulatory Rationale for Integrating Data Management & Data Governance.


Data Modeling and Data Mapping: Results from Any Data Anywhere

A unified approach to data modeling and data mapping could be the breakthrough that many data-driven organizations need.

In most of the conversations I have with clients, they express the need for a viable solution to model their data, as well as the ability to capture and document the metadata within their environments.

Data modeling is an integral part of any data management initiative. Organizations use data models to tame “data at rest” for business use, governance and technical management of databases of all types.

However, once an organization understands what data it has and how it’s structured via data models, it needs answers to other critical questions: Where did it come from? Did it change along the journey? Where does it go from here?

Data Mapping: Taming “Data in Motion”

Knowing how data moves throughout technical and business data architectures is key for true visibility, context and control of all data assets.

Managing data in motion has been a difficult, time-consuming task that involves mapping source elements to the data model, defining the required transformations, and doing the same for downstream targets.

Historically, this work has either been outsourced to ETL/ELT developers, who often create a siloed, technical infrastructure that is opaque to the business, or business-friendly mappings have been kept in an assortment of unwieldy spreadsheets that are difficult to consolidate and reuse, much less capable of accommodating new requirements.
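
To make the contrast concrete, here is a minimal Python sketch of what a single source-to-target mapping entry might look like once it is captured as structured, machine-readable data instead of a spreadsheet row. The systems, field names and transformation rules shown are hypothetical, illustrative assumptions, not erwin’s actual mapping format.

```python
# A minimal, hypothetical sketch of a machine-readable source-to-target mapping,
# the kind of record often kept today in hard-to-reuse spreadsheets.
from dataclasses import dataclass

@dataclass
class MappingEntry:
    source_system: str    # where the element lives today (illustrative)
    source_element: str   # e.g. "CUSTOMER.CUST_NM"
    target_element: str   # e.g. "DIM_CUSTOMER.CUSTOMER_NAME"
    transformation: str   # rule applied in transit

mappings = [
    MappingEntry("CRM", "CUSTOMER.CUST_NM", "DIM_CUSTOMER.CUSTOMER_NAME", "TRIM + UPPER"),
    MappingEntry("ERP", "ORDERS.ORD_DT",    "FACT_ORDERS.ORDER_DATE",     "CAST TO DATE"),
]

# Because each entry is structured data rather than a spreadsheet cell,
# mappings can be versioned, compared, searched and reused programmatically.
for m in mappings:
    print(f"{m.source_system}.{m.source_element} -> {m.target_element} [{m.transformation}]")
```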

What if you could combine data at rest and data in motion to create an efficient, accurate and real-time data pipeline that also includes lineage? Then you can spend less time finding the data you need and more time using it to produce meaningful business outcomes.

Good news … you can.

erwin Mapping Manager: Connected Data Platform

Automated Data Mapping

Your data modelers can continue to use erwin Data Modeler (DM) as the foundation of your database management practice, documenting, enforcing and improving database standards. But instead of relying on data models to disseminate metadata information, you can scan and integrate any data source and present it to all interested parties – automatically.

erwin Mapping Manager (MM) shifts the management of metadata away from data models to a dedicated, automated platform. It can collect metadata from any source, including JSON documents, erwin data models, databases and ERP systems, out of the box.

This functionality underscores our Any2 data approach by collecting any data from anywhere. And erwin MM can schedule data collection and create versions for comparison to clearly identify any changes.

Metadata definitions can be enhanced using extended data properties, and detailed data lineages can be created based on collected metadata. End users can quickly search for information and see specific data in the context of business processes.
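
As a rough illustration of what automated metadata harvesting involves, the sketch below walks a JSON document and records each field’s path and inferred type in a simple catalog. This is a generic, assumed example, not the erwin MM interface or its internal logic.

```python
# A generic sketch of harvesting metadata from a JSON document into a simple
# catalog of field paths and inferred types. Illustrative only; not the
# erwin Mapping Manager API.
import json

def harvest(node, path="", catalog=None):
    """Recursively record the path and Python type of every field."""
    if catalog is None:
        catalog = {}
    if isinstance(node, dict):
        for key, value in node.items():
            harvest(value, f"{path}.{key}" if path else key, catalog)
    elif isinstance(node, list):
        for item in node:
            harvest(item, f"{path}[]", catalog)
    else:
        catalog[path] = type(node).__name__
    return catalog

doc = json.loads('{"patient": {"id": 123, "name": "A. Smith", "visits": [{"date": "2018-05-01"}]}}')
for field, inferred_type in sorted(harvest(doc).items()):
    print(f"{field}: {inferred_type}")
```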

To summarize, the key features current data modeling customers seem most excited about are:

  • Easy import of legacy mappings, plus share and reuse mappings and transformations
  • Metadata catalog to automatically harvest any data from anywhere
  • Comprehensive upstream and downstream data lineage
  • Versioning with comparison features
  • Impact analysis

And all of these features support and can be integrated with erwin Data Governance. The end result is knowing what data you have and where it is so you can fuel a fast, high-quality and complete pipeline of any data from anywhere to accomplish your organizational objectives.

Want to learn more about a unified approach to data modeling and data mapping? Join us for our weekly demo to see erwin MM in action for yourself.



Healthcare Data Governance: What’s the Prognosis?

Healthcare data governance has far more applications than just meeting compliance standards. Healthcare costs are always a topic of discussion, as is the state of health insurance and policies like the Affordable Care Act (ACA).

Costs and policy are among a number of significant trends called out in the executive summary of the Stanford Medicine 2017 Health Trend Report. But the summary also included a common thread that connects them all:

“Behind these trends is one fundamental force driving health care transformation: the power of data.”

Indeed, data is essential to healthcare in areas like:

  • Medical research – Collecting and reviewing increasingly large data sets has the potential to introduce new levels of speed and efficiency into what has been an often slow and laborious process.
  • Preventative care – Data from wearable devices helps consumers track exercise, diet, weight and nutrition, while clinical applications like genetic sequencing open further preventative possibilities.
  • The patient experience – Healthcare is not immune to issues of customer service and the need to provide timely, accurate responses to questions or complaints.
  • Disease and outbreak prevention – Data and analysis can help spot patterns, so clinicians get ahead of big problems before they become epidemics.


Data Vulnerabilities in Healthcare

Data is valuable to the healthcare industry. But it also carries risks because of the volume and velocity with which it is collected and stored. Foremost among these are regulatory compliance and security.

Because healthcare data is so sensitive, the ways in which it is secured and shared are watched closely by regulators. HIPAA (the Health Insurance Portability and Accountability Act) is probably the most recognized regulation governing data in healthcare, but it is not the only one.

In addition to privacy and security policies, other challenges that prevent the healthcare industry from maximizing the ways it puts data to work include:

  • High costs, which are further exacerbated by expected lower commercial health insurance payouts and higher payouts from low-margin services like Medicare, as well as rising labor costs. Data and analytics can potentially help hospitals better plan for these challenges, but thin margins might prevent the investments necessary in this area.
  • Electronic medical records, which the Stanford report cited as a cause of frustration that negatively impacts relationships between patients and healthcare providers.
  • Silos of data, which often are caused by mergers and acquisitions within the industry, but that are also emblematic of the number of platforms and applications used by providers, insurers and other players in the healthcare market.

Early 2018 saw a number of mergers and acquisitions in the healthcare industry, including hospital systems in New England, as well as in the Philadelphia area of the United States. The $69 billion merger of Aetna and CVS was also approved by shareholders in early 2018, making it one of the most significant deals of the past decade.

Each merger and acquisition requires careful and difficult decisions concerning the application portfolio and data of each organization. Redundancies need to be identified, as do gaps, so the patient experience and care continue without serious disruption.

Truly understanding healthcare data requires a holistic approach to data governance that is embedded in business processes and enterprise architecture. When implemented properly, data governance initiatives help healthcare organizations understand what data they have, where it is, where it came from, its value, its quality and how it’s used and accessed by people and applications.


Improving Healthcare Analytics and Patient Care with Healthcare Data Governance

Data governance plays a vital role in compliance because data is easier to protect when you know where it is stored, what it is, and how it needs to be governed. According to a 2017 survey by erwin, Inc. and UBM, 60 percent of organizations said compliance was driving their data governance initiatives.

With a solid understanding of their data and the ways it is collected and consumed throughout their organizations, healthcare players are better positioned to reap the benefits of analytics. As Deloitte pointed out in a perspectives piece about healthcare analytics, the shift to value-based care makes analytics within the industry more essential than ever.

With increasing pressure on margins, the combination of data governance and analytics is critical to creating value and finding efficiencies. Investments in analytics are only as valuable as the data they are fed, however.

Poor decisions based on poor data will lead to bad outcomes, but they also diminish trust in the analytics platform, which will ruin the ROI as it is used less and less.

Most important, healthcare data governance plays a critical role in helping improve patient outcomes and value. In healthcare, the ability to make timely, accurate decisions based on quality data can be a matter of life or death.

In areas like preventative care and the patient experience, good data can mean better advice to patients, more accurate programs for follow-up care, and the ability to meet their medical and lifestyle needs within a healthcare facility or beyond.

As healthcare organizations look to improve efficiencies, lower costs and provide quality, value-based care, healthcare data governance will be essential to better outcomes for patients, providers and the industry at large.

For more information, please download our latest whitepaper, The Regulatory Rationale for Integrating Data Management and Data Governance.

If you’re interested in healthcare data governance, or evaluating new data governance technologies for another industry, you can schedule a demo of erwin’s data mapping and data governance solutions.


Michael Pastore is the Director, Content Services at QuinStreet B2B Tech.


Financial Services Data Governance: Helping Value ‘the New Currency’

For organizations operating in financial services, data governance is becoming increasingly important. When financial services industry board members and executives gathered for EY’s Financial Services Leadership Summit in early 2018, data was a major topic of conversation.

Attendees referred to data as “the new oil” and “the new currency,” and with good reason. Financial services organizations, including banks, brokerages, insurance companies, asset management firms and more, collect and store massive amounts of data.

But data is only part of the bigger picture in financial services today. Many institutions are investing heavily in IT to help transform their businesses to serve customers and partners who are quickly adopting new technologies. For example, Gartner expects the global banking industry to spend $519 billion on IT in 2018.

The combination of more data and technology and fewer in-person experiences puts a premium on trust and customer loyalty. Trust has long been at the heart of the financial services industry. It’s why bank buildings in a bygone era were often erected as imposing stone structures that signified strength at a time before deposit insurance, when poor management or even a bank robbery could have devastating effects on a local economy.

Trust is still vital to the health of financial institutions, except today’s worst-case scenario often involves faceless hackers pillaging sensitive data to use or re-sell on the dark web. That’s why governing all of the industry’s data, and managing the risks that come with collecting and storing such vast amounts of information, is increasingly a board-level issue.

The boards of modern financial services institutions understand three important aspects of data:

  1. Data has a tremendous amount of value to the institution in terms of helping identify the wants and needs of customers.
  2. Data is central to security and compliance, and there are potentially severe consequences for organizations that run afoul of either.
  3. Data is central to the transformation underway at many financial institutions as they work to meet the needs of the modern customer and improve their own efficiencies.


Data governance helps organizations in financial services understand their data. It’s essential to protecting that data and to complying with the many government and industry regulations that apply to financial services. But financial services data governance – all data governance in fact – is about more than security and compliance; it’s about understanding the value and quality of data.

When done right and deployed in a holistic manner that’s woven into the business processes and enterprise architecture, data governance helps financial services organizations better understand where their data is, where it came from, its value, its quality, and how the data is accessed and used by people and applications.

Financial Services Data Governance: It’s Complicated

Financial services data governance is getting increasingly complicated for a number of reasons.

Mergers & Acquisitions

Deloitte’s 2018 Banking and Securities M&A Outlook described 2017 as “stuck in neutral,” but there is reason to believe the market will pick up steam in 2018 and beyond, especially when it comes to financial technology (or fintech) firms. Bringing in new sets of data, new applications and new processes through mergers and acquisitions creates a great deal of complexity.

The integrations can be difficult, and there is an increased likelihood of data sprawl and data silos. Data governance not only helps organizations better understand the data, but it also helps make sense of the application portfolios of merging institutions to discover gaps and redundancies.

Regulatory Environment

There is a lengthy list of regulations and governing bodies that oversee the financial services industry, covering everything from cybersecurity to fraud protection to payment processing, all in an effort to minimize risk and protect customers.

The holistic view of data that results from a strong data governance initiative is becoming essential to regulatory compliance. According to a 2017 survey by erwin, Inc. and UBM, 60 percent of organizations said compliance drives their data governance initiatives.

More Partnerships and Networks

According to research by IBM, 45 percent of bankers say partnerships and alliances help improve their agility and competitiveness. Like consumers, today’s financial institutions are more connected than ever before, and it’s no longer couriers and cash that are being transferred in these partnerships; it’s data.

Understanding the value, quality and risk of the data shared in these alliances is essential – not only to be a good partner and derive a business benefit from the relationship, but also to evaluate whether or not an alliance or partnership makes good business sense.


More Sources of Data, More Touch Points

Financial services institutions are at the forefront of the multi-channel customer experience and have been for years. People do business with institutions by phone, in person, via the Web, and using mobile devices.

All of these touch points generate data, and it is essential that organizations can tie them all together to understand their customers. This information is not only important to customer service, but also to finding opportunities to grow relationships with customers by identifying where it makes sense to upsell and cross-sell products and services.

Grow the Business, Manage the Risk

In the end, financial services organizations need to understand the ways their data can help grow the business and manage risk. Data governance plays an important role in both.

Financial services data governance can better enable:

  • The personalized, self-service applications customers want
  • The machine learning solutions that automate decision-making and create more efficient business processes
  • Faster and more accurate identification of cross-sell and upsell opportunities
  • Better decision-making about the application portfolio, M&A targets, M&A success and more

If you’re interested in financial services data governance, or evaluating new data governance technologies for another industry, you can schedule a demo of erwin’s data mapping and data governance solutions.


And you also might want to download our latest e-book, Solving the Enterprise Data Dilemma.

Michael Pastore is the Director, Content Services at QuinStreet B2B Tech.


Defining DG: What Can Data Governance Do for You?

Data governance (DG) is becoming more commonplace because of data-driven business, yet defining DG and putting it into sound practice is still difficult for many organizations.

Defining DG

The absence of a standard approach to defining DG could be down to its history of missed expectations, false starts and negative perceptions about it being expensive, intrusive, impeding innovation and not delivering any value. Without success stories to point to, the best way of doing and defining DG wasn’t clear.

On the flip side, the absence of a standard approach to defining DG could be the reason for its history of lacklustre implementation efforts, because those responsible for overseeing it had different ideas about what should be done.

Therefore, it’s been difficult to fully fund a data governance initiative that is underpinned by an effective data management capability. And many organizations don’t distinguish between data governance and data management, using the terms interchangeably and so adding to the confusion.

Defining DG: The Data Governance Conundrum

While research indicates most organizations view data governance as “critically important” or at least recognize the value of their data, the large percentage without a formal data governance strategy in place indicates there are still significant teething problems.

[Chart: How important is data governance?]

And that’s the data governance conundrum. It is essential but unwanted and/or painful.

It is a complex chore, so organizations have lacked the motivation to start and effectively sustain it. But faced with the General Data Protection Regulation (GDPR) and other compliance requirements, they have been doing the bare minimum to avoid the fines and reputational damage.

And arguably, herein lies the problem. Organizations look at data governance as something they have to do rather than seeing what it could do for them.

Data governance has its roots in the structure of business terms and technical metadata, but it has tendrils and deep associations with many other components of a data management strategy and should serve as the foundation of that platform.

With data governance at the heart of data management, data can be discovered and made available throughout the organization for both IT and business stakeholders with approved access. This means enterprise architecture, business process, data modeling and data mapping all can draw from a central metadata repository for a single source of data truth, which improves data quality, trust and use to support organizational objectives.

But this “data nirvana” requires a change in approach to data governance. The first step is recognizing that Data Governance 1.0 was made for a different time, when the volume, variety and velocity of the data an organization had to manage were far lower and when data governance’s reach extended only to cataloging data to support search and discovery.

Data Governance Evolution

Modern data governance needs to meet the needs of data-driven business. We call this adaptation “Evolving DG.” It is the journey to a cost-effective, mature, repeatable process that permeates the whole organization.

The primary components of Evolving DG are:

  • Evaluate
  • Plan
  • Configure
  • Deliver
  • Feedback

The final step in such an evolution is the implementation of the erwin Enterprise Data Governance Experience (EDGE) platform.

The erwin EDGE places data governance at the heart of the larger data management suite. By unifying the data management suite at a fundamental level, an organization’s data is no longer marred by departmental and software silos. It brings together both IT and the business for data-driven insights, regulatory compliance, agile innovation and business transformation.

It allows every critical piece of the data management and data governance lifecycle to draw from a single source of data truth and ensure quality throughout the data pipeline, helping organizations achieve their strategic objectives including:

  • Operational efficiency
  • Revenue growth
  • Compliance, security and privacy
  • Increased customer satisfaction
  • Improved decision-making

To learn how you can evolve your data governance practice and get an EDGE on your competition, click here.


Solving the Enterprise Data Dilemma

Due to the adoption of data-driven business, organizations across the board are facing their own enterprise data dilemmas.

This week erwin announced its acquisition of metadata management and data governance provider AnalytiX DS. The combined company touches every piece of the data management and governance lifecycle, enabling enterprises to fuel automated, high-quality data pipelines for faster speed to accurate, actionable insights.

Why Is This a Big Deal?

From digital transformation to AI, and everything in between, organizations are flooded with data. So, companies are investing heavily in initiatives to use all the data at their disposal, but they face some challenges. Chiefly, deriving meaningful insights from their data – and turning them into actions that improve the bottom line.

This enterprise data dilemma stems from three important but difficult questions to answer: What data do we have? Where is it? And how do we get value from it?

Large enterprises use thousands of unharvested, undocumented databases, applications, ETL processes and procedural code that make it difficult to gather business intelligence, conduct IT audits, and ensure regulatory compliance – not to mention accomplish other objectives around customer satisfaction, revenue growth and overall efficiency and decision-making.

The lack of visibility and control around “data at rest” combined with “data in motion”, as well as difficulties with legacy architectures, means these organizations spend more time finding the data they need rather than using it to produce meaningful business outcomes.

To remedy this, enterprises need smarter and faster data management and data governance capabilities, including the ability to efficiently catalog and document their systems, processes and the associated data without errors. In addition, business and IT must collaborate outside their traditional operational silos.

But this coveted state of data nirvana isn’t possible without the right approach and technology platform.

Enterprise Data: Making the Data Management-Data Governance Love Connection

Bringing together data management and data governance delivers greater efficiencies to technical users and better analytics to business users. It’s like two sides of the same coin:

  • Data management drives the design, deployment and operation of systems that deliver operational and analytical data assets.
  • Data governance delivers these data assets within a business context, tracks their physical existence and lineage, and maximizes their security, quality and value.

Although these disciplines approach data from different perspectives and are used to produce different outcomes, they have a lot in common. Both require a real-time, accurate picture of an organization’s data landscape, including data at rest in data warehouses and data lakes and data in motion as it is integrated with and used by key applications.

However, creating and maintaining this metadata landscape is challenging because this data in its various forms and from numerous sources was never designed to work in concert. Data infrastructures have been cobbled together over time with disparate technologies, poor documentation and little thought for downstream integration, so the applications and initiatives that depend on data infrastructure are often out-of-date and inaccurate, rendering faulty insights and analyses.

Organizations need to know what data they have and where it’s located, where it came from and how it got there, what it means in common business terms and be able to transform it into useful information they can act on – all while controlling its access.

To support the total enterprise data management and governance lifecycle, they need an automated, real-time, high-quality data pipeline. Then every stakeholder – data scientist, ETL developer, enterprise architect, business analyst, compliance officer, CDO and CEO – can fuel the desired outcomes with reliable information on which to base strategic decisions.

Enterprise Data: Creating Your “EDGE”

At the end of the day, all industries are in the data business and all employees are data people. The success of an organization is not measured by how much data it has, but by how well it’s used.

Data governance enables organizations to use their data to fuel compliance, innovation and transformation initiatives with greater agility, efficiency and cost-effectiveness.

Organizations need to understand their data from different perspectives, identify how it flows through and impacts the business, align this business view with a technical view of the data management infrastructure, and synchronize efforts across both disciplines for accuracy, agility and efficiency in building a data capability that impacts the business in a meaningful and sustainable fashion.

The persona-based erwin EDGE creates an “enterprise data governance experience” that facilitates collaboration between both IT and the business to discover, understand and unlock the value of data both at rest and in motion.

By bringing together enterprise architecture, business process, data mapping and data modeling, erwin’s approach to data governance enables organizations to get a handle on how they handle their data. With the broadest set of metadata connectors and automated code generation, data mapping and cataloging tools, the erwin EDGE Platform simplifies the total data management and data governance lifecycle.

This single, integrated solution makes it possible to gather business intelligence, conduct IT audits, ensure regulatory compliance and accomplish any other organizational objective by fueling an automated, high-quality and real-time data pipeline.

With the erwin EDGE, data management and data governance are unified and mutually supportive, with one hand aware and informed by the efforts of the other to:

  • Discover data: Identify and integrate metadata from various data management silos.
  • Harvest data: Automate the collection of metadata from various data management silos and consolidate it into a single source.
  • Structure data: Connect physical metadata to specific business terms and definitions and reusable design standards.
  • Analyze data: Understand how data relates to the business and what attributes it has.
  • Map data flows: Identify where to integrate data and track how it moves and transforms.
  • Govern data: Develop a governance model to manage standards and policies and set best practices.
  • Socialize data: Enable stakeholders to see data in one place and in the context of their roles.

An integrated solution with data preparation, modeling and governance helps businesses reach data governance maturity – which equals a role-based, collaborative data governance system that serves both IT and business users equally. Such maturity may not happen overnight, but it will ultimately deliver the accurate and actionable insights your organization needs to compete and win.

Your journey to data nirvana begins with a demo of the enhanced erwin Data Governance solution. Register now.


Once You Understand Your Data, Everything Is Easier

As a data-driven organization in the modern, hyper-competitive business landscape, it’s imperative that employees, business leaders and decision makers can understand your data.

In a previous article, I argued that business process management without data governance is a perilous experiment. The same can be said for enterprise architecture initiatives that traditionally stop at the process level with disregard for the data element.

Therefore, an organization that ignores the data layer of both its process and enterprise architectures isn’t tapping into its full potential. It runs the risk of being siloed, confined to traditional and qualitative structures that make improvement and automation more difficult, time-consuming and ultimately ineffective. However, it doesn’t have to be this way.

I’m going to make a bold statement, and then we can explore it together. Ready? Without data governance, a process management or enterprise architecture initiative will result in a limited enterprise architecture and any efforts that may stem from it (process improvement, consolidation, automation, etc.) also will be limited.

So how exactly does data governance fit within the larger world of enterprise architecture, and why is it so critical?

Understand Your Data – What Lies Beneath

A constant source of unpleasant surprise for most medium and large-scale enterprise architecture and process management initiatives is discovering just how tricky it is to establish interconnectivity between low-level processes and procedures in a way that is easy, sustainable and, above all, objective.

Traditionally, most projects focus on some type of top-down classification, using either organizational or capability-based groupings. These structures are usually qualitative or derived from process owners, subject matter experts (SMEs) or even department heads and business analysts. While usually accurate, they are primarily isolated, top-down views contained within organizational silos.

But more and more enterprise architecture initiatives are attempting to move beyond these types of groupings to create horizontal, interconnected processes. To do so, process owners must rely on handover points – inputs and outputs, documents and, you guessed it, data. The issue is that these handover points are still qualitative and unsustainable in the long term, which is where data and data governance come in.

By providing an automated, fully integrated view of data ownership, lineage and interconnectivity, data governance serves as the missing link between disparate and disconnected processes in a way that has always proved elusive and time-consuming. It is an objective link, driven by factual relationships, that brings everything together.

Data governance also provides an unparalleled level of process connectivity, so organizations can see how processes truly interact with each other, across any type of organizational silo, enabling realistic and objective impact analysis. How is this possible? By conducting data governance in conjunction with any enterprise architecture initiative, using both a clear methodology and specialized system.

Understand Your Data – Data Is Everywhere

Everyone knows that processes generate, use and own data. The problem is understanding what “process data” is and how to incorporate that information into standard business process management.

Most process owners, SMEs and business analysts view data as a collection of information, usually in terms of documents and data groups such as “customer information” or “request details,” that is generated and completed through a series of processes or process steps. Links between documents/data groups and processes are assumed to be communicated to other processes that use them and so on. But this picture is not accurate.

Most of the time, a document or data group is not processed in its entirety by any subsequent/recipient processes; some information is processed by one process while the remainder is reserved for another or is entirely useless. This means that only single, one-dimensional connections are made, ignoring derived or more complex connections.

Therefore, any attempted impact analysis would ignore these additional dimensions, an omission that accounts for most of the process improvement and re-engineering projects that either fail or run significantly over time and budget.

With data governance, data is identified and tracked with ownership, lineage and use established and associated with the appropriate process elements, showing the connections between processes at the most practical informational level.

In addition and possibly most important, process ownership/responsibility becomes more objective and clear because it can be determined according to who owns/is responsible for the information the process generates and uses – as opposed to the more arbitrary/qualitative assignment that tends to be the norm. RACI and CRUD matrix analyses become faster, more efficient and infinitely more effective, and for the first time, they encompass elements of derived ownership (through data lineage).
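
As an illustration of how such analyses can become factual rather than arbitrary, here is a minimal, hypothetical Python sketch that derives a CRUD-style matrix, and a likely owner for each data set, from captured process-to-data relationships. The process and data names are invented; a real data governance platform would draw them from its metadata repository.

```python
# Hypothetical sketch: deriving a CRUD matrix from captured process-to-data
# relationships, so ownership questions can be answered from facts rather than
# assigned arbitrarily. Names are invented for illustration.
process_data_usage = {
    "Open Account":   {"Customer Record": "C", "KYC File": "C"},
    "Update Address": {"Customer Record": "U"},
    "Close Account":  {"Customer Record": "U", "KYC File": "D"},
    "Monthly Report": {"Customer Record": "R"},
}

data_sets = sorted({d for usage in process_data_usage.values() for d in usage})

# Print a simple CRUD matrix: rows are processes, columns are data sets.
print("Process".ljust(16) + "".join(d.ljust(18) for d in data_sets))
for process, usage in process_data_usage.items():
    print(process.ljust(16) + "".join(usage.get(d, "-").ljust(18) for d in data_sets))

# A likely owner for each data set is whichever process creates it.
for d in data_sets:
    creators = [p for p, usage in process_data_usage.items() if usage.get(d) == "C"]
    print(f"Likely owner of {d}: {', '.join(creators) if creators else 'not captured'}")
```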

Process automation projects also become more efficient, effective and shorter in duration because the right data is known from the beginning, process interconnectivity is mapped objectively, and responsibilities are clearly visible from the initial design phase.

With requirements accompanied by realistic process mapping information, development of workflows is easier, faster and less prone to errors, making process automation more attractive and feasible, even to smaller organizations for which even the scoping and requirements-gathering exercise has traditionally proved too complicated.

Understand Your Data – More Upside to Data Governance

Data governance enables an organization to manage and mitigate data risks, protecting itself from legal and reputational challenges to ensure continued operation. And once data is connected with business processes through effective, proactive data governance, additional benefits are realized:

  • Process risk management and mitigation becomes more factual and less qualitative, making the organization more effective;
  • Process compliance also becomes more factual, empowering not only compliance efforts but also compliance control and assessment with easier impact analyses; and
  • Organizational redesign can be based on what groupings actually do.

While the above benefits may appear vague and far-removed from either a pure enterprise architecture or data governance initiative, they are more tangible and achievable than ever before as organizations begin to rely more on object-oriented management systems.

Combining the strategic, informational-level detail and management provided by data governance with the more functional, organizational view of enterprise architecture proves that one plus one really does equal more than two.

To learn more about how data governance underpins organizations’ data strategies, click here.


Big Data Posing Challenges? Data Governance Offers Solutions

Big Data is causing complexity for many organizations, not just because of the volume of data they’re collecting, but because of the variety of data they’re collecting.

Big Data often consists of unstructured data that streams into businesses from social media networks, internet-connected sensors, and more. But the data operations at many organizations were not designed to handle this flood of unstructured data.

Dealing with the volume, velocity and variety of Big Data is causing many organizations to re-think how they store and govern their data. A perfect example is the data warehouse. The people who built and manage the data warehouse at your organization built something that made sense to them at the time. They understood what data was stored where and why, as well as how it was used by business units and applications.

The era of Big Data introduced inexpensive data lakes to some organizations’ data operations, but as vast amounts of data poured into these lakes, many IT departments found themselves managing a data swamp instead.

In a perfect world, your organization would treat Big Data like any other type of data. But, alas, the world is not perfect. In reality, practicality and human nature intervene. Many new technologies, when first adopted, are separated from the rest of the infrastructure.

“New technologies are often looked at in a vacuum, and then built in a silo,” says Danny Sandwell, director of product marketing for erwin, Inc.

That leaves many organizations with parallel collections of data: one for so-called “traditional” data and one for the Big Data.

There are a few problems with this outcome. For one, silos in IT have a long history of keeping organizations from understanding what they have, where it is, why they need it, and whether it’s of any value. They also have a tendency to increase costs because they don’t share common IT resources, leading to redundant infrastructure and complexity. Finally, silos usually mean increased risk.

But there’s another reason why parallel operations for Big Data and traditional data don’t make much sense: The users simply don’t care.

At the end of the day, your users want access to the data they need to do their jobs, and whether IT considers it Big Data, little data, or medium-sized data isn’t important. What’s most important is that the data is the right data – meaning it’s accurate, relevant and can be used to support or oppose a decision.

[Chart: What’s driving data governance?]

How Data Governance Turns Big Data into Just Plain Data

According to a November 2017 survey by erwin and UBM, 21 percent of respondents cited Big Data as a driver of their data governance initiatives.

In today’s data-driven world, data governance can help your business understand what data it has, how good it is, where it is, and how it’s used. The erwin/UBM survey found that 52 percent of respondents said data is critically important to their organization and they have a formal data governance strategy in place. But almost as many respondents (46 percent) said they recognize the value of data to their organization but don’t have a formal governance strategy.

A holistic approach to data governance includes these key components.

  • An enterprise architecture component is important because it aligns IT and the business, mapping a company’s applications and the associated technologies and data to the business functions they enable. By integrating data governance with enterprise architecture, businesses can define application capabilities and interdependencies within the context of their connection to enterprise strategy to prioritize technology investments so they align with business goals and strategies to produce the desired outcomes.
  • A business process and analysis component defines how the business operates and ensures employees understand and are accountable for carrying out the processes for which they are responsible. Enterprises can clearly define, map and analyze workflows and build models to drive process improvements, as well as identify business practices susceptible to the greatest security, compliance or other risks and where controls are most needed to mitigate exposures.
  • A data modeling component is the best way to design and deploy new databases with high-quality data sources and support application development. Being able to cost-effectively and efficiently discover, visualize and analyze “any data” from “anywhere” underpins large-scale data integration, master data management, Big Data and business intelligence/analytics with the ability to synthesize, standardize and store data sources from a single design, as well as reuse artifacts across projects.

When data governance is done right, and it’s woven into the structure and architecture of your business, it helps your organization accept new technologies and the new sources of data they provide as they come along. This makes it easier to see ROI and ROO from your Big Data initiatives by managing Big Data in the same manner your organization treats all of its data – by understanding its metadata, defining its relationships, and defining its quality.

Furthermore, businesses that apply sound data governance will find themselves with a template or roadmap they can use to integrate Big Data throughout their organizations.

If your business isn’t capitalizing on the Big Data it’s collecting, then it’s throwing away dollars spent on data collection, storage and analysis. Just as bad, however, is a situation where all of that data and analysis is leading to the wrong decisions and poor business outcomes because the data isn’t properly governed.


You can determine how effective your current data governance initiative is by taking erwin’s DG RediChek.


Avoiding Operational Disasters with a Process-Based Approach to Risk Management

Risk avoidance and risk management are hot topics that seem to govern decision-making – and with good reason. Risk comes with potentially massive operational, financial, reputational and legal repercussions, so it makes absolute sense to do everything possible to model it, understand it, analyse it and ultimately mitigate it.

But not all risk is created equal. Nothing illustrates this point better than recent research showing how much global financial institutions lost to different types of operational risk during the last six years. As shown in the chart below, they lost $210 billion between 2011 and 2016, with more than $180 billion of that amount attributed to execution, delivery and process management combined with clients, products and business practices.

So, major banks lost more money because of bad process management than all other risks combined. I’d argue that client, product and business practices, which comprise the largest risk category, essentially come down to process application and management as well.

[Chart: Operational risk losses at global financial institutions by risk category, 2011–2016]

The Data Disconnect

Despite the actual statistics, we hear more about data/technology and compliance risk. While these are significant and justified concerns, financial institutions don’t seem to realize they are losing more money due to other types of risks.

Therefore, I want to remind them – and all of us – that managing operational risk is an ongoing initiative, which needs to include better risk analysis, documentation, process impact analysis and mitigation.

While dozens of methodologies and systems are available in today’s marketplace, they focus on, or attempt to address, only the smaller, individual components of operational risk. However, all the risk categories listed above require an effective, practical and – most important – easy-to-implement system to address all the underlying components in a collaborative effort, not in isolation.

According to ORX, the largest operational risk association in the financial services sector, managing, and thereby reducing, risk involves managing four different but interconnected layers: people, IT, organizational processes and regulations.

More and more organizations seem to believe that once IT embeds their applications with the necessary controls to meet regulatory requirements, then all is right with the world. But experience has shown that isn’t true. Without adapting the processes that use the applications, training employees, and putting sufficient controls in place to ensure all regulatory elements are not only applied but applied correctly, technical controls alone will never be effective.

And many will argue that little can be done within an organization regarding regulations, but that’s not true either. While regulations are developed and passed by governments and other external regulatory bodies, what really matters is how organizations adopt those regulations and embed them into their culture and daily operations – which is where all the layers of risk management intersect.

Avoiding Heisenberg’s Uncertainty Principle in Risk Management

As part of his Nobel Prize-winning work, physicist and quantum mechanics pioneer Werner Heisenberg developed the eponymous uncertainty principle, which asserts that it is not possible to know both the position and the momentum of a particle precisely. This theory applies to many aspects of everyday life, including organizational operations. It’s difficult to know both an organization’s current state and where it’s headed, and every organization struggles with the same risk management question in this vein: how do we manage risk while also being agile enough to support growth?

ORX is clear that effective risk management requires implementing controls throughout the entire process ecosystem by integrating risk management into the organization’s very fabric. This means clearly defining roles and responsibilities, embedding process improvements, and regularly controlling process performance. Of course, the common thread here is more streamlined and controlled processes.

Make no mistake – effort is still required, but all the above is much simpler today. Thanks to new methodologies and comprehensive business process modeling systems, you can identify which risks are applicable, where they are most likely to occur, and who is responsible for managing them to reduce their probability and impact. Therefore, operational risk can be viewed and then addressed quickly and effectively.

In fact, erwin has worked with an increasing number of financial institutions launching process improvement, automation and management initiatives specifically designed to restructure their processes to promote flexibility as a growth driver without sacrificing traceability and control.

We can help you do the same, regardless of your industry.

To find out about how erwin can help in empowering your data-driven business initiatives, please click here.


Why Data Governance and Business Process Management Must Be Linked

Data governance and business process management must be linked.

Following the boom in data-driven business, data governance (DG) has taken the modern enterprise by storm, garnering the attention of both the business and technical realms with an explosion of methodologies, targeted systems and training courses. That’s because a major gap needs to be addressed.

But despite all the admonitions and cautionary tales, little attention has focused on what can literally make or break any data governance initiative, turning it from a springboard for competitive advantage to a recipe for waste, anger and ultimately failure. The two key pivot points on which success hinges are business process management (BPM) and enterprise architecture. This article focuses on the critical connections between data governance and business process management.

Based on a True Story: Data Governance Without Process Is Not Data Governance

The following is based on a true story about a global pharmaceutical company implementing a cloud-based, enterprise-wide CRM system with a third-party provider.

Given the system’s nature, the data it would process, and the scope of the deployment, data security and governance was front and center. There were countless meetings – some with more than 50 participants – with protocols sent, reviewed, adjusted and so on. In fact, more than half a dozen outside security companies and advisors (and yes, data governance experts) came in to help design the perfect data protection system around which the CRM system would be implemented.

The framework was truly mind-boggling: hundreds of security measures, dozens of different file management protocols, and data security software at every step of the way. To an external observer, it appeared to be an ironclad net of absolute safety and effective governance.

But as the CRM implementation progressed, holes began to appear. They were small at first but quickly grew to the size of trucks, effectively rendering months of preparatory work pointless.

Detailed data transfer protocols were subverted daily by consultants and company employees who thought speed was more important than safety. Software locks and systems were overridden with passwords freely communicated through emails and even written on Post-It Notes. And a two-factor authentication principle was reduced to one person entering half a password, with a piece of paper taped over half the computer screen, while another person entered the other half of the password before a third person read the entire password and pressed enter.

While these examples of security holes might seem funny – in a sad way – when you read them here, they represent a $500,000 failure that potentially could lead to a multi-billion-dollar security breach.

Why? Because there were no simple, effective and clearly defined processes to govern the immense investment in security protocols and software to ensure employees would follow them and management could audit and control them. Furthermore, the organization failed to realize how complex this implementation was and that process changes would be paramount.

Both such failures could have been avoided if the organization had a simple system of managing, adjusting and monitoring its processes. More to the point, the implementation of the entire security and governance framework would have cost less and been completed in half the time. Furthermore, if a failure or breach were discovered, it would be easy to trace and correct.


Data Governance Starts with BPM

In a rush to implement a data governance methodology and system, you can forget that a system must serve a process – and be governed/controlled by one.

To choose the correct system and implement it effectively and efficiently, you must know – in every detail – all the processes it will impact, how it will impact them, who needs to be involved and when. Do these questions sound familiar? They should because they are the same ones we ask in data governance. They involve impact analysis, ownership and accountability, control and traceability – all of which effectively documented and managed business processes enable.

Data sets are not important in and of themselves. Data sets become important in terms of how they are used, who uses them and what their use is – and all this information is described in the processes that generate, manipulate and use them. So, unless we know what those processes are, how can any data governance implementation be complete or successful?

Consider this scenario: We’ve perfectly captured our data lineage, so we know what our data sets mean, how they’re connected, and who’s responsible for them – not a simple task but a massive win for any organization.  Now a breach occurs. Will any of the above information tell us why it happened? Or where? No! It will tell us what else is affected and who can manage the data layer(s), but unless we find and address the process failure that led to the breach, it is guaranteed to happen again.

By knowing where data is used – the processes that use and manage it – we can quickly, even instantly, identify where a failure occurs. Starting with data lineage (meaning our forensic analysis starts from our data governance system), we can identify the source and destination processes and the associated impacts throughout the organization. We can know which processes need to change and how. We can anticipate the pending disruptions to our operations and, more to the point, the costs involved in mitigating and/or addressing them.
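
As a simplified sketch of that kind of lineage-driven forensic and impact analysis, the example below walks a small, hard-coded lineage graph upstream from an affected data set to find candidate origins and downstream to find what else is impacted. It is a generic illustration using assumed process and data set names, not erwin’s implementation.

```python
# A simplified, generic sketch of lineage-driven analysis: given an affected
# data set, walk upstream to candidate source processes and downstream to the
# impacted consumers. Names are illustrative.
from collections import deque

# Edges mean "produces / feeds": process or data set -> data set or process.
lineage = {
    "Order Entry":         ["orders_raw"],
    "orders_raw":          ["ETL: Cleanse Orders"],
    "ETL: Cleanse Orders": ["orders_clean"],
    "orders_clean":        ["Billing", "Sales Dashboard"],
}

def downstream(node):
    """Everything reachable from node, i.e. what a failure at node impacts."""
    seen, queue = set(), deque([node])
    while queue:
        for nxt in lineage.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def upstream(node):
    """Everything that can reach node, i.e. where a failure may have originated."""
    reverse = {}
    for src, targets in lineage.items():
        for tgt in targets:
            reverse.setdefault(tgt, []).append(src)
    seen, queue = set(), deque([node])
    while queue:
        for prev in reverse.get(queue.popleft(), []):
            if prev not in seen:
                seen.add(prev)
                queue.append(prev)
    return seen

print("Possible origins:", upstream("orders_clean"))
print("Impacted consumers:", downstream("orders_clean"))
```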

But knowing all the above requires that our processes – our essential and operational business architecture – be accurately captured and modelled. Instituting data governance without processes is like building a castle on sand.

Rethinking Business Process Management

Modern organizations need a simple and easy-to-use BPM system with easy access to all the operational layers across the organization – from high-level business architecture all the way down to data. Sure, most organizations already have various solutions here and there, some with claims of being able to provide a comprehensive picture. But chances are they don’t, so you probably need to rethink your approach.

Modern BPM ecosystems are flexible, adjustable, easy-to-use and can support multiple layers simultaneously, allowing users to start in their comfort zones and mature as they work toward the organization’s goals.

Processes need to be open and shared in a concise, consistent way so all parts of the organization can investigate, ask questions, and then add their feedback and information layers. In other words, processes need to be alive and central to the organization because only then will the use of data and data governance be truly effective.

Are you willing to think outside the traditional boxes or silos that your organization’s processes and data live in?

The erwin EDGE is one of the most comprehensive software platforms for managing an organization’s data governance and business process initiatives, as well as the whole data architecture. It allows natural, organic growth throughout the organization, and its assimilation of data governance and business process management under the same platform provides a unique data governance experience because of its integrated, collaborative approach.

To learn more about erwin EDGE, and how data governance underpins and ensures data quality throughout the wider data management suite, download our resource: Data Governance Is Everyone’s Business.
