
For Pharmaceutical Companies Data Governance Shouldn’t Be a Hard Pill to Swallow

Using data governance in the pharmaceutical industry is a critical piece of the data management puzzle.

Pharmaceutical and life sciences companies face many of the same digital transformation pressures as other industries we have explored previously, such as financial services and healthcare.

In response, they are turning to technologies like advanced analytics platforms and cloud-based resources to help better inform their decision-making and create new efficiencies and better processes.

Among the conditions that set digital transformation in pharmaceuticals and life sciences apart from other sectors are the regulatory environment and the high incidence of mergers and acquisitions (M&A).

Data Governance, GDPR and Your Business

Protecting sensitive data in these industries is a matter of survival, both because of the potential penalties for failing to comply with any number of industry and government regulations and because of the near-priceless value of data around research and development (R&D).

The high costs and huge potential of R&D are among the driving factors of M&A activity in the pharmaceutical and life sciences space. With roughly $156 billion in healthcare M&A deals in the first quarter of 2018 alone – many involving drug companies – the market is the hottest it’s been in more than a decade. Much of the M&A activity is being driven by companies looking to buy competitors, acquire R&D, and offset losses from expiring drug patents.

 

[GET THE FREE E-BOOK]: APPLICATION PORTFOLIO MANAGEMENT FOR MERGERS & ACQUISITIONS IN THE FINANCIAL SERVICES SECTOR

 

With M&A activity comes the challenge of integrating two formerly separate companies into one. That means integrating technology platforms, business processes, and, of course, the data each organization brings to the deal.

Data Integrity for Risk Management and More

As in virtually every other industry, data is quickly becoming one of the most valuable assets within pharmaceutical and life sciences companies. In its 2018 Global Life Sciences Outlook, Deloitte speaks to the importance of “data integrity,” which it defines as data that is complete, consistent and accurate throughout the data lifecycle.

Data integrity helps manage risk in pharmaceutical and life sciences by making it easier to comply with a complex web of regulations that touch many different parts of these organizations, from finance to the supply chain and beyond. Linking these cross-functional teams to data they can trust eases the burden of compliance by supplying team members with what many industries now refer to as “a single version of truth” – which is to say, data with integrity.

Data integrity also helps deliver insights for important initiatives in the pharmaceutical and life sciences industries like value-based pricing and market access.

Developing data integrity and taking advantage of it to reduce risk and identify opportunities in pharmaceuticals and life sciences isn’t possible without a holistic approach to data governance that permeates every part of these companies, including business processes and enterprise architecture.

Healthcare Data

Data Governance in the Pharmaceutical Industry Maximizes Value

Data governance gives businesses the visibility they need to understand where their data is, where it came from, its value, its quality and how it can be used by people and software applications. This type of understanding of your data is, of course, essential to compliance. In fact, according to a 2017 survey by erwin, Inc. and UBM, 60 percent of organizations said compliance is driving their data governance initiatives.

Using data governance in the pharmaceutical industry helps organizations contemplating M&A, not only by helping them understand the data they are acquiring, but also by informing decisions around complex IT infrastructures and applications that need to be integrated. Decisions about application rationalization and business processes are easier to make when they are viewed through the lens of a pervasive data governance strategy.

Data governance in the pharmaceutical industry can be leveraged to hone data integrity and move toward what Deloitte refers to as end-to-end evidence management (E2E), which unifies the data in pharmaceuticals and life sciences from R&D to clinical trials and through commercialization.

Once implemented, Deloitte predicts E2E will help organizations maximize the value of their data by:

  • Providing a better understanding of emerging risks
  • Enabling collaboration with health systems, patient advocacy groups, and other constituents
  • Streamlining the development of new therapies
  • Driving down costs

If that list of benefits sounds familiar, it’s because it matches up nicely with the goals of digital transformation at many organizations – more efficient processes, better collaboration, improved visibility and better cost management. And it’s all built on a foundation of data and data governance.

To learn more, download our free whitepaper on the Regulatory Rationale for Integrating Data Management & Data Governance.

Data Modeling Data Governance

 


Data Modeling and Data Mapping: Results from Any Data Anywhere

A unified approach to data modeling and data mapping could be the breakthrough that many data-driven organizations need.

In most of the conversations I have with clients, they express the need for a viable solution to model their data, as well as the ability to capture and document the metadata within their environments.

Data modeling is an integral part of any data management initiative. Organizations use data models to tame “data at rest” for business use, governance and technical management of databases of all types.

However, once an organization understands what data it has and how it’s structured via data models, it needs answers to other critical questions: Where did it come from? Did it change along the journey? Where does it go from here?

Data Mapping: Taming “Data in Motion”

Knowing how data moves throughout technical and business data architectures is key for true visibility, context and control of all data assets.

Managing data in motion has been a difficult, time-consuming task that involves mapping source elements to the data model, defining the required transformations, and/or providing the same for downstream targets.

Historically, it has either been outsourced to ETL/ELT developers, who often create a siloed technical infrastructure opaque to the business, or business-friendly mappings have been kept in an assortment of unwieldy spreadsheets that are difficult to consolidate and reuse, much less capable of accommodating new requirements.

What if you could combine data at rest and data in motion to create an efficient, accurate and real-time data pipeline that also includes lineage? Then you could spend less time finding the data you need and more time using it to produce meaningful business outcomes.

Good news … you can.

erwin Mapping Manager: Connected Data Platform

Automated Data Mapping

Your data modelers can continue to use erwin Data Modeler (DM) as the foundation of your database management efforts, documenting, enforcing and improving data standards. But instead of relying on data models to disseminate metadata, you can scan and integrate any data source and present it to all interested parties – automatically.

erwin Mapping Manager (MM) shifts the management of metadata away from data models to a dedicated, automated platform. It can collect metadata from any source, including JSON documents, erwin data models, databases and ERP systems, out of the box.

This functionality underscores our Any2 data approach by collecting any data from anywhere. And erwin MM can schedule data collection and create versions for comparison to clearly identify any changes.

Metadata definitions can be enhanced using extended data properties, and detailed data lineages can be created based on collected metadata. End users can quickly search for information and see specific data in the context of business processes.
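For illustration only – this is not erwin MM’s actual API or storage format – a harvested metadata record with extended properties, a version for comparison and upstream lineage links might be represented along these lines:

```python
# Hypothetical sketch only - not erwin MM's actual API or storage format.
from dataclasses import dataclass, field

@dataclass
class ColumnMetadata:
    source_system: str          # e.g., an ERP database or a JSON document store
    table: str
    column: str
    data_type: str
    version: int = 1            # bumped on each scheduled scan, for comparison
    extended_properties: dict = field(default_factory=dict)
    upstream: list = field(default_factory=list)  # lineage: where this element came from

record = ColumnMetadata(
    source_system="erp_prod",
    table="customers",
    column="email",
    data_type="varchar(255)",
    extended_properties={"owner": "sales_ops", "contains_pii": True},
    upstream=["crm_staging.contacts.email_addr"],
)
print(record)
```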

To summarize, here are the key features current data modeling customers seem most excited about:

  • Easy import of legacy mappings, plus share and reuse mappings and transformations
  • Metadata catalog to automatically harvest any data from anywhere
  • Comprehensive upstream and downstream data lineage
  • Versioning with comparison features
  • Impact analysis

And all of these features support and can be integrated with erwin Data Governance. The end result is knowing what data you have and where it is so you can fuel a fast, high-quality and complete pipeline of any data from anywhere to accomplish your organizational objectives.

Want to learn more about a unified approach to data modeling and data mapping? Join us for our weekly demo to see erwin MM in action for yourself.

erwin Mapping Manager


Healthcare Data Governance: What’s the Prognosis?

Healthcare data governance has far more applications than just meeting compliance standards. Healthcare costs are always a topic of discussion, as is the state of health insurance and policies like the Affordable Care Act (ACA).

Costs and policy are among a number of significant trends called out in the executive summary of the Stanford Medicine 2017 Health Trend Report. But the summary also included a common thread that connects them all:

“Behind these trends is one fundamental force driving health care transformation: the power of data.”

Indeed, data is essential to healthcare in areas like:

  • Medical research – Collecting and reviewing increasingly large data sets has the potential to introduce new levels of speed and efficiency into what has been an often slow and laborious process.
  • Preventative care – Wearable devices help consumers track exercise, diet, weight and nutrition, and they support clinical applications like genetic sequencing.
  • The patient experience – Healthcare is not immune to issues of customer service and the need to provide timely, accurate responses to questions or complaints.
  • Disease and outbreak prevention – Data and analysis can help spot patterns, so clinicians get ahead of big problems before they become epidemics.

Data Management and Data Governance

Data Vulnerabilities in Healthcare

Data is valuable to the healthcare industry. But it also carries risks because of the volume and velocity with which it is collected and stored. Foremost among these are regulatory compliance and security.

Because healthcare data is so sensitive, the ways in which it is secured and shared are watched closely by regulators. HIPAA (Health Insurance Portability and Accountability Act) is probably the most recognized regulation governing data in healthcare, but it is not the only one.

In addition to privacy and security policies, other challenges that prevent the healthcare industry from maximizing the ways it puts data to work include:

  • High costs, which are further exacerbated by expected lower commercial health insurance payouts and higher payouts from low-margin services like Medicare, as well as rising labor costs. Data and analytics can potentially help hospitals better plan for these challenges, but thin margins might prevent the investments necessary in this area.
  • Electronic medical records, which the Stanford report cited as a cause of frustration that negatively impacts relationships between patients and healthcare providers.
  • Silos of data, which often are caused by mergers and acquisitions within the industry, but that are also emblematic of the number of platforms and applications used by providers, insurers and other players in the healthcare market.

Early 2018 saw a number of mergers and acquisitions in the healthcare industry, including hospital systems in New England, as well as in the Philadelphia area of the United States. The $69 billion merger of Aetna and CVS also was approved by shareholders in early 2018, making it one of the most significant deals of the past decade.

Each merger and acquisition requires careful and difficult decisions concerning the application portfolio and data of each organization. Redundancies need to be identified, as do gaps, so the patient experience and care continue without serious disruption.

Truly understanding healthcare data requires a holistic approach to data governance that is embedded in business processes and enterprise architecture. When implemented properly, data governance initiatives help healthcare organizations understand what data they have, where it is, where it came from, its value, its quality and how it’s used and accessed by people and applications.

Healthcare Data Governance

Improving Healthcare Analytics and Patient Care with Healthcare Data Governance

Data governance plays a vital role in compliance because data is easier to protect when you know where it is stored, what it is, and how it needs to be governed. According to a 2017 survey by erwin, Inc. and UBM, 60 percent of organizations said compliance was driving their data governance initiatives.

With a solid understanding of their data and the ways it is collected and consumed throughout their organizations, healthcare players are better positioned to reap the benefits of analytics. As Deloitte pointed out in a perspectives piece about healthcare analytics, the shift to value-based care makes analytics within the industry more essential than ever.

With increasing pressure on margins, the combination of data governance and analytics is critical to creating value and finding efficiencies. Investments in analytics are only as valuable as the data they are fed, however.

Poor decisions based on poor data will lead to bad outcomes, but they also diminish trust in the analytics platform, eroding its ROI as it is used less and less.

Most important, healthcare data governance plays a critical role in helping improve patient outcomes and value. In healthcare, the ability to make timely, accurate decisions based on quality data can be a matter of life or death.

In areas like preventative care and the patient experience, good data can mean better advice to patients, more accurate programs for follow-up care, and the ability to meet their medical and lifestyle needs within a healthcare facility or beyond.

As healthcare organizations look to improve efficiencies, lower costs and provide quality, value-based care, healthcare data governance will be essential to better outcomes for patients, providers and the industry at large.

For more information, please download our latest whitepaper, The Regulatory Rationale for Integrating Data Management and Data Governance.

If you’re interested in healthcare data governance, or evaluating new data governance technologies for another industry, you can schedule a demo of erwin’s data mapping and data governance solutions.

Data Mapping Demo CTA

Michael Pastore is the Director, Content Services at QuinStreet B2B Tech.


Financial Services Data Governance: Helping Value ‘the New Currency’

For organizations operating in financial services, data governance is becoming increasingly important. When financial services industry board members and executives gathered for EY’s Financial Services Leadership Summit in early 2018, data was a major topic of conversation.

Attendees referred to data as “the new oil” and “the new currency,” and with good reason. Financial services organizations, including banks, brokerages, insurance companies, asset management firms and more, collect and store massive amounts of data.

But data is only part of the bigger picture in financial services today. Many institutions are investing heavily in IT to help transform their businesses to serve customers and partners who are quickly adopting new technologies. For example, Gartner expects the global banking industry to spend $519 billion on IT in 2018.

The combination of more data and technology and fewer in-person experiences puts a premium on trust and customer loyalty. Trust has long been at the heart of the financial services industry. It’s why bank buildings in a bygone era were often erected as imposing stone structures that signified strength at a time before deposit insurance, when poor management or even a bank robbery could have devastating effects on a local economy.

Trust is still vital to the health of financial institutions, except today’s worst-case scenario often involves faceless hackers pillaging sensitive data to use or re-sell on the dark web. That’s why governing all of the industry’s data, and managing the risks that come with collecting and storing such vast amounts of information, is increasingly a board-level issue.

The boards of modern financial services institutions understand three important aspects of data:

  1. Data has a tremendous amount of value to the institution in terms of helping identify the wants and needs of customers.
  2. Data is central to security and compliance, and there are potentially severe consequences for organizations that run afoul of either.
  3. Data is central to the transformation underway at many financial institutions as they work to meet the needs of the modern customer and improve their own efficiencies.

Data Management and Data Governance: Solving the Enterprise Data Dilemma

Data governance helps organizations in financial services understand their data. It’s essential to protecting that data and to helping comply with the many government and industry regulations in the industry. But financial services data governance – all data governance in fact – is about more than security and compliance; it’s about understanding the value and quality of data.

When done right and deployed in a holistic manner that’s woven into the business processes and enterprise architecture, data governance helps financial services organizations better understand where their data is, where it came from, its value, its quality, and how the data is accessed and used by people and applications.

Financial Services Data Governance: It’s Complicated

Financial services data governance is getting increasingly complicated for a number of reasons.

Mergers & Acquisitions

Deloitte’s 2018 Banking and Securities M&A Outlook described 2017 as “stuck in neutral,” but there is reason to believe the market will pick up steam in 2018 and beyond, especially when it comes to financial technology (or fintech) firms. Bringing in new sets of data, new applications and new processes through mergers and acquisitions creates a great deal of complexity.

The integrations can be difficult, and there is an increased likelihood of data sprawl and data silos. Data governance not only helps organizations better understand the data, but it also helps make sense of the application portfolios of merging institutions to discover gaps and redundancies.

Regulatory Environment

There is a lengthy list of regulations and governing bodies that oversee the financial services industry, covering everything from cybersecurity to fraud protection to payment processing, all in an effort to minimize risk and protect customers.

The holistic view of data that results from a strong data governance initiative is becoming essential to regulatory compliance. According to a 2017 survey by erwin, Inc. and UBM, 60 percent of organizations said compliance drives their data governance initiatives.

More Partnerships and Networks

According to research by IBM, 45 percent of bankers say partnerships and alliances help improve their agility and competitiveness. Like consumers, today’s financial institutions are more connected than ever before, and it’s no longer couriers and cash that are being transferred in these partnerships; it’s data.

Understanding the value, quality and risk of the data shared in these alliances is essential – not only to be a good partner and derive a business benefit from the relationship, but also to evaluate whether or not an alliance or partnership makes good business sense.

Financial Services Data Governance

More Sources of Data, More Touch Points

Financial services institutions are at the forefront of the multi-channel customer experience and have been for years. People do business with institutions by phone, in person, via the Web, and using mobile devices.

All of these touch points generate data, and it is essential that organizations can tie them all together to understand their customers. This information is not only important to customer service, but also to finding opportunities to grow relationships with customers by identifying where it makes sense to upsell and cross-sell products and services.

Grow the Business, Manage the Risk

In the end, financial services organizations need to understand the ways their data can help grow the business and manage risk. Data governance plays an important role in both.

Financial services data governance can better enable:

  • The personalized, self-service applications customers want
  • The machine learning solutions that automate decision-making and create more efficient business processes
  • Faster and more accurate identification of cross-sell and upsell opportunities
  • Better decision-making about the application portfolio, M&A targets, M&A success and more

If you’re interested in financial services data governance, or evaluating new data governance technologies for another industry, you can schedule a demo of erwin’s data mapping and data governance solutions.

Data Mapping Demo CTA

And you also might want to download our latest e-book, Solving the Enterprise Data Dilemma.

Michael Pastore is the Director, Content Services at QuinStreet B2B Tech.


Demystifying Data Lineage: Tracking Your Data’s DNA

Getting the most out of your data requires getting a handle on data lineage. That’s knowing what data you have, where it is, and where it came from – plus understanding its quality and value to the organization.

But you can’t understand your data in a business context – much less track data lineage and its physical existence, or maximize its security, quality and value – if it’s scattered across different silos in numerous applications.

Data lineage provides a way of tracking data from its origin to destination across its lifespan and all the processes it’s involved in. It also plays a vital role in data governance. Beyond the simple ability to know where the data came from and whether or not it can be trusted, there’s an element of statutory reporting and compliance that often requires a knowledge of how that same data (known or unknown, governed or not) has changed over time.
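As a minimal sketch (all system, table and process names here are invented), lineage can be modeled as a directed graph whose edges record the process that moved or transformed the data, which makes tracing any element back to its origin a simple traversal:

```python
# Illustrative only: lineage as a directed graph in which each edge records
# the process that moved or transformed the data. All names are invented.
lineage = {
    "warehouse.sales_fact.revenue": [
        ("staging.orders.amount", "ETL job: currency conversion"),
    ],
    "staging.orders.amount": [
        ("source_crm.orders.amt", "nightly extract"),
    ],
}

def trace_to_origin(element, depth=0):
    """Walk upstream edges to reconstruct where a data element came from."""
    for parent, process in lineage.get(element, []):
        print("  " * depth + f"{element} <- {parent} via {process}")
        trace_to_origin(parent, depth + 1)

trace_to_origin("warehouse.sales_fact.revenue")
```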

A platform that provides insights like data lineage, impact analysis, full-history capture, and other data management features serves as a central hub from which everything can be learned and discovered about the data – whether a data lake, a data vault or a traditional data warehouse.

In a traditional data management organization, Excel spreadsheets are used to manage the incoming data design, what’s known as the “pre-ETL” mapping documentation, but this does not provide any sort of visibility or auditability. In fact, each unit of work represented in these ‘mapping documents’ becomes an independent variable in the overall system development lifecycle, and is therefore nearly impossible to learn from, much less standardize.

The key to accuracy and integrity in any exercise is to eliminate the opportunity for human error – which does not mean eliminating humans from the process but incorporating the right tools to reduce the likelihood of error as the human beings apply their thought processes to the work.
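To make that concrete, here is a hypothetical example of the same principle: representing a “pre-ETL” mapping as structured, machine-checkable data rather than a spreadsheet row, so a tool can catch omissions before they propagate downstream:

```python
# Hypothetical sketch: a "pre-ETL" mapping as structured, machine-checkable
# data instead of a spreadsheet row. Field names are invented.
mapping = {
    "source": {"system": "crm", "table": "contacts", "column": "email_addr"},
    "target": {"system": "dw", "table": "dim_customer", "column": "email"},
    "transformation": "LOWER(TRIM(email_addr))",
}

REQUIRED_KEYS = ("source", "target", "transformation")

def validate(m):
    """Catch the omissions a spreadsheet would silently allow."""
    missing = [k for k in REQUIRED_KEYS if not m.get(k)]
    if missing:
        raise ValueError(f"incomplete mapping, missing: {missing}")
    return True

validate(mapping)  # raises on incomplete entries instead of failing downstream
```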

Data Lineage

Data Lineage: A Crucial First Step for Data Governance

Knowing what data you have, where it lives and where it came from is complicated. The lack of visibility and control around “data at rest” combined with “data in motion,” as well as difficulties with legacy architectures, means organizations spend more time finding the data they need than using it to produce meaningful business outcomes.

Organizations need to create and sustain an enterprise-wide view of and easy access to underlying metadata, but that’s a tall order with numerous data types and data sources that were never designed to work together and data infrastructures that have been cobbled together over time with disparate technologies, poor documentation and little thought for downstream integration. So the applications and initiatives that depend on a solid data infrastructure may be compromised, resulting in faulty analyses.

These issues can be addressed with a strong data management strategy underpinned by technology that enables the data quality the business requires, encompassing data cataloging (integration of data sets from various sources), mapping, versioning, business rules and glossary maintenance, and metadata management (associations and lineage).

An automated, metadata-driven framework for cataloging data assets and their flows across the business provides an efficient, agile and dynamic way to generate data lineage from operational source systems (databases, data models, file-based systems, unstructured files and more) across the information management architecture; construct business glossaries; assess what data aligns with specific business rules and policies; and inform how that data is transformed, integrated and federated throughout business processes – complete with full documentation.

Centralized design, immediate lineage and impact analysis, and change-activity logging mean you will always have answers readily available, or just a few clicks away. Subsets of data can be identified and generated via predefined templates, and generic designs generated from standard mapping documents can be pushed through the ETL process for faster processing via automation templates.

With automation, data quality is systemically assured and the data pipeline is seamlessly governed and operationalized to the benefit of all stakeholders. Without such automation, business transformation will be stymied. Companies, especially large ones with thousands of systems, files and processes, will be particularly challenged by a manual approach. And outsourcing these data management efforts to professional services firms only increases costs and schedule delays.

With erwin Mapping Manager, organizations can automate enterprise data mapping and code generation for faster time-to-value and greater accuracy when it comes to data movement projects, as well as synchronize “data in motion” with data management and governance efforts.

Map data elements to their sources within a single repository to determine data lineage, deploy data warehouses and other Big Data solutions, and harmonize data integration across platforms. The web-based solution reduces the need for specialized, technical resources with knowledge of ETL and database procedural code, while making it easy for business analysts, data architects, ETL developers, testers and project managers to collaborate for faster decision-making.

Data Lineage


Six Reasons Business Glossary Management Is Crucial to Data Governance

A business glossary is crucial to any data governance strategy, yet it is often overlooked.

Consider this – no one likes unpleasant surprises, especially in business. So when it comes to objectively understanding what’s happening from the top of the sales funnel to the bottom line of finance, everyone wants – and needs – to trust the data they have.

That’s why you shouldn’t underestimate the importance of a business glossary. Sometimes the business folks say IT or marketing speaks a different language. Or in the case of mergers and acquisitions, different companies call the same thing something else.

A business glossary solves this complexity by creating a common business vocabulary. Regardless of the industry you’re in or the type of data initiative you’re undertaking, the ability for an organization to have a unified, common language is a key component of data governance, ensuring you can trust your data.

Are we speaking the same language?

How can two reports show different results for the same region? A quick analysis of invoices will likely reveal that some of the data fed into the report wasn’t based on a clear understanding of business terms.

Business Glossary Management is Crucial to Data Governance

In such embarrassing scenarios, a business glossary and its ongoing management has obvious significance. And with the complexity of today’s business environment, organizations need the right solution to make sense out of their data and govern it properly.

Here are six reasons a business glossary is vital to data governance:

  1. Bridging the gap between Business & IT

A sound data governance initiative bridges the gap between the business and IT. By understanding the underlying metadata associated with business terms and the associated data lineage, a business glossary helps bridge this gap to deliver greater value to the organization.

  2. Integrated search

The biggest appeal of business glossary management is that it helps establish relationships between business terms to drive data governance across the entire organization. A good business glossary should provide an integrated search feature that can find context-specific results, such as business terms, definitions, technical metadata, KPIs and process areas.

  3. The ability to capture business terms and all associated artifacts

What good is a business term if it can’t be associated with other business terms and KPIs? Capturing relationships between business terms as well as between technical and business entities is essential in today’s regulatory and compliance-conscious environment. A business glossary defines the relationship between the business terms and their underlying metadata for faster analysis and enhanced decision-making.

  4. Integrated project management and workflow

When the business and cross-functional teams operate in silos, users start defining business terms according to their own preferences rather than following standard policies and best practices. To be effective, a business glossary should enable a collaborative workflow management and approval process so stakeholders have visibility into established data governance roles and responsibilities. With this ability, business glossary users can provide input during the entire data definition process prior to publication.

  5. The ability to publish business terms

Successful businesses not only capture business terms and their definitions, they also publish them so that the business-at-large can access them. Business glossary users, who are typically members of the data governance team, should be assigned roles for creating, editing, approving and publishing business glossary content. A workflow feature will show which users are assigned which roles, including those with publishing permissions.

After initial publication, business glossary content can be revised and republished on an ongoing basis, based on the needs of your enterprise.

  6. End-to-end traceability

Capturing business terms and establishing relationships are key to glossary management. However, it is far from a complete solution without traceability. A good business glossary can help generate enterprise-level traceability in the form of mind maps or tabular reports to the business community once relationships have been established.

Business Glossary, the Heart of Data Governance

With a business glossary at the heart of your regulatory compliance and data governance initiatives, you can help break down organizational and technical silos for data visibility, context, control and collaboration across domains. It ensures that you can trust your data.

Plus, you can unify the people, processes and systems that manage and protect data through consistent exchange, understanding and processing to increase quality and trust.

By building a glossary of business terms in taxonomies with synonyms, acronyms and relationships, and publishing approved standards and prioritizing them, you can map data in all its forms to the central catalog of data elements.

That answers the vital question of “where is our data?” Then you can understand who and what is using your data to ensure adherence to usage standards and rules.
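As a hedged sketch of what such a glossary entry might look like (the term, fields and search logic are invented, not any particular product’s schema), note how synonyms, acronyms, publication status and catalog mappings all hang off a single term:

```python
# A sketch under invented names - not any specific product's schema.
glossary = {
    "Customer Lifetime Value": {
        "definition": "Projected net revenue from a customer relationship.",
        "acronyms": ["CLV", "LTV"],
        "synonyms": ["lifetime customer value"],
        "status": "published",  # e.g., draft -> approved -> published
        "mapped_elements": ["dw.dim_customer.clv_score"],  # link to data catalog
        "related_terms": ["Churn Rate"],
    },
}

def find_term(query):
    """Integrated search across names, synonyms and acronyms."""
    q = query.lower()
    return [
        name for name, term in glossary.items()
        if q in name.lower()
        or any(q == s.lower() for s in term["acronyms"] + term["synonyms"])
    ]

print(find_term("CLV"))  # -> ['Customer Lifetime Value']
```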

Value of Data Intelligence IDC Report


Top 10 Reasons to Automate Data Mapping and Data Preparation

Data preparation is notorious for being the most time-consuming area of data management. It’s also expensive.

“Surveys show the vast majority of time is spent on this repetitive task, with some estimates showing it takes up as much as 80% of a data professional’s time,” according to Information Week. And a Trifacta study notes that overreliance on IT resources for data preparation costs organizations billions.

Data collection can take a variety of forms, but most often in IT shops around the world, it comes in a spreadsheet – or rather a collection of spreadsheets, often numbering in the hundreds or thousands.

Most organizations, especially those competing in the digital economy, don’t have enough time or money for data management using manual processes. And outsourcing is also expensive, with inevitable delays because these vendors are dependent on manual processes too.

Automate Data Mapping

Taking the Time and Pain Out of Data Preparation: 10 Reasons to Automate Data Preparation/Data Mapping

  1. Governance and Infrastructure

Data governance and a strong IT infrastructure are critical in the valuation, creation, storage, use, archival and deletion of data. Beyond the simple ability to know where the data came from and whether or not it can be trusted, there is an element of statutory reporting and compliance that often requires a knowledge of how that same data (known or unknown, governed or not) has changed over time.

A design platform that allows for insights like data lineage, impact analysis, full history capture, and other data management features can provide a central hub from which everything can be learned and discovered about the data – whether a data lake, a data vault, or a traditional warehouse.

  2. Eliminating Human Error

In the traditional data management organization, Excel spreadsheets are used to manage the incoming data design, or what is known as the “pre-ETL” mapping documentation – this does not lend itself to any sort of visibility or auditability. In fact, each unit of work represented in these ‘mapping documents’ becomes an independent variable in the overall system development lifecycle, and is therefore nearly impossible to learn from, much less standardize.

The key to creating accuracy and integrity in any exercise is to eliminate the opportunity for human error – which does not mean eliminating humans from the process but incorporating the right tools to reduce the likelihood of error as the human beings apply their thought processes to the work.  

  3. Completeness

The ability to scan and import from a broad range of sources and formats, as well as automated change tracking, means that you will always be able to import your data from wherever it lives and track all of the changes to that data over time.

  4. Adaptability

Centralized design, immediate lineage and impact analysis, and change activity logging mean that you will always have the answer readily available, or a few clicks away. Subsets of data can be identified and generated via predefined templates, and generic designs generated from standard mapping documents can be pushed through the ETL process for faster processing via automation templates.

  5. Accuracy

Out-of-the-box capabilities to map your data from source to report make reconciliation and validation a snap, with auditability and traceability built in. Build a full array of validation rules that can be cross-checked against the design mappings in a centralized repository (see the sketch after this list).

  6. Timeliness

The ability to be agile and reactive is important – being good at being reactive doesn’t sound like a quality that deserves a pat on the back, but in the case of regulatory requirements, it is paramount.

  7. Comprehensiveness

With access to all of the underlying metadata, source-to-report design mappings, and source and target repositories, you have the power to create reports within your reporting layer that have a traceable origin and can be easily explained to IT, business and regulatory stakeholders.

  8. Clarity

The requirements inform the design, the design platform puts them into action, and the reporting structures are fed the right data to create the right information at the right time via nearly any reporting platform, whether mainstream commercial or homegrown.

  9. Frequency

Adaptation is the key to meeting any frequency interval. Centralized designs and automated ETL patterns that feed your database schemas and reporting structures allow cyclical changes to be made and implemented in half the time of conventional means. Getting beyond the spreadsheet, enabling pattern-based ETL and automating schema population ensure you will be ready whenever the need arises to show an audit trail of the change process and clearly articulate who did what and when throughout the system development lifecycle.

  10. Business-Friendly

A user interface designed to be business-friendly means there’s no need to be a data integration specialist to review the common practices outlined and “passively enforced” throughout the tool. Once a process is defined, rules implemented, and templates established, there is little opportunity for error or deviation from the overall process. A diverse set of role-based security options means that everyone can collaborate, learn and audit while maintaining the integrity of the underlying process components.
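To make the accuracy point (No. 5 above) concrete, here is a minimal sketch, using invented names, of a validation rule cross-checked against the design mappings in a centralized repository:

```python
# Illustrative names only: a reconciliation rule cross-checked against the
# design mappings held in a central repository.
mappings = [
    {"source": "staging.orders.amount", "target": "report.revenue"},
    {"source": "staging.orders.region", "target": "report.region"},
]

report_columns = {"report.revenue", "report.region", "report.margin"}

# Rule: every report column must trace back to a mapped source.
unmapped = report_columns - {m["target"] for m in mappings}
if unmapped:
    print(f"Validation failed - no source mapped for: {sorted(unmapped)}")
```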

Faster, More Accurate Analysis with Fewer People

What if you could get more accurate data preparation 50% faster and double your analysis with fewer people?

erwin Mapping Manager (MM) is a patented solution that automates data mapping throughout the enterprise data integration lifecycle, providing data visibility, lineage and governance – freeing up that 80% of a data professional’s time to put that data to work.

With erwin MM, data integration engineers can design and reverse-engineer the movement of data implemented as ETL/ELT operations and stored procedures, building mappings between source and target data assets and designing the transformation logic between them. These designs then can be exported to most ETL and data asset technologies for implementation.
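As a purely hypothetical illustration (not erwin MM’s actual export format or API), generating executable ETL code from such a source-to-target mapping design might look like this:

```python
# Hypothetical sketch - not erwin MM's actual export format or API.
mapping = {
    "source": "staging.customers",
    "target": "dw.dim_customer",
    "columns": {  # target column -> transformation over source columns
        "email": "LOWER(TRIM(email_addr))",
        "region": "COALESCE(region_code, 'N/A')",
    },
}

select_list = ", ".join(f"{expr} AS {col}" for col, expr in mapping["columns"].items())
sql = (f"INSERT INTO {mapping['target']} ({', '.join(mapping['columns'])})\n"
       f"SELECT {select_list}\nFROM {mapping['source']};")
print(sql)
```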

erwin MM is 100% metadata-driven and used to define and drive standards across enterprise integration projects, enable data and process audits, improve data quality, streamline downstream workflows, increase productivity (especially over geographically dispersed teams) and give project teams, IT leadership and management visibility into the ‘real’ status of integration and ETL migration projects.

If an automated data preparation/mapping solution sounds good to you, please check out erwin MM here.

Solving the Enterprise Data Dilemma


Defining DG: What Can Data Governance Do for You?

Data governance (DG) is becoming more commonplace because of data-driven business, yet defining DG and putting it into sound practice is still difficult for many organizations.

Defining DG

The absence of a standard approach to defining DG could be down to its history of missed expectations, false starts and negative perceptions about it being expensive, intrusive, impeding innovation and not delivering any value. Without success stories to point to, the best way of doing and defining DG wasn’t clear.

On the flip side, the absence of a standard approach to defining DG could be the reason for its history of lackluster implementation efforts, because those responsible for overseeing it had different ideas about what should be done.

Therefore, it’s been difficult to fully fund a data governance initiative that is underpinned by an effective data management capability. And many organizations don’t distinguish between data governance and data management, using the terms interchangeably and so adding to the confusion.

Defining DG: The Data Governance Conundrum

While research indicates most organizations view data governance as “critically important” and recognize the value of their data, the large percentage without a formal data governance strategy in place indicates there are still significant teething problems.

How Important is Data Governance

And that’s the data governance conundrum. It is essential but unwanted and/or painful.

It is a complex chore, so organizations have lacked the motivation to start and effectively sustain it. But faced with the General Data Protection Regulation (GDPR) and other compliance requirements, they have been doing the bare minimum to avoid the fines and reputational damage.

And arguably, herein lies the problem. Organizations look at data governance as something they have to do rather than seeing what it could do for them.

Data governance has its roots in the structure of business terms and technical metadata, but it has tendrils and deep associations with many other components of a data management strategy and should serve as the foundation of that platform.

With data governance at the heart of data management, data can be discovered and made available throughout the organization for both IT and business stakeholders with approved access. This means enterprise architecture, business process, data modeling and data mapping all can draw from a central metadata repository for a single source of data truth, which improves data quality, trust and use to support organizational objectives.

But this “data nirvana” requires a change in approach to data governance. The first step is recognizing that Data Governance 1.0 was made for a different time, when the volume, variety and velocity of the data an organization had to manage were far lower, and when data governance’s reach extended only to cataloging data to support search and discovery.

Data Governance Evolution

Modern data governance needs to meet the needs of data-driven business. We call this adaptation “Evolving DG.” It is the journey to a cost-effective, mature, repeatable process that permeates the whole organization.

The primary components of Evolving DG are:

  • Evaluate
  • Plan
  • Configure
  • Deliver
  • Feedback

The final step in such an evolution is the implementation of the erwin Enterprise Data Governance Experience (EDGE) platform.

The erwin EDGE places data governance at the heart of the larger data management suite. By unifying the data management suite at a fundamental level, an organization’s data is no longer marred by departmental and software silos. It brings together both IT and the business for data-driven insights, regulatory compliance, agile innovation and business transformation.

It allows every critical piece of the data management and data governance lifecycle to draw from a single source of data truth and ensure quality throughout the data pipeline, helping organizations achieve their strategic objectives including:

  • Operational efficiency
  • Revenue growth
  • Compliance, security and privacy
  • Increased customer satisfaction
  • Improved decision-making

To learn how you can evolve your data governance practice and get an EDGE on your competition, click here.

Solving the Enterprise Data Dilemma


The Data Governance (R)Evolution

Data governance continues to evolve – and quickly.

Historically, Data Governance 1.0 was siloed within IT and mainly concerned with cataloging data to support search and discovery. However, it fell short in adding value because it neglected the meaning of data assets and their relationships within the wider data landscape.

Then the push for digital transformation and Big Data created the need for DG to come out of IT’s shadows – Data Governance 2.0 was ushered in with principles designed for modern, data-driven business. This approach acknowledged the demand for collaborative data governance, the tearing down of organizational silos, and spreading responsibilities across more roles.

But this past year we all witnessed a data governance awakening – or as the Wall Street Journal called it, a “global data governance reckoning.” There was tremendous data drama and resulting trauma – from Facebook to Equifax and from Yahoo to Aetna. The list goes on and on. And then, the European Union’s General Data Protection Regulation (GDPR) took effect, with many organizations scrambling to become compliant.

So where are we today?

Simply put, data governance needs to be a ubiquitous part of your company’s culture. Your stakeholders encompass both IT and business users in collaborative relationships, so that makes data governance everyone’s business.

Data Governance is Everyone's Business

Data governance underpins data privacy, security and compliance. Additionally, most organizations don’t use all the data they’re flooded with to reach deeper conclusions about how to grow revenue, achieve regulatory compliance, or make strategic decisions. They face a data dilemma: not knowing what data they have or where some of it is—plus integrating known data in various formats from numerous systems without a way to automate that process.

To accelerate the transformation of business-critical information into accurate and actionable insights, organizations need an automated, real-time, high-quality data pipeline. Then every stakeholder—data scientist, ETL developer, enterprise architect, business analyst, compliance officer, CDO and CEO—can fuel the desired outcomes based on reliable information.

Connecting Data Governance to Your Organization

  1. Data Mapping & Data Governance

The automated generation of the physical embodiment of data lineage—the creation, movement and transformation of transactional and operational data for harmonization and aggregation—provides the best route for enabling stakeholders to understand their data, trust it as a well-governed asset and use it effectively. Being able to quickly document lineage for a standardized, non-technical environment brings business alignment and agility to the task of building and maintaining analytics platforms.

  2. Data Modeling & Data Governance

Data modeling discovers and harvests data schema, and analyzes, represents and communicates data requirements. It synthesizes and standardizes data sources for clarity and consistency to back up governance requirements to use only controlled data. It benefits from the ability to automatically map integrated and cataloged data to and from models, where they can be stored in a central repository for re-use across the organization.

  3. Business Process Modeling & Data Governance

Business process modeling reveals the workflows, business capabilities and applications that use particular data elements. That requires that these assets be appropriately governed components of an integrated data pipeline that rests on automated data lineage and business glossary creation.

  4. Enterprise Architecture & Data Governance

Data flows and architectural diagrams within enterprise architecture benefit from the ability to automatically assess and document the current data architecture. Automatically providing and continuously maintaining business glossary ontologies and integrated data catalogs inform a key part of the governance process.

The EDGE Revolution

By bringing together enterprise architecture, business process, data mapping and data modeling, erwin’s approach to data governance enables organizations to get a handle on how they handle their data and realize its maximum value. With the broadest set of metadata connectors and automated code generation, data mapping and cataloging tools, the erwin EDGE Platform simplifies the total data management and data governance lifecycle.

This single, integrated solution makes it possible to gather business intelligence, conduct IT audits, ensure regulatory compliance and accomplish any other organizational objective by fueling an automated, high-quality and real-time data pipeline.

The erwin EDGE creates an “enterprise data governance experience” that facilitates collaboration between both IT and the business to discover, understand and unlock the value of data both at rest and in motion.

With the erwin EDGE, data management and data governance are unified and mutually supportive of business stakeholders and IT to:

  • Discover data: Identify and integrate metadata from various data management silos.
  • Harvest data: Automate the collection of metadata from various data management silos and consolidate it into a single source.
  • Structure data: Connect physical metadata to specific business terms and definitions and reusable design standards.
  • Analyze data: Understand how data relates to the business and what attributes it has.
  • Map data flows: Identify where to integrate data and track how it moves and transforms.
  • Govern data: Develop a governance model to manage standards and policies and set best practices.
  • Socialize data: Enable stakeholders to see data in one place and in the context of their roles.

If you’ve enjoyed this latest blog series, then you’ll want to request a copy of Solving the Enterprise Data Dilemma, our new e-book that highlights how to answer the three most important data management and data governance questions: What data do we have? Where is it? And how do we get value from it?

Solving the Enterprise Data Dilemma


Compliance First: How to Protect Sensitive Data

The ability to more efficiently govern, discover and protect sensitive data is something that all prospering data-driven organizations are constantly striving for.

It’s been almost four months since the European Union’s General Data Protection Regulation (GDPR) took effect. While no fines have been issued yet, the Information Commissioner’s Office has received upwards of 500 calls per week since the May 25 effective date.

However, the fine-free streak may be ending soon, with British Airways (BA) poised to become the first large company to pay a GDPR penalty because of a data breach. The hack at BA in August and early September lasted for more than two weeks, with intruders getting away with account numbers and personal information of customers making reservations on the carrier’s website and mobile app. If regulators conclude that BA failed to take measures to prevent the incident, a significant fine may follow.

Additionally, complaints against Google in the EU have started. For example, internet browser provider Brave claims that Google and other advertising companies expose user data during a process called “bid request.” A data breach occurs because a bid request fails to protect sensitive data against unauthorized access, which is unlawful under the GDPR.

Per Brave’s announcement, bid request data can include the following personal data:

  • What you are reading or watching
  • Your location
  • Description of your device
  • Unique tracking IDs or a “cookie match,” which allows advertising technology companies to try to identify you the next time you are seen, so that a long-term profile can be built or consolidated with offline data about you
  • Your IP address, depending on the version of the “real-time bidding” system
  • Data broker segment ID, if available, which could denote things like your income bracket, age and gender, habits, social media influence, ethnicity, sexual orientation, religion, political leaning, etc., depending on the version of bidding system

Obviously, GDPR isn’t the only regulation that organizations need to comply with. From HIPAA in healthcare to FINRA, PII and BCBS in financial services to the upcoming California Consumer Privacy Act (CCPA) taking effect January 1, 2020, regulatory compliance is part of running – and staying in business.

The common denominator in compliance across all industry sectors is the ability to protect sensitive data. But if organizations are struggling to understand what data they have and where it’s located, how do they protect it? Where do they begin?

Protect sensitive data

Discover and Protect Sensitive Data

Data is a critical asset used to operate, manage and grow a business. While some of it is at rest in databases, data lakes and data warehouses, a large percentage is federated and integrated across the enterprise, introducing governance, manageability and risk issues that must be managed.

Knowing where sensitive data is located and properly governing it with policy rules, impact analysis and lineage views is critical for risk management, data audits and regulatory compliance.

However, when key data isn’t discovered, harvested, cataloged, defined and standardized as part of integration processes, audits may be flawed, putting your organization at risk.

Sensitive data – at rest or in motion – that exists in various forms across multiple systems must be automatically tagged, its lineage automatically documented, and its flows depicted so that it is easily found and its usage across workflows easily traced.

Thankfully, tools are available to help automate the scanning, detection and tagging of sensitive data by:

  • Monitoring and controlling sensitive data: Better visibility and control across the enterprise to identify data security threats and reduce associated risks
  • Enriching business data elements for sensitive data discovery: Comprehensive mechanism to define business data element for PII, PHI and PCI across database systems, cloud and Big Data stores to easily identify sensitive data based on a set of algorithms and data patterns
  • Providing metadata and value-based analysis: Discovery and classification of sensitive data based on metadata and data value patterns and algorithms. Organizations can define business data elements and rules to identify and locate sensitive data including PII, PHI, PCI and other sensitive information.
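
As a toy illustration of the pattern-based approach described above (the column name and sample values are invented, and real tools are far more robust than these sample regexes):

```python
import re

# Toy illustration: pattern-based tagging of sensitive data. Real solutions
# combine metadata analysis with far more robust detection than these regexes.
PATTERNS = {
    "email (PII)": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "US SSN (PII)": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card number (PCI)": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def tag_column(column_name, sample_values):
    """Tag a column when sampled values match a sensitive-data pattern."""
    tags = sorted(
        label for label, pattern in PATTERNS.items()
        if any(pattern.search(str(v)) for v in sample_values)
    )
    return {"column": column_name, "tags": tags}

print(tag_column("contact_info", ["jane@example.com", "555-12-3456"]))
```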


A Regulatory Rationale for Integrating Data Management and Data Governance

Data management and data governance, together, play a vital role in compliance. It’s easier to protect sensitive data when you know where it’s stored, what it is, and how it needs to be governed.

Truly understanding an organization’s data, including the data’s value and quality, requires a harmonized approach embedded in business processes and enterprise architecture. Such an integrated enterprise data governance experience helps organizations understand what data they have, where it is, where it came from, its value, its quality and how it’s used and accessed by people and applications.

But how is all this possible? Again, it comes back to the right technology for IT and business collaboration that will enable you to:

  • Discover data: Identify and interrogate metadata from various data management silos
  • Harvest data: Automate the collection of metadata from various data management silos and consolidate it into a single source
  • Structure data: Connect physical metadata to specific business terms and definitions and reusable design standards
  • Analyze data: Understand how data relates to the business and what attributes it has
  • Map data flows: Identify where to integrate data and track how it moves and transforms
  • Govern data: Develop a governance model to manage standards and policies and set best practices
  • Socialize data: Enable all stakeholders to see data in one place in their own context