
Top 5 Data Catalog Benefits

A data catalog benefits organizations in myriad ways. With the right data catalog tool, organizations can automate enterprise metadata management – including data cataloging, data mapping, data quality and code generation – for faster time to value and greater accuracy in data movement and deployment projects.

Data cataloging helps curate internal and external datasets for a range of content authors. Gartner says this doubles business benefits and, when linked to broader data governance, data quality and metadata management initiatives, ensures effective management and monetization of data assets over the long term.

Beyond those gains, the importance of data cataloging is growing. In the regulated data world (GDPR, HIPAA, etc.), organizations need a good understanding of their data lineage – and the data catalog benefits to data lineage are substantial.

Data lineage is a core operational business component of data governance technology architecture, encompassing the processes and technology to provide full-spectrum visibility into the ways data flows across an enterprise.

There are a number of different approaches to data lineage. Here, I outline the common approach, and the approach incorporating data cataloging – including the top 5 data catalog benefits for understanding your organization’s data lineage.


Data Lineage – The Common Approach

The most common approach to assembling a collection of data lineage mappings traces data flows in reverse. The process begins with the target, or data end-point, and then traverses the processes, applications and ETL tasks backward from that target.

For example, to determine the mappings for the data pipelines populating a data warehouse, a data lineage tool might begin with the data warehouse and examine the ETL tasks that immediately precede the loading of data into the target warehouse.

The data sources that feed the ETL process are added to a “task list,” and the process is repeated for each of those sources. At each stage, the discovered pieces of lineage are documented. At the end of the sequence, the process will have reverse-mapped the pipelines for populating that warehouse.
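To make the reverse traversal concrete, here is a minimal Python sketch of the task-list approach described above. The `feeds` map and every asset and task name are hypothetical stand-ins for what a lineage tool would discover by scanning ETL job definitions:

```python
from collections import deque

# Hypothetical adjacency map: for each data set, the ETL tasks and
# source data sets that feed it (discovered by scanning ETL jobs).
feeds = {
    "warehouse.sales_fact": [("load_sales", "staging.sales")],
    "staging.sales": [("extract_sales", "oltp.orders")],
    "oltp.orders": [],  # origination point: nothing upstream
}

def reverse_map(target):
    """Walk upstream from the target, documenting each discovered hop."""
    lineage, queue, seen = [], deque([target]), {target}
    while queue:
        node = queue.popleft()
        for task, source in feeds.get(node, []):
            lineage.append((source, task, node))  # source -[task]-> node
            if source not in seen:                # add source to the "task list"
                seen.add(source)
                queue.append(source)
    return lineage

print(reverse_map("warehouse.sales_fact"))
# [('staging.sales', 'load_sales', 'warehouse.sales_fact'),
#  ('oltp.orders', 'extract_sales', 'staging.sales')]
```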

While this approach does produce a collection of data lineage maps for selected target systems, there are some drawbacks.

  • First, this approach focuses only on assembling the data pipelines populating the selected target system but does not necessarily provide a comprehensive view of all the information flows and how they interact.
  • Second, this process produces information that can be used for a static view of the data pipelines, but it must be re-executed on a regular basis to account for changes to the environment or data sources.
  • Third, and probably most important, this process produces a technical view of the information flow, but it does not necessarily provide any deeper insights into the semantic lineage, or how the data assets map to the corresponding business usage models.

A Data Catalog Offers an Alternate Data Lineage Approach

An alternate approach to data lineage combines data discovery and the use of a data catalog that captures data asset metadata with a data mapping framework that documents connections between the data assets.

This data catalog approach also takes advantage of automation, but in a different way: using platform-specific data connectors, the tool scans the environments where each data asset is stored and imports the asset’s metadata into the data catalog.

When data asset structures are similar, the tool can compare data element domains and value sets, and automatically create the data mapping.
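As an illustration of that matching step, the sketch below pairs source and target elements whose normalized names and types agree. The columns, the abbreviation rule and the exact-match criterion are all simplifying assumptions; a real catalog tool also compares value sets and data profiles:

```python
# Hypothetical column metadata harvested by the catalog's connectors.
source_cols = {"cust_id": "int", "cust_name": "str", "email": "str"}
target_cols = {"customer_id": "int", "customer_name": "str", "email": "str"}

def normalize(name):
    # Treat a common abbreviation as equivalent before comparing names.
    return name.lower().replace("cust_", "customer_")

def propose_mappings(src, tgt):
    """Pair source and target elements whose names and types align."""
    proposals = []
    for s_name, s_type in src.items():
        for t_name, t_type in tgt.items():
            if normalize(s_name) == normalize(t_name) and s_type == t_type:
                proposals.append((s_name, t_name))
    return proposals

print(propose_mappings(source_cols, target_cols))
# [('cust_id', 'customer_id'), ('cust_name', 'customer_name'), ('email', 'email')]
```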

In turn, the data catalog approach performs data discovery using the same data connectors to parse the code involved in data movement, such as major ETL environments and procedural code – basically any executable task that moves data.

The information collected through this process is reverse engineered to create mappings from source data sets to target data sets based on what was discovered.

For example, you can map the databases used for transaction processing, determine that subsets of the transaction processing database are extracted and moved to a staging area, and then parse the ETL code to infer the mappings.
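A toy version of that parsing step, using a regular expression to pull the source and target out of a single INSERT … SELECT statement. Production tools parse full ETL job definitions and procedural code, not one-line SQL, so treat this purely as an illustration of the idea:

```python
import re

# A toy ETL statement; real tools parse entire ETL environments.
sql = "INSERT INTO staging.sales SELECT order_id, amount FROM oltp.orders"

match = re.search(
    r"INSERT\s+INTO\s+(\S+)\s+SELECT\s+.+?\s+FROM\s+(\S+)",
    sql, re.IGNORECASE,
)
if match:
    target, source = match.group(1), match.group(2)
    print(f"inferred mapping: {source} -> {target}")
# inferred mapping: oltp.orders -> staging.sales
```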

These direct mappings are also documented in the data catalog. In cases where the mappings are not obvious, a tool can help a data steward manually map data assets into the catalog.

The result is a data catalog that incorporates the structural and semantic metadata associated with each data asset as well as the direct mappings for how that data set is populated.

Learn more about data cataloging.


This is a powerful paradigm: instead of capturing a static view of specific data pipelines, it allows a data consumer to request dynamically assembled lineage from the documented mappings.

By interrogating the catalog, a current view of any specific data lineage can be rendered on the fly, showing every point along the path: the origination points, the processing stages, the sequences of transformations and the final destination.

Materializing the “current active lineage” dynamically reduces the risk of relying on an older version of the lineage that is no longer relevant or correct. When new information is added to the data catalog (such as a newly added data source or a modification to the ETL code), dynamically generated views of the lineage stay up to date automatically.
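Here is a minimal sketch of that on-demand materialization: the catalog stores one documented mapping per asset (a single upstream source is assumed for brevity), and the lineage for any asset is assembled only when requested:

```python
# Hypothetical catalog of documented mappings: target -> (source, transformation).
catalog = {
    "mart.revenue": ("warehouse.sales_fact", "aggregate by month"),
    "warehouse.sales_fact": ("staging.sales", "conform dimensions"),
    "staging.sales": ("oltp.orders", "extract and filter"),
}

def render_lineage(asset):
    """Assemble the current lineage for one asset on demand."""
    steps = []
    while asset in catalog:
        source, transform = catalog[asset]
        steps.append(f"{source} --[{transform}]--> {asset}")
        asset = source
    return list(reversed(steps))  # origination point first

print("\n".join(render_lineage("mart.revenue")))
# oltp.orders --[extract and filter]--> staging.sales
# staging.sales --[conform dimensions]--> warehouse.sales_fact
# warehouse.sales_fact --[aggregate by month]--> mart.revenue
```

Because the lineage is rebuilt from the catalog on every request, updating one mapping entry immediately changes every lineage view that touches it.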

Top 5 Data Catalog Benefits for Understanding Data Lineage

A data catalog benefits data lineage in the following five distinct ways:

1. Accessibility

The data catalog approach allows the data consumer to query the tool to materialize specific data lineage mappings on demand.

2. Currency

The data lineage is rendered from the most current data in the data catalog.

3. Breadth

As the number of data assets documented in the data catalog increases, the scope of the materializable lineage expands accordingly. With all corporate data assets cataloged, any (or all!) data lineage mappings can be produced on demand.

4. Maintainability and Sustainability

Since the data lineage mappings are not managed as distinct artifacts, there are no additional requirements for maintenance. As long as the data catalog is kept up to date, the data lineage mappings can be materialized.

5. Semantic Visibility

In addition to visualizing the physical movement of data across the enterprise, the data catalog approach allows the data steward to associate business glossary terms, data element definitions, data models, and other semantic details with the different mappings. Additional visualization methods can demonstrate where business terms are used, how they are mapped to different data elements in different systems, and the relationships among these different usage points.

You can impose additional data governance controls with project management oversight, designating data lineage mappings in terms of the project life cycle (such as development, test or production).
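A toy illustration of this semantic layer: glossary terms linked to the physical elements where they are used, and mappings tagged with a life-cycle stage. All names and the flat-dictionary structure are illustrative assumptions:

```python
# Hypothetical semantic metadata: business terms mapped to physical
# data elements, and mappings designated by life-cycle stage.
glossary = {
    "Net Revenue": ["warehouse.sales_fact.net_amt", "mart.revenue.net_revenue"],
}
mapping_stage = {
    ("staging.sales", "warehouse.sales_fact"): "production",
    ("warehouse.sales_fact", "mart.revenue"): "test",
}

def where_used(term):
    """Show every physical element a business term maps to."""
    return glossary.get(term, [])

print(where_used("Net Revenue"))
print({m: s for m, s in mapping_stage.items() if s == "production"})
```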

Aside from these data catalog benefits, this approach reduces the manual effort of accumulating data lineage information and continually reviewing the data landscape to maintain consistency, providing a greater return on your data intelligence budget.

Learn more about data cataloging.


Using Strategic Data Governance to Manage GDPR/CCPA Complexity

In light of recent, high-profile data breaches, it’s past time we re-examine strategic data governance and its role in managing regulatory requirements.

News broke earlier this week of British Airways being fined 183 million pounds – or $228 million – by the U.K. for alleged violations of the European Union’s General Data Protection Regulation (GDPR). While not the first, it is the largest penalty levied since the GDPR went into effect in May 2018.

Given this, Oppenheimer & Co. cautions:

“European regulators could accelerate the crackdown on GDPR violators, which in turn could accelerate demand for GDPR readiness. Although the CCPA [California Consumer Privacy Act, the U.S. equivalent of GDPR] will not become effective until 2020, we believe that new developments in GDPR enforcement may influence the regulatory framework of the still fluid CCPA.”

With all the advance notice and significant chatter about GDPR/CCPA, why aren’t organizations more prepared to deal with data regulations?

In a word? Complexity.

The complexity of regulatory requirements in and of themselves is aggravated by the complexity of the business and data landscapes within most enterprises.

So it’s important to understand how to use strategic data governance to manage the complexity of regulatory compliance and other business objectives …

Designing and Operationalizing Regulatory Compliance Strategy

It’s not easy to design and deploy compliance in an environment that’s not well understood and is difficult to maneuver in. First you need to analyze and design your compliance strategy and tactics; then you need to operationalize them.

Modern, strategic data governance, which involves both IT and the business, enables organizations to plan and document how they will discover and understand their data within context, track its physical existence and lineage, and maximize its security, quality and value. It also helps enterprises put these strategic capabilities into action by:

  • Understanding their business, technology and data architectures and their inter-relationships, aligning them with their goals and defining the people, processes and technologies required to achieve compliance.
  • Creating and automating a curated enterprise data catalog, complete with physical assets, data models, data movement, data quality and on-demand lineage.
  • Activating their metadata to drive agile data preparation and governance through integrated data glossaries and dictionaries that associate policies to enable stakeholder data literacy.


Five Steps to GDPR/CCPA Compliance

With the right technology, GDPR/CCPA compliance can be automated and accelerated in these five steps:

  1. Catalog systems

Harvest, enrich/transform and catalog data from a wide array of sources to enable any stakeholder to see the interrelationships of data assets across the organization.

  2. Govern PII “at rest”

Classify, flag and socialize the use and governance of personally identifiable information regardless of where it is stored.
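As a simplified illustration of classifying PII at rest, the sketch below flags a column when either its name or its sampled values look personal. The patterns are deliberately crude assumptions; real classifiers use richer dictionaries, data profiling and machine learning:

```python
import re

# Illustrative name and value patterns only.
PII_NAME_HINTS = re.compile(r"(ssn|email|phone|birth|passport)", re.IGNORECASE)
EMAIL_VALUE = re.compile(r"[^@\s]+@[^@\s]+\.[a-zA-Z]{2,}")

def classify_column(name, sample_values):
    """Flag a column as PII if its name or sampled values look personal."""
    if PII_NAME_HINTS.search(name):
        return "PII"
    if any(EMAIL_VALUE.fullmatch(str(v)) for v in sample_values):
        return "PII"
    return "non-PII"

print(classify_column("contact_email", []))          # PII (name hint)
print(classify_column("notes", ["a@example.com"]))   # PII (value pattern)
print(classify_column("order_total", [19.99]))       # non-PII
```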

  3. Govern PII “in motion”

Scan, catalog and map personally identifiable information to understand how it moves inside and outside the organization and how it changes along the way.

  4. Manage policies and rules

Govern business terminology in addition to data policies and rules, depicting relationships to physical data catalogs and the applications that use them with lineage and impact analysis views.

  5. Strengthen data security

Identify regulatory risks and guide the fortification of network and encryption security standards and policies by understanding where all personally identifiable information is stored, processed and used.

How erwin Can Help

erwin is the only software provider with a complete, metadata-driven approach to data governance through our integrated enterprise modeling and data intelligence suites. We help customers overcome their data governance challenges, with risk management and regulatory compliance being primary concerns.

However, the erwin EDGE also delivers an “enterprise data governance experience” in terms of agile innovation and business transformation – from creating new products and services to keeping customers happy to generating more revenue.

Whatever your organization’s key drivers are, a strategic data governance approach – through business process, enterprise architecture and data modeling combined with data cataloging and data literacy – is key to success in our modern, digital world.

If you’d like to get a handle on handling your data, you can sign up for a free, one-on-one demo of erwin Data Intelligence.

For more information on GDPR/CCPA, we’ve also published a white paper on the Regulatory Rationale for Integrating Data Management and Data Governance.



Five Benefits of an Automation Framework for Data Governance

Organizations are responsible for governing more data than ever before, making a strong automation framework a necessity. But what exactly is an automation framework and why does it matter?

In most companies, an incredible amount of data flows from multiple sources in a variety of formats and is constantly being moved and federated across a changing system landscape.

Often these enterprises are heavily regulated, so they need a well-defined data integration model that helps avoid data discrepancies and removes barriers to enterprise business intelligence and other meaningful use.

IT teams need the ability to smoothly generate hundreds of mappings and ETL jobs. They need their data mappings to fall under governance and audit controls, with instant access to dynamic impact analysis and lineage.

With an automation framework, data professionals can meet these needs at a fraction of the cost of the traditional manual way.

In data governance terms, an automation framework refers to a metadata-driven universal code generator that works hand in hand with enterprise data mapping for:

  • Pre-ETL enterprise data mapping
  • Governing metadata
  • Governing and versioning source-to-target mappings throughout the lifecycle
  • Data lineage, impact analysis and business rules repositories
  • Automated code generation

Such automation enables organizations to bypass bottlenecks, including human error and the time required to complete these tasks manually.
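To make “metadata-driven code generation” concrete, here is a minimal, hypothetical sketch: a governed source-to-target mapping specification is rendered into an executable SQL statement. In a real framework the spec would be versioned under governance controls, and a template would exist per target platform; all names here are illustrative:

```python
# Hypothetical source-to-target mapping specification.
mapping = {
    "source": "staging.sales",
    "target": "warehouse.sales_fact",
    "columns": [
        ("order_id", "order_key", "CAST({src} AS BIGINT)"),
        ("amount",   "net_amt",   "ROUND({src}, 2)"),
    ],
}

def generate_sql(spec):
    """Render one ETL statement from a mapping spec (the 'template')."""
    select_list = ",\n  ".join(
        rule.format(src=src) + f" AS {tgt}"
        for src, tgt, rule in spec["columns"]
    )
    return (f"INSERT INTO {spec['target']}\nSELECT\n  {select_list}\n"
            f"FROM {spec['source']};")

print(generate_sql(mapping))
# INSERT INTO warehouse.sales_fact
# SELECT
#   CAST(order_id AS BIGINT) AS order_key,
#   ROUND(amount, 2) AS net_amt
# FROM staging.sales;
```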

In fact, relying on automated and repeatable processes can yield up to 50 percent savings in design, up to 70 percent savings in conversion and up to 70 percent acceleration in total project delivery.

So without further ado, here are the five key benefits of an automation framework for data governance.


Benefits of an Automation Framework for Data Governance

  1. Creates simplicity, reliability, consistency and customization for the integrated development environment.

Code automation templates (CATs) can be created – for virtually any process and any tech platform – using the SDK scripting language or the solution’s published libraries to completely automate common, manual data integration tasks.

CATs are designed and developed by senior automation experts to ensure they are compliant with industry or corporate standards as well as with an organization’s best practice and design standards.

The 100-percent metadata-driven approach is critical to creating reliable and consistent CATs.

It is possible to scan, pull in and configure metadata sources and targets using standard or custom adapters and connectors for databases, ERP, cloud environments, files, data modeling, BI reports and Big Data to document data catalogs, data mappings, ETL (XML code) and even SQL procedures of any type.
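For example, harvesting relational metadata with a standard connector might look like the following sketch, which uses SQLAlchemy’s inspector; the connection string and schema are placeholders, and the flat catalog structure is an assumption for illustration:

```python
from sqlalchemy import create_engine, inspect

# Placeholder connection string; any SQLAlchemy-supported database works.
engine = create_engine("postgresql://user:pass@host/dbname")
inspector = inspect(engine)

# Harvest table and column metadata into a simple catalog structure.
catalog = {}
for table in inspector.get_table_names(schema="public"):
    catalog[table] = [
        {"name": col["name"], "type": str(col["type"]), "nullable": col["nullable"]}
        for col in inspector.get_columns(table, schema="public")
    ]
```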

  2. Provides blueprints anyone in the organization can use.

Blueprints can stage DDL from source metadata for the target DBMS, profile and test SQL for test automation of data integration projects, and generate source-to-target mappings and ETL jobs for leading ETL tools, among other capabilities.

They can also populate and maintain Big Data sets by generating Pig, Sqoop, MapReduce, Spark, Python scripts and more.
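A minimal sketch of the first of those blueprints, staging DDL from harvested source metadata; the column list and the `stg_` naming convention are illustrative assumptions:

```python
# Hypothetical harvested source metadata (e.g., from the catalog above).
columns = [("order_key", "BIGINT"), ("net_amt", "NUMERIC(12,2)")]

def stage_ddl(table, cols):
    """Generate a staging CREATE TABLE statement for the target DBMS."""
    body = ",\n  ".join(f"{name} {dtype}" for name, dtype in cols)
    return f"CREATE TABLE stg_{table} (\n  {body}\n);"

print(stage_ddl("sales_fact", columns))
# CREATE TABLE stg_sales_fact (
#   order_key BIGINT,
#   net_amt NUMERIC(12,2)
# );
```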

  3. Incorporates data governance into the system development process.

An organization can achieve a more comprehensive and sustainable data governance initiative than it ever could with a homegrown solution.

An automation framework’s ability to automatically create, version, manage and document source-to-target mappings matters greatly, both to data governance maturity and to a shorter time to value.

This eliminates duplication that occurs when project teams are siloed, as well as prevents the loss of knowledge capital due to employee attrition.

Another valuable capability is coordination between data governance and the SDLC, including automated metadata harvesting and cataloging from a wide array of sources for real-time metadata synchronization with core data governance capabilities and artifacts.

  4. Proves the value of data lineage and impact analysis for governance and risk assessment.

Automated reverse-engineering of ETL code into natural language enables a more intuitive lineage view for data governance.

With end-to-end lineage, it is possible to view data movement from source to stage, stage to EDW, and on to a federation of marts and reporting structures, providing a comprehensive and detailed view of data in motion.

The process includes leveraging existing mapping documentation and auto-documented mappings to quickly render graphical source-to-target lineage views including transformation logic that can be shared across the enterprise.

Similarly, impact analysis – which involves data mapping and lineage across tables, columns, systems, business rules, projects, mappings and ETL processes – provides insight into potential data risks and enables fast and thorough remediation when needed.

Performing impact analysis across the organization while meeting regulatory requirements demands detailed data mapping and lineage.
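Conceptually, impact analysis is the forward counterpart of lineage: starting from a changed asset, walk downstream through the documented mappings. A minimal sketch, with hypothetical column-level edges:

```python
# Hypothetical lineage edges: asset -> assets it feeds downstream.
downstream = {
    "oltp.orders.amount": ["staging.sales.amount"],
    "staging.sales.amount": ["warehouse.sales_fact.net_amt"],
    "warehouse.sales_fact.net_amt": ["mart.revenue.net_revenue",
                                     "report.monthly_summary"],
}

def impact_of(asset):
    """Return every downstream asset affected by a change to `asset`."""
    affected, stack = set(), [asset]
    while stack:
        for child in downstream.get(stack.pop(), []):
            if child not in affected:
                affected.add(child)
                stack.append(child)
    return sorted(affected)

print(impact_of("oltp.orders.amount"))
# ['mart.revenue.net_revenue', 'report.monthly_summary',
#  'staging.sales.amount', 'warehouse.sales_fact.net_amt']
```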


  5. Supports a wide spectrum of business needs.

Intelligent automation delivers enhanced capability, increased efficiency and effective collaboration to every stakeholder in the data value chain: data stewards, architects, scientists, analysts, business intelligence developers, IT professionals and business consumers.

It makes it easier for them to handle jobs such as data warehousing by leveraging source-to-target mapping and ETL code generation and job standardization.

It also makes it easier to map, move and test data for regular maintenance of existing structures, for migration from legacy systems during a merger or acquisition, or for a modernization effort.

erwin’s Approach to Automation for Data Governance: The erwin Automation Framework

Mature and sustainable data governance requires collaboration from both IT and the business, backed by a technology platform that accelerates the time to data intelligence.

Part of the erwin EDGE portfolio for an “enterprise data governance experience,” the erwin Automation Framework transforms enterprise data into accurate and actionable insights by connecting all the pieces of the data management and data governance lifecycle.

As with all erwin solutions, it embraces any data from anywhere (Any2), with automation for relational, unstructured, on-premises and cloud-based data assets, and data movement specifications harvested and coupled with CATs.

If your organization would like to realize all the benefits explained above – and gain an “edge” in how it approaches data governance, you can start by joining one of our weekly demos for erwin Mapping Manager.
