
Data Governance Maturity and Tracking Progress

Data governance is best defined as the strategic, ongoing and collaborative processes involved in managing data’s access, availability, usability, quality and security in line with established internal policies and relevant data regulations.

erwin recently hosted the third in its six-part webinar series on the practice of data governance and how to proactively deal with its complexities. Led by Frank Pörschmann of iDIGMA GmbH, an IT industry veteran and data governance strategist, this latest webinar focused on “Data Governance Maturity & Tracking Progress.”

The webinar looked at how to gauge the maturity and progress of data governance programs and why it is important for both IT and the business to be able to measure success.

Data Governance Is Business Transformation

Data governance is about how an organization uses its data. That includes how it creates or collects data, as well as how its data is stored and accessed. It ensures that the right data of the right quality, regardless of where it is stored or what format it is stored in, is available for use – but only by the right people and for the right purpose.

Quite simply, data governance is business transformation, as Mr. Pörschmann highlights in the webinar, meaning it is a complex system that changes from one stable state into another stable state.

The basic principles of transformation are:

  • Complexity
  • Predictability
  • Synchronicity

However, the practice of data governance is a relatively new discipline that is still evolving. And while its effects will be felt throughout the entire organization, the weight of its impact will be felt differently across the business.

“You have to deal with this ambiguity and it’s volatile,” said Mr. Pörschmann. “Some business units benefit more from data governance than others, and some business units have to invest more energy and resources into the change than others.”

Maturity Levels

Data governance maturity includes the ability to rely on automated and repeatable processes, which ultimately helps to increase productivity. While it has gained traction over the past few years, many organizations are still formalizing it as a practice.

Implementing a data governance initiative can be tricky, so it is important to have clear goals for what you want it to achieve.

According to Mr. Pörschmann, there are six levels of maturity, with one being the lowest.

  1. Aware: Partial awareness of data governance but not yet started
  2. Initiated: Some ad-hoc data governance initiatives
  3. Acknowledged: An official acknowledgement of data governance from executive management with budget allocated
  4. Managed: Dedicated resources, managed and adjusted with KPIs
  5. Monitored: Dedicated resources and performance monitoring
  6. Enhanced: Data managed as an equal, strategic asset across the organization
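
To make the progression concrete, here is a minimal sketch, in Python, of how these six levels might be encoded so a program can record where it stands and plan the next step. The names and helper are illustrative assumptions, not part of Mr. Pörschmann's model:

```python
from enum import IntEnum

class MaturityLevel(IntEnum):
    """The six data governance maturity levels, lowest to highest."""
    AWARE = 1         # partial awareness, nothing started
    INITIATED = 2     # ad hoc initiatives
    ACKNOWLEDGED = 3  # executive acknowledgement with budget
    MANAGED = 4       # dedicated resources, adjusted with KPIs
    MONITORED = 5     # dedicated resources plus performance monitoring
    ENHANCED = 6      # data managed as an equal, strategic asset

def next_target(current: MaturityLevel) -> MaturityLevel:
    """Return the next level to plan for, capped at ENHANCED."""
    return MaturityLevel(min(current + 1, MaturityLevel.ENHANCED))

print(next_target(MaturityLevel.INITIATED))  # MaturityLevel.ACKNOWLEDGED
```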

For a fully mature or “enhanced” data governance program, IT and the business need to take responsibility for selling the benefits of data governance across the enterprise and ensure all stakeholders are properly educated about it. However, IT may have to go it alone, at least initially, educating the business on the risks and rewards, as well as the expectations and accountabilities in implementing it.

To move data governance to the next level, organizations need to discover, understand, govern and socialize data assets. Appropriately implemented — with business stakeholders driving alignment between data governance and strategic enterprise goals and IT handling the technical mechanics of data management — the door opens to trusting data, planning for change, and putting it to work for peak organizational performance.

The Medici Maturity Approach

In a rush to implement a data governance methodology and system, you can forget that a system must serve a process – and be governed/controlled by one.

To choose the correct system and implement it effectively and efficiently, you must know – in every detail – all the processes it will impact, how it will impact them, who needs to be involved and when.

Business, data governance and data leaders want a methodology that is lean, scalable and lightweight. One such approach has been dubbed the Medici maturity model – named after Romina Medici, head of data management and governance for global energy provider E.ON.

Ms. Medici found that the approaches on the market did not cover transformation challenges, and only a few addressed the operational data management disciplines. Her research also found that it doesn’t make sense to look at your functional disciplines unless you already have a minimum maturity (aware plus initiated levels).

People, process, technology and governance structure sit on one axis, with the functional data management disciplines on the other.

Mr. Pörschmann then shared the “Data Maturity Canvas,” which incorporates core dimensions, maturity levels and execution disciplines. The first time you run it, you can define the target situation, all the actions needed, and the next best actions.

This methodology gives you a view of the four areas (people, process, technology and governance) so you can link your findings across them. It is an easy method that you can run for different purposes, including:

  • Initial assessment
  • Designing a data governance program
  • Monitoring a whole program
  • Beginning strategy processes
  • Benchmarking

Data governance can be many things to many people. Before starting, decide what your primary objectives are: Do you want to enable better decision-making or meet compliance requirements? Or are you looking to reduce data management costs and improve data quality through formal, repeatable processes? Whatever your motivation, you need to identify it first and foremost to get a grip on data governance.

Click here to read our success story on how E.ON used erwin Data Intelligence for its digital transformation and innovation efforts.

Register for the fifth webinar in this series, “Transparency in Data Governance – Data Catalogs & Business Glossaries,” which takes place on April 27. This webinar will discuss how to answer critical questions through data catalogs and business glossaries, powered by effective metadata management. You’ll also see a demo of the erwin Data Intelligence Suite, which includes data catalog, business glossary and metadata-driven automation capabilities.



How Data Governance Protects Sensitive Data

 

Data governance reduces the risks associated with sensitive data.

Organizations are managing more data than ever. In fact, the global datasphere is projected to reach 175 zettabytes by 2025, according to IDC.

With more companies increasingly migrating their data to the cloud to ensure availability and scalability, the risks associated with data management and protection also are growing.

How can companies protect their enterprise data assets while ensuring their availability to stewards and consumers, minimizing costs and meeting data privacy requirements?

Data Security Starts with Data Governance

Lack of a solid data governance foundation increases the risk of data-security incidents. An assessment of the data breaches that crop up like weeds each year supports the conclusion that companies, absent data governance, wind up building security architectures strictly from a technical perspective.

Every company holds important information about, and relationships with, people based on the private data they provide, so every business should understand the related risks and protect against them under the banner of data governance – avoiding the costs and reputational damage that data breaches can inflict. That’s especially true as data-driven enterprise momentum grows along with self-service analytics that gives users greater access to information, often without IT’s knowledge.

Indeed, with nearly everyone in the enterprise involved either in maintaining or using the company’s data, it only makes sense that both business and IT begin to work together to discover, understand, govern and socialize those assets. This should come as part of a data governance plan that emphasizes making all stakeholders responsible not only for enhancing data for business benefit, but also for reducing the risks that unfettered access to and use of it can pose.

With data catalog and literacy capabilities, you provide the context to keep relevant data private and secure – the assets available, their locations, the relationships between them, associated systems and processes, authorized users and guidelines for usage.

Without data governance, organizations lack the ability to connect the dots across governance, security and privacy – and to act accordingly. So they can’t answer these fundamental questions:

  • What data do we have and where is it now?
  • Where did it come from and how has it changed?
  • Is it sensitive data or are there any risks associated with it?
  • Who is authorized to use it and how?

When an organization knows what data it has, it can define that data’s business purpose. And knowing the business purpose translates into actively governing personal data against potential privacy and security violations.

Do You Know Where Your Sensitive Data Is?

Data is a valuable asset used to operate, manage and grow a business. While some of it rests in databases, data lakes and data warehouses, a large percentage is federated and integrated across the enterprise, introducing management and governance issues that must be addressed.

Knowing where sensitive data is located and properly governing it with policy rules, impact analysis and lineage views is critical for risk management, data audits and regulatory compliance.

For example, understanding and protecting sensitive data is especially critical for complying with privacy regulations like the European Union’s General Data Protection Regulation (GDPR).

The demands GDPR places on organizations are all-encompassing. Protecting what traditionally has been considered personally identifiable information (PII) – people’s names, addresses, government identification numbers and so forth – that a business collects and hosts is just the beginning of GDPR mandates. Personal data now means anything collected or stored that can be linked to an individual (right down to IP addresses), and the term doesn’t only apply to individual pieces of information but also to how they may be combined in revealing relationships. And it isn’t just about protecting the data your business gathers, processes and stores but also any data it may leverage from third-party sources.

When key data isn’t discovered, harvested, cataloged, defined and standardized as part of integration processes, audits may be flawed, putting your organization at risk.

Sensitive data – at rest or in motion – that exists in various forms across multiple systems must be automatically tagged, its lineage automatically documented, and its flows depicted so that it is easily found, and its usage easily traced across workflows.

Fortunately, tools are available to help automate the scanning, detection and tagging of sensitive data by:

  • Monitoring and controlling sensitive data: Better visibility and control across the enterprise to identify data security threats and reduce associated risks
  • Enriching business data elements for sensitive data discovery: Comprehensive mechanism to define business data elements for PII, PHI and PCI across database systems, cloud and Big Data stores to easily identify sensitive data based on a set of algorithms and data patterns
  • Providing metadata and value-based analysis: Discovery and classification of sensitive data based on metadata and data value patterns and algorithms. Organizations can define business data elements and rules to identify and locate sensitive data including PII, PHI, PCI and other sensitive information.
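
As a rough illustration of the detection mechanics behind tools like these, the sketch below tags column values against a few common PII patterns. The pattern names and regular expressions are simplified assumptions for illustration; production scanners rely on far more robust rules, algorithms and validation:

```python
import re

# Hypothetical patterns for illustration only; real scanners use
# validated detection libraries and much stronger rules.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def tag_sensitive_values(values):
    """Return {pattern_name: [matching values]} for a column sample."""
    hits = {}
    for name, pattern in PII_PATTERNS.items():
        matched = [v for v in values if pattern.search(str(v))]
        if matched:
            hits[name] = matched
    return hits

sample = ["jane.doe@example.com", "123-45-6789", "no pii here"]
print(tag_sensitive_values(sample))  # flags the email and SSN values
```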

Minimizing Risk Exposure with Data Intelligence

Organizations suffering data losses won’t benefit from the money spent on security technologies or the time invested in developing data privacy classifications if they can’t get a handle on how their data is actually managed and used.

They also may face heavy fines and other penalties – not to mention bad PR.

Don’t let that happen to your organization.

A well-formed security architecture that is driven by and aligned with data intelligence is your best defense. Being prepared means you can minimize your risk exposure.

With erwin Data Intelligence by Quest, you’ll have an unfettered view of where sensitive data resides with the ability to seamlessly apply privacy rules and create access privileges.

Additionally, with Quest’s acquisition of erwin comes the ability to mask, encrypt, redact and audit sensitive data for an automated and comprehensive solution to resolve sensitive-data issues.


From risk management and regulatory compliance to innovation and digital transformation, you need data intelligence. With erwin by Quest, you will know your data so you can fully realize its business benefits.



The Value of Data Governance and How to Quantify It

erwin recently hosted the second in its six-part webinar series on the practice of data governance and how to proactively deal with its complexities. Led by Frank Pörschmann of iDIGMA GmbH, an IT industry veteran and data governance strategist, the second webinar focused on “The Value of Data Governance & How to Quantify It.”

As Mr. Pörschmann highlighted at the beginning of the series, data governance works best when it is strongly aligned with the drivers, motivations and goals of the business.

The business drivers and motivation should be the starting point for any data governance initiative. If there is no clear end goal in sight, it will be difficult to get stakeholders on board. And with many competing projects and activities vying for people’s time, it must be clear to people why choosing data governance activities will have a direct benefit to them.

“Usually we talk about benefits which are rather qualitative measures, but what we need for decision-making processes are values,” Pörschmann says. “We need quantifiable results or expected results that are fact-based. And the interesting thing with data governance, it seems to be easier for organizations and teams to state the expected benefits.”

The Data Governance Productivity Matrix

In terms of quantifying data governance, Pörschmann cites the productivity matrix as a relatively simple way to calculate real numbers. He says, “the basic assumption is if an organization equips their managers with the appropriate capabilities and instruments, then it’s management’s obligation to realize productivity potential over time.”

According to IDC, professionals who work with data spend 80 percent of their time looking for and preparing data and only 20 percent of their time on analytics.

Specifically, 80 percent of data professionals’ time is spent on data discovery, preparation and protection, and only 20 percent on analysis leading to insights.
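
A quick back-of-the-envelope calculation shows why this split matters. The team size and hours below are assumptions for illustration only:

```python
# Illustrative assumptions: a 10-person data team working 40-hour weeks.
team_size, weekly_hours = 10, 40
prep_share = 0.80  # share of time on discovery, preparation and protection

prep_hours = team_size * weekly_hours * prep_share            # 320 hours/week
analysis_hours = team_size * weekly_hours * (1 - prep_share)  # 80 hours/week

# If automation cut preparation time in half, the reclaimed hours could
# shift to analysis, roughly tripling analytic capacity.
reclaimed = prep_hours / 2
print(analysis_hours + reclaimed)  # 240.0 hours/week available for analysis
```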

Data governance maturity includes the ability to rely on automated and repeatable processes, which ultimately helps to increase productivity.

For example, automatically importing mappings from developers’ Excel sheets, flat files, Access and ETL tools into a comprehensive mappings inventory, complete with automatically generated and meaningful documentation of the mappings, is a powerful way to support governance while providing real insight into data movement — for data lineage and impact analysis — without interrupting system developers’ normal work methods.
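The sketch below illustrates the general idea under stated assumptions: it reads a hypothetical CSV mapping sheet (the column names are made up, not erwin’s format) into an inventory and emits human-readable documentation for each mapping:

```python
import csv
from io import StringIO

# Hypothetical mapping sheet; real sheets come from developers' Excel exports.
sheet = StringIO(
    "source_table,source_column,target_table,target_column,transformation\n"
    "crm.customer,cust_email,dw.dim_customer,email,LOWER(cust_email)\n"
    "crm.customer,cust_name,dw.dim_customer,full_name,TRIM(cust_name)\n"
)

# Load every mapping row into an inventory structure.
inventory = list(csv.DictReader(sheet))

# Auto-generate human-readable documentation for each mapping row.
for row in inventory:
    print(f"{row['source_table']}.{row['source_column']} -> "
          f"{row['target_table']}.{row['target_column']} "
          f"via {row['transformation']}")
```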

When data movement has been tracked and version-controlled, it’s possible to conduct data archeology – that is, reverse-engineering code from existing XML within the ETL layer – to uncover what has happened in the past and incorporate it into a mapping manager for fast and accurate recovery.

With automation, data professionals can meet the above needs at a fraction of the cost of the traditional, manual way. To summarize, just some of the benefits of data automation are:

  • Centralized and standardized code management with all automation templates stored in a governed repository
  • Better quality code and minimized rework
  • Business-driven data movement and transformation specifications
  • Superior data movement job designs based on best practices
  • Greater agility and faster time to value in data preparation, deployment and governance
  • Cross-platform support of scripting languages and data movement technologies

For example, one global pharmaceutical giant reduced cost by 70 percent and generated 95 percent of production code with “zero touch.” With automation, the company improved the time to business value and significantly reduced the costly re-work associated with error-prone manual processes.

Risk Management and Regulatory Compliance

Risk management, specifically around regulatory compliance, is an important use case to demonstrate the true value of data governance.

According to Pörschmann, risk management asks two main questions.

  1. How likely is a specific event to happen?
  2. What is the impact or damage if this event happens? (e.g., cost of repair, cost of reputation, etc.)
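
In risk-matrix terms, these two questions multiply into a single score. The sketch below assumes an illustrative 5x5 matrix with made-up thresholds; actual risk teams calibrate their own likelihood and impact bands:

```python
def risk_score(likelihood: int, impact: int) -> str:
    """Classify a risk on a simple 5x5 matrix (both inputs 1-5).

    The thresholds are illustrative assumptions; real risk teams
    calibrate their own bands for likelihood and monetary impact.
    """
    score = likelihood * impact
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

# e.g., a GDPR data-handling failure judged likely (4) and costly (5):
print(risk_score(4, 5))  # high
```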

“You have to understand the concept or thinking of risk officers or the risk teams,” he says. The risk teams are process-oriented, and they understand how to calculate and how to cover IT risks. But to communicate data risks successfully to the risk management team, you need to understand how they think in terms of the risk matrix.

Take the European Union’s General Data Protection Regulation (GDPR) as an example of a data-related risk. Your team needs to ask, “What is the likelihood that we will fail on data-based activities related to GDPR?” And then, “What can we do from the data side to reduce the impact or the total damage?”

But it’s not easy to design and deploy compliance in an environment that’s not well understood and difficult to maneuver in. Data governance enables organizations to plan and document how they will discover and understand their data within context, track its physical existence and lineage, and maximize its security, quality and value.

With the right technology, organizations can automate and accelerate regulatory compliance in five steps:

  1. Catalog systems. Harvest, enrich/transform and catalog data from a wide array of sources to enable any stakeholder to see the interrelationships of data assets across the organization.
  2. Govern PII “at rest”. Classify, flag and socialize the use and governance of personally identifiable information regardless of where it is stored.
  3. Govern PII “in motion”. Scan, catalog and map personally identifiable information to understand how it moves inside and outside the organization and how it changes along the way.
  4. Manage policies and rules. Govern business terminology in addition to data policies and rules, depicting relationships to physical data catalogs and the applications that use them with lineage and impact analysis views.
  5. Strengthen data security. Identify regulatory risks and guide the fortification of network and encryption security standards and policies by understanding where all personally identifiable information is stored, processed and used.

It’s also important to understand that the benefits of data governance don’t stop with regulatory compliance.

A better understanding of what data you have, where it’s stored and the history of its use and access isn’t only beneficial in fending off non-compliance repercussions. In fact, such an understanding is arguably better put to use proactively.

Data governance improves data quality standards, enables better decision-making and ensures businesses can have more confidence in the data informing those decisions.



Top Data Management Trends for Chief Data Officers (CDOs)

Chief Data Officer (CDO) 2021 Study

The role of chief data officer (CDO) is becoming essential at forward-thinking organizations — especially those in financial services — according to “The Evolving Role of the CDO at Financial Organizations: 2021 Chief Data Officer (CDO) Study” just released by FIMA and sponsored by erwin.

The e-guide takes a deep dive into the evolving role of CDOs at financial organizations, tapping into the minds of more than 100 global financial leaders and C-suite executives to look at the latest trends and provide a roadmap for developing an offensive data management strategy.

Data Governance Is Not Just About Compliance

Interestingly, the report found that 45% of respondents say compliance is now handled so well that it is no longer the top driver for data governance, while 38% say they have fully realized a “governance 2.0” model in which the majority of their compliance concerns are fully automated.

Chief data officers and other data professionals have taken significant steps toward a data governance model that doesn’t just safeguard data but also drives business improvements.

erwin also found this to be the case as revealed in our 2020 “State of Data Governance and Automation” report.

However, while compliance is no longer the top driver of data governance, it still requires a significant investment. According to the CDO report, 88% of organizations devote 40% or more of their data practice’s operating budget to compliance activities.

COVID’s Impact on Data Management

FIMA also looked at 2020 and the pandemic’s impact on data management.

Some financial organizations that were approaching a significant level of data management maturity had to put their initiatives on hold to address more immediate issues. But the pandemic also led some sectors to innovate, moving processes that were once manual to the digital realm.

The research team asked respondents to describe how their data practices were impacted by the need to adapt to changes in the work environment created by COVID-19. “Overall, most respondents said they avoided any catastrophic impact on their data operations. Most of these respondents note the fact that they had been updating their tools and programs ahead of time to prepare for such risks, and those investments inevitably paid off.”

The respondents who did note that the pandemic caused a disruption repeatedly said that they nonetheless managed to “keep everything in check.” As one CIO at an investment bank puts it, “Data practices became more precise and everyone got more conscious as the pandemic reached its first peak. Key programs have been kept in check and have been restarted securely.”

What Keeps CDOs Up at Night

Financial services organizations are usually at the forefront of data management and governance because they operate in such a heavily regulated environment. So it’s worth knowing what’s on those data executives’ minds, even if your organization is in another sector.

For example, the FIMA study indicates that:

  • 70% of CDOs say risk data aggregation is a primary regulatory concern within their IT departments.
  • Compliance is secondary to overall business improvement, but 88% of organizations devote 40%+ of their data practice’s operating budget to it.
  • Lack of downstream visibility into data consumption (69%) and unclear data provenance and tagging information (65%) are significant challenges.
  • They struggle to apply metadata.
  • Manual processes remain.

The e-guide discusses how data executives must not only secure data and meet rigorous data requirements but also find ways to create new business value with it.

All CDOs and other data professionals likely must deal with the challenges mentioned above – plus improve customer outcomes and boost profitability.

Both mitigating risk and unleashing potential are possible with the right tools, including data catalog, data literacy and metadata-driven automation capabilities for data governance and any other data-centric use case.

Harmonizing Data Management and Data Governance Processes

With erwin Data Intelligence by Quest, your organization can harness and activate your data in a single, unified catalog and then make it available to your data communities in context and in line with business requirements.

The solution harmonizes data management and governance processes to fuel an automated, real-time, high-quality data pipeline enterprise stakeholders can tap into for the information they need to achieve results. Such data intelligence leads to faster, smarter decisions to improve overall organizational performance.

Data is governed properly throughout its lifecycle, meeting both offensive and defensive data management needs. erwin Data Intelligence provides total data visibility, end-to-end data lineage and provenance.

To download the full “The Evolving Role of the CDO at Financial Organizations: 2021 Chief Data Officer (CDO) Study,” please visit: https://go.erwin.com/the-evolving-role-of-the-cdo-at-financial-organizations-report.



The What & Why of Data Governance

Modern data governance is a strategic, ongoing and collaborative practice that enables organizations to discover and track their data, understand what it means within a business context, and maximize its security, quality and value.

It is the foundation for regulatory compliance and de-risking operations for competitive differentiation and growth.

However, while digital transformation and other data-driven initiatives are desired outcomes, few organizations know what data they have or where it is, and they struggle to integrate known data in various formats and numerous systems – especially if they don’t have a way to automate those processes.

But when IT-driven data management and business-oriented data governance work together in terms of personnel, processes and technology, decisions can be made and their impacts determined based on a full inventory of reliable information.

Recently, erwin held the first in a six-part webinar series on the practice of data governance and how to proactively deal with its complexities. Led by Frank Pörschmann of iDIGMA GmbH, an IT industry veteran and data governance strategist, it examined “The What & Why of Data Governance.”

The What: Data Governance Defined

Data governance has no standard definition. However, Dataversity defines it as “the practices and processes which help to ensure the formal management of data assets within an organization.”

At erwin by Quest, we further break down this definition by viewing data governance as a strategic, continuous commitment to ensuring organizations are able to discover and track data, accurately place it within the appropriate business context(s), and maximize its security, quality and value.

Mr. Pörschmann asked webinar attendees to stop trying to explain what data governance is to executives and clients. Instead, he suggests they put data governance in real-world scenarios to answer these questions: “What is the problem you believe data governance is the answer to?” Or “How would you recognize having effective data governance in place?”

In essence, Mr. Pörschmann laid out the “enterprise data dilemma,” which stems from three important but difficult questions for an enterprise to answer: What data do we have? Where is it? And how do we get value from it?

Asking how you recognize having effective data governance in place is quite helpful in executive discussions, according to Mr. Pörschmann. And when you talk about that question at a high level, he says, you get a very “simple answer”: the only thing we want is the right data, with the right quality, to the right person, at the right time, at the right cost.

The Why: Data Governance Drivers

Why should companies care about data governance?

erwin’s 2020 State of Data Governance and Automation report found that better decision-making is the primary driver for data governance (62 percent), with analytics secondary (51 percent), and regulatory compliance coming in third (48 percent).

In the webinar, Mr. Pörschmann called out that the drivers of data governance are the same as those for digital transformation initiatives. “This is not surprising at all,” he said. “Because data is one of the success elements of a digital agenda or digital transformation agenda. So without having data governance and data management in place, no full digital transformation will be possible.”

Drivers of data governance

Data Privacy Regulations

While compliance is not the No. 1 driver for data governance, it’s still a major factor – especially since the rollout of the European Union’s General Data Protection Regulation (GDPR) in 2018.

According to Mr. Pörschmann, many decision-makers believe that if they get GDPR right, they’ll be fine and can move onto other projects. But he cautions “this [notion] is something which is not really likely to happen.”

For the EU, he warned, organizations need to prepare for the Digital Single Market, agreed on last year by the European Parliament and Commission. With it come clear definitions and rules on data access and exchange, especially across digital platforms, as well as regulations and instruments to enforce data ownership. He noted, “Companies will be forced to share some specific data which is relevant for public security, i.e., reduction of carbon dioxide. So companies will be forced to classify their data and to find mechanisms to share it with such platforms.”

GDPR is also proving to be the de facto model for data privacy across the United States. The new Virginia Consumer Data Privacy Act, which was modeled on the California Consumer Privacy Act (CCPA), and the California Privacy Rights Act (CPRA) share many of the same requirements as GDPR.

Like CCPA, the Virginia bill would give consumers the right to access their data, correct inaccuracies, and request the deletion of information. Virginia residents also would be able to opt out of data collection.

Nevada, Vermont, Maine, New York, Washington, Oklahoma and Utah also are leading the way with some type of consumer privacy regulation. Several other bills are on the legislative docket in Alabama, Arizona, Florida, Connecticut and Kentucky, all of which follow a similar format to the CCPA.

Stop Wasting Time

In addition to drivers like digital transformation and compliance, it’s really important to look at the effect of poor data on enterprise efficiency/productivity.

Respondents to McKinsey’s 2019 Global Data Transformation Survey reported that an average of 30 percent of their total enterprise time was spent on non-value-added tasks because of poor data quality and availability.

Wasted time is also an unfortunate reality for many data stewards, who spend 80 percent of their time finding, cleaning and reorganizing huge amounts of data, and only 20 percent of their time on actual data analysis.

According to erwin’s 2020 report, about 70 percent of respondents – a combination of roles from data architects to executive managers – said they spent an average of 10 or more hours per week on data-related activities.

The Benefits of erwin Data Intelligence

erwin Data Intelligence by Quest supports enterprise data governance, digital transformation and any effort that relies on data for favorable outcomes.

The software suite combines data catalog and data literacy capabilities for greater awareness of and access to available data assets, guidance on their use, and guardrails to ensure data policies and best practices are followed.

erwin Data Intelligence automatically harvests, transforms and feeds metadata from a wide array of data sources, operational processes, business applications and data models into a central catalog. That metadata is then accessible and understandable via role-based, contextual views, so stakeholders can make strategic decisions based on accurate insights.

You can request a demo of erwin Data Intelligence here.



Are Data Governance Bottlenecks Holding You Back?

Better decision-making has now topped compliance as the primary driver of data governance. However, organizations still encounter a number of bottlenecks that may hold them back from fully realizing the value of their data in producing timely and relevant business insights.

While acknowledging that data governance is about more than risk management and regulatory compliance may indicate that companies are more confident in their data, the data governance practice is nonetheless growing in complexity because of more:

  • Data to handle, much of it unstructured
  • Sources, like IoT
  • Points of integration
  • Regulations

Without an accurate, high-quality, real-time enterprise data pipeline, it will be difficult to uncover the necessary intelligence to make optimal business decisions.

So what’s holding organizations back from fully using their data to make better, smarter business decisions?

Data Governance Bottlenecks

erwin’s 2020 State of Data Governance and Automation report, based on a survey of business and technology professionals at organizations of various sizes and across numerous industries, examined the role of automation in data governance and intelligence efforts. It uncovered a number of obstacles that organizations have to overcome to improve their data operations.

The No. 1 bottleneck, according to 62 percent of respondents, was documenting complete data lineage. Understanding the quality of source data is the next most serious bottleneck (58 percent); followed by finding, identifying and harvesting data (55 percent); and curating assets with business context (52 percent).

The report revealed that all but two of the possible bottlenecks were cited by more than 50 percent of respondents. Clearly, there’s a massive need for a data governance framework to keep these obstacles from stymying enterprise innovation.

As we zeroed in on the bottlenecks of day-to-day operations, 25 percent of respondents said length of project/delivery time was the most significant challenge, followed by data quality/accuracy at 24 percent, time to value at 16 percent, and reliance on developer and other technical resources at 13 percent.


Overcoming Data Governance Bottlenecks

The 80/20 rule describes the unfortunate reality for many data stewards: they spend 80 percent of their time finding, cleaning and reorganizing huge amounts of data and only 20 percent on actual data analysis.

In fact, we found that close to 70 percent of our survey respondents spent an average of 10 or more hours per week on data-related activities, most of it searching for and preparing data.

What can you do to reverse the 80/20 rule and subsequently overcome data governance bottlenecks?

1. Don’t ignore the complexity of data lineage: It’s a risky endeavor to support data lineage using a manual approach, and businesses that attempt it will find it’s not sustainable given data’s constant movement from one place to another via multiple routes – especially when lineage must be documented correctly down to the column level. Adopting automated end-to-end lineage makes it possible to view data movement from the source to reporting structures, providing a comprehensive and detailed view of data in motion.

2. Automate code generation: Alleviate the need for developers to hand code connections from data sources to target schema (see the sketch after this list). Mapping data elements to their sources within a single repository to determine data lineage and harmonize data integration across platforms reduces the need for specialized, technical resources with knowledge of ETL and database procedural code. It also makes it easier for business analysts, data architects, ETL developers, testers and project managers to collaborate for faster decision-making.

3. Use an integrated impact analysis solution: By automating data due diligence for IT, you can deliver operational intelligence to the business. Business users benefit from automating impact analysis to better examine value and prioritize individual data sets. Impact analysis has equal importance to IT for automatically tracking changes and understanding how data from one system feeds other systems and reports. This is an aspect of data lineage, created from technical metadata, ensuring nothing “breaks” along the change train.

4. Put data quality first: Users must have confidence in the data they use for analytics. Automating and matching business terms with data assets and documenting lineage down to the column level are critical to good decision-making. If this approach hasn’t been the case to date, enterprises should take a few steps back to review data quality measures before jumping into automating data analytics.

5. Catalog data using a solution with a broad set of metadata connectors: All data sources will be leveraged, including big data, ETL platforms, BI reports, modeling tools, mainframe, and relational data as well as data from many other types of systems. Don’t settle for a data catalog from an emerging vendor that only supports a narrow swath of newer technologies, and don’t rely on a catalog from a legacy provider that may supply only connectors for standard, more mature data sources.

6. Stress data literacy: You want to ensure that data assets are used strategically. Automation expedites the benefits of data cataloging. Curated internal and external datasets for a range of content authors doubles business benefits and ensures effective management and monetization of data assets in the long-term if linked to broader data governance, data quality and metadata management initiatives. There’s a clear connection to data literacy here because of its foundation in business glossaries and socializing data so all stakeholders can view and understand it within the context of their roles.

7. Make automation the norm across all data governance processes: Too many companies still live in a world where data governance is a high-level mandate, not practically implemented. To fully realize the advantages of data governance and the power of data intelligence, data operations must be automated across the board. Without automated data management, the governance housekeeping load on the business will be so great that data quality will inevitably suffer. Being able to account for all enterprise data and resolve disparity in data sources and silos using manual approaches is wishful thinking.

8. Craft your data governance strategy before making any investments: Gather multiple stakeholders—both business and IT— with multiple viewpoints to discover where their needs mesh and where they diverge and what represents the greatest pain points to the business. Solve for these first, but build buy-in by creating a layered, comprehensive strategy that ultimately will address most issues. From there, it’s on to matching your needs to an automated data governance solution that squares with business and IT – both for immediate requirements and future plans.
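
To illustrate point 2 above, a code generator of this kind boils down to rendering SQL from a mapping specification. The specification format and field names below are assumptions for illustration, not erwin’s actual format:

```python
# Hypothetical source-to-target mapping specification.
mapping = {
    "source": "staging.orders",
    "target": "dw.fact_orders",
    "columns": [
        ("order_id", "order_key", "CAST(order_id AS BIGINT)"),
        ("amount", "order_amount", "ROUND(amount, 2)"),
    ],
}

def generate_insert_select(m: dict) -> str:
    """Render a simple INSERT ... SELECT from a source-to-target mapping."""
    targets = ", ".join(t for _, t, _ in m["columns"])
    exprs = ",\n       ".join(expr for _, _, expr in m["columns"])
    return (f"INSERT INTO {m['target']} ({targets})\n"
            f"SELECT {exprs}\nFROM {m['source']};")

print(generate_insert_select(mapping))
```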

Register now for the first of a new, six-part webinar series on the practice of data governance and how to proactively deal with its complexities: “The What & Why of Data Governance” on Tuesday, Feb. 23 at 3 p.m. GMT/10 a.m. ET.


erwin Positioned as a Leader in Gartner’s 2020 Magic Quadrant for Metadata Management Solutions for Second Year in a Row

erwin has once again been positioned as a Leader in the Gartner “2020 Magic Quadrant for Metadata Management Solutions.”

This year, erwin had the largest move of any player on the Quadrant, moving up significantly in terms of both “Ability to Execute” and “Completeness of Vision.”

This recognition affirms our efforts in developing an integrated platform for enterprise modeling and data intelligence to support data governance, digital transformation and any other effort that relies on data for favorable outcomes.

erwin’s metadata management offering, the erwin Data Intelligence Suite (erwin DI), includes data catalog, data literacy and automation capabilities for greater awareness of and access to data assets, guidance on their use, and guardrails to ensure data policies and best practices are followed.

With erwin DI’s automated, metadata-driven framework, organizations have visibility and control over their disparate data streams – from harvesting to aggregation and integration, including transformation with complete upstream and downstream lineage and all the associated documentation.

We’re super proud of this achievement and the value erwin DI provides.

We invite you to download the report and quadrant graphic.


There’s More to erwin Data Governance Automation Than Meets the AI

Prashant Parikh, erwin’s Senior Vice President of Software Engineering, talks about erwin’s vision to automate every aspect of the data governance journey to increase speed to insights. The clear benefit is that data stewards spend less time building and populating the data governance framework and more time realizing value and ROI from it. 

Industry analysts and other people who write about data governance and automation define it narrowly, with an emphasis on artificial intelligence (AI) and machine learning (ML). Although AI and ML are massive fields with tremendous value, erwin’s approach to data governance automation is much broader.

Automation adds a lot of value by making processes more effective and efficient. For data governance, automation ensures the framework is always accurate and up to date; otherwise the data governance initiative itself falls apart.

From our perspective, the key to data governance success is meeting the needs of both IT and business users in the discovery and application of enterprise “data truths.” We do this through an open, configurable and flexible metamodel across data catalog, business glossary, and self-service data discovery capabilities with built-in automation.

To better explain our vision for automating data governance, let’s look at some of the different aspects of how the erwin Data Intelligence Suite (erwin DI) incorporates automation.

Metadata Harvesting and Ingestion: Automatically harvest, transform and feed metadata from virtually any source to any target to activate it within the erwin Data Catalog (erwin DC). erwin provides this metadata-driven automation through two types of data connectors: 1) erwin Standard Data Connectors for data at rest or JDBC-compliant data sources and 2) optional erwin Smart Data Connectors for data in motion or a broad variety of code types and industry-standard languages, including ELT/ETL platforms, business intelligence reports, database procedural code, testing automation tools, ecosystem utilities and ERP environments.
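
For readers curious about what harvesting metadata from a data source at rest looks like in principle, here is a minimal sketch using SQLAlchemy’s inspector against a local SQLite database. This is a generic illustration of the technique, not erwin’s connector implementation, and the connection string is an assumption:

```python
from sqlalchemy import create_engine, inspect

# Any database SQLAlchemy can reach works in principle; SQLite keeps
# the sketch self-contained. Connection details are assumptions.
engine = create_engine("sqlite:///example.db")
inspector = inspect(engine)

# Harvest table and column metadata into a simple catalog structure.
catalog = {}
for table in inspector.get_table_names():
    catalog[table] = [
        {"name": col["name"], "type": str(col["type"])}
        for col in inspector.get_columns(table)
    ]

print(catalog)  # technical metadata ready to load into a catalog
```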

Data Cataloging: Catalog and sync metadata with data management and governance artifacts according to business requirements in real time. erwin DC helps organizations learn what data they have and where it’s located, including data at rest and in motion. It’s an inventory of the entire metadata universe, able to tell you the data and metadata available for a certain topic so those particular sources and assets can be found quickly for analysis and decision-making.

Data Mapping: erwin DI’s Mapping Manager provides an integrated development environment for creating and maintaining source-to-target mapping and transformation specifications to centrally version control data movement, integration and transformation. Import existing Excel or CSV files, use the drag-and-drop feature to extract the mappings from your ETL scripts, or manually populate the inventory to then be visualized with the lineage analyzer.

Code Generation: Generate ETL/ELT, Data Vault and code for other data integration components with plug-in SDKs to accelerate project delivery and reduce rework.

Data Lineage: Document and visualize how data moves and transforms across your enterprise. erwin DC generates end-to-end data lineage, down to the column level, between repositories and shows data flows from source systems to reporting layers, including intermediate transformation and business logic. Whether you’re a business user or a technical user, you can understand how data travels and transforms from point A to point B.
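
Conceptually, column-level lineage is a directed graph that can be walked in either direction. The sketch below uses hypothetical column names and a plain dictionary to list everything downstream of a source column; it illustrates the idea, not erwin DC’s internals:

```python
# Hypothetical column-level lineage edges: source column -> derived columns.
lineage = {
    "crm.customer.cust_email": ["stg.customer.email"],
    "stg.customer.email": ["dw.dim_customer.email"],
    "dw.dim_customer.email": ["bi.customer_report.email"],
}

def downstream(column: str, graph: dict) -> list:
    """Walk the graph to list every column fed by the given one."""
    reached, stack = [], [column]
    while stack:
        for child in graph.get(stack.pop(), []):
            if child not in reached:
                reached.append(child)
                stack.append(child)
    return reached

print(downstream("crm.customer.cust_email", lineage))
```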

Data Profiling: Easily assess the contents and quality of registered data sets and associate these metrics with harvested metadata as part of ongoing data curation. Find hidden inconsistencies and highlight other potential problems using intelligent statistical algorithms that provide robust validation scores to help correct errors.
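
At its simplest, profiling computes per-column quality metrics over a registered data set. Here is a hedged sketch using pandas, with made-up sample data standing in for a real source:

```python
import pandas as pd

# Hypothetical registered data set; real profiling runs against the source.
df = pd.DataFrame({
    "customer_id": [1, 2, 2, None],
    "email": ["a@x.com", None, "b@x.com", "b@x.com"],
})

# One row of basic quality metrics per column.
profile = pd.DataFrame({
    "null_count": df.isna().sum(),
    "distinct_count": df.nunique(),
    "null_ratio": df.isna().mean().round(2),
})
print(profile)
```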

Business Glossary Management: Curate, associate and govern data assets so all stakeholders can find data relevant to their roles and understand it within a business context. erwin DI’s Business Glossary Manager is a central repository for all terms, policies and rules with out-of-the-box, industry-specific business glossaries with best-practice taxonomies and ontologies.

Semantic and Metadata Associations: erwin AIMatch automatically discovers and suggests relationships and associations between business terms and technical metadata to accelerate the creation and maintenance of governance frameworks.

Sensitive Data Discovery + Mind Mapping: Identify, document and prioritize sensitive data elements, flagging sensitive information to accelerate compliance efforts and reduce data-related risks. For example, we ship out-of-the-box General Data Protection Regulation (GDPR) policies and critical data elements that make up the GDPR policy. 

Additionally, the mind map automatically connects technical and business objects so both sets of stakeholders can easily visualize the organization’s most valuable data assets. It provides a current, holistic and enterprise-wide view of risks, enabling compliance and regulatory managers to quickly update classifications at one level or at higher levels, if necessary. The mind map also displays the sensitivity indicator and allows you to propagate sensitivity across related objects to ensure compliance.

Self-Service Data Discovery: With an easy-to-use UI and flexible search mechanisms, business users can look up information and then perform the required analysis for quick and accurate decision-making. It further enables data socialization and collaboration between data functions within the organization.

Data Modeling Integration: By automatically harvesting your models from erwin Data Modeler, along with all the associated metadata, for ingestion into a data catalog, you ensure a single source of truth. Then you can associate metadata with physical assets, develop a business glossary with model-driven naming standards, and socialize data models with a wider range of stakeholders. This integration also helps business stewards: if your data models follow your naming standard conventions, those conventions can be used to populate the business glossary.

Enterprise Architecture Integration: erwin DI Harvester for Evolve systemically harvests data assets via smart data connectors for a wide range of data sources, both data at rest and data in motion. The harvested metadata integrates with enterprise architecture, providing an accurate picture of the processes, applications and data within an organization.

Why Automating Everything Matters

The bottom line is you do not need to waste precious time, energy and resources to search, manage, analyze, prepare or protect data manually. And unless your data is well-governed, downstream data analysts and data scientists will not be able to generate significant value from it.

erwin DI provides you with the ability to populate your system with the metadata from your enterprise. We help you at every step with built-in, out-of-the-box solutions and automation for every aspect of your data governance journey.

By ensuring your environment always stays controlled, you stay on top of compliance and the tagging of sensitive data, satisfying your unique governance needs with flexibility built into the product and automation guiding you at each step of the way.

erwin DI also enables and encourages collaboration and democratization of the data collected in the system, letting business users mine the data sets – because that is the ultimate value of your data governance solution.

With software-based automation and guidance from humans, the information in your data governance framework will never be outdated or out of sync with your IT and business functions. Stale data can’t fuel a successful data governance program.

Learn more about erwin automation, including what’s on the technology roadmap, by watching “Our Vision to Automate Everything” from the first day of erwin Insights 2020.

Or you can request your own demo of erwin DI.



Automating Data Governance


Automating data governance is key to addressing the exponentially growing volume and variety of data.

erwin CMO Mariann McDonagh recounted erwin’s vision to automate everything on day one of erwin Insights 2020.

Data readiness is everything. Whether driving digital experiences, mapping customer journeys, enhancing digital operations, developing digital innovations, finding new ways to interact with customers, or building digital ecosystems or marketplaces – all of this digital transformation is powered by data.

In a COVID and post-COVID world, organizations need to radically change as we look to reimagine business models and reform the way we approach almost everything.

The State of Data Automation

Data readiness depends on automation to create the data pipeline. Earlier this year, erwin conducted a research project in partnership with Dataversity, the 2020 State of Data Governance and Automation.

We asked participants to “talk to us about data value chain bottlenecks.” They told us their number one challenge is documenting complete data lineage (62%), followed by understanding the quality of the data source (58%).

Two other significant bottlenecks are finding, identifying and harvesting data (55%) and curating data assets with business context and semantics (52%). Every item mentioned here is a recurring theme we hear from our customers in terms of what led them to erwin.

We also looked at data preparation, governance and intelligence to see where organizations might be getting stuck and spending lots of time. We found that project length and slow delivery time are among the biggest inhibitors. Data quality and accuracy are recurring themes as well.

Reliance on developers and technical resources is another barrier to productivity. Even with data scientists in the front office, the lack of people in the back office to harvest and prepare the data means time to value is prolonged.

Last but not least, we looked at the amount of time spent on data activities. The good news is that most organizations spend more than 10 hours a week on data-related activities. The problem is that not enough of that time is spent on analysis because teams are stuck in data prep.

IDC talks about this reverse 80/20 rule: 80% of time and effort is spent on data preparation, with only 20% focused on data analysis. This means 80% of your time is left on the cutting-room floor and can’t be used to drive your business forward.

2020 Data Governance and Automation Report

Data Automation Adds Value

Automating data operations adds a lot of value by making a solution more effective and more powerful. Consider a smart home’s thermostat, smoke detectors, lights, doorbell, etc. You have centralized access and control – from anywhere.

At erwin, our goal is to automate the entire data governance journey, whether top down or bottom up. We’re on a mission to automate all the tasks data stewards typically perform so they spend less time building and populating the data governance framework and more time using the framework to realize value and ROI.

Automation also ensures that the data governance framework is always up to date and never stale. Because without current and accurate data, a data governance initiative will fall apart.

Here are some ways erwin adds value by automating the data governance journey:

  • Metadata ingestion into the erwin Data Intelligence Suite (erwin DI) through our standard data connectors. And you can schedule metadata scans to ensure it’s always refreshed and up to date.
  • erwin Smart Data Connectors address data in motion – how it travels and transforms across the enterprise. These custom software solutions document all the traversing and transformations of data and populate erwin DI’s Metadata Manager with the technical metadata. erwin Smart Data Connectors also document ETL scripts and work with the tool of your choice.
  • erwin Lineage Analyzer puts everything together in an easy-to-understand format, making it easy for both business and technical users to visualize how data is traversing the enterprise, how it is getting transformed and the different hops it is taking along the way.
  • erwin DM Connect for DI makes it easy for metadata to be ingested from erwin Data Modeler to erwin DI. erwin DM customers can take advantage of all the rich metadata created and stored in their erwin data models. With just a couple of clicks, some or all data models can be configured and pushed to erwin DI’s Metadata Manager.

The automation and integration of erwin DM and erwin DI ensures that your data models are always updated and uploaded, providing a single source of truth for your data governance journey.

This is part one of a two-part series on how erwin is automating data governance. Learn more by watching this session from erwin Insights 2020, which now is available on demand.



Doing Cloud Migration and Data Governance Right the First Time

More and more companies are looking at cloud migration.

Migrating legacy data to public, private or hybrid clouds provides creative and sustainable ways for organizations to increase their speed to insights for digital transformation, modernize and scale their processing and storage capabilities, better manage and reduce costs, encourage remote collaboration, and enhance security, support and disaster recovery.

But let’s be honest – no one likes to move. So if you’re going to move your data from on-premise legacy data stores and warehouse systems to the cloud, you should do it right the first time. And as you make this transition, you need to understand what data you have, know where it is located, and govern it along the way.


Automated Cloud Migration

Historically, moving legacy data to the cloud hasn’t been easy or fast.

As organizations look to migrate their data from legacy on-prem systems to cloud platforms, they want to do so quickly and precisely while ensuring the quality and overall governance of that data.

The first step in this process is converting the physical table structures themselves. Then you must bulk load the legacy data. No less daunting, your next step is to re-point or even re-platform your data movement processes.
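
At its core, the structure-conversion step is a type translation between database dialects. The sketch below maps a few legacy, Oracle-style types to generic cloud-warehouse equivalents; the mapping table is an illustrative assumption, and the real target types depend on the platform (Snowflake, Azure, Google Cloud, etc.):

```python
# Illustrative legacy-to-cloud type mapping; actual target types vary
# by cloud platform and should be verified against its documentation.
TYPE_MAP = {
    "NUMBER": "NUMERIC",
    "VARCHAR2": "VARCHAR",
    "CLOB": "TEXT",
    "DATE": "TIMESTAMP",
}

def convert_column(name: str, legacy_type: str) -> str:
    """Translate one legacy column definition to a cloud-friendly one."""
    base = legacy_type.split("(")[0].upper()
    suffix = legacy_type[len(base):]  # keep any (precision, scale)
    return f"{name} {TYPE_MAP.get(base, base)}{suffix}"

print(convert_column("order_total", "NUMBER(12,2)"))
# -> order_total NUMERIC(12,2)
```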

Without automation, this is a time-consuming and expensive undertaking. And you can’t risk false starts or delayed ROI that reduce the business’s confidence and taint this transformational initiative.

By using automated and repeatable capabilities, you can quickly and safely migrate data to the cloud and govern it along the way.

But transforming and migrating enterprise data to the cloud is only half the story – once there, it needs to be governed for completeness and compliance. That means your cloud data assets must be available for use by the right people for the right purposes to maximize their security, quality and value.

Why You Need Cloud Data Governance

Companies everywhere are building innovative business applications to support their customers, partners and employees and are increasingly migrating from legacy to cloud environments. But even with the “need for speed” to market, new applications must be modeled and documented for compliance, transparency and stakeholder literacy.

The desire to modernize technology, over time, leads to acquiring many different systems with various data entry points and transformation rules for data as it moves into and across the organization.

These tools include enterprise service bus (ESB) products; data integration tools; extract, transform and load (ETL) tools; procedural code; application program interfaces (APIs); file transfer protocol (FTP) processes; and even business intelligence (BI) reports that further aggregate and transform data.

With all these diverse metadata sources, it is difficult to understand the complicated web they form, much less get a simple visual flow of data lineage and impact analysis.

Regulatory compliance is also a major driver of data governance (e.g., GDPR, CCPA, HIPAA, SOX, PCI DSS). While progress has been made, enterprises are still grappling with the challenges of deploying comprehensive and sustainable data governance, including reliance on mostly manual processes for data mapping, data cataloging and data lineage.

Introducing erwin Cloud Catalyst

erwin just announced the release of erwin Cloud Catalyst, a suite of automated cloud migration and data governance software and services. It helps organizations quickly and precisely migrate their data from legacy, on-premise databases to the cloud and then govern those data assets throughout their lifecycle.

Only erwin provides software and services that automate the complete cloud migration and data governance lifecycle – from the reverse-engineering and transformation of legacy systems and ETL/ELT code, to bulk data movement, to cataloging and auto-generating lineage. The metadata-driven suite automatically finds, models, ingests, catalogs and governs cloud data assets.

erwin Cloud Catalyst comprises erwin Data Modeler (erwin DM), erwin Data Intelligence (erwin DI) and erwin Smart Data Connectors, working together to simplify and accelerate cloud migration by removing barriers, reducing risks and decreasing time to value for your investments in modern systems such as Snowflake, Microsoft Azure and Google Cloud.

We start with an assessment of your cloud migration strategy to determine what automation and optimization opportunities exist. Then we deliver an automation roadmap and design the appropriate smart data connectors to help your IT services team achieve your future-state cloud architecture, including accelerating data ingestion and ETL conversion.

Once your data reaches the cloud, you’ll have deep and detailed metadata management with full data governance, data lineage and impact analysis. With erwin Cloud Catalyst, you automate these data governance steps:

  • Harvest and catalog cloud data: erwin DM and erwin DI’s Metadata Manager natively scans RDBMS sources to catalog/document data assets.
  • Model cloud data structures: erwin DM converts, modifies and models the new cloud data structures.
  • Map data movement: erwin DI’s Mapping Manager defines data movement and transformation requirements via drag-and-drop functionality.
  • Generate source code: erwin DI’s automation framework generates data migration source code for any ETL/ELT SDK.
  • Test migrated data: erwin DI’s automation framework generates test cases and validation source code to test migrated data.
  • Govern cloud data: erwin DI gives cloud data assets business context and meaning through the Business Glossary Manager, as well as policies and rules for use.
  • Distribute cloud data: erwin DI’s Business User Portal provides self-service access to cloud data asset discovery and reporting tools.

Request an erwin Cloud Catalyst assessment.

And don’t forget to register for erwin Insights 2020 on October 13-14, with sessions on Snowflake, Microsoft and data lake initiatives powered by erwin Cloud Catalyst.
