
Very Meta … Unlocking Data’s Potential with Metadata Management Solutions

Untapped data, if mined, represents tremendous potential for your organization. While there has been a lot of talk about big data over the years, the real hero in unlocking the value of enterprise data is metadata, or the data about the data.

However, most organizations don’t use all the data they’re flooded with to reach deeper conclusions about how to drive revenue, achieve regulatory compliance or make other strategic decisions. They don’t know exactly what data they have or even where some of it is.

Quite honestly, knowing what data you have and where it lives is complicated. And to truly understand it, you need to be able to create and sustain an enterprise-wide view of – and easy access to – the underlying metadata.

This isn’t an easy task. Organizations are dealing with numerous data types and data sources that were never designed to work together, and with data infrastructures that have been cobbled together over time using disparate technologies, poor documentation and little thought for downstream integration.

As a result, the applications and initiatives that depend on a solid data infrastructure may be compromised, leading to faulty analysis and insights.

Metadata Is the Heart of Data Intelligence

A recent IDC Innovators: Data Intelligence Report says that getting answers to such questions as “where is my data, where has it been, and who has access to it” requires harnessing the power of metadata.

Metadata is generated every time data is captured at a source, accessed by users, moved through an organization, and then profiled, cleansed, aggregated, augmented and used for analytics to guide operational or strategic decision-making.

In fact, data professionals spend 80 percent of their time looking for and preparing data and only 20 percent of their time on analysis, according to IDC.

To flip this 80/20 rule, they need an automated metadata management solution for:

• Discovering data – Identify and interrogate metadata from various data management silos.
• Harvesting data – Automate the collection of metadata from various data management silos and consolidate it into a single source (see the sketch after this list).
• Structuring and deploying data sources – Connect physical metadata to specific data models, business terms, definitions and reusable design standards.
• Analyzing metadata – Understand how data relates to the business and what attributes it has.
• Mapping data flows – Identify where to integrate data and track how it moves and transforms.
• Governing data – Develop a governance model to manage standards, policies and best practices and associate them with physical assets.
• Socializing data – Empower stakeholders to see data in one place and in the context of their roles.
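To make the harvesting step concrete, here is a minimal, hypothetical sketch in Python. It uses the standard library’s sqlite3 module as a stand-in for any data management silo; the table, column and source names are illustrative assumptions, not an erwin API.

```python
# Minimal metadata-harvesting sketch: pull table/column metadata from a
# source system into simple catalog records. sqlite3 stands in for any silo.
import sqlite3

def harvest_metadata(conn, source_name):
    """Collect table and column metadata from one source into catalog records."""
    catalog = []
    tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'"
    ).fetchall()
    for (table,) in tables:
        # PRAGMA table_info yields (cid, name, type, notnull, default, pk)
        for _, column, col_type, *_ in conn.execute(f"PRAGMA table_info({table})"):
            catalog.append({
                "source": source_name,  # which silo the metadata came from
                "table": table,
                "column": column,
                "type": col_type,
            })
    return catalog

# Usage: harvest a hypothetical CRM silo and consolidate into one catalog.
crm = sqlite3.connect(":memory:")
crm.execute("CREATE TABLE customers (id INTEGER, email TEXT)")
print(harvest_metadata(crm, "crm"))
```

In practice, the same records would also carry lineage, stewardship and classification attributes, which the later steps in the list build on.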

Addressing the Complexities of Metadata Management

The complexities of metadata management can be addressed with a strong data management strategy coupled with metadata management software to enable the data quality the business requires.

This encompasses data cataloging (integration of data sets from various sources), mapping, versioning, business rules and glossary maintenance, and metadata management (associations and lineage).

erwin has developed the only data intelligence platform that provides organizations with a complete and contextual depiction of the entire metadata landscape.

It is the only solution that can automatically harvest, transform and feed metadata from operational processes, business applications and data models into a central data catalog, and then make it accessible and understandable within the context of role-based views.

erwin’s ability to integrate and continuously refresh metadata from an organization’s entire data ecosystem, including business processes, enterprise architecture and data architecture, forms the foundation for enterprise-wide data discovery, literacy, governance and strategic usage.

Organizations then can take a data-driven approach to business transformation, speed to insights, and risk management.
With erwin, organizations can:

1. Deliver a trusted metadata foundation through automated metadata harvesting and cataloging
2. Standardize data management processes through a metadata-driven approach
3. Organize data-driven projects around centralized metadata for planning and visibility
4. Accelerate data preparation and delivery through metadata-driven automation
5. Master data management platforms through metadata abstraction
6. Accelerate data literacy through contextual metadata enrichment and integration
7. Leverage a metadata repository to derive lineage and impact analysis and to enable auditing and oversight (see the sketch below)
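Lineage and impact analysis fall out naturally once mappings live in a repository: they are just traversals of a graph of source-to-target edges. A hedged sketch, with illustrative asset names rather than erwin’s actual schema:

```python
# Sketch: derive impact analysis from lineage edges stored in a metadata
# repository. Each edge records that data flows from `source` to `target`.
from collections import defaultdict, deque

lineage_edges = [
    ("crm.customers.email", "staging.contacts.email"),
    ("staging.contacts.email", "warehouse.dim_customer.email"),
    ("warehouse.dim_customer.email", "reports.churn_dashboard"),
]

downstream = defaultdict(list)
for source, target in lineage_edges:
    downstream[source].append(target)

def impact_of(asset):
    """Everything downstream of `asset` -- i.e., what a change could break."""
    seen, queue = set(), deque([asset])
    while queue:
        for nxt in downstream[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(impact_of("crm.customers.email"))
# -> the staging, warehouse and reporting assets touched by a change
```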

With erwin Data Intelligence as part of the erwin EDGE platform, you know what data you have, where it is, where it’s been and how it transformed along the way, plus you can understand sensitivities and risks.

With an automated, real-time, high-quality data pipeline, enterprise stakeholders can base strategic decisions on a full inventory of reliable information.

Many of our customers are hard at work addressing metadata management challenges, and that’s why erwin was named a Leader in Gartner’s “2019 Magic Quadrant for Metadata Management Solutions.”



Why EA Needs to Be Part of Your Digital Transformation Strategy

Enterprise architecture (EA) isn’t dead; you’re just using it wrong. Part three of erwin’s digital transformation blog series.

I’ll let you in on a little secret: the rumor of enterprise architecture’s demise has been greatly exaggerated. However, the truth for many of today’s fast-moving businesses is that enterprise architecture fails. But why?

Enterprise architecture is invaluable for internal business intelligence (but is rarely used for real intelligence), governance (but often has a very narrow focus), management insights (but doesn’t typically provide useful insights), and transformation and planning (ok, now we have something!).

In reality, most organizations do not leverage EA teams to their true potential. Instead, they rely on consultants, trends, regulations and legislation to drive strategy.

Why does this happen?

Don’t Put Enterprise Architecture in a Corner

EA has remained in its traditional comfort zone of IT. EA is not only about IT, and yet it lives within IT, focuses on IT and therefore loses its business dimension and support.

It remains isolated and is rarely, if ever, involved in:

  • Assessing, planning and running business transformation initiatives
  • Providing real, enterprise-wide insights
  • Producing actionable initiatives

Instead, it focuses on managing “stuff”:

  • Understanding existing “stuff” by gathering exhaustively detailed information
  • Running “stuff”-deployment projects
  • Managing the cost of “stuff”
  • “Moving to the cloud” (the solution to … everything)


What Prevents Enterprise Architecture from Being Successful?

There are three main reasons why EA has been pigeon-holed:

  1. Lack of trust in the available information
    • Information is mostly collected, entered and maintained manually
    • Automated data collection and connection is costly and error-prone
    • Identification of issues can be very difficult and time-consuming
  2. Lack of true asset governance and collaboration
    • Enterprise architecture becomes ring-fenced within a department
    • Few stakeholders are willing to be actively involved in owning assets and being responsible for them
    • Collaboration on EA is seen as secondary and mostly focused on reports and status updates
  3. Lack of practical insights (insights, analyses and management views)
    • Thinking about what EA can provide is too small and narrow
    • The few analyses performed focus on immediate questions, rarely planning and strategy

Because of this, EA fails to deliver the relevant insights that management needs to make decisions – in a timely manner – and loses its credibility.

But the fact is EA should be, and was designed to be, about actionable insights leading to innovative architecture, not just about managing “stuff”!

Don’t Slow Your Roll. Elevate Your Role.

It’s clear that the role of EA in driving digital transformation needs to be elevated. It needs to be a strategic partner with the business.

According to a McKinsey report on the “Five Enterprise-Architecture Practices That Add Value to Digital Transformations,” EA teams need to:

“Translate architecture issues into terms that senior executives will understand. Enterprise architects can promote closer alignment between business and IT by helping to translate architecture issues for business leaders and managers who aren’t technology savvy. Engaging senior management in discussions about enterprise architecture requires management to dedicate time and actively work on technology topics. It also requires the EA team to explain technology matters in terms that business leaders can relate to.”

With that said, to further change the perception of EA within the organization, you need to serve what management needs. To do this, enterprise architects need to develop innovative business insights – not just IT insights – and make them dynamic. Next, they need to gather information the business can trust and then maintain it.

To provide these strategic insights, you don’t need to focus on everything — you need to focus on what management wants you to focus on. The rest is just IT being IT. And, finally, you need to collaborate – like your life depends on it.

Giving Digital Transformation an Enterprise Architecture EDGE

The job of enterprise architecture is to provide the tools and insights the C-suite and other business stakeholders need to deploy strategies for business transformation.

Let’s say the CEO has a brilliant idea and wants to test it. This is EA’s sweet spot and opportunity to shine. And this is where erwin lives by providing an easy, automated way to deliver collaboration, speed and responsiveness.

erwin is about providing the right information to the right people at the right time. We are focused on empowering the forward-thinking enterprise architect by providing:

  • Superb, near real-time understanding of information
  • Excellent, intuitive collaboration
  • Dynamic, interactive dashboards (vertical and horizontal)
  • Actual, realistic, business-oriented insights
  • Assessment, planning and implementation support



The Importance of EA/BP for Mergers and Acquisitions

Over the past few weeks several huge mergers and acquisitions (M&A) have been announced, including Raytheon and United Technologies, the Salesforce acquisition of Tableau and the Merck acquisition of Tilos Therapeutics.

According to collated research and a Harvard Business Review report, the M&A failure rate sits between 70 and 90 percent. Additionally, McKinsey estimates that around 70 percent of mergers do not achieve their expected “revenue synergies.”

Combining two organizations into one is complicated. And following a merger or acquisition, businesses typically find themselves with duplicate applications and business capabilities that are costly and obviously redundant, making alignment difficult.

Enterprise architecture is essential to successful mergers and acquisitions. It helps alignment by providing a business-outcome perspective for IT and guiding transformation. It also helps define strategy and models, improving interdepartmental cohesion and communication. Roadmaps can be used to provide a common focus throughout the new company, and if existing roadmaps are in place, they can be modified to fit the new landscape.

Additionally, an organization must understand both sets of processes being brought to the table. Without business process modeling, this is near impossible.

In an M&A scenario, businesses need to ensure their systems are fully documented and rationalized. This way, they can comb through their inventories to make more informed decisions about which systems to cut or phase out to operate more efficiently and then deliver the roadmap to enable those changes.


Getting Rid of Duplications

Mergers and acquisitions are daunting. Depending on the size of the businesses, hundreds of systems and processes need to be accounted for, which can be difficult, and even impossible to do in advance.

Enterprise architecture aids in rooting out process and operational duplications, making the new entity more cost efficient. Needless to say, the behind-the-scenes complexities are many and can include discovering that the merging enterprises use the same solution but under different names in different parts of the organizations, for example.

Determinations also may need to be made about whether particular functions that are expected to become business-critical have a solid, scalable base to build upon. If an existing application won’t be able to handle the increased data load and processing, then previously planned investments in it shouldn’t be made.

Gaining business-wide visibility of data and enterprise architecture all within a central repository enables relevant parties across merging companies to work from a single source of information. This provides insights to help determine whether, for example, two equally adept applications of the same nature can continue to be used as the companies merge, because they share common underlying data infrastructures that make it possible for them to interoperate across a single source of synched information.

Or, in another scenario, it may be obvious that it is better to keep only one of the applications because it alone serves as the system of record for what the organization has determined are valuable conceptual data entities in its data model.

At the same time, it can reveal the location of data that might otherwise have been unwittingly discarded with the elimination of an application, enabling it to be moved to a lower-cost storage tier for potential future use.

Knowledge Retention – Avoiding Brain Drain

When employees come and go, as they tend to during mergers and acquisitions, they take critical institutional knowledge with them.

Unlocking knowledge and then putting systems in place to retain that knowledge is one key benefit of business process modeling. Knowledge retention and training has become a pivotal area in which businesses will either succeed or fail.

Different organizations tend to speak different languages. For instance, one company might refer to a customer as “customer,” while another might refer to them as a “client.” Business process modeling is a great way to get everybody in the organization using the same language, referring to things in the same way.

Drawing out this knowledge then allows a centralized and uniform process to be adopted across the company. In any department within any company, individuals and teams develop processes for doing things. Business process modeling extracts all these pieces of information from individuals and teams so they can be turned into centrally adopted processes.

 


 

Ensuring Compliance

Industry and government regulations affect businesses that operate in – or do business with companies in – any number of industries or specific geographies. Industry-specific regulations in areas like healthcare, pharmaceuticals and financial services have been in place for some time.

Now, broader mandates like the European Union’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) require businesses across industries to think about their compliance efforts. Business process modeling helps organizations prove what they are doing to meet compliance requirements and understand how changes to their processes impact compliance efforts (and vice versa).

In highly regulated industries like financial services and pharmaceuticals, where mergers and acquisitions activity is frequent, identifying and standardizing business processes helps withstand the scrutiny of regulatory compliance.

Business process modeling makes it easier to document processes, align documentation within document control and learning management systems, and give R&D employees easy access and intuitive navigation so they can find the information they need.

Introducing Business Architecture

Organizations often interchange the terms “business process architecture” and “enterprise architecture” because both are strategic functions with many interdependencies.

However, business process architecture defines the elements of a business and how they interact with the aim of aligning people, processes, data, technologies and applications. Enterprise architecture defines the structure and operation of an organization with the purpose of determining how it can achieve its current and future objectives most effectively, translating those goals into a blueprint of IT capabilities.

Although both disciplines seek to achieve the organization’s desired outcomes, they have largely operated in silos.

To learn more about how erwin provides modeling and analysis software to support both business process and enterprise architecture practices and enable their broader collaboration, click here.



Keeping Up with New Data Protection Regulations

Keeping up with new data protection regulations can be difficult, and the latest – the General Data Protection Regulation (GDPR) – isn’t the only new data protection regulation organizations should be aware of.

California recently passed a law that gives residents the right to control the data companies collect about them. Some suggest the California Consumer Privacy Act (CCPA), which takes effect January 1, 2020, sets a precedent other states will follow by empowering consumers to set limits on how companies can use their personal information.

In fact, organizations should expect increasing pressure on lawmakers to introduce new data protection regulations. A number of high-profile data breaches and scandals have increased public awareness of the issue.

Facebook was in the news again last week for another major problem around the transparency of its user data, and the tech giant also is reportedly facing 10 GDPR investigations in Ireland – along with Apple, LinkedIn and Twitter.

Some industries, such as healthcare and financial services, have been subject to stringent data regulations for years: GDPR now joins the Health Insurance Portability and Accountability Act (HIPAA), the Payment Card Industry Data Security Standard (PCI DSS) and the Basel Committee on Banking Supervision (BCBS).

Due to these pre-existing regulations, organizations operating within these sectors, as well as insurance, had some of the GDPR compliance bases covered in advance.

Other industries had their own levels of preparedness, based on the nature of their operations. For example, many retailers have robust, data-driven e-commerce operations that are international. Such businesses are bound to comply with varying local standards, especially when dealing with personally identifiable information (PII).

Smaller, more brick-and-mortar-focused retailers may have had to start from scratch.

But starting position aside, every data-driven organization should strive for a better standard of data management – and not just for compliance’s sake. After all, organizations are now realizing that data is one of their most valuable assets.

New Data Protection Regulations – Always Be Prepared

When it comes to new data protection regulations in the face of constant data-driven change, it’s a matter of when, not if.

As they say, the best defense is a good offense. Fortunately, whenever the time comes, the first port of call will always be data governance, so organizations can prepare.

Effective compliance with new data protection regulations requires a robust understanding of the “what, where and who” in terms of data and the stakeholders with access to it (i.e., employees).

The Regulatory Rationale for Integrating Data Management & Data Governance

This is also true for existing data regulations. Compliance is an on-going requirement, so efforts to become compliant should not be treated as static events.

Less than four months before GDPR came into effect, only 6 percent of enterprises claimed they were prepared for it. Many of these organizations will recall a number of stressful weeks – or even months – tidying up their databases and their data management processes and policies.

This time and money was spent reactively, at the expense of proactive efforts to grow the business.

The implementation and subsequent observation of a strong data governance initiative ensures organizations won’t be put on the spot going forward. Should an audit come up, current projects won’t suddenly be derailed in a reenactment of pre-GDPR panic.


Data Governance: The Foundation for Compliance

The first step to compliance with new – or old – data protection regulations is data governance.

A robust and effective data governance initiative ensures an organization understands where security should be focused.

By adopting a data governance platform that enables you to automatically tag sensitive data and track its lineage, you can ensure nothing falls through the cracks.

Your chosen data governance solution should enable you to automate the scanning, detection and tagging of sensitive data (see the sketch after this list) by:

  • Monitoring and controlling sensitive data – Gain better visibility and control across the enterprise to identify data security threats and reduce associated risks.
  • Enriching business data elements for sensitive data discovery – By leveraging a comprehensive mechanism to define business data elements for PII, PHI and PCI across database systems, cloud and Big Data stores, you can easily identify sensitive data based on a set of algorithms and data patterns.
  • Providing metadata and value-based analysis – Simplify the discovery and classification of sensitive data based on metadata and data value patterns and algorithms. Organizations can define business data elements and rules to identify and locate sensitive data, including PII, PHI and PCI.
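To ground the idea, here is a minimal, rule-based sketch of sensitive data tagging in Python. Real platforms combine far more patterns, dictionaries and value-distribution analysis; the regexes, threshold and sample values below are illustrative assumptions, not erwin’s algorithms.

```python
# Sketch: tag a column as sensitive when most of its sampled values match
# a known pattern for PII or PCI data. Patterns here are deliberately simple.
import re

PATTERNS = {
    "PII/email": re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$"),
    "PII/ssn":   re.compile(r"^\d{3}-\d{2}-\d{4}$"),
    "PCI/card":  re.compile(r"^\d{13,16}$"),
}

def tag_column(sample_values, threshold=0.8):
    """Return sensitivity tags when enough sampled values match a pattern."""
    tags = set()
    for tag, pattern in PATTERNS.items():
        hits = sum(bool(pattern.match(v)) for v in sample_values)
        if sample_values and hits / len(sample_values) >= threshold:
            tags.add(tag)
    return tags

print(tag_column(["ann@example.com", "bob@example.org"]))  # -> {'PII/email'}
```

Once tagged, the same metadata records feed the monitoring, lineage and control capabilities described above.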

With these precautionary steps, organizations are primed to respond if a data breach occurs. Having a well-governed data ecosystem with data lineage capabilities means issues can be quickly identified.

Additionally, if any follow-up is necessary – such as with GDPR’s data breach reporting time requirements – it can be handled swiftly and in accordance with regulations.

It’s also important to understand that the benefits of data governance don’t stop with regulatory compliance.

A better understanding of what data you have, where it’s stored and the history of its use and access isn’t only beneficial in fending off non-compliance repercussions. In fact, such an understanding is arguably better put to use proactively.

Data governance improves data quality standards, enables better decision-making and ensures businesses can have more confidence in the data informing those decisions.

The same mechanisms that protect data by controlling its access also can be leveraged to make data more easily discoverable to approved parties – improving operational efficiency.

All in all, the cumulative result of data governance’s influence on data-driven businesses is that it both drives revenue (through greater efficiency) and reduces costs (fewer errors, false starts, etc.).

To learn more about data governance and the regulatory rationale for its implementation, get our free guide here.



What’s Business Process Modeling Got to Do with It? – Choosing A BPM Tool

With business process modeling (BPM) being a key component of data governance, choosing a BPM tool is part of a dilemma many businesses either have or will soon face.

Historically, BPM didn’t necessarily have to be tied to an organization’s data governance initiative.

However, data-driven business and the regulations that oversee it are becoming increasingly extensive, so the need to view data governance as a collective effort – in terms of personnel and the tools that make up the strategy – is becoming harder to ignore.

Data governance also relies on business process modeling and analysis to drive improvement, including identifying business practices susceptible to security, compliance or other risks and adding controls to mitigate exposures.

Choosing a BPM Tool: An Overview

As part of a data governance strategy, a BPM tool aids organizations in visualizing their business processes, system interactions and organizational hierarchies to ensure elements are aligned and core operations are optimized.

The right BPM tool also helps organizations increase productivity, reduce errors and mitigate risks to achieve strategic objectives.

With insights from the BPM tool, you can clarify roles and responsibilities – which in turn should influence an organization’s policies about data ownership and make data lineage easier to manage.

Organizations also can use a BPM tool to identify the staff who function as “unofficial data repositories.” This has both a primary and secondary function:

1. Organizations can document employee processes to ensure vital information isn’t lost should an employee choose to leave.

2. It is easier to identify areas where expertise may need to be bolstered.

Organizations that adopt a BPM tool also enjoy greater process efficiency. This is through a combination of improving existing processes or designing new process flows, eliminating unnecessary or contradictory steps, and documenting results in a shareable format that is easy to understand so the organization is pulling in one direction.


Silo Buster

Understanding the typical use cases for business process modeling is the first step. As with any tech investment, it’s important to understand how the technology will work in the context of your organization/business.

For example, it’s counter-productive to invest in a solution that reduces informational silos only to introduce a new technological silo through a lack of integration.

Ideally, organizations want a BPM tool that works in conjunction with the wider data management platform and data governance initiative – not one that works against them.

That means a solution that supports data imports and integrations from/with external sources, one that enables in-tool collaboration to reduce departmental silos, and most crucially, one that taps into a central metadata repository to ensure consistency across the whole data management and governance initiative.

The lack of a central metadata repository is a far too common thorn in an organization’s side. Without it, they have to juggle multiple versions as changes to the underlying data aren’t automatically updated across the platform.

It also means organizations waste crucial time manually ensuring and maintaining data quality, when an automation framework could achieve the same goal instantaneously, without human error and with greater consistency.

A central metadata repository ensures an organization can acknowledge and get behind a single source of truth. This has a wealth of favorable consequences, including greater cohesion across the organization, better data quality and trust, and faster decision-making with fewer false starts due to plans based on misleading information.

Three Key Questions to Ask When Choosing a BPM Tool

Organizations in the market for a BPM tool should also consider the following:

1. Configurability: Does the tool support the ability to model and analyze business processes with links to data, applications and other aspects of your organization? And how easy is this to achieve?

2. Role-based views: Can the tool develop integrated business models for a single source of truth but with different views for different stakeholders based on their needs – making regulatory compliance more manageable? Does it enable cross-functional and enterprise collaboration through discussion threads, surveys and other social features?

3. Business and IT infrastructure interoperability: How well does the tool integrate with other key components of data governance including enterprise architecture, data modeling, data cataloging and data literacy? Can it aid in providing data intelligence to connect all the pieces of the data management and governance lifecycles?

For more information and to find out how such a solution can integrate with your organization and current data management and data governance initiatives, click here.



Four Use Cases Proving the Benefits of Metadata-Driven Automation

Organizations cannot hope to make the most of a data-driven strategy without at least some degree of metadata-driven automation.

The volume and variety of data have snowballed, and so has its velocity. As such, traditional – and mostly manual – processes associated with data management and data governance have broken down. They are time-consuming and prone to human error, making compliance, innovation and transformation initiatives more complicated, which is less than ideal in the information age.

So it’s safe to say that organizations can’t reap the rewards of their data without automation.

Data scientists and other data professionals can spend up to 80 percent of their time bogged down trying to understand source data or addressing errors and inconsistencies.

That’s time better spent on data analysis.

By implementing metadata-driven automation, organizations across industries can unleash the talents of their highly skilled, well-paid data pros to focus on finding the goods: actionable insights that will fuel the business.


Metadata-Driven Automation in the BFSI Industry

The banking, financial services and insurance industry typically deals with higher data velocity and tighter regulations than most. This bureaucracy is rife with data management bottlenecks.

These bottlenecks are only made worse when organizations attempt to get by with systems and tools that are not purpose-built.

For example, manually managing data mappings for the enterprise data warehouse via MS Excel spreadsheets had become cumbersome and unsustainable for one BFSI company.

After embracing metadata-driven automation and custom code automation templates, it saved hundreds of thousands of dollars in code generation and development costs and achieved more work in less time with fewer resources. ROI on the automation solutions was realized within the first year.
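What might “custom code automation templates” look like in practice? A hedged sketch: mapping metadata drives a template that renders the data-movement code, so the code and the documentation can never drift apart. The mapping format and template below are hypothetical, not erwin’s.

```python
# Sketch: generate a data-movement job from a mapping record via a template,
# instead of hand-writing ETL code from a spreadsheet.
from string import Template

SQL_TEMPLATE = Template(
    "INSERT INTO $target_table ($target_cols)\n"
    "SELECT $source_cols FROM $source_table;"
)

mapping = {
    "source_table": "staging.policies",
    "target_table": "warehouse.dim_policy",
    "columns": [("policy_no", "policy_number"), ("holder", "holder_name")],
}

def generate_load_sql(m):
    """Render one SQL load job from a source-to-target mapping record."""
    sources, targets = zip(*m["columns"])
    return SQL_TEMPLATE.substitute(
        source_table=m["source_table"],
        target_table=m["target_table"],
        source_cols=", ".join(sources),
        target_cols=", ".join(targets),
    )

print(generate_load_sql(mapping))
# INSERT INTO warehouse.dim_policy (policy_number, holder_name)
# SELECT policy_no, holder FROM staging.policies;
```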

Metadata-Driven Automation in the Pharmaceutical Industry

Despite its shortcomings, the Excel spreadsheet method for managing data mappings is common within many industries.

But with the amount of data organizations need to process in today’s business climate, this manual approach makes change management and determining end-to-end lineage a significant and time-consuming challenge.

One global pharmaceutical giant headquartered in the United States experienced such issues until it adopted metadata-driven automation. Then the pharma company was able to scan in all source and target system metadata and maintain it within a single repository. Users now view end-to-end data lineage from the source layer to the reporting layer within seconds.

On the whole, the implementation resulted in extraordinary time savings and a total cost reduction of 60 percent.

Metadata-Driven Automation in the Insurance Industry

Insurance is another industry that has to cope with high data velocity and stringent data regulations. Plus many organizations in this sector find that they’ve outgrown their systems.

For example, an insurance company using a CDMA product to centralize data mappings is probably missing certain critical features, such as versioning, impact analysis and lineage, which adds to costs, times to market and errors.

By adopting metadata-driven automation, organizations can standardize the pre-ETL data mapping process and better manage data integration through the change and release process. As a result, both internal data mapping and cross-functional teams now have easy and fast web-based access to data mappings and valuable information like impact analysis and lineage.

Here is the story of a business that adopted such an approach, achieving operational excellence, an 80 percent reduction in delivery time and ROI within 12 months.

Metadata-Driven Automation for a Non-Profit

Another common issue cited by organizations using manual data mapping is ballooning complexity and subsequent confusion.

Any organization expanding its data-driven focus without sufficiently maturing data management initiative(s) will experience this at some point.

One of the world’s largest humanitarian organizations, with millions of members and volunteers operating all over the world, was confronted with this exact issue.

It recognized the need for a solution to standardize the pre-ETL data mapping process to make data integration more efficient and cost-effective.

With metadata-driven automation, the organization would be able to scan and store metadata and data dictionaries in a central repository, as well as manage the business definitions and data dictionary for legacy systems contributing data to the enterprise data warehouse.

By adopting such an approach, the organization realized time savings across all IT development and cross-functional testing teams. Additionally, they were able to more easily manage mappings, code sets, reference data and data validation rules.

Again, ROI was achieved within a year.

A Universal Solution for Metadata-Driven Automation

Metadata-driven automation is a capability any organization can benefit from – regardless of industry, as demonstrated by the various real-world use cases chronicled here.

The erwin Automation Framework is a key component of the erwin EDGE platform for comprehensive data management and data governance.

With it, data professionals realize these industry-agnostic benefits:

  • Centralized and standardized code management with all automation templates stored in a governed repository
  • Better quality code and minimized rework
  • Business-driven data movement and transformation specifications
  • Superior data movement job designs based on best practices
  • Greater agility and faster time-to-value in data preparation, deployment and governance
  • Cross-platform support of scripting languages and data movement technologies

Learn more about metadata-driven automation as it relates to data preparation and enterprise data mapping.

Join one of our weekly erwin Mapping Manager demos.



Google’s Record GDPR Fine: Avoiding This Fate with Data Governance

The General Data Protection Regulation (GDPR) made its first real impact as Google’s record GDPR fine dominated news cycles.

Historically, fines had peaked at six figures with the U.K.’s Information Commissioner’s Office (ICO) fines of 500,000 pounds ($650,000 USD) against both Facebook and Equifax for their data protection breaches.

Experts predicted an uptick in GDPR enforcement in 2019, and Google’s recent record GDPR fine has brought that to fruition. France’s data privacy enforcement agency hit the tech giant with a $57 million penalty – more than 80 times the steepest ICO fine.

If it can happen to Google, no organization is safe. Many in fact still lag in the GDPR compliance department. Cisco’s 2019 Data Privacy Benchmark Study reveals that only 59 percent of organizations are meeting “all or most” of GDPR’s requirements.

So many more GDPR violations are likely to come to light. And even organizations that are currently compliant can’t afford to let their data governance standards slip.

Data Governance for GDPR

Google’s record GDPR fine makes the rationale for better data governance clear enough. However, the Cisco report offers even more insight into the value of achieving and maintaining compliance.

Organizations with GDPR-compliant security measures are not only less likely to suffer a breach (74 percent vs. 89 percent), but the breaches suffered are less costly too, with fewer records affected.

However, applying such GDPR-compliant provisions can’t be done on a whim; organizations must expand their data governance practices to include compliance.


A robust data governance initiative provides a comprehensive picture of an organization’s systems and the units of data contained or used within them. This understanding encompasses not only the original instance of a data unit but also its lineage and how it has been handled and processed across an organization’s ecosystem.

With this information, organizations can apply the relevant degrees of security where necessary, ensuring expansive and efficient protection from external (i.e., breaches) and internal (i.e., mismanaged permissions) data security threats.

Although data security cannot be wholly guaranteed, these measures can help identify and contain breaches to minimize the fallout.

Looking at Google’s Record GDPR Fine as an Opportunity

The tertiary benefits of GDPR compliance include greater agility and innovation and better data discovery and management. So arguably, the “tertiary” benefits of data governance should take center stage.

Once exploited only by such innovators as Amazon and Netflix, data optimization and governance is now on everyone’s radar.

So organizations need another competitive differentiator.

An enterprise data governance experience (EDGE) provides just that.


This approach unifies data management and data governance, ensuring that the data landscape, policies, procedures and metrics stem from a central source of truth so data can be trusted at any point throughout its enterprise journey.

With an EDGE, the Any2 (any data from anywhere) data management philosophy applies – whether structured or unstructured, in the cloud or on premises. An organization’s data preparation (data mapping), enterprise modeling (business, enterprise and data) and data governance practices all draw from a single metadata repository.

In fact, metadata from a multitude of enterprise systems can be harvested and cataloged automatically. And with intelligent data discovery, sensitive data can be tagged and governed automatically as well – think GDPR as well as HIPAA, BCBS and CCPA.

Organizations without an EDGE can still achieve regulatory compliance, but data silos and the associated bottlenecks are unavoidable without integration and automation – not to mention longer timeframes and higher costs.

To get an “edge” on your competition, consider the erwin EDGE platform for greater control over and value from your data assets.

Data preparation/mapping is a great starting point and a key component of the software portfolio. Join us for a weekly demo.



Data Preparation and Data Mapping: The Glue Between Data Management and Data Governance to Accelerate Insights and Reduce Risks

Organizations have spent a lot of time and money trying to harmonize data across diverse platforms, including cleansing, uploading metadata, converting code, defining business glossaries, tracking data transformations and so on. But the attempts to standardize data across the entire enterprise haven’t produced the desired results.

A company can’t effectively implement data governance – documenting and applying business rules and processes, analyzing the impact of changes and conducting audits – if it fails at data management.

The problem usually starts by relying on manual integration methods for data preparation and mapping. It’s only when companies take their first stab at manually cataloging and documenting operational systems, processes and the associated data, both at rest and in motion, that they realize how time-consuming the entire data prepping and mapping effort is, and why that work is sure to be compounded by human error and data quality issues.

To effectively promote business transformation, as well as fulfill regulatory and compliance mandates, there can’t be any mishaps.

It’s obvious that taking the manual road makes it very challenging to discover and synthesize data that resides in different formats across thousands of unharvested, undocumented databases, applications, ETL processes and procedural code.

Consider the problematic issue of manually mapping source system fields (typically source files or database tables) to target system fields (such as different tables in target data warehouses or data marts).

These source mappings generally are documented across a slew of unwieldy spreadsheets in their “pre-ETL” stage as the input for ETL development and testing. However, the ETL design process often suffers as it evolves because spreadsheet mapping data isn’t updated or may be incorrectly updated thanks to human error. So questions linger about whether transformed data can be trusted.
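The remedy this post argues for is to treat mappings as structured, centrally stored records that can be validated automatically whenever either system changes. A minimal sketch of that idea, with illustrative field names rather than erwin’s mapping format:

```python
# Sketch: source-to-target mappings as structured records that can be
# checked against harvested metadata -- catching the silent failures that
# accumulate in "pre-ETL" spreadsheets.
from dataclasses import dataclass

@dataclass(frozen=True)
class Mapping:
    source_field: str            # e.g. "src_db.orders.cust_id"
    target_field: str            # e.g. "dw.dim_customer.customer_id"
    transform: str = "direct"

# Columns known to exist, as harvested from the source and target systems.
harvested = {"src_db.orders.cust_id", "dw.dim_customer.customer_id"}

def stale_mappings(mappings):
    """Flag rows whose endpoints no longer exist in the harvested metadata."""
    return [m for m in mappings
            if m.source_field not in harvested or m.target_field not in harvested]

rows = [
    Mapping("src_db.orders.cust_id", "dw.dim_customer.customer_id"),
    Mapping("src_db.orders.region", "dw.dim_customer.region"),  # never harvested
]
print(stale_mappings(rows))  # -> only the second, out-of-date row
```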

Data Quality Obstacles

The sad truth is that highly paid knowledge workers like data scientists spend up to 80 percent of their time finding and understanding source data and resolving errors or inconsistencies, rather than analyzing it for real value.

Statistics are similar when looking at major data integration projects, such as data warehousing and master data management, with data stewards challenged to identify and document data lineage and sensitive data elements.

So how can businesses produce value from their data when errors are introduced through manual integration processes? How can enterprise stakeholders gain accurate and actionable insights when data can’t be easily and correctly translated into business-friendly terms?

How can organizations master seamless data discovery, movement, transformation and IT and business collaboration to reverse the ratio of preparation to value delivered?

What’s needed to overcome these obstacles is an automated, real-time, high-quality and metadata-driven pipeline useful for everyone, from data scientists to enterprise architects to business analysts to C-level execs.

Doing so will require a hearty data management strategy and technology for automating the timely delivery of quality data that measures up to business demands.

From there, they need a sturdy data governance strategy and technology to automatically link and sync well-managed data with core capabilities for auditing, statutory reporting and compliance requirements as well as to drive business insights.

Creating a High-Quality Data Pipeline

Working hand-in-hand, data management and data governance provide a real-time, accurate picture of the data landscape, including “data at rest” in databases, data lakes and data warehouses and “data in motion” as it is integrated with and used by key applications. And there’s control of that landscape to facilitate insight and collaboration and limit risk.

With a metadata-driven, automated, real-time, high-quality data pipeline, all stakeholders can access data they understand, trust and are authorized to use. At last, they can base strategic decisions on a full inventory of reliable information.

The integration of data management and governance also supports industry needs to fulfill regulatory and compliance mandates, ensuring that audits are not compromised by the inability to discover key data or by failing to tag sensitive data as part of integration processes.

Data-driven insights, agile innovation, business transformation and regulatory compliance are the fruits of data preparation/mapping and enterprise modeling (business process, enterprise architecture and data modeling) that revolves around a data governance hub.

erwin Mapping Manager (MM) combines data management and data governance processes in an automated flow through the integration lifecycle, from data mapping for harmonization and aggregation to generating the physical embodiment of data lineage – that is, the creation, movement and transformation of transactional and operational data.

Its hallmark is a consistent approach to data delivery (business glossaries connect physical metadata to specific business terms and definitions) and metadata management (via data mappings).



Who to Follow in 2019 for Big Data, Data Governance and GDPR Advice

Experts are predicting a surge in GDPR enforcement in 2019 as regulators begin to crack down on organizations still lagging behind compliance standards.

With this in mind, the erwin team has compiled a list of the most valuable data governance, GDPR and big data blogs and news sources for data management and data governance best-practice advice from around the web.

From regulatory compliance (GDPR, HIPAA, etc.) to driving revenue through proactive data governance initiatives and big data strategies, these accounts cover it all.

Top 7 Data Governance, GDPR and Big Data Blogs and News Sources from Around the Web

Honorable Mention: @BigDataBatman

The Twitter account data professionals deserve, but probably not the one you need right now.

This quirky Twitter bot trawls the web for big data tweets and news stories and substitutes “Batman” for “big data”. If data is the Bane of your existence, this account will serve up some light relief.

 

1. The erwin Expert Network

Twitter | LinkedIn | Facebook | Blog

For anything data management and data governance related, the erwin Experts should be your first port of call.

The team behind the most connected data management and data governance solutions on the market regularly shares best-practice advice in guides, whitepapers, blogs and social media updates.

 

2. GDPR For Online Entrepreneurs (UK, US, CA, AU)

This community-driven Facebook group is a consistent source of insightful information for data-driven businesses.

In addition to sharing data and GDPR-focused articles from around the web, GDPR For Online Entrepreneurs encourages members to seek GDPR advice from its community’s members.

 

3. GDPR General Data Protection Regulation Technology

LinkedIn also has its own community-driven GDPR advice groups. The most active of these is “GDPR General Data Protection Regulation Technology”.

The group aims to be an information hub for anybody responsible for company data, including company leaders, GDPR specialists and consultants, business analysts and process experts. 

4. DBTA

Twitter | LinkedIn | Facebook

Database Trends and Applications is a publication that should be on every data professional’s radar. Alongside news and editorials covering big data, database management, data integration and more, DBTA is also a great source of advice for professionals looking to research buying options.

Their yearly “Trend-Setting Products in Data and Information Management” list and Product Spotlight featurettes can help data professionals put together proposals and give decision-makers peace of mind.

 

5. Dataversity

Twitter | LinkedIn

Dataversity is another excellent source of data management and data governance best practices and think pieces.

In addition to hosting and sponsoring a number of live events throughout the year, the platform is a regular provider of data leadership webinars and training with a library full of webinars available on-demand.

 

6. WIRED

Twitter | LinkedIn | Facebook

Wired is a physical and digital tech magazine that covers all the bases.

For data professionals who are after the latest news and editorials pertaining to data security and a little extra – from innovations in transport to the applications of blockchain – Wired is a great publication to keep on your radar.

 

7. TDAN

Twitter | LinkedIn | Facebook

For those looking for something a little more focused, check out TDAN. A subsidiary of Dataversity, TDAN regularly publishes new editorial content covering data governance, data management, data modeling and big data.


The Unified Data Platform – Connecting Everything That Matters

Businesses stand to gain a lot from a unified data platform.

This decade has seen data-driven leaders dominate their respective markets and inspire other organizations across the board to use data to fuel their businesses, leveraging this strategic asset to create more value below the surface. It’s even been dubbed “the new oil,” but data is arguably more valuable than the analogy suggests.

Data governance (DG) is a key component of the data value chain because it connects people, processes and technology as they relate to the creation and use of data. It equips organizations to better deal with increasing data volumes, the variety of data sources, and the speed at which data is processed.

But for an organization to realize and maximize its true data-driven potential, a unified data platform is required. Only then can all data assets be discovered, understood, governed and socialized to produce the desired business outcomes while also reducing data-related risks.

Benefits of a Unified Data Platform

Data governance can’t succeed in a bubble; it has to be connected to the rest of the enterprise. Whether strategic, such as risk and compliance management, or operational, like a centralized help desk, your data governance framework should span and support the entire enterprise and its objectives, which it can’t do from a silo.

Let’s look at some of the benefits of a unified data platform with data governance as the key connection point.

Understand current and future state architecture with business-focused outcomes:

A unified data platform with a single metadata repository connects data governance to the roles, goals, strategies and KPIs of the enterprise. Through integrated enterprise architecture modeling, organizations can capture, analyze and incorporate the structure and priorities of the enterprise and related initiatives.

This capability allows you to plan, align, deploy and communicate a high-impact data governance framework and roadmap that sets manageable expectations and measures success with metrics important to the business.

Document capabilities and processes and understand critical paths:

A unified data platform connects data governance to what you do as a business and the details of how you do it. It enables organizations to document and integrate their business capabilities and operational processes with the critical data that serves them.

It also provides visibility and control by identifying the critical paths that will have the greatest impacts on the business.

Realize the value of your organization’s data:

A unified data platform connects data governance to specific business use cases. The value of data is realized by combining different elements to answer a business question or meet a specific requirement. Conceptual and logical schemas and models provide a much richer understanding of how data is related and combined to drive business value.


Harmonize data governance and data management to drive high-quality deliverables:

A unified data platform connects data governance to the orchestration and preparation of data to drive the business, governing data throughout the entire lifecycle – from creation to consumption.

Governing the data management processes that make data available is of equal importance. By harmonizing the data governance and data management lifecycles, organizations can drive high-quality deliverables that are governed from day one.

Promote a business glossary for unanimous understanding of data terminology:

A unified data platform connects data governance to the language of the business when discussing and describing data. Understanding the terminology and semantic meaning of data from a business perspective is imperative, but most business consumers of data don’t have technical backgrounds.

A business glossary promotes data fluency across the organization and vital collaboration between different stakeholders within the data value chain, ensuring all data-related initiatives are aligned and business-driven.
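As an illustration of what connecting the language of the business to the data can mean structurally, here is a small hypothetical sketch: a glossary term carries a plain-language definition, a steward, and links to the physical assets that embody it. The fields are assumptions for illustration, not erwin’s data model.

```python
# Sketch: a business glossary entry that resolves a business term to the
# physical metadata behind it, so business and technical users meet in the middle.
from dataclasses import dataclass, field

@dataclass
class GlossaryTerm:
    name: str
    definition: str
    steward: str                           # who owns and maintains the term
    linked_assets: list = field(default_factory=list)

customer = GlossaryTerm(
    name="Customer",
    definition="A party with at least one active contract.",
    steward="data-governance@example.com",
    linked_assets=["crm.customers", "warehouse.dim_customer"],
)

# Business users search by term; technical users land on the physical assets.
print(f"{customer.name}: {customer.definition} -> {customer.linked_assets}")
```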

Instill a culture of personal responsibility for data governance:

A unified data platform is inherently connected to the policies, procedures and business rules that inform and govern the data lifecycle. The centralized management and visibility afforded by linking policies and business rules at every level of the data lifecycle will improve data quality, reduce expensive re-work, and improve the ideation and consumption of data by the business.

Business users will know how to use (and how not to use) data, while technical practitioners will have a clear view of the controls and mechanisms required when building the infrastructure that serves up that data.

Better understand the impact of change:

Data governance should be connected to the use of data across roles, organizations, processes, capabilities, dashboards and applications. Proactive impact analysis is key to efficient and effective data strategy. However, most solutions don’t tell the whole story when it comes to data’s business impact.

By adopting a unified data platform, organizations can extend impact analysis well beyond data stores and data lineage for true visibility into who, what, where and how the impact will be felt, breaking down organizational silos.

Getting the Competitive “EDGE”

The erwin EDGE delivers an “enterprise data governance experience” in which every component of the data value chain is connected.

Now with data mapping, it unifies data preparation, enterprise modeling and data governance to simplify the entire data management and governance lifecycle.

Both IT and the business have access to an accurate, high-quality and real-time data pipeline that fuels regulatory compliance, innovation and transformation initiatives with accurate and actionable insights.