Categories
erwin Expert Blog

Data Governance Makes Data Security Less Scary

Happy Halloween!

Do you know where your data is? What data you have? Who has had access to it?

These can be frightening questions for an organization to answer.

Add to the mix the potential for a data breach followed by non-compliance, reputational damage and financial penalties and a real horror story could unfold.

In fact, we’ve seen some frightening ones play out already:

  1. Google’s record GDPR fine – France’s data privacy enforcement agency hit the tech giant with a $57 million penalty in early 2019 – more than 80 times the steepest fine the U.K.’s Information Commissioner’s Office had levied against both Facebook and Equifax for their data breaches.
  2. In July 2019, British Airways received the biggest GDPR fine to date ($229 million) because the data of more than 500,000 customers was compromised.
  3. Marriott International was fined $123 million, or 1.5 percent of its global annual revenue, because 330 million hotel guests were affected by a breach in 2018.

Now, as Cybersecurity Awareness Month comes to a close – and ghosts and goblins roam the streets – we thought it a good time to resurrect some guidance on how data governance can make data security less scary.

We don’t want you to be caught off guard when it comes to protecting sensitive data and staying compliant with data regulations.


Don’t Scream; You Can Protect Your Sensitive Data

It’s easier to protect sensitive data when you know what it is, where it’s stored and how it needs to be governed.

Data security incidents may be the result of not having a true data governance foundation that makes it possible to understand the context of data – what assets exist and where, the relationship between them and enterprise systems and processes, and how and by what authorized parties data is used.

That knowledge is critical to supporting efforts to keep relevant data secure and private.

Without data governance, organizations don’t have visibility of the full data landscape – linkages, processes, people and so on – to propel more context-sensitive security architectures that can better assure expectations around user and corporate data privacy. In sum, they lack the ability to connect the dots across governance, security and privacy – and to act accordingly.

A solid data governance foundation addresses these fundamental questions:

  1. What private data do we store and how is it used?
  2. Who has access and permissions to the data?
  3. What data do we have and where is it?

Where Are the Skeletons?

Data is a critical asset used to operate, manage and grow a business. While some data is at rest in databases, data lakes and data warehouses, a large percentage is federated and integrated across the enterprise, introducing governance, manageability and risk issues that must be addressed.

Knowing where sensitive data is located and properly governing it with policy rules, impact analysis and lineage views is critical for risk management, data audits and regulatory compliance.

However, when key data isn’t discovered, harvested, cataloged, defined and standardized as part of integration processes, audits may be flawed and therefore your organization is at risk.

Sensitive data – at rest or in motion – that exists in various forms across multiple systems must be automatically tagged, its lineage automatically documented, and its flows depicted so that it is easily found and its usage across workflows easily traced.

Thankfully, tools are available to help automate the scanning, detection and tagging of sensitive data by:

  • Monitoring and controlling sensitive data: Better visibility and control across the enterprise to identify data security threats and reduce associated risks
  • Enriching business data elements for sensitive data discovery: Comprehensively defining business data elements for PII, PHI and PCI across database systems, cloud and Big Data stores to easily identify sensitive data based on a set of algorithms and data patterns
  • Providing metadata and value-based analysis: Discovery and classification of sensitive data based on metadata and data value patterns and algorithms. Organizations can define business data elements and rules to identify and locate sensitive data including PII, PHI, PCI and other sensitive information.
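To make the pattern-based approach concrete, here's a minimal sketch of sensitive-data tagging in Python. The patterns and tag names are illustrative assumptions, not any particular product's rule set; commercial tools ship far broader rule libraries and add column-name heuristics and validation checks.

```python
import re

# Hypothetical pattern set for illustration only; real tools cover
# many more PII, PHI and PCI element types.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def tag_sensitive(column_name, sample_values):
    """Return the set of sensitivity tags whose pattern matches any sampled value."""
    tags = set()
    for tag, pattern in SENSITIVE_PATTERNS.items():
        if any(pattern.search(str(v)) for v in sample_values):
            tags.add(tag)
    return tags

# Scan a column's sampled values and tag what it appears to contain
print(tag_sensitive("contact", ["alice@example.com", "n/a"]))  # {'email'}
```

Tags produced this way can then feed the lineage and policy layers, so a column flagged as containing email addresses is governed accordingly everywhere it flows.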

No Hocus Pocus

Truly understanding an organization’s data, including its value and quality, requires a harmonized approach embedded in business processes and enterprise architecture.

Such an integrated enterprise data governance experience helps organizations understand what data they have, where it is, where it came from, its value, its quality and how it’s used and accessed by people and applications.

An ounce of prevention is worth a pound of cure, sparing you everything from the painstaking process of identifying what happened and why to notifying customers that their data, and thus their trust in your organization, has been compromised.

A well-formed security architecture that is driven by and aligned with data intelligence is your best defense. However, if there is nefarious intent, a hacker will find a way. Being prepared means you can minimize your risk exposure and the damage to your reputation.

Multiple components must be considered to effectively support a data governance, security and privacy trinity. They are:

  1. Data models
  2. Enterprise architecture
  3. Business process models

Creating policies for data handling and accountability and driving culture change so people understand how to properly work with data are two important components of a data governance initiative, as is the technology for proactively managing data assets.

Without the ability to harvest metadata schemas and business terms; analyze data attributes and relationships; impose structure on definitions; and view all data in one place according to each user’s role within the enterprise, businesses will be hard pressed to stay in step with governance standards and best practices around security and privacy.

As a consequence, the private information held within organizations will continue to be at risk.

Organizations suffering data breaches will be deprived of the benefits they had hoped to realize from the money spent on security technologies and the time invested in developing data privacy classifications.

They also may face heavy fines and other financial, not to mention PR, penalties.



2019 Gartner Magic Quadrant for Metadata Management Solutions

erwin positioned as a Leader in Gartner’s “2019 Magic Quadrant for Metadata Management Solutions”

We were excited to announce earlier today that erwin was named as a Leader in the @Gartner_inc “2019 Magic Quadrant for Metadata Management Solutions.”

The report, by the world’s leading research and advisory company, provides a detailed overview of the metadata management market, including evaluations of 18 vendors based on completeness of vision and ability to execute.

The report and the quadrant graphic can be downloaded here: www.erwin.com/GartnerMMMQleader.

This is a significant milestone for erwin on our journey from a data modeling market leader for more than 25 years to an innovator in helping organizations address the enterprise data dilemma: what data do we have and where is it?

We are proud of this achievement and of the value of our Data Intelligence Suite and invite you to download the complete report to learn more.

GET THE REPORT NOW.

Gartner Magic Quadrant Metadata Management Solutions

Gartner, Magic Quadrant for Metadata Management Solutions, Guido De Simoni, Mark Beyer, Ankush Jain, 16 October 2019

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from www.erwin.com/GartnerMMMQleader.


Constructing a Digital Transformation Strategy: Putting the Data in Digital Transformation

Having a clearly defined digital transformation strategy is an essential best practice for successful digital transformation. But what makes a digital transformation strategy viable?

Part Two of the Digital Transformation Journey …

In our last blog on driving digital transformation, we explored how business architecture and process (BP) modeling are pivotal factors in a viable digital transformation strategy.

Enterprise architecture (EA) and BP modeling squeeze risk out of the digital transformation process by helping organizations really understand their businesses as they are today. They provide the ability to identify what challenges and opportunities exist, and a low-cost, low-risk environment in which to model new options and collaborate with key stakeholders to figure out what needs to change, what shouldn't change, and which changes matter most.

Once you’ve determined what part(s) of your business you’ll be innovating, the next step in a digital transformation strategy is using data to get there.

Digital Transformation Examples

Constructing a Digital Transformation Strategy: Data Enablement

Many organizations prioritize data collection as part of their digital transformation strategy. However, few organizations truly understand their data or know how to consistently maximize its value.

If your business is like most, you collect and analyze some data from a subset of sources to make product improvements, enhance customer service, reduce expenses and inform other, mostly tactical decisions.

The real question is: are you reaping all the value you can from all your data? Probably not.

Most organizations don’t use all the data they’re flooded with to reach deeper conclusions or make other strategic decisions. They don’t know exactly what data they have or even where some of it is, and they struggle to integrate known data in various formats and from numerous systems—especially if they don’t have a way to automate those processes.

How does your business become more adept at wringing all the value it can from its data?

The reality is there’s not enough time, people and money for true data management using manual processes. Therefore, an automation framework for data management has to be part of the foundations of a digital transformation strategy.

Your organization won’t be able to take complete advantage of analytics tools to become data-driven unless you establish a foundation for agile and complete data management.

You need automated data mapping and cataloging through the integration lifecycle process, inclusive of data at rest and data in motion.

An automated, metadata-driven framework for cataloging data assets and their flows across the business provides an efficient, agile and dynamic way to generate data lineage from operational source systems (databases, data models, file-based systems, unstructured files and more) across the information management architecture; construct business glossaries; assess what data aligns with specific business rules and policies; and inform how that data is transformed, integrated and federated throughout business processes—complete with full documentation.
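As a rough illustration of the metadata-driven idea, the sketch below records source-to-target mappings as plain metadata and walks them backwards to generate lineage. The table and column names are hypothetical; a real framework would harvest them automatically and handle far richer transformation logic.

```python
# Hypothetical, minimal metadata store: each mapping records a target
# column plus the source columns and transformation that feed it.
mappings = [
    {"target": "dw.customer.full_name",
     "sources": ["crm.contact.first_name", "crm.contact.last_name"],
     "transform": "concat"},
    {"target": "mart.revenue.customer",
     "sources": ["dw.customer.full_name"],
     "transform": "copy"},
]

def upstream_lineage(column, mappings):
    """Walk mappings backwards from a column to its original source columns."""
    direct = [m for m in mappings if m["target"] == column]
    if not direct:
        return {column}  # a base (operational) source with no upstream mapping
    found = set()
    for m in direct:
        for src in m["sources"]:
            found |= upstream_lineage(src, mappings)
    return found

print(upstream_lineage("mart.revenue.customer", mappings))
# traces back through dw.customer.full_name to the two CRM columns
```

Because the mappings are data rather than code buried in ETL scripts, the same records can drive documentation, impact analysis and code generation.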

Without this framework and the ability to automate many of its processes, business transformation will be stymied. Companies, especially large ones with thousands of systems, files and processes, will be particularly challenged by taking a manual approach. Outsourcing these data management efforts to professional services firms only delays schedules and increases costs.

With automation, data quality is systemically assured. The data pipeline is seamlessly governed and operationalized to the benefit of all stakeholders.

Constructing a Digital Transformation Strategy: Smarter Data

Ultimately, data is the foundation of the new digital business model. Companies that have the ability to harness, secure and leverage information effectively may be better equipped than others to promote digital transformation and gain a competitive advantage.

While data collection and storage continues to happen at a dramatic clip, organizations typically analyze and use less than 0.5 percent of the information they take in – that’s a huge loss of potential. Companies have to know what data they have and understand what it means in common, standardized terms so they can act on it to the benefit of the organization.

Unfortunately, organizations spend a lot more time searching for data than actually putting it to work. In fact, data professionals spend 80 percent of their time looking for and preparing data and only 20 percent of their time on analysis, according to IDC.

The solution is data intelligence. It improves IT and business data literacy and knowledge, supporting enterprise data governance and business enablement.

It helps solve the lack of visibility and control over “data at rest” in databases, data lakes and data warehouses and “data in motion” as it is integrated with and used by key applications.

Organizations need a real-time, accurate picture of the metadata landscape to:

  • Discover data – Identify and interrogate metadata from various data management silos.
  • Harvest data – Automate metadata collection from various data management silos and consolidate it into a single source.
  • Structure and deploy data sources – Connect physical metadata to specific data models, business terms, definitions and reusable design standards.
  • Analyze metadata – Understand how data relates to the business and what attributes it has.
  • Map data flows – Identify where to integrate data and track how it moves and transforms.
  • Govern data – Develop a governance model to manage standards, policies and best practices and associate them with physical assets.
  • Socialize data – Empower stakeholders to see data in one place and in the context of their roles.
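The "harvest and consolidate" step above can be sketched simply: metadata pulled from each silo is merged into one catalog keyed by a qualified name so nothing collides. The silo records here are invented for illustration; real harvesters read them from database system catalogs and APIs.

```python
# Hypothetical harvested-metadata records from two silos; a real
# harvester would read these from database system catalogs.
warehouse_meta = [
    {"name": "customer", "columns": ["id", "full_name"], "source": "warehouse"},
]
lake_meta = [
    {"name": "customer", "columns": ["id", "email"], "source": "lake"},
    {"name": "clicks", "columns": ["ts", "url"], "source": "lake"},
]

def consolidate(*silos):
    """Merge per-silo metadata into a single catalog keyed by a qualified
    name, so the same logical asset in two silos stays distinguishable."""
    catalog = {}
    for silo in silos:
        for asset in silo:
            catalog[f'{asset["source"]}.{asset["name"]}'] = asset
    return catalog

catalog = consolidate(warehouse_meta, lake_meta)
print(sorted(catalog))  # ['lake.clicks', 'lake.customer', 'warehouse.customer']
```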

The Right Tools

When it comes to digital transformation (like most things), organizations want to do it right. Do it faster. Do it cheaper. And do it without the risk of breaking everything. To accomplish all of this, you need the right tools.

The erwin Data Intelligence (DI) Suite is the heart of the erwin EDGE platform for creating an “enterprise data governance experience.” erwin DI combines data cataloging and data literacy capabilities to provide greater awareness of and access to available data assets, guidance on how to use them, and guardrails to ensure data policies and best practices are followed.

erwin Data Catalog automates enterprise metadata management, data mapping, reference data management, code generation, data lineage and impact analysis. It efficiently integrates and activates data in a single, unified catalog in accordance with business requirements. With it, you can:

  • Schedule ongoing scans of metadata from the widest array of data sources.
  • Keep metadata current with full versioning and change management.
  • Easily map data elements from source to target, including data in motion, and harmonize data integration across platforms.

erwin Data Literacy provides self-service, role-based, contextual data views. It also provides a business glossary for the collaborative definition of enterprise data in business terms, complete with built-in accountability and workflows. With it, you can:

  • Enable data consumers to define and discover data relevant to their roles.
  • Facilitate the understanding and use of data within a business context.
  • Ensure the organization is fluent in the language of data.

With data governance and intelligence, enterprises can discover, understand, govern and socialize mission-critical information. And because many of the associated processes can be automated, you reduce errors and reliance on technical resources while increasing the speed and quality of your data pipeline to accomplish whatever your strategic objectives are, including digital transformation.

Check out our latest whitepaper, Data Intelligence: Empowering the Citizen Analyst with Democratized Data.



Top 10 Reasons to Automate Data Mapping and Data Preparation

Data preparation is notorious for being the most time-consuming area of data management. It’s also expensive.

“Surveys show the vast majority of time is spent on this repetitive task, with some estimates showing it takes up as much as 80% of a data professional’s time,” according to Information Week. And a Trifacta study notes that overreliance on IT resources for data preparation costs organizations billions.

Collected data can come in a variety of forms, but most often in IT shops around the world, it comes in a spreadsheet, or rather a collection of spreadsheets often numbering in the hundreds or thousands.

Most organizations, especially those competing in the digital economy, don’t have enough time or money for data management using manual processes. And outsourcing is also expensive, with inevitable delays because these vendors are dependent on manual processes too.


Taking the Time and Pain Out of Data Preparation: 10 Reasons to Automate Data Preparation/Data Mapping

  1. Governance and Infrastructure

Data governance and a strong IT infrastructure are critical in the valuation, creation, storage, use, archival and deletion of data. Beyond the simple ability to know where the data came from and whether or not it can be trusted, there is an element of statutory reporting and compliance that often requires a knowledge of how that same data (known or unknown, governed or not) has changed over time.

A design platform that allows for insights like data lineage, impact analysis, full history capture, and other data management features can provide a central hub from which everything can be learned and discovered about the data – whether a data lake, a data vault, or a traditional warehouse.

  2. Eliminating Human Error

In the traditional data management organization, Excel spreadsheets are used to manage the incoming data design, or what is known as the “pre-ETL” mapping documentation, which does not lend itself to any sort of visibility or auditability. In fact, each unit of work represented in these ‘mapping documents’ becomes an independent variable in the overall system development lifecycle, and is therefore nearly impossible to learn from, much less standardize.

The key to creating accuracy and integrity in any exercise is to eliminate the opportunity for human error – which does not mean eliminating humans from the process but incorporating the right tools to reduce the likelihood of error as the human beings apply their thought processes to the work.  
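One way a tool reduces the opportunity for error is by structurally validating mapping rows before anything runs. The sketch below is a minimal, hypothetical validator; the field names are assumptions, and real platforms enforce many more rules (type compatibility, orphaned sources, naming standards).

```python
# A hypothetical "pre-ETL" mapping row as it might be lifted out of a
# spreadsheet; the tool checks it structurally before any code executes.
REQUIRED_FIELDS = {"source_table", "source_column", "target_table", "target_column"}

def validate_mapping_rows(rows):
    """Return human-readable problems instead of silently loading bad rows."""
    problems = []
    seen_targets = set()
    for i, row in enumerate(rows, start=1):
        missing = REQUIRED_FIELDS - row.keys()
        if missing:
            problems.append(f"row {i}: missing {sorted(missing)}")
        target = (row.get("target_table"), row.get("target_column"))
        if target in seen_targets:
            problems.append(f"row {i}: duplicate target {target}")
        seen_targets.add(target)
    return problems

rows = [
    {"source_table": "crm", "source_column": "email",
     "target_table": "dw", "target_column": "email"},
    {"source_table": "crm", "target_table": "dw", "target_column": "email"},
]
print(validate_mapping_rows(rows))
```

The human still decides what maps to what; the tool just refuses to accept a design that is incomplete or self-contradictory.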

  3. Completeness

The ability to scan and import from a broad range of sources and formats, as well as automated change tracking, means that you will always be able to import your data from wherever it lives and track all of the changes to that data over time.
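Automated change tracking can be as simple as fingerprinting each table's metadata on every scan and comparing against the previous scan. This sketch uses a content hash; the metadata shape shown is an assumption for illustration.

```python
import hashlib
import json

def schema_fingerprint(table_meta):
    """Stable hash of a table's metadata; it changes whenever the schema does."""
    canonical = json.dumps(table_meta, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def detect_changes(previous, current):
    """Compare fingerprints from the last scan against a fresh scan."""
    return [name for name, meta in current.items()
            if schema_fingerprint(meta) != previous.get(name)]

# Fingerprints stored after the last scan, then a fresh scan finds a new column
last_scan = {"customer": schema_fingerprint({"columns": ["id", "name"]})}
fresh = {"customer": {"columns": ["id", "name", "email"]}}
print(detect_changes(last_scan, fresh))  # ['customer']
```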

  4. Adaptability

Centralized design, immediate lineage and impact analysis, and change-activity logging mean you will always have the answer readily available, or just a few clicks away. Subsets of data can be identified and generated from predefined templates and generic designs based on standard mapping documents, then pushed through the ETL process for faster processing via automation templates.

  5. Accuracy

Out-of-the-box capabilities to map your data from source to report make reconciliation and validation a snap, with auditability and traceability built in. Build a full array of validation rules that can be cross-checked against the design mappings in a centralized repository.

  6. Timeliness

The ability to be agile and reactive is important – being good at being reactive doesn’t sound like a quality that deserves a pat on the back, but in the case of regulatory requirements, it is paramount.

  7. Comprehensiveness

With access to all of the underlying metadata, source-to-report design mappings, and source and target repositories, you have the power to create reports within your reporting layer that have a traceable origin and can be easily explained to IT, business and regulatory stakeholders.

  8. Clarity

The requirements inform the design, the design platform puts those to action, and the reporting structures are fed the right data to create the right information at the right time via nearly any reporting platform, whether mainstream commercial or homegrown.

  9. Frequency

Adaptation is the key to meeting any frequency interval. Centralized designs and automated ETL patterns that feed your database schemas and reporting structures allow cyclical changes to be made and implemented in half the time of conventional means. Getting beyond the spreadsheet, enabling pattern-based ETL, and automating schema population are ways to ensure you will be ready whenever the need arises to show an audit trail of the change process and clearly articulate who did what and when throughout the system development lifecycle.

  10. Business-Friendly

A user interface designed to be business-friendly means there’s no need to be a data integration specialist to review the common practices outlined and “passively enforced” throughout the tool. Once a process is defined, rules implemented, and templates established, there is little opportunity for error or deviation from the overall process. A diverse set of role-based security options means that everyone can collaborate, learn and audit while maintaining the integrity of the underlying process components.

Faster, More Accurate Analysis with Fewer People

What if you could get more accurate data preparation 50% faster and double your analysis with fewer people?

erwin Mapping Manager (MM) is a patented solution that automates data mapping throughout the enterprise data integration lifecycle, providing data visibility, lineage and governance – freeing up that 80% of a data professional’s time to put that data to work.

With erwin MM, data integration engineers can design and reverse-engineer the movement of data implemented as ETL/ELT operations and stored procedures, building mappings between source and target data assets and designing the transformation logic between them. These designs then can be exported to most ETL and data asset technologies for implementation.

erwin MM is 100% metadata-driven and used to define and drive standards across enterprise integration projects, enable data and process audits, improve data quality, streamline downstream workflows, increase productivity (especially across geographically dispersed teams) and give project teams, IT leadership and management visibility into the ‘real’ status of integration and ETL migration projects.

If an automated data preparation/mapping solution sounds good to you, please check out erwin MM here.



The Data Governance (R)Evolution

Data governance continues to evolve – and quickly.

Historically, Data Governance 1.0 was siloed within IT and mainly concerned with cataloging data to support search and discovery. However, it fell short in adding value because it neglected the meaning of data assets and their relationships within the wider data landscape.

Then the push for digital transformation and Big Data created the need for DG to come out of IT’s shadows – Data Governance 2.0 was ushered in with principles designed for modern, data-driven business. This approach acknowledged the demand for collaborative data governance, the tearing down of organizational silos, and spreading responsibilities across more roles.

But this past year we all witnessed a data governance awakening – or as the Wall Street Journal called it, a “global data governance reckoning.” There was tremendous data drama and resulting trauma – from Facebook to Equifax and from Yahoo to Aetna. The list goes on and on. And then, the European Union’s General Data Protection Regulation (GDPR) took effect, with many organizations scrambling to become compliant.

So where are we today?

Simply put, data governance needs to be a ubiquitous part of your company’s culture. Your stakeholders encompass both IT and business users in collaborative relationships, so that makes data governance everyone’s business.


Data governance underpins data privacy, security and compliance. Additionally, most organizations don’t use all the data they’re flooded with to reach deeper conclusions about how to grow revenue, achieve regulatory compliance, or make strategic decisions. They face a data dilemma: not knowing what data they have or where some of it is—plus integrating known data in various formats from numerous systems without a way to automate that process.

To accelerate the transformation of business-critical information into accurate and actionable insights, organizations need an automated, real-time, high-quality data pipeline. Then every stakeholder—data scientist, ETL developer, enterprise architect, business analyst, compliance officer, CDO and CEO—can fuel the desired outcomes based on reliable information.

Connecting Data Governance to Your Organization

  1. Data Mapping & Data Governance

The automated generation of the physical embodiment of data lineage—the creation, movement and transformation of transactional and operational data for harmonization and aggregation—provides the best route for enabling stakeholders to understand their data, trust it as a well-governed asset and use it effectively. Being able to quickly document lineage for a standardized, non-technical environment brings business alignment and agility to the task of building and maintaining analytics platforms.
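Impact analysis is the forward counterpart of lineage: given documented mappings, a tool can walk downstream to find every asset a change would touch. The edges below are hypothetical examples of such mappings, sketched to show the idea.

```python
# Hypothetical lineage edges: each pair is (source, target) taken from
# documented mappings; impact analysis is a forward walk over them.
edges = [
    ("crm.contact.email", "dw.customer.email"),
    ("dw.customer.email", "mart.campaign.recipient"),
    ("dw.customer.email", "report.weekly.contacts"),
]

def impacted_by(column, edges):
    """Return every downstream asset reachable from a changed column."""
    hit, frontier = set(), {column}
    while frontier:
        nxt = {t for s, t in edges if s in frontier}
        frontier = nxt - hit   # only expand assets not yet visited
        hit |= nxt
    return hit

print(sorted(impacted_by("crm.contact.email", edges)))
# every mart and report fed by the CRM email column shows up
```

Answering "what breaks if this column changes?" in seconds rather than days is a large part of what makes governed lineage worth automating.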

  2. Data Modeling & Data Governance

Data modeling discovers and harvests data schema, and analyzes, represents and communicates data requirements. It synthesizes and standardizes data sources for clarity and consistency to back up governance requirements to use only controlled data. It benefits from the ability to automatically map integrated and cataloged data to and from models, where they can be stored in a central repository for re-use across the organization.

  3. Business Process Modeling & Data Governance

Business process modeling reveals the workflows, business capabilities and applications that use particular data elements. That requires that these assets be appropriately governed components of an integrated data pipeline that rests on automated data lineage and business glossary creation.

  4. Enterprise Architecture & Data Governance

Data flows and architectural diagrams within enterprise architecture benefit from the ability to automatically assess and document the current data architecture. Automatically providing and continuously maintaining business glossary ontologies and integrated data catalogs informs a key part of the governance process.

The EDGE Revolution

By bringing together enterprise architecture, business process, data mapping and data modeling, erwin’s approach to data governance enables organizations to get a handle on how they handle their data and realize its maximum value. With the broadest set of metadata connectors and automated code generation, data mapping and cataloging tools, the erwin EDGE Platform simplifies the total data management and data governance lifecycle.

This single, integrated solution makes it possible to gather business intelligence, conduct IT audits, ensure regulatory compliance and accomplish any other organizational objective by fueling an automated, high-quality and real-time data pipeline.

The erwin EDGE creates an “enterprise data governance experience” that facilitates collaboration between both IT and the business to discover, understand and unlock the value of data both at rest and in motion.

With the erwin EDGE, data management and data governance are unified and mutually supportive of business stakeholders and IT to:

  • Discover data: Identify and integrate metadata from various data management silos.
  • Harvest data: Automate the collection of metadata from various data management silos and consolidate it into a single source.
  • Structure data: Connect physical metadata to specific business terms and definitions and reusable design standards.
  • Analyze data: Understand how data relates to the business and what attributes it has.
  • Map data flows: Identify where to integrate data and track how it moves and transforms.
  • Govern data: Develop a governance model to manage standards and policies and set best practices.
  • Socialize data: Enable stakeholders to see data in one place and in the context of their roles.

If you’ve enjoyed this latest blog series, then you’ll want to request a copy of Solving the Enterprise Data Dilemma, our new e-book that highlights how to answer the three most important data management and data governance questions: What data do we have? Where is it? And how do we get value from it?



Compliance First: How to Protect Sensitive Data

The ability to more efficiently govern, discover and protect sensitive data is something that all prospering data-driven organizations are constantly striving for.

It’s been almost four months since the European Union’s General Data Protection Regulation (GDPR) took effect. While no fines have been issued yet, the Information Commissioner’s Office has received upwards of 500 calls per week since the May 25 effective date.

However, the fine-free streak may soon end, with British Airways (BA) poised to become the first large company to pay a GDPR penalty because of a data breach. The hack at BA in August and early September lasted for more than two weeks, with intruders getting away with account numbers and personal information of customers making reservations on the carrier’s website and mobile app. If regulators conclude that BA failed to take measures to prevent the incident, a significant fine may follow.

Additionally, complaints against Google in the EU have started. For example, internet browser provider Brave claims that Google and other advertising companies expose user data during a process called “bid request.” A data breach occurs because a bid request fails to protect sensitive data against unauthorized access, which is unlawful under the GDPR.

Per Brave’s announcement, bid request data can include the following personal data:

  • What you are reading or watching
  • Your location
  • Description of your device
  • Unique tracking IDs or a “cookie match,” which allows advertising technology companies to try to identify you the next time you are seen, so that a long-term profile can be built or consolidated with offline data about you
  • Your IP address, depending on the version of the “real-time bidding” system
  • Data broker segment ID, if available, which could denote things like your income bracket, age and gender, habits, social media influence, ethnicity, sexual orientation, religion, political leaning, etc., depending on the version of bidding system

Obviously, GDPR isn’t the only regulation that organizations need to comply with. From HIPAA in healthcare to FINRA, PII and BCBS in financial services to the upcoming California Consumer Privacy Act (CCPA) taking effect January 1, 2020, regulatory compliance is part of running – and staying in business.

The common denominator in compliance across all industry sectors is the ability to protect sensitive data. But if organizations are struggling to understand what data they have and where it’s located, how do they protect it? Where do they begin?


Discover and Protect Sensitive Data

Data is a critical asset used to operate, manage and grow a business. While some data is at rest in databases, data lakes and data warehouses, a large percentage is federated and integrated across the enterprise, introducing governance, manageability and risk issues that must be managed.

Knowing where sensitive data is located and properly governing it with policy rules, impact analysis and lineage views is critical for risk management, data audits and regulatory compliance.

However, when key data isn’t discovered, harvested, cataloged, defined and standardized as part of integration processes, audits may be flawed, putting your organization at risk.

Sensitive data – at rest or in motion – that exists in various forms across multiple systems must be automatically tagged, its lineage automatically documented, and its flows depicted so that it is easily found and its usage across workflows easily traced.

Thankfully, tools are available to help automate the scanning, detection and tagging of sensitive data by:

  • Monitoring and controlling sensitive data: Better visibility and control across the enterprise to identify data security threats and reduce associated risks
  • Enriching business data elements for sensitive data discovery: A comprehensive mechanism to define business data elements for PII, PHI and PCI across database systems, cloud and Big Data stores, making it easy to identify sensitive data based on a set of algorithms and data patterns
  • Providing metadata and value-based analysis: Discovery and classification of sensitive data based on metadata and data value patterns and algorithms. Organizations can define business data elements and rules to identify and locate sensitive data including PII, PHI, PCI and other sensitive information.
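The value-based analysis described above can be sketched as a simple pattern-based classifier. This is a minimal illustration of the general technique, not erwin’s implementation; the pattern set, tags and match threshold are all assumptions for the example.

```python
import re

# Hypothetical patterns for a few common PII/PCI data elements (illustrative, not exhaustive)
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$"),
    "US_SSN": re.compile(r"^\d{3}-\d{2}-\d{4}$"),
    "CREDIT_CARD": re.compile(r"^(?:\d[ -]?){13,16}$"),
}

def classify_value(value: str):
    """Return the tag of every sensitive pattern the value matches."""
    return [tag for tag, pattern in SENSITIVE_PATTERNS.items() if pattern.match(value)]

def scan_column(name: str, sample_values):
    """Tag a column as sensitive when enough sampled values match one pattern."""
    hits = {}
    for value in sample_values:
        for tag in classify_value(value):
            hits[tag] = hits.get(tag, 0) + 1
    threshold = max(1, len(sample_values) // 2)  # majority of samples, assumed policy
    return {"column": name,
            "tags": sorted(tag for tag, count in hits.items() if count >= threshold)}

print(scan_column("contact", ["a@example.com", "b@example.org", "n/a"]))
# → {'column': 'contact', 'tags': ['EMAIL']}
```

In practice such value-based scanning is combined with metadata-based rules (column names, data types, source systems), since data values alone can be ambiguous.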


A Regulatory Rationale for Integrating Data Management and Data Governance

Data management and data governance, together, play a vital role in compliance. It’s easier to protect sensitive data when you know where it’s stored, what it is, and how it needs to be governed.

Truly understanding an organization’s data, including the data’s value and quality, requires a harmonized approach embedded in business processes and enterprise architecture. Such an integrated enterprise data governance experience helps organizations understand what data they have, where it is, where it came from, its value, its quality and how it’s used and accessed by people and applications.

But how is all this possible? Again, it comes back to the right technology for IT and business collaboration that will enable you to:

  • Discover data: Identify and interrogate metadata from various data management silos
  • Harvest data: Automate the collection of metadata from various data management silos and consolidate it into a single source
  • Structure data: Connect physical metadata to specific business terms and definitions and reusable design standards
  • Analyze data: Understand how data relates to the business and what attributes it has
  • Map data flows: Identify where to integrate data and track how it moves and transforms
  • Govern data: Develop a governance model to manage standards and policies and set best practices
  • Socialize data: Enable all stakeholders to see data in one place in their own context
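The discover-through-govern steps above can be sketched as a toy metadata catalog that consolidates harvested metadata from silos, links physical columns to business terms, and records lineage. All names and structures here are illustrative assumptions, not a real product API.

```python
from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    """One harvested physical asset, linked to a business term and its lineage."""
    system: str
    table: str
    column: str
    business_term: str = ""                        # set during the "structure" step
    upstream: list = field(default_factory=list)   # lineage, set when mapping flows

catalog: dict[str, CatalogEntry] = {}

def harvest(system: str, table: str, column: str) -> str:
    """Consolidate metadata from a silo into the single catalog; return its key."""
    key = f"{system}.{table}.{column}"
    catalog.setdefault(key, CatalogEntry(system, table, column))
    return key

def structure(key: str, term: str) -> None:
    """Connect physical metadata to a business term."""
    catalog[key].business_term = term

def map_flow(source_key: str, target_key: str) -> None:
    """Record that the target column is derived from the source column."""
    catalog[target_key].upstream.append(source_key)

src = harvest("crm", "customers", "email_addr")
tgt = harvest("warehouse", "dim_customer", "email")
structure(tgt, "Customer Email")
map_flow(src, tgt)
print(catalog[tgt].upstream)  # → ['crm.customers.email_addr']
```

Even this toy version shows why a single consolidated catalog matters: once lineage and business terms live in one place, impact analysis and audits become lookups rather than archaeology.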

Solving the Enterprise Data Dilemma

Due to the adoption of data-driven business, organizations across the board are facing their own enterprise data dilemmas.

This week erwin announced its acquisition of metadata management and data governance provider AnalytiX DS. The combined company touches every piece of the data management and governance lifecycle, enabling enterprises to fuel automated, high-quality data pipelines for faster speed to accurate, actionable insights.

Why Is This a Big Deal?

From digital transformation to AI, and everything in between, organizations are flooded with data. So, companies are investing heavily in initiatives to use all the data at their disposal, but they face some challenges. Chiefly, deriving meaningful insights from their data – and turning them into actions that improve the bottom line.

This enterprise data dilemma stems from three important but difficult questions to answer: What data do we have? Where is it? And how do we get value from it?

Large enterprises use thousands of unharvested, undocumented databases, applications, ETL processes and procedural code that make it difficult to gather business intelligence, conduct IT audits, and ensure regulatory compliance – not to mention accomplish other objectives around customer satisfaction, revenue growth and overall efficiency and decision-making.

The lack of visibility and control over “data at rest” and “data in motion,” combined with the difficulties posed by legacy architectures, means these organizations spend more time finding the data they need than using it to produce meaningful business outcomes.

To remedy this, enterprises need smarter and faster data management and data governance capabilities, including the ability to efficiently catalog and document their systems, processes and the associated data without errors. In addition, business and IT must collaborate outside their traditional operational silos.

But this coveted state of data nirvana isn’t possible without the right approach and technology platform.

Enterprise Data: Making the Data Management-Data Governance Love Connection

Bringing together data management and data governance delivers greater efficiencies to technical users and better analytics to business users. It’s like two sides of the same coin:

  • Data management drives the design, deployment and operation of systems that deliver operational and analytical data assets.
  • Data governance delivers these data assets within a business context, tracks their physical existence and lineage, and maximizes their security, quality and value.

Although these disciplines approach data from different perspectives and are used to produce different outcomes, they have a lot in common. Both require a real-time, accurate picture of an organization’s data landscape, including data at rest in data warehouses and data lakes and data in motion as it is integrated with and used by key applications.

However, creating and maintaining this metadata landscape is challenging because this data in its various forms and from numerous sources was never designed to work in concert. Data infrastructures have been cobbled together over time with disparate technologies, poor documentation and little thought for downstream integration, so the applications and initiatives that depend on data infrastructure are often out-of-date and inaccurate, rendering faulty insights and analyses.

Organizations need to know what data they have and where it’s located, where it came from and how it got there, and what it means in common, standardized business terms. Then they can transform it into useful information they can act on – all while controlling its access.

To support the total enterprise data management and governance lifecycle, they need an automated, real-time, high-quality data pipeline. Then every stakeholder – data scientist, ETL developer, enterprise architect, business analyst, compliance officer, CDO and CEO – can fuel the desired outcomes with reliable information on which to base strategic decisions.

Enterprise Data: Creating Your “EDGE”

At the end of the day, all industries are in the data business and all employees are data people. The success of an organization is not measured by how much data it has, but by how well it’s used.

Data governance enables organizations to use their data to fuel compliance, innovation and transformation initiatives with greater agility, efficiency and cost-effectiveness.

Organizations need to understand their data from different perspectives, identify how it flows through and impacts the business, align this business view with a technical view of the data management infrastructure, and synchronize efforts across both disciplines for accuracy, agility and efficiency in building a data capability that impacts the business in a meaningful and sustainable fashion.

The persona-based erwin EDGE creates an “enterprise data governance experience” that facilitates collaboration between both IT and the business to discover, understand and unlock the value of data both at rest and in motion.

By bringing together enterprise architecture, business process, data mapping and data modeling, erwin’s approach to data governance enables organizations to get a handle on how they handle their data. With the broadest set of metadata connectors and automated code generation, data mapping and cataloging tools, the erwin EDGE Platform simplifies the total data management and data governance lifecycle.

This single, integrated solution makes it possible to gather business intelligence, conduct IT audits, ensure regulatory compliance and accomplish any other organizational objective by fueling an automated, high-quality and real-time data pipeline.

With the erwin EDGE, data management and data governance are unified and mutually supportive, with one hand aware and informed by the efforts of the other to:

  • Discover data: Identify and integrate metadata from various data management silos.
  • Harvest data: Automate the collection of metadata from various data management silos and consolidate it into a single source.
  • Structure data: Connect physical metadata to specific business terms and definitions and reusable design standards.
  • Analyze data: Understand how data relates to the business and what attributes it has.
  • Map data flows: Identify where to integrate data and track how it moves and transforms.
  • Govern data: Develop a governance model to manage standards and policies and set best practices.
  • Socialize data: Enable stakeholders to see data in one place and in the context of their roles.

An integrated solution with data preparation, modeling and governance helps businesses reach data governance maturity – a role-based, collaborative data governance system that serves both IT and business users equally. Such maturity may not happen overnight, but it will ultimately deliver the accurate and actionable insights your organization needs to compete and win.

Your journey to data nirvana begins with a demo of the enhanced erwin Data Governance solution. Register now.



Data Modeling: What the Experts Think

In a recently hosted podcast for Enterprise Management 360˚, Donna Burbank spoke to several experts in data modeling and asked about their views on some of today’s key enterprise data questions.


Why Data vs. Process Is Dead, and Why the Two Should Work Together

Whether a collection of data is useful to a business is largely a matter of perspective. Raw data is like a tangled set of wires: before it can be useful again, it needs to be separated.

We’ve talked before about how Data Modeling and Enterprise Architecture can make data easier to manage and decipher, but arguably, there’s still a piece of the equation missing.

To make the most of Big Data, the data must also be rationalized in the context of the business’s processes – where the data is used, by whom and how. This is what process modeling aims to achieve. Without process modeling, businesses will find it difficult to quantify and prioritize the data from a business perspective – making a truly business outcome-focused approach harder to realize.

So What is Process Modeling?

“Process modeling is the documentation of an organization’s processes designed to enhance company performance,” said Martin Owen, erwin’s VP of Product Management.

It does this by enabling a business to understand what it does and how it does it.

As is commonplace for disciplines of this nature, multiple industry standards provide the basis for how this documentation is handled.

The most common is the Business Process Model and Notation (BPMN) standard. With BPMN, businesses can analyze their processes from different perspectives – such as a human capital perspective, shining a light on the roles and competencies required to perform a process.

Where does Data Modeling tie in with Process Modeling?

Historically, industry analysts have viewed Data and Process Modeling as two competing approaches. However, it’s time that notion was cast aside, as the benefits of the two working in tandem are too great to just ignore.

The secret to making the most of data is being able to see the full picture, as well as drill down – or rather, zoom in – on what’s important in a given context.

From a process perspective, you can see what data is used in process and architecture models. And from a data perspective, users can see the context of the data and the impact of all the places it is used in processes across the enterprise. This provides a more complete view of both the organization and its data, enabling data modelers to create and manage better data models and implement more context-specific data deployments.
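The two perspectives above amount to a cross-reference between process steps and data elements, queried in either direction. The process names and data elements below are hypothetical, and real tools derive this mapping from connected models rather than a hand-built table, but the idea is the same.

```python
# Hypothetical cross-reference: which data elements each process step uses.
process_usage = {
    "Take Order": ["Customer Email", "Product SKU"],
    "Ship Order": ["Shipping Address", "Product SKU"],
    "Invoice": ["Customer Email", "Invoice Amount"],
}

def impacted_processes(data_element: str):
    """Data perspective: every process step touched by a change to this element."""
    return sorted(step for step, elements in process_usage.items()
                  if data_element in elements)

def data_used_by(step: str):
    """Process perspective: every data element a given step depends on."""
    return process_usage.get(step, [])

print(impacted_processes("Customer Email"))  # → ['Invoice', 'Take Order']
print(data_used_by("Ship Order"))            # → ['Shipping Address', 'Product SKU']
```

This is exactly the impact analysis that breaks down when data models and process models live in separate silos: neither side can answer the other’s question.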

It could be that the siloed approach to data and process modeling was born out of the cost of investing in both being too high (for some businesses), the difficulty of aligning the two approaches, or a cocktail of both.

The latter is perhaps the more common culprit. This is evident when we consider the many companies already modeling both their data and their processes. The problem with the current approach is that the two model types are siloed, severing the valuable connections between data and process and making alignment difficult to achieve. Those connections are just as useful as the data itself, so losing them means a business isn’t seeing the full picture.

However, there are now examples of both Data and Process Modeling being united under one banner.

“By bringing both data and process together, we are delivering more value to different stakeholders in the organization by providing more visibility of each domain,” suggested Martin. “Data isn’t locked into the database administrator or architect, it’s now expressed to the business by connections to process models.”

The added visibility provided by a connected data and process modeling approach is essential to a Big Data strategy. And there are further indications this approach will soon be (or already is) more crucial than ever. The Internet of Things (IoT), for example, continues to gain momentum, and with it will come more data, at quicker speeds, from more disparate sources. Businesses will need this sort of approach to govern how that data is moved and united, and to identify and tackle any security issues that arise.



erwin Adds Business Process Modeling to the Platform

We are excited to announce the acquisition of Casewise, a leading provider of business process modeling solutions. Click here to read our press release, issued this afternoon.

Organizations have massive amounts of data scattered throughout their enterprise and often struggle to see this data in the context of where and how it’s being used by enterprise processes, technologies and applications. This visibility and understanding is critical to building the data-driven enterprise and to the solid foundation of every big data initiative. That’s why erwin and Casewise are such a compelling combination.

Casewise’s powerful process modeling, integrated deeply into the erwin data management platform, delivers the following key benefits for customers and the larger big data marketplace:

  • Data Interoperability and Lineage: Through visualization of models, identify where and how data elements are being used throughout the organization
  • Data Impact Analysis: Shows data in context, with the ability to identify applications, processes and technologies that will directly be impacted by changes to the data design
  • Design and Create Database Structures: Create database design and data definition language (DDL) directly from visual models
  • Business Process Simulation: After analyzing existing operations, assess the impact of potential changes before expensive and time-consuming implementations
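The third benefit above, generating DDL directly from a visual model, can be sketched in miniature. The in-memory model format and attribute tuples here are invented for the example; real modeling tools work from far richer metadata and target-specific SQL dialects.

```python
# Hypothetical entity model as a tool might hold it in memory (illustrative only).
model = {
    "entity": "Customer",
    "attributes": [
        ("customer_id", "INTEGER", "PRIMARY KEY"),
        ("email", "VARCHAR(255)", "NOT NULL"),
        ("created_at", "TIMESTAMP", ""),
    ],
}

def to_ddl(entity_model: dict) -> str:
    """Render a CREATE TABLE statement from the logical entity model."""
    columns = ",\n".join(
        f"    {name} {dtype} {constraint}".rstrip()
        for name, dtype, constraint in entity_model["attributes"]
    )
    return f'CREATE TABLE {entity_model["entity"]} (\n{columns}\n);'

print(to_ddl(model))
```

Because the DDL is derived from the model rather than written by hand, the database structure stays consistent with the design, and a change to the model can be re-rendered and diffed against the deployed schema.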

In the less than nine months since we took erwin out of CA and launched erwin, Inc., we’ve completed two transformational acquisitions and developed a very exciting vision and roadmap. erwin is truly building the foundation for the new data-driven enterprise, and we believe this has powerful benefits for our customers and our partners.

Please join us at our upcoming customer and partner webcasts to learn more about our vision, market view and what this means for you.

Register for our customer webcast, Monday, December 12 at 12 PM ET

Register for our partner webcast, Thursday, December 15 at 10 AM ET