
A Guide to CCPA Compliance and How the California Consumer Privacy Act Compares to GDPR

California Consumer Privacy Act (CCPA) compliance shares many of the same requirements as the European Union’s General Data Protection Regulation (GDPR).

While the CCPA has been signed into law, organizations have until Jan. 1, 2020, to enact its mandates. Luckily, many organizations have already laid the regulatory groundwork for it because of their efforts to comply with GDPR.

However, there are some key differences that we’ll explore in the Q&A below.

Data governance, thankfully, provides a framework for compliance with either or both – in addition to other regulatory mandates your organization may be subject to.

CCPA Compliance Requirements vs. GDPR FAQ

Does CCPA apply to not-for-profit organizations? 

No, CCPA compliance only applies to for-profit organizations. GDPR compliance is required for any organization, public or private (including not-for-profit).

What for-profit businesses does CCPA apply to?

The mandate for CCPA compliance applies only if a for-profit organization meets at least one of the following criteria (a simple applicability check is sketched after the list):

  • Has an annual gross revenue exceeding $25 million
  • Collects, sells or shares the personal data of 50,000 or more consumers, households or devices
  • Earns 50% or more of its annual revenue by selling consumers’ personal information
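For readers who think in code, here is a minimal Python sketch of that applicability test. The function name and structure are invented for illustration; the thresholds simply restate the criteria above, and this is not legal advice.

def ccpa_applies(annual_revenue_usd: float,
                 consumers_households_or_devices: int,
                 revenue_share_from_selling_data: float,
                 is_for_profit: bool = True) -> bool:
    """Illustrative check of the CCPA applicability thresholds.

    A for-profit business falls under the CCPA if it meets ANY one
    of the three criteria listed above.
    """
    if not is_for_profit:
        return False
    return (
        annual_revenue_usd > 25_000_000
        or consumers_households_or_devices >= 50_000
        or revenue_share_from_selling_data >= 0.5
    )

# Example: $10M in revenue but data on 80,000 households -> in scope
print(ccpa_applies(10_000_000, 80_000, 0.1))  # True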

Does the CCPA apply outside of California?

As the name suggests, the legislation is designed to protect the personal data of consumers who reside in the state of California.

But like GDPR, CCPA compliance has impacts outside the area of origin. This means businesses located outside of California, but selling to (or collecting the data of) California residents must also comply.

Does the CCPA exclude anything that GDPR doesn’t? 

GDPR encompasses all categories of “personal data,” with no distinctions.

CCPA does make distinctions, particularly when other regulations may overlap. These include:

  • Medical information covered by the Confidentiality of Medical Information Act (CMIA) and the Health Insurance Portability and Accountability Act (HIPAA)
  • Personal information covered by the Gramm-Leach-Bliley Act (GLBA)
  • Personal information covered by the Driver’s Privacy Protection Act (DPPA)
  • Clinical trial data
  • Information sold to or by consumer reporting agencies
  • Publicly available personal information (federal, state and local government records)

What about access requests? 

Under the GDPR, organizations must make any personal data collected from an EU citizen available upon request.

CCPA compliance only requires data collected within the last 12 months to be shared upon request.
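To make that difference concrete, here is a minimal sketch of how an access-request handler might scope the data it returns under each regime. The record structure and the 365-day cutoff are illustrative assumptions, not part of either law’s text.

from datetime import datetime, timedelta

def records_for_access_request(records, regulation, today):
    """Return the records an organization would disclose on request.

    `records` is assumed to be a list of dicts with a `collected_at`
    datetime; the structure is purely illustrative.
    """
    if regulation == "GDPR":
        # GDPR: everything held about the data subject is in scope.
        return list(records)
    if regulation == "CCPA":
        # CCPA: only data collected in the preceding 12 months.
        cutoff = today - timedelta(days=365)
        return [r for r in records if r["collected_at"] >= cutoff]
    raise ValueError(f"Unknown regulation: {regulation}")

records = [{"id": 1, "collected_at": datetime(2017, 3, 1)},
           {"id": 2, "collected_at": datetime(2018, 11, 1)}]
print(records_for_access_request(records, "CCPA", datetime(2019, 1, 1)))
# -> only the record collected within the last 12 months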

Does the CCPA include the right to opt out?

The CCPA, like the GDPR, gives consumers/citizens the right to opt out of the processing of their personal data.

However, CCPA compliance only requires an organization to observe an opt-out request when it comes to the sale of personal data. GDPR does not make any distinctions between “selling” personal data and any other kind of data processing.

To meet CCPA compliance opt-out standards, organizations must provide a “Do Not Sell My Personal Information” link on their home pages.

Does the CCPA require individuals to willingly opt in?

No. Whereas the GDPR requires informed consent before an organization sells an individual’s information, organizations within the scope of the CCPA can still assume consent. The only exception involves the personal information of children (under 16). Children between 13 and 16 can consent for themselves, but if the consumer is a child under 13, a parent or guardian must authorize the sale of that child’s personal data.
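Expressed as a simple decision rule, that consent logic looks roughly like the following sketch. The function and its parameters are hypothetical and only restate the age brackets described above; the general opt-out right is not modeled here.

def can_sell_personal_info(age: int, consumer_opted_in: bool,
                           parent_authorized: bool) -> bool:
    """Illustrative CCPA opt-in rule for selling personal information."""
    if age < 13:
        # Under 13: a parent or guardian must authorize the sale.
        return parent_authorized
    if age < 16:
        # 13 to 15: the child can opt in on their own behalf.
        return consumer_opted_in
    # 16 and over: consent is assumed unless the consumer opts out
    # (the opt-out itself is not modeled in this sketch).
    return True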

What about fines for CCPA non-compliance? 

In theory, fines for CCPA non-compliance are potentially more far-reaching than those of the GDPR because there is no ceiling on total CCPA penalties. Under the GDPR, penalties are capped at 4% of global annual revenue or €20 million, whichever is greater. The GDPR recently resulted in a record fine for Google.

Under the CCPA, organizations can be fined up to $7,500 per violation, but because penalties accrue per violation, there is no ceiling on the total amount.

CCPA compliance is a data governance issue


While the CCPA has a narrower geographic scope and focus than the GDPR, compliance is still a serious effort for organizations within its scope. And as data-driven business continues to expand, so will the pressure on lawmakers to regulate how organizations process data. Consider the Facebook hearings and the subsequent inquiries into Google and Twitter, for example.

Regulatory compliance remains a key driver for data governance. After all, to understand how to meet data regulations, an organization must first understand its data.

An effective data governance initiative should enable just that, by giving an organization the tools to:

  • Discover data: Identify and interrogate metadata from various data management silos
  • Harvest data: Automate the collection of metadata from various data management silos and consolidate it into a single source (a toy example follows this list)
  • Structure data: Connect physical metadata to specific business terms and definitions and reusable design standards
  • Analyze data: Understand how data relates to the business and what attributes it has
  • Map data flows: Identify where to integrate data and track how it moves and transforms
  • Govern data: Develop a governance model to manage standards and policies and set best practices
  • Socialize data: Enable all stakeholders to see data in one place in their own context
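As a rough illustration of the “harvest” and “structure” capabilities above, the Python sketch below consolidates metadata pulled from two hypothetical silos into a single catalog and links each physical column to a business term. All table, column and glossary names are invented for the example; a real implementation would harvest this metadata through adapters rather than hard-code it.

# Hypothetical metadata harvested from two silos (e.g., a CRM database
# and a data warehouse); in practice this would come from adapters.
crm_metadata = [{"table": "customers", "column": "email_addr", "type": "varchar"}]
dwh_metadata = [{"table": "dim_customer", "column": "email", "type": "string"}]

# A small business glossary linking physical columns to business terms.
glossary = {"Customer Email": {("customers", "email_addr"), ("dim_customer", "email")}}

def build_catalog(*sources):
    """Consolidate harvested metadata and attach matching business terms."""
    catalog = []
    for source in sources:
        for col in source:
            terms = [term for term, columns in glossary.items()
                     if (col["table"], col["column"]) in columns]
            catalog.append({**col, "business_terms": terms})
    return catalog

for entry in build_catalog(crm_metadata, dwh_metadata):
    print(entry)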

A Regulatory EDGE

The erwin EDGE software platform creates an “enterprise data governance experience” to transform how all stakeholders discover, understand, govern and socialize data assets. It includes enterprise modeling, data cataloging and data literacy capabilities, giving organizations visibility and control over their disparate architectures and all the supporting data.

Both IT and business stakeholders have role-based, self-service access to the information they need to collaborate in making strategic decisions. And because many of the associated processes can be automated, you reduce errors and increase the speed and quality of your data pipeline. This data intelligence unlocks knowledge and value.

The erwin EDGE provides the most agile, efficient and cost-effective means of launching and sustaining a strategic and comprehensive data governance initiative, whether you wish to deploy on premises or in the cloud. But you don’t have to implement every component of the erwin EDGE all at once to see strategic value.

Because of the platform’s federated design, you can address your organization’s most urgent needs, such as regulatory compliance, first. Then you can proactively address other organization objectives, such as operational efficiency, revenue growth, increasing customer satisfaction and improving overall decision-making.

You can learn more about leveraging data governance to navigate the changing tide of data regulations here.

Are you compliant with data regulations?


Data Governance Stock Check: Using Data Governance to Take Stock of Your Data Assets

For regulatory compliance (e.g., GDPR) and to ensure peak business performance, organizations often bring consultants on board to help take stock of their data assets. This sort of data governance “stock check” is important but can be arduous without the right approach and technology. That’s where data governance comes in …

While most companies hold the lion’s share of operational data within relational databases, it also can live in many other places and various other formats. Therefore, organizations need the ability to manage any data from anywhere, what we call our “any-squared” (Any2) approach to data governance.

Any2 first requires an understanding of the ‘3Vs’ of data – volume, variety and velocity – especially in the context of the data lifecycle. It also requires knowing how to leverage the key capabilities of data governance – data cataloging, data literacy, business process, enterprise architecture and data modeling – so that data can be leveraged at different stages for optimum security, quality and value.

Following are two examples that illustrate the data governance stock check, including the Any2 approach in action, based on real consulting engagements.


Data Governance “Stock Check” Case 1: The Data Broker

This client trades in information. Therefore, the organization needed to catalog the data it acquires from suppliers, ensure its quality, classify it, and then sell it to customers. The company wanted to assemble the data in a data warehouse and then provide controlled access to it.

The first step in helping this client involved taking stock of its existing data. We set up a portal so data assets could be registered via a form with basic questions, and then a central team received the registrations, reviewed and prioritized them. Entitlement attributes also were set up to identify and profile high-priority assets.

A number of best practices and technology solutions were used to establish the workflow for registering and classifying data feeds (a simplified version of this pipeline is sketched after the steps):

1. The underlying metadata is harvested followed by an initial quality check. Then the metadata is classified against a semantic model held in a business glossary.

2. After this classification, a second data quality check is performed based on the best-practice rules associated with the semantic model.

3. Profiled assets are loaded into a historical data store within the warehouse, with data governance tools generating its structure and data movement operations for data loading.

4. We developed a change management program to make all staff aware of the information brokerage portal and the importance of using it. The portal presents a catalog of data assets, all classified against the semantic model and enriched with data quality metrics, making it easy to see where each asset is located within the data warehouse.

5. Adopting this portal, where data is registered and classified against an ontology, enables the client’s customers to shop for data by asset or by meaning (e.g., “what data do you have on X topic?”) and then drill down through the taxonomy or across an ontology. Next, they raise a request to purchase the desired data.
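A heavily simplified Python sketch of that registration-and-classification pipeline appears below. The step functions mirror the list above, but the quality rule, semantic model and return values are stand-ins invented for the example rather than the client’s actual implementation.

def harvest_metadata(feed):
    """Step 1: pull basic metadata for a registered data feed (stubbed)."""
    return {"feed": feed, "columns": ["id", "email", "postcode"]}

def quality_check(metadata, rule):
    """Steps 1 and 2: return columns that fail the supplied quality rule."""
    return [c for c in metadata["columns"] if not rule(c)]

def classify(metadata, semantic_model):
    """Steps 1 and 2: map each column onto a term in the business glossary."""
    return {c: semantic_model.get(c, "UNCLASSIFIED") for c in metadata["columns"]}

def register_feed(feed, semantic_model, rule):
    """End-to-end registration: harvest, check, classify, then load."""
    metadata = harvest_metadata(feed)
    issues = quality_check(metadata, rule)
    classification = classify(metadata, semantic_model)
    # Step 3: in the real engagement the profiled asset would now be loaded
    # into a historical store in the warehouse; here we just return the record.
    return {"metadata": metadata, "issues": issues, "classification": classification}

semantic_model = {"email": "Customer Email", "postcode": "Customer Address"}
print(register_feed("supplier_feed_42", semantic_model, rule=lambda c: c != ""))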

This consulting engagement and technology implementation increased data accessibility and capitalization. Information is registered within a central portal through an approved workflow, and then customers shop for data either from a list of physical assets or by information content, with purchase requests also going through an approval workflow. This, among other safeguards, ensures data quality.


Data Governance “Stock Check” Case 2: Tracking Rogue Data

This client is a geographically dispersed organization that ran many of its key processes in Microsoft Excel™ spreadsheets. It was planning to move to Office 365™ and was concerned about regulatory compliance, including GDPR mandates.

Knowing that electronic documents are heavily used in key business processes and distributed across the organization, this company needed to replace risky manual processes with centralized, automated systems.

A key part of the consulting engagement was to understand what data assets were in circulation and how they were used by the organization. Then process chains could be prioritized for automation, and specifications outlined for the systems that would replace them.

This organization also adopted a central portal that allowed employees to register data assets. The associated change management program raised awareness of data governance across the organization and the importance of data registration.

For each asset, information was captured and reviewed as part of a workflow. Prioritized assets were then chosen for profiling, enabling metadata to be reverse-engineered before being classified against the business glossary.

Additionally, assets that were part of a process chain were gathered and modeled with enterprise architecture (EA) and business process (BP) modeling tools for impact analysis.

High-level requirements for new systems then could be defined in the EA/BP tools and prioritized on a project list. For the remaining assets, decisions could be made on whether they could safely be placed in the cloud and whether macros would be required.

In this case, the adoption of purpose-built data governance solutions helped build an understanding of the data assets in play, including information about their usage and content to aid in decision-making.

This client then had a good handle on the “what” and “where” in terms of sensitive data stored in its systems. It also better understood how this sensitive data was being used and by whom, helping reduce regulatory risks like those associated with GDPR.

In both scenarios, we cataloged data assets and mapped them to a business glossary. The glossary acts as a classification scheme that helps govern and locate data, making it both more accessible and more valuable. This governance framework reduces risk and protects an organization’s most valuable or sensitive data assets.

Focused on producing meaningful business outcomes, the erwin EDGE platform was pivotal in achieving these two clients’ data governance goals – including the infrastructure to undertake a data governance stock check. They used it to create an “enterprise data governance experience” not just for cataloging data and other foundational tasks, but also for a competitive “EDGE” in maximizing the value of their data while reducing data-related risks.

To learn more about the erwin EDGE data governance platform and how it aids in undertaking a data governance stock check, register for our free, 30-minute demonstration here.


Five Benefits of an Automation Framework for Data Governance

Organizations are responsible for governing more data than ever before, making a strong automation framework a necessity. But what exactly is an automation framework and why does it matter?

In most companies, an incredible amount of data flows from multiple sources in a variety of formats and is constantly being moved and federated across a changing system landscape.

Often these enterprises are heavily regulated, so they need a well-defined data integration model that helps avoid data discrepancies and removes barriers to enterprise business intelligence and other meaningful use.

IT teams need the ability to smoothly generate hundreds of mappings and ETL jobs. They need their data mappings to fall under governance and audit controls, with instant access to dynamic impact analysis and lineage.

With an automation framework, data professionals can meet these needs at a fraction of the cost of the traditional manual way.

In data governance terms, an automation framework refers to a metadata-driven universal code generator that works hand in hand with enterprise data mapping for:

  • Pre-ETL enterprise data mapping
  • Governing metadata
  • Governing and versioning source-to-target mappings throughout the lifecycle
  • Data lineage, impact analysis and business rules repositories
  • Automated code generation (illustrated in the sketch after this list)
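To make “metadata-driven code generator” concrete, here is a toy Python example that turns a source-to-target mapping – the kind of metadata the framework governs and versions – into an INSERT … SELECT statement. The mapping format and the generated SQL are invented for illustration and are not erwin’s actual templates or output.

# A toy source-to-target mapping: the metadata that drives code generation.
mapping = {
    "source_table": "stg_orders",
    "target_table": "dw_orders",
    "columns": [
        {"source": "ord_id",  "target": "order_id",   "rule": "CAST({src} AS BIGINT)"},
        {"source": "ord_amt", "target": "amount_usd", "rule": "ROUND({src}, 2)"},
    ],
}

def generate_load_sql(m):
    """Generate an INSERT ... SELECT statement from a source-to-target mapping."""
    select_exprs = ", ".join(
        c["rule"].format(src=c["source"]) + f" AS {c['target']}"
        for c in m["columns"]
    )
    targets = ", ".join(c["target"] for c in m["columns"])
    return (f"INSERT INTO {m['target_table']} ({targets})\n"
            f"SELECT {select_exprs}\nFROM {m['source_table']};")

print(generate_load_sql(mapping))

Because the SQL is produced entirely from metadata, the same mapping record can be versioned, audited and regenerated whenever the source or target changes.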

Such automation enables organizations to bypass bottlenecks, including human error and the time required to complete these tasks manually.

In fact, being able to rely on automated and repeatable processes can result in up to 50 percent in design savings, up to 70 percent conversion savings and up to 70 percent acceleration in total project delivery.

So without further ado, here are the five key benefits of an automation framework for data governance.


Benefits of an Automation Framework for Data Governance

  1. Creates simplicity, reliability, consistency and customization for the integrated development environment.

Code automation templates (CATs) can be created – for virtually any process and any tech platform – using the SDK scripting language or the solution’s published libraries to completely automate common, manual data integration tasks.

CATs are designed and developed by senior automation experts to ensure they are compliant with industry or corporate standards as well as with an organization’s best practice and design standards.

The 100-percent metadata-driven approach is critical to creating reliable and consistent CATs.

It is possible to scan, pull in and configure metadata sources and targets using standard or custom adapters and connectors for databases, ERP, cloud environments, files, data modeling, BI reports and Big Data to document data catalogs, data mappings, ETL (XML code) and even SQL procedures of any type.

  2. Provides blueprints anyone in the organization can use.

With these blueprints, teams can stage DDL from source metadata for the target DBMS; profile and test SQL for test automation of data integration projects; and generate source-to-target mappings and ETL jobs for leading ETL tools, among other capabilities.

It also can populate and maintain Big Data sets by generating Pig, Sqoop, MapReduce, Spark, Python scripts and more.
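As a minimal illustration of “staging DDL from source metadata,” the snippet below emits a CREATE TABLE statement for a staging schema from a described source table. The source metadata, type mapping and schema name are assumptions made for the example, not the framework’s real adapters.

# Hypothetical source metadata, e.g. harvested from a scan of the source system.
source_table = {
    "name": "customers",
    "columns": [("customer_id", "NUMBER"), ("email", "VARCHAR2(255)")],
}

# Naive source-to-target type mapping for the staging DBMS (illustrative only).
TYPE_MAP = {"NUMBER": "BIGINT", "VARCHAR2(255)": "VARCHAR(255)"}

def stage_ddl(table, schema="stg"):
    """Generate staging-table DDL from harvested source metadata."""
    cols = ",\n  ".join(f"{name} {TYPE_MAP.get(dtype, dtype)}"
                        for name, dtype in table["columns"])
    return f"CREATE TABLE {schema}.{table['name']} (\n  {cols}\n);"

print(stage_ddl(source_table))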

  3. Incorporates data governance into the system development process.

An organization can achieve a more comprehensive and sustainable data governance initiative than it ever could with a homegrown solution.

An automation framework’s ability to automatically create, version, manage and document source-to-target mappings matters greatly, both to data governance maturity and to a shorter time to value.

This eliminates duplication that occurs when project teams are siloed, as well as prevents the loss of knowledge capital due to employee attrition.

Another valuable capability is coordination between data governance and the SDLC, including automated metadata harvesting and cataloging from a wide array of sources for real-time metadata synchronization with core data governance capabilities and artifacts.

  4. Proves the value of data lineage and impact analysis for governance and risk assessment.

Automated reverse-engineering of ETL code into natural language enables a more intuitive lineage view for data governance.

With end-to-end lineage, it is possible to view data movement from source to stage, stage to EDW, and on to a federation of marts and reporting structures, providing a comprehensive and detailed view of data in motion.

The process includes leveraging existing mapping documentation and auto-documented mappings to quickly render graphical source-to-target lineage views including transformation logic that can be shared across the enterprise.

Similarly, impact analysis – which involves data mapping and lineage across tables, columns, systems, business rules, projects, mappings and ETL processes – provides insight into potential data risks and enables fast and thorough remediation when needed.
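A lineage graph and the impact analysis that runs over it can be sketched very simply: model each mapping as a directed edge and walk downstream from the changed object. The column names and edges below are made up for the example and are not a real lineage harvest.

from collections import defaultdict, deque

# Toy lineage: each edge says "target is derived from source".
lineage = defaultdict(set)
for source, target in [
    ("crm.customers.email", "stg.customers.email"),
    ("stg.customers.email", "edw.dim_customer.email"),
    ("edw.dim_customer.email", "mart.marketing.contact_list"),
]:
    lineage[source].add(target)

def impact_of(obj):
    """Return every downstream object affected by a change to `obj`."""
    affected, queue = set(), deque([obj])
    while queue:
        for downstream in lineage[queue.popleft()]:
            if downstream not in affected:
                affected.add(downstream)
                queue.append(downstream)
    return affected

print(impact_of("crm.customers.email"))
# -> the stage, EDW and mart objects that would need review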

Performing impact analysis across the organization while meeting regulatory compliance requirements also requires detailed data mapping and lineage.

THE REGULATORY RATIONALE FOR INTEGRATING DATA MANAGEMENT & DATA GOVERNANCE

  5. Supports a wide spectrum of business needs.

Intelligent automation delivers enhanced capability, increased efficiency and effective collaboration to every stakeholder in the data value chain: data stewards, architects, scientists, analysts, business intelligence developers, IT professionals and business consumers.

It makes it easier for them to handle jobs such as data warehousing by leveraging source-to-target mapping and ETL code generation and job standardization.

It’s easier to map, move and test data for regular maintenance of existing structures, movement from legacy systems to new systems during a merger or acquisition, or a modernization effort.

erwin’s Approach to Automation for Data Governance: The erwin Automation Framework

Mature and sustainable data governance requires collaboration from both IT and the business, backed by a technology platform that accelerates the time to data intelligence.

Part of the erwin EDGE portfolio for an “enterprise data governance experience,” the erwin Automation Framework transforms enterprise data into accurate and actionable insights by connecting all the pieces of the data management and data governance lifecycle.

As with all erwin solutions, it embraces any data from anywhere (Any2), with automation for relational, unstructured, on-premises and cloud-based data assets, and with data movement specifications harvested and coupled with CATs.

If your organization would like to realize all the benefits explained above – and gain an “edge” in how it approaches data governance – you can start by joining one of our weekly demos for erwin Mapping Manager.



Massive Marriott Data Breach: Data Governance for Data Security

Organizations have been served yet another reminder of the value of data governance for data security.

Hotel and hospitality powerhouse Marriott recently revealed a massive data breach that led to the theft of personal data for an astonishing 500 million customers of its Starwood hotels. This is the second largest data breach in recent history, surpassed only by Yahoo’s breach of 3 billion accounts in 2013 for which it has agreed to pay a $50 million settlement to more than 200 million customers.

Now that Marriott has taken a major hit to its corporate reputation, it has two moves:

  1. Respond: Marriott’s response to its data breach so far has not received glowing reviews. But beyond how it communicates with affected customers, the company must examine how the breach occurred in the first place. This means understanding the context of its data – what assets exist and where, the relationships between them and enterprise systems and processes, and how and by what parties the data is used – to determine the specific vulnerability.
  2. Fix it: Marriott must fix the problem, and quickly, to ensure it doesn’t happen again. This step involves a lot of analysis. A data governance solution would make it a lot less painful by providing visibility into the full data landscape – linkages, processes, people and so on. Then more context-sensitive data security architectures can be put in place to protect corporate and consumer data privacy.

The GDPR Factor

It’s been six months since the General Data Protection Regulation (GDPR) took effect. While fines for noncompliance have been minimal to date, we anticipate they will increase dramatically in the coming year. Marriott’s bad situation could worsen in this regard if it lacks holistic data governance to identify whose data, and what data, was taken.

Data management and data governance, together, play a vital role in compliance, including GDPR. It’s easier to protect sensitive data when you know what it is, where it’s stored and how it needs to be governed.

FREE GUIDE: THE REGULATORY RATIONALE FOR INTEGRATING DATA MANAGEMENT & DATA GOVERNANCE 

Truly understanding an organization’s data, including the data’s value and quality, requires a harmonized approach embedded in business processes and enterprise architecture. Such an integrated enterprise data governance experience helps organizations understand what data they have, where it is, where it came from, its value, its quality and how it’s used and accessed by people and applications.


Data Governance for Data Security: Lessons Learned

Other companies should learn, and quickly, that they need to be prepared. At this point it’s not a matter of if but when a data breach will rear its ugly head. Preparation is your best bet for avoiding the entire fiasco – from the painstaking process of identifying what happened and why to notifying customers that their data and their trust in your organization have been compromised.

A well-formed security architecture that is driven by and aligned with data intelligence is your best defense. However, if there is nefarious intent, a hacker will find a way. So being prepared means you can minimize your risk exposure and the damage to your reputation.

Multiple components must be considered to effectively support a data governance, security and privacy trinity. They are:

  1. Data models
  2. Enterprise architecture
  3. Business process models

What’s key to remember is that these components act as links in the data governance chain by making it possible to understand what data serves the organization, its connection to the enterprise architecture, and all the business processes it touches.

THE EXPERT GUIDE TO DATA GOVERNANCE, SECURITY AND PRIVACY

Creating policies for data handling and accountability and driving culture change so people understand how to properly work with data are two important components of a data governance initiative, as is the technology for proactively managing data assets.

Without the ability to harvest metadata schemas and business terms; analyze data attributes and relationships; impose structure on definitions; and view all data in one place according to each user’s role within the enterprise, businesses will be hard pressed to stay in step with governance standards and best practices around security and privacy.

As a consequence, the private information held within organizations will continue to be at risk. Organizations suffering data breaches will be deprived of the benefits they had hoped to realize from the money spent on security technologies and the time invested in developing data privacy classifications. They also may face heavy fines and other financial, not to mention PR, penalties.

Less Pain, More Gain

Most organizations don’t have enough time or money for data management using manual processes. And outsourcing is also expensive, with inevitable delays because these vendors are dependent on manual processes too. Furthermore, manual processes require manual analysis and auditing, which is always more expensive and time consuming.

So the more processes an organization can automate, the less risk of human error, which is actually the primary cause of most data breaches. And automated processes are much easier to analyze and audit because everything is captured, versioned and available for review in a log somewhere. You can read more about automation in our 10 Reasons to Automate Data Mapping and Data Preparation.

And to learn more about how data governance underpins data security and privacy, click here.
