Categories
erwin Expert Blog

Using Strategic Data Governance to Manage GDPR/CCPA Complexity

In light of recent, high-profile data breaches, it’s past time we re-examined strategic data governance and its role in managing regulatory requirements.

News broke earlier this week of British Airways being fined 183 million pounds – or $228 million – by the U.K. for alleged violations of the European Union’s General Data Protection Regulation (GDPR). While not the first, it is the largest penalty levied since the GDPR went into effect in May 2018.

Given this, Oppenheimer & Co. cautions:

“European regulators could accelerate the crackdown on GDPR violators, which in turn could accelerate demand for GDPR readiness. Although the CCPA [California Consumer Privacy Act, the U.S. equivalent of GDPR] will not become effective until 2020, we believe that new developments in GDPR enforcement may influence the regulatory framework of the still fluid CCPA.”

With all the advance notice and significant chatter about GDPR/CCPA, why aren’t organizations more prepared to deal with data regulations?

In a word? Complexity.

The complexity of regulatory requirements in and of themselves is aggravated by the complexity of the business and data landscapes within most enterprises.

So it’s important to understand how to use strategic data governance to manage the complexity of regulatory compliance and other business objectives …

Designing and Operationalizing Regulatory Compliance Strategy

It’s not easy to design and deploy compliance in an environment that’s not well understood and difficult in which to maneuver. First you need to analyze and design your compliance strategy and tactics, and then you need to operationalize them.

Modern, strategic data governance, which involves both IT and the business, enables organizations to plan and document how they will discover and understand their data within context, track its physical existence and lineage, and maximize its security, quality and value. It also helps enterprises put these strategic capabilities into action by:

  • Understanding their business, technology and data architectures and their inter-relationships, aligning them with their goals and defining the people, processes and technologies required to achieve compliance.
  • Creating and automating a curated enterprise data catalog, complete with physical assets, data models, data movement, data quality and on-demand lineage.
  • Activating their metadata to drive agile data preparation and governance through integrated data glossaries and dictionaries that associate policies to enable stakeholder data literacy.

Strategic Data Governance for GDPR/CCPA

Five Steps to GDPR/CCPA Compliance

With the right technology, GDPR/CCPA compliance can be automated and accelerated in these five steps:

  1. Catalog systems

Harvest, enrich/transform and catalog data from a wide array of sources to enable any stakeholder to see the interrelationships of data assets across the organization.

  2. Govern PII “at rest”

Classify, flag and socialize the use and governance of personally identifiable information regardless of where it is stored.

  3. Govern PII “in motion”

Scan, catalog and map personally identifiable information to understand how it moves inside and outside the organization and how it changes along the way.

  4. Manage policies and rules

Govern business terminology in addition to data policies and rules, depicting relationships to physical data catalogs and the applications that use them with lineage and impact analysis views.

  5. Strengthen data security

Identify regulatory risks and guide the fortification of network and encryption security standards and policies by understanding where all personally identifiable information is stored, processed and used.
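
To make step 2 concrete, here’s a minimal, hypothetical sketch of governing PII “at rest”: scanning sample rows against simple regular expressions to flag likely PII columns. The patterns, column names and sample data are illustrative assumptions; production governance tools use far richer classifiers.

```python
import re

# Hypothetical patterns for flagging likely PII columns; real governance
# tools use much richer classifiers than these illustrative regexes.
PII_PATTERNS = {
    "email": re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$"),
    "us_ssn": re.compile(r"^\d{3}-\d{2}-\d{4}$"),
}

def classify_columns(rows):
    """Return {column_name: set of PII labels} for a list of row dicts."""
    flags = {}
    for row in rows:
        for col, value in row.items():
            for label, pattern in PII_PATTERNS.items():
                if isinstance(value, str) and pattern.match(value):
                    flags.setdefault(col, set()).add(label)
    return flags

sample = [
    {"name": "A. Jones", "contact": "a.jones@example.com", "ssn": "123-45-6789"},
    {"name": "B. Smith", "contact": "b.smith@example.com", "ssn": "987-65-4321"},
]
print(classify_columns(sample))  # flags "contact" and "ssn", not "name"
```

Once flagged, columns like these would be socialized to stewards for review rather than acted on automatically.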

How erwin Can Help

erwin is the only software provider with a complete, metadata-driven approach to data governance through our integrated enterprise modeling and data intelligence suites. We help customers overcome their data governance challenges, with risk management and regulatory compliance being primary concerns.

However, the erwin EDGE also delivers an “enterprise data governance experience” in terms of agile innovation and business transformation – from creating new products and services to keeping customers happy to generating more revenue.

Whatever your organization’s key drivers are, a strategic data governance approach – through business process, enterprise architecture and data modeling combined with data cataloging and data literacy – is key to success in our modern, digital world.

If you’d like to get a handle on handling your data, you can sign up for a free, one-on-one demo of erwin Data Intelligence.

For more information on GDPR/CCPA, we’ve also published a white paper on the Regulatory Rationale for Integrating Data Management and Data Governance.



The Importance of EA/BP for Mergers and Acquisitions

Over the past few weeks several huge mergers and acquisitions (M&A) have been announced, including Raytheon and United Technologies, the Salesforce acquisition of Tableau and the Merck acquisition of Tilos Therapeutics.

According to collated research and a Harvard Business Review report, the M&A failure rate sits between 70 and 90 percent. Additionally, McKinsey estimates that around 70 percent of mergers do not achieve their expected “revenue synergies.”

Combining two organizations into one is complicated. And following a merger or acquisition, businesses typically find themselves with duplicate applications and business capabilities that are costly and obviously redundant, making alignment difficult.

Enterprise architecture is essential to successful mergers and acquisitions. It helps alignment by providing a business-outcome perspective for IT and guiding transformation. It also helps define strategy and models, improving interdepartmental cohesion and communication. Roadmaps can be used to provide a common focus throughout the new company, and if existing roadmaps are in place, they can be modified to fit the new landscape.

Additionally, an organization must understand both sets of processes being brought to the table. Without business process modeling, this is near impossible.

In an M&A scenario, businesses need to ensure their systems are fully documented and rationalized. This way, they can comb through their inventories to make more informed decisions about which systems to cut or phase out to operate more efficiently and then deliver the roadmap to enable those changes.


Getting Rid of Duplications

Mergers and acquisitions are daunting. Depending on the size of the businesses, hundreds of systems and processes need to be accounted for, which can be difficult, and even impossible to do in advance.

Enterprise architecture aids in rooting out process and operational duplications, making the new entity more cost efficient. Needless to say, the behind-the-scenes complexities are many and can include discovering that the merging enterprises use the same solution but under different names in different parts of the organizations, for example.

Determinations also may need to be made about whether particular functions that are expected to become business-critical have a solid, scalable base to build upon. If an existing application won’t be able to handle the increased data load and processing, then previously planned investments in it don’t need to be made.

Gaining business-wide visibility of data and enterprise architecture all within a central repository enables relevant parties across merging companies to work from a single source of information. This provides insights to help determine whether, for example, two equally adept applications of the same nature can continue to be used as the companies merge, because they share common underlying data infrastructures that make it possible for them to interoperate across a single source of synched information.

Or, in another scenario, it may be obvious that it is better to keep only one of the applications because it alone serves as the system of record for what the organization has determined are valuable conceptual data entities in its data model.

At the same time, it can reveal the location of data that might otherwise have been unwittingly discarded with the elimination of an application, enabling it to be moved to a lower-cost storage tier for potential future use.

Knowledge Retention – Avoiding Brain Drain

When employees come and go, as they tend to during mergers and acquisitions, they take critical institutional knowledge with them.

Unlocking knowledge and then putting systems in place to retain that knowledge is one key benefit of business process modeling. Knowledge retention and training has become a pivotal area in which businesses will either succeed or fail.

Different organizations tend to speak different languages. For instance, one company might refer to a customer as “customer,” while another might refer to them as a “client.” Business process modeling is a great way to get everybody in the organization using the same language, referring to things in the same way.

Drawing out this knowledge then allows a centralized and uniform process to be adopted across the company. In any department within any company, individuals and teams develop processes for doing things. Business process modeling extracts all these pieces of information from individuals and teams so they can be turned into centrally adopted processes.

 


 

Ensuring Compliance

Industry and government regulations affect businesses that work in or do business with any number of industries or in specific geographies. Industry-specific regulations in areas like healthcare, pharmaceuticals and financial services have been in place for some time.

Now, broader mandates like the European Union’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) require businesses across industries to think about their compliance efforts. Business process modeling helps organizations prove what they are doing to meet compliance requirements and understand how changes to their processes impact compliance efforts (and vice versa).

In highly regulated industries like financial services and pharmaceuticals, where mergers and acquisitions activity is frequent, identifying and standardizing business processes helps the combined organization withstand the scrutiny of regulatory compliance.

Business process modeling makes it easier to document processes, align documentation within document control and learning management systems, and give R&D employees easy access and intuitive navigation so they can find the information they need.

Introducing Business Architecture

Organizations often interchange the terms “business process” and “enterprise architecture” because both are strategic functions with many interdependencies.

However, business process architecture defines the elements of a business and how they interact with the aim of aligning people, processes, data, technologies and applications. Enterprise architecture defines the structure and operation of an organization with the purpose of determining how it can achieve its current and future objectives most effectively, translating those goals into a blueprint of IT capabilities.

Although both disciplines seek to achieve the organization’s desired outcomes, they have largely operated in silos.

To learn more about how erwin provides modeling and analysis software to support both business process and enterprise architecture practices and enable their broader collaboration, click here.



Enterprise Architecture and Business Process: Common Goals Require Common Tools

For decades now, the professional world has put a great deal of energy into discussing the gulf that exists between business and IT teams within organizations.

They speak different languages, it’s been said, and work toward different goals. Technology plans don’t seem to account for the reality of the business, and business plans don’t account for the capabilities of the technology.

Data governance is one area where business and IT never seemed to establish ownership. Early attempts at data governance treated the idea as a game of volleyball, passing ownership back and forth, with one team responsible for storing data and running applications, and one responsible for using the data for business outcomes.

Today, we see ample evidence this gap is closing at many organizations. Consider:

  • Many technology platforms and software applications now are designed for business users. Business intelligence is a prime example; it’s rare today to see IT pros have to run reports for business users thanks to self-service.
  • Many workers, especially those who came of age surrounded by technology, have a better understanding of both the business and the technology that runs their organizations. Education programs also have evolved to help students develop a background in both business and technology.
  • There’s more portability in roles, with technology minds moving to business leadership positions and vice versa.

“The business domain has always existed in enterprise architecture,” says Manuel Ponchaux, director of product management at erwin, Inc. “However, enterprise architecture has traditionally been an IT function with a prime focus on IT. We are now seeing a shift with a greater focus on business outcomes.”

You can see evidence of this blended focus in some of the titles, like “business architect,” being bestowed upon what was traditionally an IT function. These titles demonstrate an understanding that technology cannot exist in the modern organization for the sake of technology alone – technology needs to support the business and its customers. This concept is also a major focus of the digital transformation wave that’s washing over the business world, and thus we see it reflected in job titles that simply didn’t exist a decade ago.

Job titles aside, enterprise architecture (EA) and business process (BP) teams still have different goals, though at many organizations they now work more closely together than they did in the past. Today, both EA and BP teams recognize that their common goal is better business outcomes. Along the way to that goal, each team conducts a number of similar tasks.

Enterprise Architecture and Business Process: Better Together

One prominent example is modeling. Both enterprise architecture and business process teams do modeling, but they do it in different ways at different levels, and they often use different data and tools. This lack of coordination and communication makes it difficult to develop a true sense of a process from the IT and business sides of the equation. It can also lead to duplication of efforts, which is inefficient and likely to add further confusion when trying to understand outcomes.

Building better business outcomes is like following a plan at a construction site. If different teams are making their own decisions about the materials they’re going to use and following their own blueprints, you’re unlikely to see the building you expect to see at the end of the job.

And that’s essentially what is missing at many organizations: a common repository with role-based views, interfaces and dashboards so that enterprise architecture and business process teams can truly work together using the same blueprint. When enterprise architecture and business process teams use common tools that both aid collaboration and help them understand the elements most important to their roles, the result is greater accuracy, increased efficiency and improved outcomes.

erwin’s enterprise architecture and business process tools provide the common repository and role-based views that help these teams work collaboratively toward their common goals. Finally, enterprise architecture and business process can be on the same page.



Data Governance Frameworks: The Key to Successful Data Governance Implementation

A strong data governance framework is central to successful data governance implementation in any data-driven organization because it ensures that data is properly maintained, protected and maximized.

But despite this fact, enterprises often face push back when implementing a new data governance initiative or trying to mature an existing one.

Let’s assume you have some form of informal data governance operation with some strengths to build on and some weaknesses to correct. Some parts of the organization are engaged and behind the initiative, while others are skeptical about its relevance or benefits.

Some other common data governance implementation obstacles include:

  • Questions about where to begin and how to prioritize which data streams to govern first
  • Issues regarding data quality and ownership
  • Concerns about data lineage
  • Competing projects and resources (time, people and funding)

By using a data governance framework, organizations can formalize their data governance implementation and subsequent adherence to it. This addresses common concerns, including data quality and data lineage, and provides a clear path to successful data governance implementation.

In this blog, we will cover three key steps to successful data governance implementation. We will also look into how we can expand the scope and depth of a data governance framework to ensure data governance standards remain high.

Data Governance Implementation in 3 Steps

When maturing or implementing data governance and/or a data governance framework, an accurate assessment of the ‘here and now’ is key. Then you can rethink the path forward, identifying any current policies or business processes that should be incorporated, being careful to avoid making the same mistakes of prior iterations.

With this in mind, here are three steps we recommend for implementing data governance and a data governance framework.


Step 1: Shift the culture toward data governance

Data governance isn’t something to set and forget; it’s a strategic approach that needs to evolve over time in response to new opportunities and challenges. Therefore, a successful data governance framework has to become part of the organization’s culture, but such a shift requires listening – and remembering that it’s about people, empowerment and accountability.

In most cases, a new data governance framework requires people – those in IT and across the business, including risk management and information security – to change how they work. Any concerns they raise or recommendations they make should be considered. You can encourage feedback through surveys, workshops and open dialog.

Once input has been discussed and a plan agreed upon, it is critical to update roles and responsibilities, provide training and ensure ongoing communication. Many organizations now have internal certifications for different data governance roles, and employees wear these badges with pride.

A top-down management approach will get a data governance initiative off the ground, but only bottom-up cultural adoption will carry it out.

Step 2: Refine the data governance framework

The right capabilities and tools are important for fueling an accurate, real-time data pipeline and governing it for maximum security, quality and value. For example:

Data cataloging: Organizations implementing a data governance framework will benefit from automated metadata harvesting, data mapping, code generation and data lineage with reference data management, lifecycle management and data quality. With these capabilities, you can efficiently integrate and activate enterprise data within a single, unified catalog in accordance with business requirements.

Data literacy: Being able to discover what data is available and understand what it means in common, standardized terms is important because data elements may mean different things to different parts of the organization. A business glossary answers this need, as does the ability for stakeholders to view data relevant to their roles and understand it within a business context through a role-based portal.

Such tools are further enhanced if they can be integrated across data and business architectures and when they promote self-service and collaboration, which also are important to the cultural shift.

 


Step 3: Prioritize then scale the data governance framework

Because data governance is ongoing, it’s important to prioritize the initial areas of focus and scale from there. Organizations that start with 30 to 50 data items are generally more successful than those that attempt more than 1,000 in the early stages.

Find some representative (familiar) data items and create examples for data ownership, quality, lineage and definition so stakeholders can see real examples of the data governance framework in action. For example:

  • Data ownership model showing a data item, its definition, producers, consumers, stewards and quality rules (for profiling)
  • Workflow showing the creation, enrichment and approval of the above data item to demonstrate collaboration
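
As an illustration of the first bullet, a data ownership record could be sketched as a small data structure. The field names and quality rule below are assumptions for demonstration, not a prescribed schema.

```python
from dataclasses import dataclass, field

# Hypothetical data-ownership record like the one described above.
@dataclass
class DataItem:
    name: str
    definition: str
    producer: str
    consumers: list
    steward: str
    quality_rules: list = field(default_factory=list)

    def profile(self, values):
        """Apply each quality rule (a predicate) and report its pass rate."""
        return {
            rule.__name__: sum(rule(v) for v in values) / len(values)
            for rule in self.quality_rules
        }

def not_blank(v):
    return bool(v and v.strip())

email = DataItem(
    name="customer_email",
    definition="Primary email address used to contact a customer",
    producer="CRM",
    consumers=["Marketing", "Billing"],
    steward="Customer Data Steward",
    quality_rules=[not_blank],
)
print(email.profile(["a@example.com", "", "b@example.com"]))  # ~0.67 pass rate
```

Showing stakeholders a concrete record like this, with its workflow for creation, enrichment and approval, makes the framework tangible.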

Whether your organization is just adopting data governance or the goal is to refine an existing data governance framework, the erwin DG RediChek will provide helpful insights to guide you in the journey.


Data Mapping Tools: What Are the Key Differentiators?

The need for data mapping tools in light of increasing volumes and varieties of data – as well as the velocity at which it must be processed – is growing.

It’s not difficult to see why either. Data mapping tools have always been a key asset for any organization looking to leverage data for insights.

Isolated units of data are essentially meaningless. By linking data and enabling its categorization in relation to other data units, data mapping provides the context vital for actionable information.

Now with the General Data Protection Regulation (GDPR) in effect, data mapping has become even more significant.

The scale of GDPR’s reach has set a new precedent and is the closest we’ve come to a global standard in terms of data regulations. The repercussions can be huge – just ask Google.

Data mapping tools are paramount in charting a path to compliance with this new, near-global standard and avoiding hefty fines.

Because of GDPR, organizations that may not have fully leveraged data mapping for proactive data-driven initiatives (e.g., analysis) are now adopting data mapping tools with compliance in mind.

Arguably, GDPR’s implementation can be viewed as an opportunity – a catalyst for digital transformation.

Those organizations investing in data mapping tools with compliance as the main driver will definitely want to consider this opportunity and have it influence their decision as to which data mapping tool to adopt.

With that in mind, it’s important to understand the key differentiators in data mapping tools and the associated benefits.


Data Mapping Tools: Automated or Manual?

In terms of differentiators for data mapping tools, perhaps the most distinct is automated data mapping versus data mapping via manual processes.

Data mapping tools that allow for automation mean organizations can benefit from in-depth, quality-assured data mapping, without the significant allocations of resources typically associated with such projects.

Eighty percent of data scientists’ and other data professionals’ time is spent on manual data maintenance – anything and everything from addressing errors and inconsistencies to trying to understand source data or track its lineage. This doesn’t even account for the time lost due to missed errors that contribute to inherently flawed endeavors.

Automated data mapping tools render such issues and concerns void. In turn, data professionals’ time can be put to much better, proactive use, rather than being bogged down with reactive housekeeping tasks.


As well as introducing greater efficiency to the data governance process, automated data mapping tools enable data to be auto-documented from XML that builds mappings for the target repository or reporting structure.

Additionally, a tool that leverages and draws from a single metadata repository means that mappings are dynamically linked with underlying metadata to render automated lineage views, including full transformation logic in real time.

Therefore, changes (e.g., in the data catalog) will be reflected across data governance domains (business process, enterprise architecture and data modeling) as and when they’re made – no more juggling and maintaining multiple, out-of-date versions.

It also enables automatic impact analysis at the table and column level – even for business/transformation rules.

For organizations looking to free themselves from the burden of juggling multiple versions, siloed business processes and a disconnect between interdepartmental collaboration, this feature is a key benefit to consider.
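
Conceptually, column-level impact analysis is a traversal of a lineage graph. The sketch below (with made-up column names, not erwin’s implementation) shows the idea: every edge records where a column’s data flows next, and a downstream walk reveals everything a change would touch.

```python
from collections import deque

# Illustrative column-level lineage as a directed graph:
# source column -> list of columns it feeds.
lineage = {
    "crm.customer.email": ["staging.contact.email"],
    "staging.contact.email": ["warehouse.dim_customer.email"],
    "warehouse.dim_customer.email": ["report.churn.email_domain"],
}

def impact(column):
    """Return every downstream column affected by a change to `column`."""
    affected, queue = set(), deque([column])
    while queue:
        for target in lineage.get(queue.popleft(), []):
            if target not in affected:
                affected.add(target)
                queue.append(target)
    return affected

print(sorted(impact("crm.customer.email")))
```

The same traversal run in reverse gives source-to-report lineage for a given report column.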

Data Mapping Tools: Other Differentiators

In light of the aforementioned changes to data regulations, many organizations will need to consider the extent of a data mapping tool’s data lineage capabilities.

The ability to reverse-engineer and document the business logic from your reporting structures for true source-to-report lineage is key because it makes analysis (and the trust in said analysis) easier. And should a data breach occur, affected data/persons can be more quickly identified in accordance with GDPR.

Article 33 of GDPR requires organizations to notify the appropriate supervisory authority “without undue delay and, where feasible, not later than 72 hours” after discovering a breach.

As stated above, a data governance platform that draws from a single metadata source is even more advantageous here.

Mappings can be synchronized with metadata so that source or target metadata changes can be automatically pushed into the mappings – so your mappings stay up to date with little or no effort.
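
The 72-hour window itself is straightforward to operationalize; a trivial, illustrative helper might track the Article 33 deadline from the moment a breach is discovered (the timestamp below is made up):

```python
from datetime import datetime, timedelta, timezone

# GDPR Article 33 sets a 72-hour notification window from breach discovery.
def notification_deadline(discovered_at):
    return discovered_at + timedelta(hours=72)

discovered = datetime(2019, 7, 8, 9, 30, tzinfo=timezone.utc)
deadline = notification_deadline(discovered)
print(deadline.isoformat())  # 2019-07-11T09:30:00+00:00
```

The hard part, of course, is not the arithmetic but identifying the affected data and persons within that window, which is where lineage pays off.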

The Data Mapping Tool For Data-Driven Businesses

Nobody likes manual documentation. It’s arduous, error-prone and a waste of resources. Quite frankly, it’s dated.

Any organization looking to invest in data mapping, data preparation and/or data cataloging needs to make automation a priority.

With automated data mapping, organizations can achieve “true data intelligence”: the ability to tell the story of how data enters the organization and changes throughout the entire lifecycle, up to the consumption/reporting layer. If you’re working harder than your tool, you have the wrong tool.

The manual tools of old do not have auto documentation capabilities, cannot produce outbound code for multiple ETL or script types, and are a liability in terms of accuracy and GDPR.

Automated data mapping is the only path to true GDPR compliance, and erwin Mapping Manager can get you there in a matter of weeks thanks to our robust reverse-engineering technology. 

Learn more about erwin’s automation framework for data governance here.



Data Governance Stock Check: Using Data Governance to Take Stock of Your Data Assets

For regulatory compliance (e.g., GDPR) and to ensure peak business performance, organizations often bring consultants on board to help take stock of their data assets. This sort of data governance “stock check” is important but can be arduous without the right approach and technology. That’s where data governance comes in …

While most companies hold the lion’s share of operational data within relational databases, it also can live in many other places and various other formats. Therefore, organizations need the ability to manage any data from anywhere, what we call our “any-squared” (Any2) approach to data governance.

Any2 first requires an understanding of the ‘3Vs’ of data – volume, variety and velocity – especially in the context of the data lifecycle, as well as knowing how to leverage the key capabilities of data governance – data cataloging, data literacy, business process, enterprise architecture and data modeling – that enable data to be leveraged at different stages for optimum security, quality and value.

Following are two examples that illustrate the data governance stock check, including the Any2 approach in action, based on real consulting engagements.


Data Governance “Stock Check” Case 1: The Data Broker

This client trades in information. Therefore, the organization needed to catalog the data it acquires from suppliers, ensure its quality, classify it, and then sell it to customers. The company wanted to assemble the data in a data warehouse and then provide controlled access to it.

The first step in helping this client involved taking stock of its existing data. We set up a portal so data assets could be registered via a form with basic questions, and then a central team received the registrations, reviewed and prioritized them. Entitlement attributes also were set up to identify and profile high-priority assets.

A number of best practices and technology solutions were used to establish the data required for managing the registration and classification of data feeds:

1. The underlying metadata is harvested followed by an initial quality check. Then the metadata is classified against a semantic model held in a business glossary.

2. After this classification, a second data quality check is performed based on the best-practice rules associated with the semantic model.

3. Profiled assets are loaded into a historical data store within the warehouse, with data governance tools generating its structure and data movement operations for data loading.

4. We developed a change management program to make all staff aware of the information brokerage portal and the importance of using it. The portal uses a catalog of data assets, all classified against a semantic model with data quality metrics, making it easy to understand where data assets are located within the data warehouse.

5. Adopting this portal, where data is registered and classified against an ontology, enables the client’s customers to shop for data by asset or by meaning (e.g., “what data do you have on X topic?”) and then drill down through the taxonomy or across an ontology. Next, they raise a request to purchase the desired data.

This consulting engagement and technology implementation increased data accessibility and capitalization. Information is registered within a central portal through an approved workflow, and then customers shop for data either from a list of physical assets or by information content, with purchase requests also going through an approval workflow. This, among other safeguards, ensures data quality.
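
Steps 1 and 2 above can be sketched as a toy pipeline: classify harvested column names against a glossary’s semantic model, then run the best-practice quality rules tied to each term. The glossary entries, synonyms and rules below are hypothetical.

```python
# Hypothetical semantic model: each glossary term lists the column-name
# synonyms that map to it (step 1) and a best-practice quality rule (step 2).
GLOSSARY = {
    "email": {"synonyms": {"email", "e_mail", "mail_addr"},
              "rule": lambda v: "@" in v},
    "postcode": {"synonyms": {"postcode", "zip", "zip_code"},
                 "rule": lambda v: v.replace(" ", "").isalnum()},
}

def classify(column_name):
    """Match a harvested column name to a glossary term."""
    for term, entry in GLOSSARY.items():
        if column_name.lower() in entry["synonyms"]:
            return term
    return None

def quality_check(term, values):
    """Run the term's best-practice rule over sample values."""
    rule = GLOSSARY[term]["rule"]
    return all(rule(v) for v in values)

feed = {"e_mail": ["a@x.com", "b@y.org"], "zip": ["90210", "SW1A 1AA"]}
for col, sample in feed.items():
    term = classify(col)
    print(col, "->", term, "| quality ok:", quality_check(term, sample))
```

In the real engagement, of course, classification was mediated by the registration workflow and human review rather than exact-match lookups.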


Data Governance “Stock Check” Case 2: Tracking Rogue Data

This client, a geographically dispersed organization, stored many of its key processes in Microsoft Excel™ spreadsheets. It was planning a move to Office 365™ and was concerned about regulatory compliance, including GDPR mandates.

Knowing that electronic documents are heavily used in key business processes and distributed across the organization, this company needed to replace risky manual processes with centralized, automated systems.

A key part of the consulting engagement was to understand what data assets were in circulation and how the organization used them. Process chains could then be prioritized for automation, with specifications outlined for the systems that would replace them.

This organization also adopted a central portal that allowed employees to register data assets. The associated change management program raised awareness of data governance across the organization and the importance of data registration.

For each asset, information was captured and reviewed as part of a workflow. Prioritized assets were then chosen for profiling, enabling metadata to be reverse-engineered before being classified against the business glossary.

Additionally, assets that were part of a process chain were gathered and modeled with enterprise architecture (EA) and business process (BP) modeling tools for impact analysis.

High-level requirements for new systems then could be defined again in the EA/BP tools and prioritized on a project list. For the others, decisions could be made on whether they could safely be placed in the cloud and whether macros would be required.

In this case, the adoption of purpose-built data governance solutions helped build an understanding of the data assets in play, including information about their usage and content to aid in decision-making.

This client then had a good handle on the “what” and “where” in terms of sensitive data stored in their systems. They also better understood how this sensitive data was being used and by whom, helping reduce regulatory risks like those associated with GDPR.

In both scenarios, we cataloged data assets and mapped them to a business glossary, which acts as a classification scheme to help govern and locate data, making it both more accessible and valuable. This governance framework reduces risk and protects an organization’s most valuable and sensitive data assets.

Focused on producing meaningful business outcomes, the erwin EDGE platform was pivotal in achieving these two clients’ data governance goals – including the infrastructure to undertake a data governance stock check. They used it to create an “enterprise data governance experience” not just for cataloging data and other foundational tasks, but also for a competitive “EDGE” in maximizing the value of their data while reducing data-related risks.

To learn more about the erwin EDGE data governance platform and how it aids in undertaking a data governance stock check, register for our free, 30-minute demonstration here.


Digital Transformation in Municipal Government: The Hidden Force Powering Smart Cities

Smart cities are changing the world.

When you think of real-time, data-driven experiences and modern applications to accomplish tasks faster and easier, your local town or city government probably doesn’t come to mind. But municipal government is starting to embrace digital transformation and therefore data governance.

Municipal government has never been an area in which to look for tech innovation. Perpetually strapped for resources and budget, often relying on legacy applications and infrastructure, and perfectly happy being available during regular business hours (save for emergency responders), most municipal governments lacked the ability and motivation to (as they say in the private sector) digitally transform. Then an odd thing happened – the rest of the world started transforming.

If you shop at a retailer that doesn’t deliver a modern, personalized experience, thousands more retailers are just a click away. But people rarely pick up and move to a new city because the new city offers a better website or mobile app. The motivation for municipal governments to transform simply isn’t there in the same way it is for the private sector.

But there are some things many city residents care about deeply: public safety, quality of life, how their tax dollars are spent, and the ability to do business with their local government when they want, not when it’s convenient for the municipality. And much like the private sector, better decisions around all of these concerns can be made when accurate, timely data is available to help inform them.

Digital transformation in municipal government is taking place in two main areas today: constituent services and the “smart cities” movement.

Digital Transformation in Municipal Government: Being “Smart” About It

The ability to serve constituents easily and efficiently is of increasing importance and a key objective of digital transformation in municipal government. It’s a direct result of the data-driven customer experiences that are increasingly the norm in the private sector.

Residents want the ability to pay their taxes online, report a pothole from their phone, and generally make it easier to interact with their local officials and services. This can be accomplished with dashboards and constituent portals.

The smart cities movement refers to the broad effort of municipal governments to incorporate sensors, data collection and analysis to improve responses to everything from rush-hour traffic to air quality to crime prevention. When the McKinsey Global Institute examined smart technologies that could be deployed by cities, it found that the public sector would be the natural owner of 70 percent of the applications it reviewed.

“Cities are getting in on the data game,” says Danny Sandwell, product marketing director at erwin, Inc. And with information serving as the lifeblood of many of these projects, the effectiveness of the services offered, the return on the investments in hardware and software, and the happiness of the users all depend on timely, accurate and effective data.

These initiatives present a pretty radical departure from the way cities have traditionally been managed.

A constituent portal, for example, requires that users can be identified, authenticated and then have access to information that resides in various departments, such as the tax collector to view and pay taxes, the building department to view a building permit, and the parking authority to manage public parking permits.

For many municipalities, this is uncharted territory.


Data Governance: The Force Powering Smart Cities

The efficiencies offered by smart city technologies only exist if the data leads to a proper allocation of resources.

If you can identify an increase in crime in a certain neighborhood, for example, you can increase police patrols in response. But if the data is inaccurate, those patrols are wasted while other neighborhoods experience a rise in crime.

Now that they’re in the data game, it’s time for municipal governments to understand data governance – the driving force behind any successful data-driven operation. When you have the ability to understand all of the information related to a piece of data, you have more confidence in how it is analyzed, used and protected.

Data governance doesn’t take place at a single application or in the data warehouse. It needs to be woven into the enterprise architecture and processes of the municipality to ensure data is accurate, timely and accessible to those who need it (and inaccessible to everyone else).

When this all comes together – good data, solid analytics and improved services for residents – the results can be quite striking. New efficiencies will make municipal governments better stewards of tax dollars. An improved quality of life can lift tax revenue by making the city more appealing to citizens and developers.

There’s a lot for cities to gain if they get in the data game. And truly smart cities will make sure they play the game right with effective data governance.



Data Preparation and Data Mapping: The Glue Between Data Management and Data Governance to Accelerate Insights and Reduce Risks

Organizations have spent a lot of time and money trying to harmonize data across diverse platforms, including cleansing, uploading metadata, converting code, defining business glossaries, tracking data transformations and so on. But the attempts to standardize data across the entire enterprise haven’t produced the desired results.

A company can’t effectively implement data governance – documenting and applying business rules and processes, analyzing the impact of changes and conducting audits – if it fails at data management.

The problem usually starts by relying on manual integration methods for data preparation and mapping. It’s only when companies take their first stab at manually cataloging and documenting operational systems, processes and the associated data, both at rest and in motion, that they realize how time-consuming the entire data prepping and mapping effort is, and why that work is sure to be compounded by human error and data quality issues.

To effectively promote business transformation, as well as fulfill regulatory and compliance mandates, there can’t be any mishaps.

It’s obvious that the manual road makes it very challenging to discover and synthesize data that resides in different formats across thousands of unharvested, undocumented databases, applications, ETL processes and procedural code.

Consider the problematic issue of manually mapping source system fields (typically source files or database tables) to target system fields (such as different tables in target data warehouses or data marts).

These source mappings generally are documented across a slew of unwieldy spreadsheets in their “pre-ETL” stage as the input for ETL development and testing. However, the ETL design process often suffers as it evolves because spreadsheet mapping data isn’t updated or may be incorrectly updated thanks to human error. So questions linger about whether transformed data can be trusted.
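One way to reduce that spreadsheet risk is to keep the mappings as structured, version-controlled data that can be validated automatically. A minimal sketch follows – the field paths and rules are hypothetical, invented for the example:

```python
# Hypothetical pre-ETL mappings held as structured data instead of spreadsheets,
# so consistency checks can run on every change before ETL development begins.

mappings = [
    {"source": "crm.customers.email_addr", "target": "dw.dim_customer.email",
     "transform": "lower"},
    {"source": "crm.customers.dob", "target": "dw.dim_customer.birth_date",
     "transform": None},
]

def validate(mappings):
    """Return mapping errors: malformed field paths or duplicate targets."""
    errors, seen_targets = [], set()
    for m in mappings:
        for side in ("source", "target"):
            if m[side].count(".") != 2:   # expect schema.table.column
                errors.append(f"malformed {side}: {m[side]}")
        if m["target"] in seen_targets:
            errors.append(f"duplicate target: {m['target']}")
        seen_targets.add(m["target"])
    return errors

print(validate(mappings))  # → [] : this mapping set is consistent
```

Because the mappings live in a machine-readable form, the same validation can run automatically whenever they change, rather than relying on someone spotting a stale row in a spreadsheet.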

Data Quality Obstacles

The sad truth is that high-paid knowledge workers like data scientists spend up to 80 percent of their time finding and understanding source data and resolving errors or inconsistencies, rather than analyzing it for real value.

Statistics are similar when looking at major data integration projects, such as data warehousing and master data management with data stewards challenged to identify and document data lineage and sensitive data elements.

So how can businesses produce value from their data when errors are introduced through manual integration processes? How can enterprise stakeholders gain accurate and actionable insights when data can’t be easily and correctly translated into business-friendly terms?

How can organizations master seamless data discovery, movement, transformation and IT and business collaboration to reverse the ratio of preparation to value delivered?

What’s needed to overcome these obstacles is establishing an automated, real-time, high-quality and metadata-driven pipeline useful for everyone, from data scientists to enterprise architects to business analysts to C-level execs.

Doing so will require a hearty data management strategy and technology for automating the timely delivery of quality data that measures up to business demands.

From there, they need a sturdy data governance strategy and technology to automatically link and sync well-managed data with core capabilities for auditing, statutory reporting and compliance requirements as well as to drive business insights.

Creating a High-Quality Data Pipeline

Working hand-in-hand, data management and data governance provide a real-time, accurate picture of the data landscape, including “data at rest” in databases, data lakes and data warehouses and “data in motion” as it is integrated with and used by key applications. And there’s control of that landscape to facilitate insight and collaboration and limit risk.

With a metadata-driven, automated, real-time, high-quality data pipeline, all stakeholders can access data they understand, trust and are authorized to use. At last, they can base strategic decisions on a full inventory of reliable information.

The integration of data management and governance also supports industry needs to fulfill regulatory and compliance mandates, ensuring that audits are not compromised by the inability to discover key data or by failing to tag sensitive data as part of integration processes.

Data-driven insights, agile innovation, business transformation and regulatory compliance are the fruits of data preparation/mapping and enterprise modeling (business process, enterprise architecture and data modeling) that revolves around a data governance hub.

erwin Mapping Manager (MM) combines data management and data governance processes in an automated flow through the integration lifecycle, from data mapping for harmonization and aggregation to generating the physical embodiment of data lineage – that is, the creation, movement and transformation of transactional and operational data.

Its hallmark is a consistent approach to data delivery (business glossaries connect physical metadata to specific business terms and definitions) and metadata management (via data mappings).

Automate Data Mapping


Top 10 Data Governance Predictions for 2019

This past year witnessed a data governance awakening – or as the Wall Street Journal called it, a “global data governance reckoning.” There was tremendous data drama and resulting trauma – from Facebook to Equifax and from Yahoo to Marriott. The list goes on and on. And then, the European Union’s General Data Protection Regulation (GDPR) took effect, with many organizations scrambling to become compliant.

So what’s on the horizon for data governance in the year ahead? We’re making the following data governance predictions for 2019:


1. GDPR-esque regulation for the United States:

GDPR has set the bar and will become the de facto standard across geographies. Look at California as an example, with the California Consumer Privacy Act (CCPA) going into effect in 2020. Even big technology companies like Apple, Google, Amazon and Twitter are encouraging more regulation, in part because they realize that companies that don’t put data privacy at the forefront will feel the wrath of both the government and the consumer.

2. GDPR fines are coming and they will be massive:

Perhaps one of the safest data governance predictions for 2019 is the coming clampdown on GDPR enforcement. The regulations weren’t brought in for show, so it’s likely the fine-free streak for GDPR will be ending … and soon. The headlines will resemble those about data breaches or hospitals with Health Insurance Portability and Accountability Act (HIPAA) violations in the U.S. healthcare sector. Lots of companies will have an “oh crap” moment and realize they have a lot more to do to get their compliance house in order.

3. Data policies as a consumer buying criterion:

The threat of “data trauma” will continue to drive visibility for enterprise data in the C-suite. How they respond will be the key to their long-term success in transforming data into a true enterprise asset. We will start to see a clear delineation between organizations that maintain a reactive and defensive stance (pain avoidance) versus those that leverage this negative driver as an impetus to increase overall data visibility and fluency across the enterprise with a focus on opportunity enablement. The latter will drive the emergence of true data-driven entities versus those that continue to try to plug the holes in the boat.

4. CDOs will rise with a better-defined role within the organization:

We will see the chief data officer (CDO) role elevated from being a lieutenant of the CIO to taking a proper seat at the table beside the CIO, CMO and CFO.  This will give them the juice needed to create a sustainable vision and roadmap for data. So far, there’s been a profound lack of consensus on the nature of the role and responsibilities, mandate and background that qualifies a CDO. As data becomes increasingly more vital to an organization’s success from a compliance and business perspective, the role of the CDO will become more defined.

5. Data operations (DataOps) gains traction/will be fully optimized:

Much like how DevOps has taken hold over the past decade, 2019 will see a similar push for DataOps. Data is no longer just an IT issue. As organizations become data-driven and awash in an overwhelming amount of data from multiple sources (AI, IoT, ML, etc.), they will need to get a better handle on data quality and focus on data management processes and practices. DataOps will enable organizations to better democratize their data and ensure that all business stakeholders work together to deliver quality, data-driven insights.


6. Business process will move from back office to center stage:

Business process management will make its way out of the back office and emerge as a key component to digital transformation. The ability for an organization to model, build and test automated business processes is a gamechanger. Enterprises can clearly define, map and analyze workflows and build models to drive process improvement as well as identify business practices susceptible to the greatest security, compliance or other risks and where controls are most needed to mitigate exposures.

7. Turning bad AI/ML data good:

Artificial Intelligence (AI) and Machine Learning (ML) are consumers of data. The risk of training AI and ML applications with bad data will initially drive the need for data governance to properly govern the training data sets. Once trained, the data they produce should be well defined, consistent and of high quality. The data needs to be continuously governed for assurance purposes.

8. Managing data from going over the edge:

Edge computing will continue to take hold. And while speed of data is driving its adoption, organizations will also need to view, manage and secure this data and bring it into an automated pipeline. The internet of things (IoT) is all about new data sources (device data) that often have opaque data structures. This data is often integrated and aggregated with other enterprise data sources and needs to be governed like any other data. The challenge is documenting all the different device management information bases (MIBs) and mapping them into the data lake or integration hub.
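As a rough illustration of that mapping challenge, the sketch below documents two hypothetical device schemas and normalizes their opaque payloads into one governed shape before loading (all field and device names are invented for the example):

```python
# Hypothetical per-device field maps, documented once per device type,
# used to translate opaque device payloads into one governed schema
# before the data lands in the lake or integration hub.

DEVICE_SCHEMAS = {
    "thermostat_v1": {"t": "temperature_c", "ts": "recorded_at"},
    "meter_v2":      {"kwh": "energy_kwh",  "time": "recorded_at"},
}

def normalize(device_type, payload):
    field_map = DEVICE_SCHEMAS[device_type]
    # Keep only documented fields; undocumented keys are dropped, not guessed at.
    return {field_map[k]: v for k, v in payload.items() if k in field_map}

row = normalize("thermostat_v1", {"t": 21.5, "ts": "2019-01-07T09:00Z", "x": 1})
print(row)  # → {'temperature_c': 21.5, 'recorded_at': '2019-01-07T09:00Z'}
```

Once every device type has a documented map like this, device data arrives in the lake in a consistent, governable form instead of a pile of opaque payloads.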

9. Organizations that don’t have good data harvesting are doomed to fail:

Research shows that data scientists and analysts spend 80 percent of their time preparing data for use and only 20 percent of their time actually analyzing it for business value. Without automated data harvesting and ingesting data from all enterprise sources (not just those that are convenient to access), data moving through the pipeline won’t be the highest quality and the “freshest” it can be. The result will be faulty intelligence driving potentially disastrous decisions for the business.

10. Data governance evolves to data intelligence:

Regulations like GDPR are driving most large enterprises to address their data challenges. But data governance is more than compliance. “Best-in-breed” enterprises are looking at how their data can be used as a competitive advantage. These organizations are evolving their data governance practices to data intelligence – connecting all of the pieces of their data management and data governance lifecycles to create actionable insights. Data intelligence can help improve the customer experiences and enable innovation of products and services.

The erwin Expert Blog will continue to follow data governance trends and provide best practice advice in the New Year so you can see how our data governance predictions pan out for yourself. To stay up to date, click here to subscribe.



Massive Marriott Data Breach: Data Governance for Data Security

Organizations have been served yet another reminder of the value of data governance for data security.

Hotel and hospitality powerhouse Marriott recently revealed a massive data breach that led to the theft of personal data for an astonishing 500 million customers of its Starwood hotels. This is the second largest data breach in recent history, surpassed only by Yahoo’s breach of 3 billion accounts in 2013 for which it has agreed to pay a $50 million settlement to more than 200 million customers.

Now that Marriott has taken a major hit to its corporate reputation, it has two moves:

  1. Respond: Marriott’s response to its data breach so far has not received glowing reviews. But beyond how it communicates with affected customers, the company must examine how the breach occurred in the first place. This means understanding the context of its data – what assets exist and where, the relationship between them and enterprise systems and processes, and how and by what parties the data is used – to determine the specific vulnerability.
  2. Fix it: Marriott must fix the problem, and quickly, to ensure it doesn’t happen again. This step involves a lot of analysis. A data governance solution would make it a lot less painful by providing visibility into the full data landscape – linkages, processes, people and so on. Then more context-sensitive data security architectures can be put in place to protect corporate and consumer data privacy.

The GDPR Factor

It’s been six months since the General Data Protection Regulation (GDPR) took effect. While fines for noncompliance have been minimal to date, we anticipate them to dramatically increase in the coming year. Marriott’s bad situation could potentially worsen in this regard, without holistic data governance in place to identify whose and what data was taken.

Data management and data governance, together, play a vital role in compliance, including GDPR. It’s easier to protect sensitive data when you know what it is, where it’s stored and how it needs to be governed.

FREE GUIDE: THE REGULATORY RATIONALE FOR INTEGRATING DATA MANAGEMENT & DATA GOVERNANCE 

Truly understanding an organization’s data, including the data’s value and quality, requires a harmonized approach embedded in business processes and enterprise architecture. Such an integrated enterprise data governance experience helps organizations understand what data they have, where it is, where it came from, its value, its quality and how it’s used and accessed by people and applications.


Data Governance for Data Security: Lessons Learned

Other companies should learn (like pronto) that they need to be prepared. At this point it’s not if, but when, a data breach will rear its ugly head. Preparation is your best bet for avoiding the entire fiasco – from the painstaking process of identifying what happened and why to notifying customers their data and trust in your organization have been compromised.

A well-formed security architecture that is driven by and aligned with data intelligence is your best defense. However, if there is nefarious intent, a hacker will find a way. So being prepared means you can minimize your risk exposure and the damage to your reputation.

Multiple components must be considered to effectively support a data governance, security and privacy trinity. They are:

  1. Data models
  2. Enterprise architecture
  3. Business process models

What’s key to remember is that these components act as links in the data governance chain by making it possible to understand what data serves the organization, its connection to the enterprise architecture, and all the business processes it touches.

THE EXPERT GUIDE TO DATA GOVERNANCE, SECURITY AND PRIVACY

Creating policies for data handling and accountability and driving culture change so people understand how to properly work with data are two important components of a data governance initiative, as is the technology for proactively managing data assets.

Without the ability to harvest metadata schemas and business terms; analyze data attributes and relationships; impose structure on definitions; and view all data in one place according to each user’s role within the enterprise, businesses will be hard pressed to stay in step with governance standards and best practices around security and privacy.

As a consequence, the private information held within organizations will continue to be at risk. Organizations suffering data breaches will be deprived of the benefits they had hoped to realize from the money spent on security technologies and the time invested in developing data privacy classifications. They also may face heavy fines and other financial, not to mention PR, penalties.

Less Pain, More Gain

Most organizations don’t have enough time or money for data management using manual processes. And outsourcing is also expensive, with inevitable delays because these vendors are dependent on manual processes too. Furthermore, manual processes require manual analysis and auditing, which is always more expensive and time consuming.

So the more processes an organization can automate, the less risk of human error, which is actually the primary cause of most data breaches. And automated processes are much easier to analyze and audit because everything is captured, versioned and available for review in a log somewhere. You can read more about automation in our 10 Reasons to Automate Data Mapping and Data Preparation.

And to learn more about how data governance underpins data security and privacy, click here.
