Data Intelligence in the Next Normal: Why, Who and When?

While many believe that the dawn of a new year represents a clean slate or a blank canvas, we don’t leave the past behind simply by flipping a page on the calendar.

As we enter 2021, we will also be building on the events of 2020 – both positive and negative – including the acceleration of digital transformation as the next normal begins to be defined.

As the pandemic took hold, IDC surveyed technology users and decision makers around the globe, reaching out every two weeks until September, when the survey frequency shifted to monthly. These surveys helped IDC develop a model that describes the five stages of enterprise recovery, aligning business focus with the economic situation:

  • When the COVID-19 crisis hit, organizations focused on business continuity.
  • As the economy slowed, they focused on cost optimization.
  • In the recession period, their focus turned to business resiliency.
  • As the economy returns to growth, organizations are making targeted investments.
  • When we enter the next normal, the future enterprise will emerge.

The IDC surveys explored how the crisis impacted budgets across different areas of IT, from hardware and networking to software and professional services. When the pandemic first hit, there was some negative impact on big data and analytics spending.

However, the economic situation changed as time went on. Digital transformation was accelerated, and budgets for spending on big data and analytics increased. This spending has continued during the return to growth, with more organizations moving toward becoming the future enterprise.

I have long stated that data is the lifeblood of digital transformation, and if the pandemic really has accelerated digital transformation, then the trends reported in IDC’s worldwide surveys make sense.

But data without intelligence is just data, and this is WHY data intelligence is required.

Data intelligence is a key input to data enablement in the digital enterprise, both by improving data literacy among data-native workers and by assuring the right data is being used at the right time, and for the right reason(s).

WHO needs to be involved in implementing and using data intelligence in the digital enterprise?

There is an ever-growing number of roles that work with data daily to complete tasks, make decisions, and affect business outcomes. These roles range from technical to business, from operations to strategy, and from the back office to the front office.

IDC has defined people in these roles as a generation: “Generation Data,” or “Gen-D” for short. Gen-D workers are data-natives — data is what they work in and work with to complete their tasks, tactical and/or strategic.

You may be part of Gen-D if “data” is in your job title, you are expected to make data-driven decisions, and you are able to use data to communicate with others. Gen-D workers also contribute to the overall data knowledge in the organization by participating in data intelligence and data literacy efforts and promoting good data culture.

WHEN do you need to gather intelligence about your data?

Now is the time.

The next or new normal has already begun and the more you know about your data, the better your digital business outcomes will be. It has been said that while it can take a long time to gain a customer’s trust, it only takes one bad experience to lose it.

Personally, I have had several instances of poor digital experiences such as items sent to the wrong address or orders (including mobile food orders) being fulfilled incorrectly.

Each represents a data problem: incorrect data, incorrect data interpretation, or a complete disconnect between the virtual and physical world. In these cases, better data intelligence could have helped in assuring the correct address, enabling correct order fulfillment, and assisting with interpretation through better data definition and description.

Even if you don’t have a formal data intelligence program in place, there is a good possibility your organization has intelligence about its data, because it is difficult for data to exist without some form of associated metadata.

Technical metadata is what makes up database schemas and table definitions. Logical and physical data models may exist in data modeling or general-purpose diagramming software.

There is also a high likelihood that data models, data dictionaries, and data catalogs exist in the ubiquitous spreadsheet, or in centralized document repositories. However, just having metadata isn’t the same as managing and leveraging it as intelligence. Data in modern business environments is very dynamic, constantly moving, drifting, and shifting – requiring automated collection, management, and analytics to extract and leverage intelligence about it.

In many English-speaking countries, “Auld Lang Syne,” a Scots-language poem written by Robbie Burns and set to a common folk song tune, is often sung as the clock strikes midnight on the first day of the new year.

The phrase “auld lang syne” has several interpretations, but it can loosely be translated as “for the sake of old times.” As we move into 2021, we need to forget the negatives of 2020, and build on the positives to help define the next normal.

Data Equals Truth, and Truth Matters

In these times of great uncertainty and massive disruption, is your enterprise data helping you drive better business outcomes?

The COVID-19 pandemic has forced organizations to tactically adjust their business models, work practices and revenue projections for the short term. But the real challenges will be accelerating recovery and crisis-proofing your business to mitigate the impact of “the next big thing.”

Assure an Unshakable Data Supply Chain to Drive Better Business Outcomes in Turbulent Times

Consider these high-priority scenarios in which the demand for a sound data infrastructure to drive trusted insights is clear and compelling:

  • Organizations contributing to managing the pandemic (healthcare, government, pharma, etc.)
  • Organizations dealing with major business disruptions in the near and mid-term (hospitality, retail, transportation)
  • Organizations looking to the post-pandemic future for risk-averse business models, new opportunities, and/or new approaches to changing markets (virtually every organization that needs to survive and then thrive)

A data-driven approach has never been more valuable to addressing the complex yet foundational questions enterprises must answer.

However, as we have seen with data surrounding the COVID situation itself, incorrect, incomplete or misunderstood data turns these “what-if” exercises into “WTF” solutions. Organizations that have their data management, data governance and data intelligence houses in order are much better positioned to respond to these challenges and thrive in whatever their new normal turns out to be.

Optimizing data management across the enterprise delivers both tactical and strategic benefits that can mitigate short-term impacts and enable the future-proofing required to ensure stability and success. Strong data management practices can have:

  • Financial impact (revenue, cash flow, cost structures, etc.)
  • Business capability impact (remote working, lost productivity, restricted access to business-critical infrastructure, supply chain)
  • Market impact (changing customers, market shifts, emerging opportunities)

Turning Data Into a Source of Truth & Regeneration

How can every CEO address the enterprise data dilemma by transforming data into a source of truth and regeneration for post-COVID operations?

  • Accelerate time to value across the data lifecycle (cut time and costs)
    • Decrease data discovery and preparation times
    • Lower the overhead on data related processes and maintenance
    • Reduce latency in the data supply chain
  • Ensure continuity in data capabilities (reduce losses)
    • Automate data management, data intelligence and data governance practices
    • Create always-available and always-transparent data pipelines
    • Reduce necessity for in-person collaboration
  • Ensure company-wide data compliance (reduce risks)
    • Deliver detailed and reliable impact analysis on demand
    • Establish agile and transparent business data governance (policy, classification, rules, usage)
    • Build visibility and traceability into data assets and supporting processes
  • Demand trusted insights based on data truths (drive innovation and assure veracity)
    • Ensure accurate business context and classification of data
    • Deliver detailed and accurate data lineage on demand
    • Provide visibility into data quality and proven “golden sources”
  • Foster data-driven collaboration (assure agility, visibility and integration of initiatives)
    • Enable navigable data intelligence visualizations and governed feedback loops
    • Govern self-service discovery with rigorous workflows and properly curated data assets
    • Provide visibility across the entire data life cycle (from creation through consumption)

Listen to erwin’s CEO, Adam Famularo, discuss how organizations can use data as a source of truth to navigate current circumstances and determine what’s next on Nasdaq’s Trade Talks.

Data equals truth. #TruthMatters

What is Data Lineage? Top 5 Benefits of Data Lineage

What is Data Lineage and Why is it Important?

Data lineage is the journey data takes from its creation through its transformations over time. It describes a certain dataset’s origin, movement, characteristics and quality.

Tracing the source of data is an arduous task.

Many large organizations, in their desire to modernize with technology, have acquired several different systems with various data entry points and transformation rules for data as it moves into and across the organization.

These tools range from enterprise service bus (ESB) products, data integration tools and extract, transform and load (ETL) tools to procedural code, application programming interfaces (APIs), file transfer protocol (FTP) processes, and even business intelligence (BI) reports that further aggregate and transform data.

With all these diverse data sources, and the integrations among them, it is difficult to understand the complicated data web they form, much less produce a simple visual representation of the flow. This is why data lineage must be tracked and why its role is so vital to business operations: it provides the ability to understand where data originates, how it is transformed, and how it moves into, across and outside a given organization.

Data Lineage Use Case: From Tracing COVID-19’s Origins to Data-Driven Business

A lot of theories have emerged about the origin of the coronavirus. A recent University of California San Francisco (UCSF) study conducted a genetic analysis of COVID-19 to determine how the virus was introduced specifically to California’s Bay Area.

It detected at least eight different viral lineages in 29 patients in February and early March, suggesting no regional patient zero but rather multiple independent introductions of the pathogen. The professor who directed the study said, “it’s like sparks entering California from various sources, causing multiple wildfires.”

Much like understanding viral lineage is key to stopping this and other potential pandemics, understanding the origin of data is key to a successful data-driven business.

Top Five Data Lineage Benefits

From my perspective in working with customers of various sizes across multiple industries, I’d like to highlight five data lineage benefits:

1. Business Impact

Data is crucial to every organization’s survival. For that reason, businesses must think about the flow of data across multiple systems that fuel organizational decision-making.

For example, the marketing department uses demographics and customer behavior to forecast sales. The CEO also makes decisions based on performance and growth statistics. An understanding of the data’s origins and history helps answer questions about the origin of data in key performance indicator (KPI) reports, including:

  • How are the report tables and columns defined in the metadata?
  • Who are the data owners?
  • What are the transformation rules?

Without data lineage, these questions can’t be answered reliably, so it makes sense for a business to have a clear understanding of where data comes from, who uses it, and how it is transformed. Also, when there is a change to the environment, it is valuable to assess the impact on the enterprise application landscape.

In the event of a change in data expectations, data lineage provides a way to determine which downstream applications and processes are affected by the change and helps in planning for application updates.
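
To make the idea concrete, here is a minimal sketch of how downstream impact analysis can be expressed as a traversal of a lineage graph. It uses Python with the networkx library; the asset names and the graph itself are hypothetical illustrations, not the model of any particular tool.

```python
# Minimal sketch: lineage as a directed graph, edges pointing the way data flows.
# Asset names are illustrative.
import networkx as nx

lineage = nx.DiGraph()
lineage.add_edges_from([
    ("crm.customers", "staging.customers"),
    ("staging.customers", "dw.dim_customer"),
    ("dw.dim_customer", "bi.sales_by_region"),    # KPI report
    ("dw.dim_customer", "bi.churn_dashboard"),
])

# If the source table changes, which downstream assets and reports are affected?
impacted = nx.descendants(lineage, "crm.customers")
print(sorted(impacted))
# ['bi.churn_dashboard', 'bi.sales_by_region', 'dw.dim_customer', 'staging.customers']
```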

2. Compliance & Auditability

Business terms and data policies should be implemented through standardized and documented business rules. Compliance with these business rules can be tracked through data lineage, incorporating auditability and validation controls across data transformations and pipelines to generate alerts when there are non-compliant data instances.
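
As a rough illustration of what such a validation control might look like, the sketch below checks incoming records against two documented business rules and raises an alert for non-compliant instances. The rules, column names and data are hypothetical.

```python
# Minimal sketch: flag records that violate documented business rules.
import pandas as pd

trades = pd.DataFrame({
    "trade_id":     [101, 102, 103],
    "notional":     [250000, -5000, 120000],
    "counterparty": ["ACME", None, "GLOBEX"],
})

# Documented rules: notional must be positive; counterparty must be populated.
violations = trades[(trades["notional"] <= 0) | (trades["counterparty"].isna())]

if not violations.empty:
    # In a real pipeline this would feed the governance workflow, not just print.
    print(f"ALERT: {len(violations)} non-compliant record(s)")
    print(violations[["trade_id"]])
```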

Regulatory compliance places greater transparency demands on firms when it comes to tracing and auditing data. For example, capital markets trading firms must understand their data’s origins and history to support risk management, data governance and reporting for various regulations such as BCBS 239 and MiFID II.

Also, different organizational stakeholders (customers, employees and auditors) need to be able to understand and trust reported data. Data lineage offers proof that the data provided is reflected accurately.

3. Data Governance

An automated data lineage solution stitches together metadata for understanding and validating data usage, as well as mitigating the associated risks.

It can auto-document end-to-end upstream and downstream data lineage, revealing any changes that have been made, by whom and when.

This data ownership, accountability and traceability is foundational to a sound data governance program.

See: The Benefits of Data Governance

4. Collaboration

Analytics and reporting are data-dependent, making collaboration among different business groups and/or departments crucial.

The visualization of data lineage can help business users spot the inherent connections of data flows and thus provide greater transparency and auditability.

Seeing data pipelines and information flows further supports compliance efforts.

5. Data Quality

Data quality is affected by data’s movement, transformation, interpretation and selection through people, process and technology.

Root-cause analysis is the first step in repairing data quality. Once a data steward determines where a data flaw was introduced, the reason for the error can be determined.

With data lineage and mapping, the data steward can trace the information flow backward to examine the standardizations and transformations applied to confirm whether they were performed correctly.
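
A minimal sketch of that backward trace, assuming each lineage edge records the transformation that was applied (the graph, column names and transformation labels are illustrative):

```python
# Minimal sketch: walk upstream from a suspect report column and list each
# transformation that was applied along the way.
import networkx as nx

lineage = nx.DiGraph()
lineage.add_edge("crm.customers.postcode", "staging.customers.postcode",
                 transform="TRIM + UPPER")
lineage.add_edge("staging.customers.postcode", "dw.dim_customer.postcode",
                 transform="standardize to 5-digit ZIP")
lineage.add_edge("dw.dim_customer.postcode", "bi.sales_by_region.zip",
                 transform="direct copy")

node = "bi.sales_by_region.zip"
while list(lineage.predecessors(node)):
    parent = next(iter(lineage.predecessors(node)))
    print(f"{parent} -> {node}: {lineage.edges[parent, node]['transform']}")
    node = parent
```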

See Data Lineage in Action

Data lineage tools document the flow of data into and out of an organization’s systems. They capture end-to-end lineage and ensure proper impact analysis can be performed in the event of problems or changes to data assets as they move across pipelines.

The erwin Data Intelligence Suite (erwin DI) automatically generates end-to-end data lineage, down to the column level and between repositories. You can view data flows from source systems to the reporting layers, including intermediate transformation and business logic.

Join us for the next live demo of erwin Data Intelligence (DI) to see metadata-driven, automated data lineage in action.

What is a Data Catalog?

The easiest way to understand a data catalog is to look at how libraries catalog books and manuals in a hierarchical structure, making it easy for anyone to find exactly what they need.

Similarly, a data catalog enables businesses to create a seamless way for employees to access and consume data and business assets in an organized manner.

By combining physical system catalogs, critical data elements, and key performance measures with clearly defined product and sales goals, you can manage the effectiveness of your business and ensure you understand which systems are critical for business continuity and for measuring corporate performance.

A data catalog is essential to business users because it synthesizes all the details about an organization’s data assets across multiple data sources. It organizes them into a simple, easy-to-digest format and then publishes them to data communities for knowledge-sharing and collaboration.

Another foundational purpose of a data catalog is to streamline, organize and process the thousands, if not millions, of an organization’s data assets to help consumers/users search for specific datasets and understand metadata, ownership, data lineage and usage.
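
As a simplified illustration of what a catalog entry might capture and how a search over it could work, consider the following sketch; the fields, entries and search logic are hypothetical and far simpler than a real catalog.

```python
# Minimal sketch: a catalog entry with business context, plus a keyword search.
from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    name: str
    description: str
    owner: str
    source_system: str
    tags: list = field(default_factory=list)

catalog = [
    CatalogEntry("dim_customer", "Conformed customer dimension",
                 "data-office@example.com", "enterprise DW", ["customer", "PII"]),
    CatalogEntry("daily_orders", "Order transactions, loaded nightly",
                 "sales-ops@example.com", "order management", ["sales", "orders"]),
]

def search(term: str) -> list:
    term = term.lower()
    return [e for e in catalog
            if term in e.name.lower()
            or term in e.description.lower()
            or term in [t.lower() for t in e.tags]]

print([e.name for e in search("customer")])   # ['dim_customer']
```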

Look at Amazon and how it handles millions of different products, and yet we, as consumers, can find almost anything about everything very quickly.

Beyond its advanced search capabilities, Amazon also provides detailed information about each product, the seller’s information, shipping times, reviews and a list of companion products. The company measures sales down to the zip-code level across product categories.

Data Catalog Use Case Example: Crisis-Proof Your Business

One of the biggest lessons we’re learning from the global COVID-19 pandemic is the importance of data, specifically using a data catalog to comply, collaborate and innovate to crisis-proof our businesses.

As COVID-19 continues to spread, organizations are evaluating and adjusting their operations in terms of both risk management and business continuity. Data is critical to these decisions, such as how to ramp up and support remote employees, re-engineer processes, change entire business models, and adjust supply chains.

Think about the pandemic itself and the numerous global entities involved in identifying it, tracking its trajectory, and providing guidance to governments, healthcare systems and the general public. One example is the European Union (EU) Open Data Portal, which is used to document, catalog and govern EU data related to the pandemic. This information has helped:

  • Provide daily updates
  • Give guidance to governments, health professionals and the public
  • Support the development and approval of treatments and vaccines
  • Help with crisis coordination, including repatriation and humanitarian aid
  • Put border controls in place
  • Assist with supply chain control and consular coordination

So one of the biggest lessons we’re learning from COVID-19 is the need for data collection, management and governance. What’s the best way to organize data and ensure it is supported by business policies and well-defined, governed systems, data elements and performance measures?

According to Gartner, “organizations that offer a curated catalog of internal and external data to diverse users will realize twice the business value from their data and analytics investments than those that do not.”

Data Catalog Benefits

5 Advantages of Using a Data Catalog for Crisis Preparedness & Business Continuity

The World Bank has been able to provide an array of real-time data, statistical indicators, and other types of data relevant to the coronavirus pandemic through its authoritative data catalogs. The World Bank data catalogs contain datasets, policies, critical data elements and measures useful for analysis and modeling the virus’ trajectory to help organizations measure the impact.

What can your organization learn from this example when it comes to crisis preparedness and business continuity? By developing and maintaining a data catalog as part of a larger data governance program supported by stakeholders across the organization, you can:

  1. Catalog and Share Information Assets

Catalog critical systems and data elements, plus enable the calculation and evaluation of key performance measures. It’s also important to understand data lineage and be able to analyze the impact on critical systems and essential business processes if a change occurs.

  2. Clearly Document Data Policies and Rules

Managing a remote workforce creates new challenges and risks. Do employees have remote access to essential systems? Do they know what the company’s work-from-home policies are? Do employees understand how to handle sensitive data? Are they equipped to maintain data security and privacy? A data catalog with self-service access serves up the correct policies and procedures.

  3. Reduce Operational Costs While Increasing Time to Value

Datasets need to be properly scanned, documented, tagged and annotated with their definitions, ownership, lineage and usage. Automating the cataloging of data assets saves initial development time and streamlines its ongoing maintenance and governance. Automating the curation of data assets also accelerates the time to value for analytics/insights reporting and significantly reduces operational costs.
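
As a rough sketch of what that first automated pass might look like, the example below scans a database’s own metadata to seed draft catalog entries that stewards can then enrich. It assumes a SQLite database file named orders.db purely for illustration; any engine that exposes an information schema could be scanned the same way.

```python
# Minimal sketch: harvest table and column metadata to seed a draft catalog.
import sqlite3

conn = sqlite3.connect("orders.db")   # illustrative database file
tables = conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'"
).fetchall()

draft_catalog = []
for (table_name,) in tables:
    columns = conn.execute(f"PRAGMA table_info({table_name})").fetchall()
    draft_catalog.append({
        "asset":       table_name,
        "columns":     [{"name": c[1], "type": c[2]} for c in columns],
        "owner":       None,   # to be assigned by a data steward
        "description": None,   # to be enriched with business context
    })

print(f"Scanned {len(draft_catalog)} tables into the draft catalog")
```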

  4. Make Data Accessible & Usable

Open your organization’s data door, making it easier to access, search and understand information assets. A data catalog is the core of data analysis for decision-making, so automating its curation and access with the associated business context will enable stakeholders to spend more time analyzing it for meaningful insights they can put into action.

  5. Ensure Regulatory Compliance

Regulations like the California Consumer Privacy Act (CCPA) and the European Union’s General Data Protection Regulation (GDPR) require organizations to know where all their customer, prospect and employee data resides to ensure its security and privacy.

A fine for noncompliance is the last thing you need on top of everything else your organization is dealing with, so using a data catalog centralizes data management and the associated usage policies and guardrails.

See a Data Catalog in Action

The erwin Data Intelligence Suite (erwin DI) provides data catalog and data literacy capabilities with built-in automation so you can accomplish all of the above and more.

Join us for the next live demo of erwin DI.

Data Governance for Smart Data Distancing

Hello from my home office! I hope you and your family are staying safe, practicing social distancing, and of course, washing your hands.

These are indeed strange days. During this coronavirus emergency, we are all being deluged by data from politicians, government agencies, news outlets, social media and websites, including valid facts but also opinions and rumors.

Happily for us data geeks, the general public is being told how important our efforts and those of data scientists are to analyzing, mapping and ultimately shutting down this pandemic.

Yay, data geeks!

Unfortunately though, not all of the incoming information is of equal value, ethically sourced, rigorously prepared or even good.

As we work to protect the health and safety of those around us, we need to understand the nuances of meaning in the information we receive, as well as the motivations of its sources, in order to make good decisions.

On a very personal level, separating the good information from the bad can become a matter of life and death. On a business level, decisions based on bad external data have the potential to cause business failures.

In business, data is the food that feeds the body or enterprise. Better data makes the body stronger and provides a foundation for the use of analytics and data science tools to reduce errors in decision-making. Ultimately, it gives our businesses the strength to deliver better products and services to our customers.

How then, as a business, can we ensure that the data we consume is of good quality?

Distancing from Third-Party Data

Just as we are practicing social distancing in our personal lives, so too we must practice data distancing in our professional lives.

In regard to third-party data, we should ask ourselves: How was the data created? What formulas were used? Does the definition (description, classification, allowable range of values, etc.) of incoming, individual data elements match our internal definitions of those data elements?

If we reflect on the coronavirus example, we can ask: How do individual countries report their data? Do individual countries use the same testing protocols? Are infections universally defined the same way (based on widely administered tests or only hospital admissions)? Are asymptomatic infections reported? Are all countries using the same methods and formulas to collect and calculate infections, recoveries and deaths?

In our businesses, it is vital that we work to develop a deeper understanding of the sources, methods and quality of incoming third-party data. This deeper understanding will help us make better decisions about the risks and rewards of using that external data.

Data Governance Methods for Data Distancing

We’ve received lots of instructions lately about how to wash our hands to protect ourselves from coronavirus. Perhaps we thought we already knew how to wash our hands, but nonetheless, a refresher course has been worthwhile.

Similarly, perhaps we think we know how to protect our business data, but maybe a refresher would be useful here as well?

Here are a few steps you can take to protect your business:

  • Establish comprehensive third-party data sharing guidelines (for both inbound and outbound data). These guidelines should include communicating with third parties about how they make changes to collection and calculation methods.
  • Rationalize external data dictionaries against internal data dictionaries to understand where definitions differ and how those differences will be reconciled.
  • Ingest third-party data into a quarantined area where it can be profiled and measured for quality, completeness and correctness, and cleansed where necessary (see the profiling sketch after this list).
  • Periodically review all data ingestion or data-sharing policies, processes and procedures to ensure they remain aligned to business needs and goals.
  • Establish data-sharing training programs so all data stakeholders understand associated security considerations, contextual meaning, and when and when not to share and/or ingest third-party data.
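
Here is a minimal sketch of the quarantine-and-profile step mentioned above, assuming the third-party feed arrives as a CSV file; the file path, columns and thresholds are illustrative assumptions rather than a prescribed standard.

```python
# Minimal sketch: profile quarantined third-party data before releasing it.
import pandas as pd

incoming = pd.read_csv("quarantine/third_party_cases.csv")   # illustrative path

profile = {
    "rows": len(incoming),
    "duplicate_rows": int(incoming.duplicated().sum()),
    "null_share_by_column": incoming.isna().mean().round(3).to_dict(),
}
print(profile)

# Correctness and completeness checks against our internal definitions.
negative_counts = (incoming["case_count"] < 0).sum()
missing_dates = profile["null_share_by_column"].get("report_date", 0)

if negative_counts > 0 or missing_dates > 0.05:
    print("Hold in quarantine: data fails correctness/completeness checks")
else:
    print("Release to downstream ingestion")
```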

erwin Data Intelligence for Data Governance and Distancing

With solutions like those in the erwin Data Intelligence Suite (erwin DI), organizations can auto-document their metadata; classify their data with respect to privacy, contractual and regulatory requirements; attach data-sharing and management policies; and implement an appropriate level of data security.

If you believe the management of your third-party data interfaces could benefit from a review or tune-up, feel free to reach out to me and my colleagues here at erwin.

We’d be happy to provide a demo of how to use erwin DI for data distancing.

Data Intelligence and Its Role in Combating COVID-19

Data intelligence has a critical role to play in the supercomputing battle against COVID-19.

Last week, the White House announced the launch of the COVID-19 High Performance Computing Consortium, a public-private partnership to provide COVID-19 researchers worldwide with access to the world’s most powerful high performance computing resources that can significantly advance the pace of scientific discovery in the fight to stop the virus.

Rensselaer Polytechnic Institute (RPI) is one of the organizations that has joined the consortium to provide computing resources to help fight the pandemic.

While leveraging supercomputing power is a tremendous asset in our fight against this global pandemic, delivering life-saving insights requires that you really understand what data you have and where it came from. Answering those questions is at the heart of data intelligence.

Managing and Governing Data From Lots of Disparate Sources

Collecting and managing data from many disparate sources for the COVID-19 High Performance Computing Consortium is an effort on a scale that, quite frankly, boggles the mind.

To feed the supercomputers with epidemiological data, information will flow in from many different and heavily regulated data sources, including population health, demographics, outbreak hotspots and economic impacts.

This data will be collected from organizations such as the World Health Organization (WHO), the Centers for Disease Control (CDC), and state and local governments across the globe.

Privately, it will come from hospitals, labs, pharmaceutical companies, doctors and private health insurers. It also will come from HL7 hospital data, claims administration systems, care management systems, the Medicaid Management Information System, etc.

These numerous data types and data sources most definitely weren’t designed to work together. As a result, the data may be compromised, rendering faulty analyses and insights.

Marrying the epidemiological data to the population data will require a tremendous amount of data intelligence about the:

  • Source of the data;
  • Currency of the data;
  • Quality of the data; and
  • How it can be used from an interoperability standpoint.

To do this, the consortium will need the ability to automatically scan and catalog the data sources and apply strict data governance and quality practices.

Unraveling Data Complexities with Metadata Management

Collecting and understanding this vast amount of epidemiological data in the fight against COVID-19 will require data governance oversight and data intelligence to unravel the complexities of the underlying data sources. To be successful and generate quality results, this consortium will need to adhere to strict disciplines around managing the data that comes into the study.

Metadata management will be critical to the process for cataloging data via automated scans. Essentially, metadata management is the administration of data that describes other data, with an emphasis on associations and lineage. It involves establishing policies and processes to ensure information can be integrated, accessed, shared, linked, analyzed and maintained.

While supercomputing can be used to process incredible amounts of data, a comprehensive data governance strategy plus technology will enable the consortium to determine master data sets, discover the impact of potential glossary changes, audit and score adherence to rules and data quality, discover risks, and appropriately apply security to data flows, as well as publish data to the right people.

Metadata management delivers the following capabilities, which are essential in building an automated, real-time, high-quality data pipeline:

  • Reference data management for capturing and harmonizing shared reference data domains (see the sketch following this list)
  • Data profiling for data assessment, metadata discovery and data validation
  • Data quality management for data validation and assurance
  • Data mapping management to capture the data flows, reconstruct data pipelines, and visualize data lineage
  • Data lineage to support impact analysis
  • Data pipeline automation to help develop and implement new data pipelines
  • Data cataloging to capture object metadata for identified data assets
  • Data discovery facilitated via a shared environment allowing data consumers to understand the use of data from a wide array of sources
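
As one small illustration of the reference data management item above, the sketch below harmonizes country identifiers reported in different local conventions onto a shared reference domain; the mapping table and records are hypothetical.

```python
# Minimal sketch: map source-specific country labels onto a shared reference domain.
shared_reference = {   # harmonized codes (ISO 3166-1 alpha-3)
    "US": "USA", "USA": "USA", "UNITED STATES": "USA",
    "UK": "GBR", "GB": "GBR", "UNITED KINGDOM": "GBR",
}

def harmonize(record: dict) -> dict:
    raw = record["country"].strip().upper()
    record["country_code"] = shared_reference.get(raw)   # None flags an unmapped value
    return record

rows = [
    {"source": "WHO", "country": "United Kingdom", "cases": 120},
    {"source": "CDC", "country": "US", "cases": 340},
]
print([harmonize(r) for r in rows])
```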

Supercomputing will be very powerful in helping fight the COVID-19 virus. However, data scientists need access to quality data harvested from many disparate data sources that weren’t designed to work together to deliver critical insights and actionable intelligence.

Automated metadata harvesting, data cataloging, data mapping and data lineage combined with integrated business glossary management and self-service data discovery can give this important consortium data asset visibility and context, so researchers have the relevant information they need to help stop a virus that is affecting all of us around the globe.

To learn more about metadata management capabilities, download this white paper, Metadata Management: The Hero in Unleashing Enterprise Data’s Value.

Using Enterprise Architecture, Data Modeling & Data Governance for Rapid Crisis Response

Because of the coronavirus pandemic, organizations across the globe are changing how they operate.

Teams need to urgently respond to everything from massive changes in workforce access and management to what-if planning for a variety of grim scenarios, in addition to building and documenting new applications and providing fast, accurate access to data for smart decision-making.

We want to help, so we built the erwin Rapid Response Resource Center. It provides free access to videos, webinars, courseware, simulations, frameworks and expert strategic advice leveraging the erwin EDGE platform for rapid response transformation during the COVID-19 crisis.

How can the erwin EDGE platform help? Here are a few examples specific to enterprise architecture and business process modeling, data modeling and data governance.

Enterprise Architecture & Business Process Modeling

In the face of rapid change, your organization needs to move fast to support the business in a way that provides comprehensive documentation of systems, applications, people and processes. Even though we face a new reality that requires flexibility, the business still has to run with order, documentation and traceability for compliance purposes.

erwin Evolve is purpose-built for these situations and can be used for strategic planning, what-if scenarios, as-is/to-be modeling and its associated impacts and more.

Agility and remote working require a supporting infrastructure and full documentation. No matter what your role in the company, you need access to the processes you support and the details you need to get your job done.

erwin Evolve is an enterprise architecture tool that provides a central repository of key processes, the systems that support them, and the business continuity plans for every working environment. This gives all your employees the access and knowledge to operate in a clear and defined way.

Data Modeling

Companies everywhere are building innovative business applications to support their customers, partners and employees in this time of need. But even with the “need for speed” to market, new applications must be modeled and documented for compliance and transparency. Building in the cloud? No problem.

erwin Data Modeler can help you find, visualize, design, deploy and standardize high-quality enterprise data assets. And it’s intuitive, so you can get new modelers up and running quickly as you scale to address this new business reality.

Data Governance

In times of crisis, knowledge is power and nothing fuels decision-making better than your enterprise data. Your data scientists need access to quality data harvested from every data source in your organization to deliver insights and actionable intelligence.

erwin Data Catalog and erwin Data Literacy work in tandem as the erwin Data Intelligence Suite to support data governance and any other data-driven initiative.

Automated metadata harvesting, data cataloging, data mapping and data lineage combined with integrated business glossary management and self-service data discovery gives data scientists and all stakeholders data asset visibility and context so they have the relevant information they need to do their jobs effectively.

Rapid Crisis Response

We stand ready to help with the tools and intelligence you need to navigate these unusual circumstances.

Click here to request access to the erwin Rapid Response Resource Center (ERRRC).

Questions for our experts? You can email them here.

We’ll continue to add content to the ERRRC over the coming days and weeks. How can erwin best help you through these challenging times? Email me directly and let me know.
