
Data Governance 2.0: Biggest Data Shakeups to Watch in 2018

This year we’ll see some huge changes in how we collect, store and use data, with Data Governance 2.0 at the epicenter. For many organizations, these changes will be reactive, as they have to adapt to new regulations. Others will use regulatory change as a catalyst to be proactive with their data. Ideally, you’ll want to be in the latter category.

Data-driven businesses and their relevant industries are experiencing unprecedented rates of change.

Not only has the amount of data exploded in recent years, but the insights it yields have multiplied too. In essence, we’re finding smaller units of data more useful while collecting more data than ever before.

At present, data opportunities are seemingly boundless, and we’ve barely begun to scratch the surface. So here are some of the biggest data shakeups to expect in 2018.


GDPR

The General Data Protection Regulation (GDPR) has organizations scrambling. Penalties for non-compliance take effect on May 25, with hefty fines – up to €20 million or 4 percent of the company’s global annual turnover, whichever is greater.

Although it’s a European mandate, the fact is that all organizations trading with Europe, not just those based within the continent, must comply. Because of this, we’re seeing a global effort to introduce new policies, procedures and systems to prepare on a scale we haven’t seen since Y2K.

It’s easy to view mandated change of this nature as a burden. But the change is well overdue – both from a regulatory and commercial point of view.

In terms of regulation, a globalized approach had to be introduced. Data doesn’t adhere to borders in the same way as physical materials, and conflicting standards within different states, countries and continents have made sufficient regulation difficult.

In terms of business, many organizations have stifled their own digital transformation efforts by neglecting to properly govern the data that would enable them to become data-driven. GDPR requires a collaborative approach to data governance (DG) that, done right, will add value as well as achieve compliance.

Rise of Data Governance 2.0

Data Governance 1.0 has failed to gain a foothold because of its siloed, uncollaborative nature. It lacks focus on business outcomes, so business leaders have struggled to see the value in it. As a result, responsibility for cataloging data elements to support search and discovery has fallen to IT, which rarely understands the data’s context because it is removed from the operational side of the business. This means data is often incomplete and of poor quality, making effective data-driven business impossible.

Company-wide responsibility for data governance, encouraged by the new regulatory standards, stands to fundamentally change the way businesses view it. Data Governance 2.0 and its collaborative approach will become the new normal, meaning those with the most to gain from data and its insights will be directly involved in its governance.

This means more buy-in from C-level executives, line managers, etc. It means greater accountability, as well as improved discoverability and traceability. Most of all, it means better data quality that leads to faster, better decisions made with more confidence.

Escalated Digital Transformation

Digital transformation’s prominence won’t diminish this year. In fact, thanks to Data Governance 2.0, digital transformation is poised to accelerate – not slow down.

Organizations that commit to data governance beyond just compliance will reap the rewards. With a stronger data governance foundation, organizations undergoing digital transformation will enjoy a number of significant benefits, including better decision making, greater operational efficiency, improved data understanding and lineage, greater data quality, and increased revenue.

Data-driven exemplars, such as Amazon, Airbnb and Uber, have enjoyed these benefits, using them to disrupt and then dominate their respective industries. But you don’t have to be Amazon-sized to achieve them. De-siloing DG and treating it as a strategic initiative is the first step to data-driven success.

Data as Valuable Asset

Data became more valuable than oil in 2017. Yet despite this assessment, many businesses neglect to treat their data as a prized asset. For context, the Industrial Revolution was powered by machinery that had to be well maintained to function properly, as downtime meant losses. Such machinery adds value to a business, so it is inherently valuable.

Fast forward to 2018, with data at center stage. Because data is the value driver, the data itself is valuable. Just because it doesn’t have a physical presence doesn’t mean it’s any less important than physical assets. Businesses will need to change how they perceive their data, and this is the year that shift in thinking is likely to happen.

DG-Enabled AI and IoT

Artificial Intelligence (AI) and the Internet of Things (IoT) aren’t new concepts. However, they have yet to be fully realized, with businesses still competing to carve out a slice of these markets.

As the two continue to expand, they will hypercharge the already accelerating volume of data – specifically unstructured data – to almost unfathomable levels. The three Vs of data tend to escalate in unison: as the volume increases, so does the velocity – the speed at which data must be processed. The variety of data – mostly unstructured in these cases – also increases, so businesses will need effective data governance in place to manage it.

Alongside strong data governance practices, more and more businesses will turn to NoSQL databases to manage diverse data types.

For more best practices in business and IT alignment, and successfully implementing Data Governance 2.0, click here.



SQL, NoSQL or NewSQL: Evaluating Your Database Options

A common question in the modern data management space involves database technology: SQL, NoSQL or NewSQL?

But there isn’t a one-size-fits-all answer. What’s “right” must be evaluated on a case-by-case basis and is dependent on data maturity.

For example, a large bookstore chain with a big-data initiative would be stifled by a SQL database. The advantages that could be gained from analyzing social media data (for popular books, consumer buying habits) couldn’t be realized effectively through sequential analysis. There’s too much data involved in this approach, with too many threads to follow.

However, an independent bookstore isn’t necessarily bound to a big-data approach because it may not have a mature data strategy. It might not have ventured beyond digitizing customer records, and a SQL database is sufficient for that work.

Having said that, the “SQL, NoSQL or NewSQL” question is gaining prominence because businesses are becoming increasingly data-driven.

A Progress study, for example, found that 85 percent of enterprise decision-makers feel they have a time frame of just two years to make significant inroads into digital transformation before they fall behind their competitors and suffer financially.

Considering such findings, what better time than now to evaluate your database technology? The “SQL, NoSQL or NewSQL” question is especially important if you intend to become more data-driven.

SQL, NoSQL or NewSQL: Advantages and Disadvantages

SQL

SQL databases are tried and tested, proven to work on disks using interfaces with which businesses are already familiar.

As the longest-standing type of database, plenty of SQL options are available. This competitive market means you’ll likely find what you’re looking for at affordable prices.

Additionally, businesses in the earlier stages of data maturity are more likely to have a SQL database at work already, meaning no new investments need to be made.

However, in the modern digital business context, SQL databases weren’t made to support the three Vs of data. The volume is too high, the variety of sources is too vast, and the velocity (the speed at which the data must be processed) is too great for sequential analysis.

Furthermore, the foundational, legacy IT world they were purpose-built to serve has evolved. Now, corporate IT departments must be agile, and their databases must be agile and scalable to match.

NoSQL

Despite its name, “NoSQL” doesn’t mean the complete absence of the SQL database approach. Rather, it works as more of a hybrid: the term is shorthand for “not only SQL.”

So, in addition to the continuity that staying close to SQL offers, NoSQL databases retain many of the benefits of their SQL counterparts.

The key difference is that NoSQL databases were developed with modern IT in mind. They are scalable, agile and purpose-built to deal with disparate, high-volume data.

Hence, data is typically more readily available, and changing stored data or inserting new data is easier.

For example, MongoDB, one of the key players in the NoSQL world, uses JavaScript Object Notation (JSON). As the company explains, “A JSON database returns query results that can be easily parsed, with little or no transformation.” The open, human- and machine-readable standard facilitates data interchange and can store records, “just as tables and rows store records in a relational database.”

Generally, NoSQL databases are better equipped to deal with other non-relational data too. As well as JSON, NoSQL supports log messages, XML and unstructured documents. This support avoids the lethargic “schema-on-write” approach, opting for “schema-on-read” instead.
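To make this concrete, here’s a minimal sketch using Python’s pymongo driver. It assumes a MongoDB instance running locally, and the database, collection and field names are purely illustrative:

    # A minimal pymongo sketch; assumes MongoDB is running locally and
    # pymongo is installed. Database, collection and field names are
    # illustrative.
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017/")
    books = client["bookstore"]["books"]

    # Schema-on-read: documents in the same collection can carry
    # different fields, so new attributes arrive without an upfront
    # schema change.
    books.insert_one({"title": "Moby-Dick", "author": "Herman Melville"})
    books.insert_one({"title": "Dracula", "author": "Bram Stoker",
                      "social_mentions": 1042})

    # Query results come back as JSON-like dicts that can be used with
    # little or no transformation.
    for doc in books.find({"author": "Bram Stoker"}):
        print(doc["title"], doc.get("social_mentions", 0))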

NewSQL

NewSQL refers to databases built on the relational (SQL) model and the SQL query language. In an attempt to solve some of the problems of SQL, vendors such as VoltDB take a best-of-both-worlds approach, marrying the familiarity of SQL with the scalability and agility of NoSQL.

However, as with most seemingly win-win opportunities, NewSQL isn’t without its caveats. These vary from vendor to vendor, but in essence, you have to sacrifice either some familiarity or some scalability.

If you’d like to speak with someone at erwin about SQL, NoSQL or NewSQL in more detail, click here.

For more industry advice, subscribe to the erwin Expert Blog.



Data Modeling in a Jargon-filled World – NoSQL/NewSQL

In the first two posts of this series, we focused on the “volume” and “velocity” of Big Data, respectively.  In this post, we’ll cover “variety,” the third of Big Data’s “three Vs.” In particular, I plan to discuss NoSQL and NewSQL databases and their implications for data modeling.

As the volume and velocity of data available to organizations continues to rapidly increase, developers have chafed under the performance shackles of traditional relational databases and SQL.

An astonishing array of database solutions has arisen during the past decade to give developers higher-performance options for various aspects of managing their application data. These have been collectively labeled NoSQL databases.

Originally, NoSQL meant that “no SQL” was required to interface with the database. In many cases, developers viewed this as a positive characteristic.

However, SQL is very useful for some tasks, with many organizations having rich SQL skillsets. Consequently, as more organizations demanded SQL as an option to complement some of the new NoSQL databases, the term NoSQL evolved to mean “not only SQL.” This way, SQL capabilities can be leveraged alongside other non-traditional characteristics.

Among the most popular of these new NoSQL options are document databases like MongoDB. MongoDB offers the flexibility to vary fields from document to document and change structure over time. Document databases typically store data in JSON-like documents, making it easy to map to objects in application code.

As the scale of NoSQL deployments in some organizations has rapidly grown, it has become increasingly important to have access to enterprise-grade tools to support modeling and management of NoSQL databases and to incorporate such databases into the broader enterprise data modeling and governance fold.

While document databases, key-value databases, graph databases and other types of NoSQL databases have added valuable options for developers to address various challenges posed by the “three Vs,” they did so largely by compromising consistency in favor of availability and speed, instead offering “eventual consistency.” Consequently, most NoSQL stores lack true ACID transactions, though there are exceptions, such as Aerospike and MarkLogic.

But some organizations are unwilling or unable to forgo consistency and transactional requirements, giving rise to a new class of modern relational database management systems (RDBMS) that aim to guarantee ACIDity while also providing the same level of scalability and performance offered by NoSQL databases.

NewSQL databases are typically designed to operate using a shared-nothing architecture. VoltDB is one prominent example of this emerging class of ACID-compliant NewSQL RDBMS. The logical design of NewSQL schemas is similar to traditional RDBMS schema design, and thus well supported by popular enterprise-grade data modeling tools such as erwin DM.
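Because the logical design is ordinary relational DDL, a NewSQL schema looks just like a traditional one. The sketch below uses Python’s built-in sqlite3 module purely to illustrate that shape – a NewSQL system such as VoltDB would accept equivalent DDL – and the table and column names are illustrative:

    # Illustrative only: sqlite3 stands in here to show that NewSQL
    # schema design is ordinary relational DDL with ACID transactions.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL
    );
    CREATE TABLE orders (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
        placed_at   TEXT NOT NULL
    );
    """)

    # Familiar SQL statements and transactional guarantees apply unchanged.
    with conn:
        conn.execute("INSERT INTO customer VALUES (1, 'Ada')")
        conn.execute("INSERT INTO orders VALUES (10, 1, '2018-01-15')")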

Whatever mixture of databases your organization chooses to deploy for your OLTP requirements, on premises and in the cloud – RDBMS, NoSQL and/or NewSQL – it’s as important as ever for data-driven organizations to be able to model their data and incorporate it into an overall architecture.

When it comes to organizations’ analytics requirements, including data that may be sourced from a wide range of NoSQL, NewSQL RDBMS and unstructured sources, leading organizations are adopting a variety of approaches, including a hybrid approach that many refer to as Managed Data Lakes.

Please join us next time for the fourth installment in our series: Data Modeling in a Jargon-filled World – Managed Data Lakes.



Multi-tenancy vs. Single-tenancy: Have We Reached the Multi-Tenant Tipping Point?

The multi-tenancy vs. single-tenancy hosting debate has raged for years. Businesses’ differing demands have led to a stalemate, with certain industries more likely to lean one way than the other.

But with advancements in cloud computing and storage infrastructure, the stalemate could be at the beginning of its end.

To understand why multi-tenancy hosting is gaining traction over single-tenancy, it’s important to understand the fundamental differences.

Multi-Tenancy vs. Single-Tenancy

Gartner defines multi-tenancy as: “A reference to the mode of operation of software where multiple independent instances of one or multiple applications operate in a shared environment. The instances (tenants) are logically isolated, but physically integrated.”

The setup is comparable to that of a bank. The bank houses the assets of all customers in one place, but each customer’s assets are stored separately and securely from one another. Yet every bank customer still uses the same services, systems and processes to access the assets that belong to him/her.

The single-tenancy counterpart removes the shared infrastructure element described above. It operates on a one customer (tenant) per instance basis.
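To make the contrast concrete, here’s a minimal sketch of the logical isolation multi-tenancy relies on, using Python’s built-in sqlite3 module; the schema and tenant names are illustrative, not any vendor’s actual design:

    # Multi-tenancy in miniature: all tenants share one physical table,
    # but each is logically isolated by a tenant_id. Illustrative only.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE accounts (
        tenant_id TEXT NOT NULL,   -- which customer owns this row
        account   TEXT NOT NULL,
        balance   REAL NOT NULL
    )""")
    db.execute("INSERT INTO accounts VALUES ('acme', 'ops', 100.0)")
    db.execute("INSERT INTO accounts VALUES ('globex', 'ops', 250.0)")

    def balances_for(tenant):
        # A tenant only ever sees rows carrying its own tenant_id, even
        # though all tenants share the same infrastructure.
        return db.execute(
            "SELECT account, balance FROM accounts WHERE tenant_id = ?",
            (tenant,)).fetchall()

    print(balances_for("acme"))    # [('ops', 100.0)]

A single-tenant host, by contrast, would stand up a separate database – and the servers beneath it – for each customer.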

The trouble with the single-tenancy approach is that each tenant’s dedicated servers are maintained separately by the host. And of course, this comes with costs – time as well as money – and customers have to foot the bill.

Additionally, the single-tenancy model leaves each tenant drawing from the power of a single, fixed infrastructure. Businesses with thorough Big Data strategies (and their numbers are increasing) need to be able to deal with a wide variety of data sources. The data is often high in volume and must be processed at increasingly high velocities (more on the three Vs of Big Data here).

Such businesses need greater ‘elasticity’ to operate efficiently, with ‘elasticity’ referring to the ability to scale resources up and down as required.

Along with cost savings and greater elasticity, multi-tenancy is also primed to make things easier for the tenant from the ground up. The host upgrades systems on the back end, with updates instantly available to tenants. Maintenance is handled on the host side as well, and only one codebase is needed for delivery, greatly increasing the speed at which new updates can be made.

Given these considerations, it’s hard to fathom why the debate over multi-tenancy vs. single-tenancy has raged for so long.

Diminishing Multi-Tenancy Concerns

The advantages of cost savings, scalability and the ability to focus on improving the business, rather than up-keep, would seem to pique the interest of any business leader.

But the situation is more nuanced than that. Although all businesses would love to take advantage of multi-tenancy’s obvious advantages, shared infrastructure remains a point of contention for some.

Fears about data breaches on the host’s side are valid, as are concerns about externally dictated downtime.

But these fears are now increasingly alleviated by sound reassurances. Multi-tenancy hosting initially spun out of single-tenancy hosting, and because it wasn’t purpose-built, it left gaps.

However, we’re now witnessing a generation of purpose-built, multi-tenancy approaches that address the aforementioned fears.

Server offloading means maintenance can happen without tenant downtime or widespread service disruption.

Internal policies and improvements in the way data is managed and siloed on a tenant-by-tenant basis serve to squash security concerns.

Of course, shared infrastructure will still be a point of contention in some industries, but we’re approaching a tipping point as evidenced by the success of such multi-tenancy hosts as Salesforce.

Through solid multi-tenancy strategy, Salesforce has dominated the CRM market, outstripping the growth of its contemporaries. Analysts expect further growth this year to match the uptick in cloud adoption.

What are your thoughts on multi-tenancy vs. single tenancy hosting?



Data Modeling in a Jargon-filled World – Internet of Things (IoT)

In the first post of this blog series, we focused on jargon related to the “volume” aspect of Big Data and its impact on data modeling and data-driven organizations. In this post, we’ll focus on “velocity,” the second of Big Data’s “three Vs.”

In particular, we’re going to explore the Internet of Things (IoT), the constellation of web-connected devices, vehicles, buildings and related sensors and software. It’s a great time for this discussion too, as IoT devices are proliferating at a dizzying pace in both number and variety.

Though IoT devices typically generate small “chunks” of data, they often do so at a rapid pace, hence the term “velocity.” Some of these devices generate data from multiple sensors for each time increment. For example, we recently worked with a utility that embedded sensors in each transformer in its electric network; each transformer generated readings every 4 seconds for voltage, oil pressure and ambient temperature, among others.
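As a rough sketch of what such a stream looks like, the generator below simulates one multi-sensor reading every 4 seconds; the field names and values are illustrative, not the utility’s actual schema:

    # Simulated transformer telemetry: one multi-sensor reading every
    # 4 seconds. Field names and values are illustrative.
    from datetime import datetime, timedelta

    def transformer_readings(transformer_id, start, count):
        for i in range(count):
            yield {
                "transformer_id": transformer_id,
                "timestamp": start + timedelta(seconds=4 * i),
                "voltage_v": 7200.0,
                "oil_pressure_kpa": 101.3,
                "ambient_temp_c": 21.5,
            }

    for reading in transformer_readings("TX-1042", datetime(2018, 1, 1), 3):
        print(reading["timestamp"], reading["voltage_v"])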

While the transformer example is just one of many, we can quickly see two key issues that arise when IoT devices are generating data at high velocity. First, organizations need to be able to process this data at high speed.  Second, organizations need a strategy to manage and integrate this never-ending data stream. Even small chunks of data will accumulate into large volumes if they arrive fast enough, which is why it’s so important for businesses to have a strong data management platform.

It’s worth noting that the idea of managing readings from network-connected devices is not new. In industries like utilities, petroleum and manufacturing, organizations have used SCADA systems for years, both to receive data from instrumented devices to help control processes and to provide graphical representations and some limited reporting.

More recently, many utilities have introduced smart meters in their electricity, gas and/or water networks to make the collection of meter data easier and more efficient for a utility company, as well as to make the information more readily available to customers and other stakeholders.

For example, you may have seen an energy usage dashboard provided by your local electric utility, allowing customers to view graphs depicting their electricity consumption by month, day or hour, enabling each customer to make informed decisions about overall energy use.

Seems simple and useful, but have you stopped to think about the volume of data underlying this feature? Even if your utility only presents information on an hourly basis, if you consider that it’s helpful to see trends over time and you assume that a utility with 1.5 million customers decides to keep these individual hourly readings for 13 months for each customer, then we’re already talking about over 14 billion individual readings for this simple example (1.5 million customers x 13 months x over 30 days/month x 24 hours/day).
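The arithmetic is easy to verify:

    # Reproducing the back-of-the-envelope arithmetic from the example.
    customers = 1_500_000
    months = 13
    days_per_month = 30     # the text says "over 30"; 30 is the floor
    hours_per_day = 24

    readings = customers * months * days_per_month * hours_per_day
    print(f"{readings:,}")  # 14,040,000,000 -- over 14 billion readings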

Now consider the earlier example I mentioned of each transformer in an electrical grid with sensors generating multiple readings every 4 seconds. You can get a sense of the cumulative volume impact of even very small chunks of data arriving at high speed.

With experts estimating the IoT will consist of almost 50 billion devices by 2020, businesses across every industry must prepare to deal with IoT data.

But I have good news because IoT data is generally very simple and easy to model. Each connected device typically sends one or more data streams, each carrying a value, the type of reading and the time at which it occurred. Historically, large volumes of simple sensor data like this were best stored in time-series databases like the very popular PI System from OSIsoft.

While this continues to be true for many applications, alternative architectures, such as storing the raw sensor readings in a data lake, are also being successfully implemented – though organizations need to carefully consider the pros and cons of home-grown infrastructure versus time-tested, industrial-grade solutions like the PI System.

Regardless of how raw IoT data is stored once captured, the real value of IoT for most organizations is only realized when IoT data is “contextualized,” meaning it is modeled in the context of the broader organization.

The value of modeled data eclipses that of “edge analytics” (where each value is inspected by a software program while in flight from the sensor, typically to see whether it falls within an expected range, and either acted upon if required or simply allowed to pass through) or simple reporting like that in the energy usage dashboard example.
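A minimal sketch of that edge pattern might look like the following; the threshold values and the alert action are assumptions for illustration:

    # Edge analytics in miniature: inspect each reading in flight and
    # act only when it falls outside an expected range.
    EXPECTED_VOLTAGE_RANGE = (6800.0, 7600.0)   # volts; assumed limits

    def alert(reading):
        print("out-of-range voltage", reading["voltage_v"],
              "from", reading["transformer_id"])

    def on_reading(reading):
        low, high = EXPECTED_VOLTAGE_RANGE
        if not low <= reading["voltage_v"] <= high:
            alert(reading)    # act on it if required
        # otherwise the reading simply passes through to storage

    on_reading({"transformer_id": "TX-1042", "voltage_v": 7900.0})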

It is straightforward to represent a reading of a particular type from a particular sensor or device in a data model or process model. It starts to get interesting when we take the next step and incorporate entities into the data model that represent expected ranges for readings under various conditions, as well as how the devices relate to one another.
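One way to picture that modeling step is a pair of entities alongside the readings themselves: devices and how they relate, plus expected ranges under given conditions. The DDL below is a hypothetical sketch, not a recommended schema:

    # Hypothetical entities for contextualized IoT data: network
    # topology plus expected ranges per reading type and condition.
    import sqlite3

    model = sqlite3.connect(":memory:")
    model.executescript("""
    CREATE TABLE device (
        device_id   TEXT PRIMARY KEY,
        device_type TEXT NOT NULL,                     -- e.g. 'transformer'
        feeds_into  TEXT REFERENCES device(device_id)  -- how devices relate
    );
    CREATE TABLE expected_range (
        device_type TEXT NOT NULL,    -- e.g. 'transformer'
        reading     TEXT NOT NULL,    -- e.g. 'voltage_v'
        condition   TEXT NOT NULL,    -- e.g. 'peak_load'
        low         REAL NOT NULL,
        high        REAL NOT NULL
    );
    """)

With topology modeled (feeds_into), a utility could trace alternate electricity paths when a reading drifts out of range – exactly the scenario described next.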

If the utility in the transformer example has modeled that IoT data well, it might be able to prevent a developing problem with a transformer and also possibly identify alternate electricity paths to isolate the problem before it has an impact on network stability and customer service.

Hopefully this overview of IoT in the utility industry helps you see how your organization can incorporate high-velocity IoT data to become more data-driven and therefore more successful in achieving larger corporate objectives.

Subscribe and join us next time for Data Modeling in a Jargon-filled World – NoSQL/NewSQL.



Enterprise Architecture vs. Data Architecture vs. Business Process Architecture

Despite the similar nomenclature, enterprise architecture, data architecture and business process architecture are very different disciplines. Even so, organizations that combine the three enjoy much greater success in data management.

Understanding both the differences between the three and how they work together has to start with understanding each discipline individually:

What is Enterprise Architecture?

Enterprise architecture defines the structure and operation of an organization. Its desired outcome is to determine current and future objectives and translate those goals into a blueprint of IT capabilities.

A useful analogy for understanding enterprise architecture is city planning. A city planner devises the blueprint for how a city will come together and how it will be interacted with. They need to be cognizant of regulations (zoning laws) and understand the current state of the city and its infrastructure.

A good city planner means fewer false starts, less waste and a faster, more efficient project.

In this respect, a good enterprise architect is a lot like a good city planner.

What is Data Architecture?

The Data Management Body of Knowledge (DMBOK) defines data architecture as “specifications used to describe existing state, define data requirements, guide data integration, and control data assets as put forth in a data strategy.”

So data architecture involves models, policy rules or standards that govern what data is collected and how it is stored, arranged, integrated and used within an organization and its various systems. The desired outcome is enabling stakeholders to see business-critical information regardless of its source and relate to it from their unique perspectives.

There is some crossover between enterprise and data architecture because data architecture is inherently an offshoot of enterprise architecture. Where enterprise architects take a holistic, enterprise-wide view in their duties, data architects’ tasks are much more refined and focused. If an enterprise architect is the city planner, then a data architect is an infrastructure specialist – think plumbers, electricians, etc.

For a more in-depth look at enterprise architecture vs. data architecture, see: The Difference Between Data Architecture and Enterprise Architecture

What is Business Process Architecture?

Business process architecture describes an organization’s business model, strategy, goals and performance metrics.

It provides organizations with a method of representing the elements of their business and how they interact with the aim of aligning people, processes, data, technologies and applications to meet organizational objectives. With it, organizations can paint a real-world picture of how they function, including opportunities to create, improve, harmonize or eliminate processes to improve overall performance and profitability.

Enterprise, Data and Business Process Architecture in Action

A successful data-driven business combines enterprise architecture, data architecture and business process architecture. Integrating these disciplines from the ground up ensures a solid digital foundation on which to build. A strong foundation is necessary because of the amount of data businesses already have to manage. In the last two years, more data has been created than in all of humanity’s prior history.

And it’s still soaring. Analysts predict that by 2020, we’ll create about 1.7 megabytes of new information every second for every human being on the planet.

While it’s a lot to manage, the potential gains of becoming a data-driven enterprise are too high to ignore. Fortune 1000 companies could potentially net an additional $65 million in income with access to just 10 percent more of their data.

To effectively employ enterprise architecture, data architecture and business process architecture, it’s important to know the differences in how they operate and their desired business outcomes.

Combining Enterprise, Data and Business Process Architecture for Better Data Management

Historically, these three disciplines have been siloed, without an inherent means of sharing information. Therefore, collaboration between the tools and relevant stakeholders has been difficult.

To truly power a data-driven business, removing these silos is paramount, so as not to limit the potential analysis your organization can carry out. Businesses that understand and adopt this approach will benefit from much better data management when it comes to the three Vs.

They’ll be better able to cope with the massive volumes of data a data-driven business introduces; better equipped to handle the increased velocity of data, processing it accurately and quickly to keep time to market low; and able to effectively manage data from a growing variety of sources.

In essence, enabling collaboration between enterprise architecture, data architecture and business process architecture helps an organization manage “any data, anywhere” – or Any2. This all-encompassing view provides the potential for deeper data analysis.

However, attempting to manage all your data without all the necessary tools is like trying to read a book without all the chapters. And trying to manage data with a host of uncollaborative, disparate tools is like trying to read a story with chapters from different books. Clearly neither approach is ideal.

Unifying the disciplines as the foundation for data management provides organizations with the whole ‘data story.’

The importance of getting the whole data story should be very clear considering the aforementioned statistic – Fortune 1000 companies could potentially net an additional $65 million in income with access to just 10 percent more of their data.

Download our eBook, Solving the Enterprise Data Dilemma to learn more about data management tools, particularly enterprise architecture, data architecture and business process architecture, working in tandem.


Data-Driven Business – Changing Perspective

Data-driven business is booming. Arguably the dominant force in business, data has also become a driving force in our daily lives for consumers and corporations alike.

We now live in an age in which data is a more valuable resource than oil, and five of the world’s most valuable companies – Alphabet/Google, Amazon, Apple, Facebook and Microsoft – all deal in data.

However, just acknowledging data’s value won’t do. For a business to truly benefit from its information, a change in perspective is also required. With an additional $65 million in net income available to Fortune 1000 companies that make use of just 10 percent more of their data, the stakes are too high to ignore.

Changing Perspective

Traditionally, data management only concerned data professionals. However, mass digital transformation, with data as the foundation, puts this traditional approach at odds with current market needs. Siloing data with data professionals undermines the opportunity to apply data to improve overall business performance.

The precedent is there. Some of the most disruptive businesses of the last decade have doubled down on the data-driven approach, reaping huge rewards for it.

Airbnb, Netflix and Uber have used data to transform everything, including how they make decisions, invent new products or services, and improve processes to add to both their top and bottom lines. And they have shaken their respective markets to their cores.

Even with very different offerings, all three of these businesses identify under the technology banner – that’s telling.

Common Goals

One key reason for the success of data-driven business is the alignment of common C-suite goals with the outcomes of a data initiative.

Those goals being:

  • Identifying opportunities and risk
  • Strengthening marketing and sales
  • Improving operational and financial performance
  • Managing risk and compliance
  • Producing new products and services, or improving existing ones
  • Monetizing data
  • Satisfying customers

This list of C-suite goals is, in essence, identical to the business outcomes of a data-driven business strategy.

What Your Data Strategy Needs

In the early stages of data transformation, businesses tend to take an ad hoc approach to data management. Although that might be viable in the beginning, a holistic data-driven strategy requires more than makeshift efforts and repurposed Office tools.

Organizations that truly embrace data, becoming fundamentally data-driven businesses, will have to manage data from numerous and disparate sources (variety) in increasingly large quantities (volume) and at demandingly high speeds (velocity).

To manage these three Vs of data effectively, your business needs to take an “any-squared” (Any2) approach. That’s “any data” from “anywhere.”

Any2

By leveraging a data management platform that combines data modeling, enterprise architecture and business process modeling, you can ensure your organization is prepared to undergo a successful digital transformation.

Data modeling identifies what data you have (internal and external), enterprise architecture determines how best to use that data to drive value, and business process modeling provides understanding of how the data should be used to drive business strategy and objectives.

Therefore, the application of the above disciplines and associated tools goes a long way in achieving the common goals of C-suite executives.

For more data advice and best practices, follow us on Twitter and LinkedIn to stay up to date with the blog.

For a deeper dive into best practices for data, its benefits, and its applications, get the FREE whitepaper below.

Data-Driven Business Transformation


Why the NoSQL Database is a Necessary Step

The NoSQL database is gaining huge traction, and for good reason.

Traditionally, most organizations have leveraged relational databases to manage their data. Relational databases ensure referential integrity, constraints, normalization and structured access for data across disparate tools, which is why they’re so widely used.

But as with any technology, evolving trends and requirements eventually push the limits of capability and suitability for emerging business use cases.

New data sources, characterized by increased volume, variety and velocity, have exposed limitations in the strict relational approach to managing data. These characteristics require a more flexible approach to the storage and provisioning of data assets, one that can support new forms of data with the agility and scalability they demand.

Technology – specifically data – has changed the way organizations operate. Lower development costs are allowing startups and smaller businesses to grow far more quickly. In turn, this leads to less stable markets and more frequent disruptions.

As more and more organizations look to cut their own slice of the data pie, businesses are more focused on in-house development than ever.

This is where relational data modeling becomes somewhat of a stumbling block.

Rise of the NoSQL Database

More and more, application developers are turning to the NoSQL database.

The NoSQL database is a more flexible approach that enables increased agility in development teams. Data models can be evolved on the fly to account for changing application requirements.

This enables businesses to adopt an agile approach to releasing new iterations of code. NoSQL databases are scalable and object-oriented, and can also handle large volumes of structured, semi-structured and unstructured data.

Due to the growing deployment of NoSQL, and because our customers need the same tools to manage NoSQL databases as they use for their relational databases, erwin is excited to announce the availability of a beta program for our new erwin DM for NoSQL product.

With our new erwin DM NoSQL option, we’re the only provider to help you model, govern and manage your unstructured cloud data just like any other traditional database in your business.

  • Building new cloud-based apps running on MongoDB?
  • Migrating from a relational database to MongoDB or the reverse?
  • Want to ensure that all your data is governed by a logical enterprise model, no matter where it’s located?

Then erwin DM NoSQL is the right solution for you. Click here to apply for our erwin DM NoSQL/MongoDB beta program now.

And look for more info here on the power and potential of NoSQL databases in the coming weeks.



The Rise of NoSQL and NoSQL Data Modeling

With NoSQL data modeling gaining traction, data governance isn’t the only data shakeup organizations are currently facing.