Cloud Computing vs. Local Databases: A Data Security Duel

Cloud vs. Local: The Debate on Data Storage Solutions

In my experience, the debate between cloud computing and local databases for data storage is a central topic among IT professionals. Cloud computing, with its vast scalability and accessibility, has revolutionized how organizations store and manage data. The ability to scale resources on demand and access data from any location has significant advantages for businesses seeking agility and growth. However, data security and sovereignty have emerged as pressing concerns. The apprehension stems from the reliance on third-party service providers, which may be subject to different regulatory standards and potential vulnerabilities.

On the other hand, local databases are often lauded for their performance and heightened data security, particularly when it comes to sensitive information. The proximity of data storage to the users can result in lower latency and faster processing times, which is crucial for many mission-critical applications. Yet, despite these advantages, local databases can fall behind in terms of cost-effectiveness and backup recovery solutions. The infrastructure costs for local storage solutions are typically higher, and the responsibility for creating and managing backups rests solely on the organization, which can be resource-intensive.

While cloud computing provides advanced data management features and seamless backup and recovery processes, the cost-benefit analysis and the control over data security protocols can be variable. Organizations might struggle to find the right balance between leveraging the cloud’s efficiencies and maintaining adequate control over their data security measures.

Protecting Sensitive Information: Security Measures Compared

When it comes to protecting sensitive information, the choice between cloud computing and local databases is pivotal. While cloud computing presents unparalleled scalability and accessibility for data storage, it raises questions about data security and sovereignty. The multi-tenant nature of cloud services means that data is often stored in shared environments, which can be a concern for businesses handling sensitive information that requires strict compliance with industry regulations.

Local databases, in contrast, provide superior control over security measures. Organizations can implement and manage their security protocols tailored to their specific needs, which can include advanced encryption, regular security audits, and strict access controls. However, this level of control and security does come at a price; local databases may not offer the same level of cost-effectiveness and scalability as cloud solutions.

The backup and recovery strategies employed by cloud computing and local databases also differ significantly. Cloud service providers typically offer automated backup solutions, which can greatly reduce the manual effort required by organizations. Local databases, however, may necessitate a more hands-on approach to data management, ensuring that backups are performed regularly and effectively, which can be both time-consuming and costly.

How Scalability Influences Data Security Strategies

Scalability is a critical factor that influences data security strategies. Cloud computing allows for dynamic data storage solutions that can adapt to the changing needs of a business. This flexibility extends to security measures, such as scalable encryption and access control, which can be adjusted as the data storage requirements grow. Enhanced scalability in the cloud also ensures that backup and recovery options are robust and can maintain data sovereignty even as the volume of data expands.

Conversely, local databases may face challenges with scalability, which can impact their ability to secure data effectively. As data volume increases, the existing infrastructure may struggle to keep pace, potentially leading to performance bottlenecks and increased security risks. While local databases offer a high degree of control over data security, their scalability limits can pose significant challenges for growing businesses.

The cost-efficiency of cloud computing compared to local databases is also influenced by scalability. The cloud’s pay-as-you-go model allows for flexible adaptation to changing data storage and accessibility needs without compromising security. This model is particularly advantageous for organizations that experience fluctuating data usage patterns.

Accessibility: Anytime, Anywhere Data Retrieval

Accessibility is a cornerstone of modern business operations, and cloud computing excels in this regard. The ability to retrieve data from any location is critical for businesses that rely on real-time access and collaboration across different geographies. Cloud computing’s distributed nature ensures that data is available whenever and wherever it is needed, which is a considerable advantage over local databases.

Local databases, while offering more control over data security and sovereignty, may not provide the same level of scalability and cost-efficiencies as cloud-based solutions. Local storage typically requires significant infrastructure investment, which can be less flexible when it comes to pay-as-you-go data storage models.

Moreover, the backup and recovery systems in cloud computing are designed for efficiency due to its distributed nature. In the event of data loss or system failure, cloud services can quickly restore data from backups located in multiple geographic locations. Local databases, however, may offer faster immediate performance for data management but often require more sophisticated and potentially expensive backup recovery solutions to ensure similar levels of business continuity.

Evaluating Total Cost of Ownership for Storage Options

When making decisions about data storage, evaluating the total cost of ownership (TCO) is crucial. Comparing cloud computing with local databases necessitates a careful assessment of scalability. The chosen solution must be able to adapt to growing data volumes without compromising performance or cost-efficiency. Cloud computing often wins on scalability, but the TCO should reflect all aspects of the storage solution, including ongoing operational costs.

Data security is a paramount consideration in the duel between cloud computing and local databases. It is essential to evaluate the strength of encryption, the effectiveness of access controls, and compliance with data sovereignty requirements to ensure that the data is adequately protected. Both cloud and local options have their merits and challenges in security, and the choice will largely depend on the specific needs and regulatory requirements of the organization.

Accessibility and backup recovery are also vital factors in the TCO equation. Cloud computing may provide superior accessibility and streamlined backup solutions, which can significantly reduce the time and resources needed for data management. In contrast, local databases might offer greater control over backup recovery protocols but may require more investment in data management infrastructure.
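The trade-offs above can be made concrete with a back-of-envelope TCO comparison. The sketch below is a minimal illustration, not a pricing tool: every dollar figure, the refresh cycle, and the cost categories are hypothetical placeholders you would replace with quotes from your own vendors and internal cost data.

```python
# Rough TCO sketch comparing a pay-as-you-go cloud model with an
# on-premises deployment. All figures are hypothetical placeholders;
# substitute estimates from your own environment.

def cloud_tco(monthly_usage_cost, months):
    """Cloud: operational spend only, scaling with usage."""
    return monthly_usage_cost * months

def local_tco(hardware_capex, monthly_opex, months, refresh_years=5):
    """Local: upfront capital outlay plus ongoing staff/power/backup
    costs, with hardware refreshed every few years."""
    refreshes = months // (refresh_years * 12)
    return hardware_capex * (1 + refreshes) + monthly_opex * months

horizon = 36  # months
cloud = cloud_tco(monthly_usage_cost=4_000, months=horizon)
local = local_tco(hardware_capex=90_000, monthly_opex=2_000, months=horizon)
print(f"3-year cloud TCO: ${cloud:,}")
print(f"3-year local TCO: ${local:,}")
```

Even a toy model like this makes the structural difference visible: the cloud figure is almost entirely variable cost, while the local figure is dominated by upfront capital that recurs at each hardware refresh.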

Measuring Performance: Throughput and Latency Concerns

When measuring performance in data storage solutions, throughput and latency are critical concerns. The scalability of cloud computing directly influences its throughput capabilities. Cloud solutions can elastically scale to meet fluctuating demand, maintaining performance levels even during peak usage times. This capability is particularly beneficial for businesses that experience variable workloads and need to ensure consistent data processing speeds.

Data security remains a central concern in the selection between cloud computing and local databases. Cloud services typically offer robust backup and recovery options, which can provide a safety net in the event of data loss. However, local databases can offer tighter control over data sovereignty, which can be a decisive factor for organizations with stringent regulatory compliance requirements.

In terms of cost-efficiency in data management, cloud computing typically presents a pay-as-you-go model that provides financial flexibility, making it more accessible and cost-effective for many businesses. Conversely, local databases might incur higher upfront capital costs for infrastructure, which can be a significant barrier, especially for smaller organizations.

Data Management Policies and Best Practices

Assessing the advantages of cloud computing’s scalability and cost-efficiency over local databases is crucial for setting up dynamic data storage needs. Cloud solutions can adapt more readily to changing business environments, which simplifies data management policies and allows organizations to respond quickly to market demands.

Comparing data security measures between cloud computing platforms and local database systems is imperative for developing robust data management policies. Understanding the nuances of data sovereignty concerns and how they impact the choice of storage solution is essential for ensuring compliance and maintaining trust with stakeholders.

Performance, accessibility, and backup recovery solutions must all be considered when formulating data management policies. Cloud computing often leads to more streamlined data management practices, thanks to its superior accessibility and backup solutions. Local databases, while providing greater control, require careful planning to ensure that data management policies are in line with best practices for security and business continuity.

Backup and Recovery Solutions: Ensuring Business Continuity

Differences in backup and recovery solutions are a key aspect of the comparison between cloud computing and local databases. Cloud solutions offer scalability and accessibility, ensuring that backup processes are less intrusive and more efficient. The automated backup solutions provided by cloud services can significantly contribute to ensuring business continuity, with less reliance on manual intervention.

Conversely, local databases often require manual backup processes, which can introduce more complexity and potential for human error. While these systems may provide enhanced data sovereignty and security control, the trade-off is often seen in terms of cost-effectiveness and performance during the backup and recovery process.
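The "potential for human error" in manual backups usually comes down to skipping verification. A minimal sketch of the copy-then-verify discipline a local database demands is below; the file paths and the stand-in "export" file are hypothetical, and a real deployment would use the database's native backup tooling rather than a plain file copy.

```python
# Minimal sketch of the backup-and-verify discipline a local database
# demands: copy the file, then confirm the copy's checksum before
# trusting it. Paths and contents here are hypothetical stand-ins.
import hashlib
import os
import shutil
import tempfile

def sha256_of(path):
    """Stream a file through SHA-256 so large backups fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def backup_and_verify(source, dest):
    """Copy source to dest and verify the copy byte-for-byte."""
    shutil.copy2(source, dest)
    if sha256_of(source) != sha256_of(dest):
        raise RuntimeError("backup verification failed")
    return dest

# Demonstrate with a throwaway file standing in for a database export.
workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "export.dmp")
with open(src, "wb") as f:
    f.write(b"pretend database export contents")
dst = backup_and_verify(src, os.path.join(workdir, "export.dmp.bak"))
print("verified backup at", dst)
```

Automating even this small verification step removes one of the most common failure modes: a backup that exists but was never checked until the day it was needed.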

In evaluating cost-efficiency and performance, cloud computing’s pay-as-you-go model offers adaptability and financial prudence, particularly appealing for businesses with variable data management needs. Local databases, meanwhile, entail significant upfront capital expenditure and ongoing maintenance costs for data management infrastructure, which must be carefully weighed against the benefits of enhanced control and security.

The Verdict: Balancing Security, Scalability, and Cost-Efficiency

In the duel between cloud computing and local databases, there is no one-size-fits-all winner. The choice ultimately depends on an organization’s specific needs, priorities, and constraints. 

Cloud computing emerges as the champion of scalability, accessibility, and streamlined data management. Its ability to dynamically adapt to changing business needs and its cost-effective pay-as-you-go model make it an attractive option for many organizations, especially those with variable workloads and geographically dispersed teams.

However, local databases put up a strong fight when it comes to data security and sovereignty. For businesses dealing with highly sensitive information and strict regulatory requirements, the enhanced control and customization offered by local solutions can be the deciding factor. The faster immediate performance of local databases also gives them an edge for certain mission-critical applications.

Ultimately, the victor in this duel depends on careful evaluation of an organization’s total cost of ownership, performance requirements, and data management policies. For some, the scalability and accessibility of the cloud will reign supreme. For others, the security and control of local databases will be the key to success.

In many cases, a hybrid approach that leverages the strengths of both cloud computing and local databases may provide the optimal balance. By strategically allocating workloads and data between cloud and local solutions, organizations can maximize the benefits of each while mitigating their respective challenges.

As technology continues to evolve, the duel between cloud computing and local databases is sure to take on new dimensions. Emerging trends like edge computing and blockchain-based storage may further disrupt the data management landscape. But one thing remains clear: in the ever-changing world of data, finding the right balance of security, scalability, and cost-efficiency will always be the key to unlocking the full potential of an organization’s most valuable asset – its information.

-Buda Consulting

Oracle SQL Firewall: A New Feature That Blocks Top Database Attacks in Real-Time

Oracle SQL Firewall: A New Feature That Blocks Top Database Attacks in Real-Time

Oracle 23c introduces a very powerful and easy-to-use database security feature that many users will want to try, especially for web application workloads. Called Oracle SQL Firewall, it offers real-time protection from within the database kernel against both external and insider SQL injection attacks, credential attacks, and other top threats. 

Oracle SQL Firewall should be a huge help in reducing the risk of successful cyber-attacks on sensitive databases. For example, injection flaws, including SQL injection due to improperly sanitized inputs, currently rank #3 among the most common web application security weaknesses in the latest OWASP Top 10. Deployed with a well-built allow-list, SQL Firewall can largely neutralize SQL injection as a threat.

SQL Firewall is intended for use in any Oracle Database deployment, including on-premises, cloud-based, multitenant, clustered, etc. It is compatible with other Oracle security features like Transparent Data Encryption (TDE), Oracle Database Vault, and database auditing.

How Oracle SQL Firewall works

SQL Firewall provides rock-solid, real-time protection against some of the most common database attacks by restricting database access to only authorized SQL statements or connections. Because SQL Firewall is embedded in the Oracle database, hackers cannot bypass it. It inspects all SQL statements, whether local or network-based, and whether encrypted or unencrypted. It analyzes the SQL, any stored procedures, and related database objects. 

The new tool works by monitoring and blocking unauthorized SQL statements before they can execute. To use it, you first capture, review, and build a list of permitted or approved SQL statements that a typical application user would run. These form the basis of an allow-list of permitted actions, akin to a whitelist. 

You can also specify session context data like client IP address, operating system user, or program type on the allow-list to preemptively block database connections associated with credential-based attacks. This includes mitigating the risk of stolen or misused credentials for application service accounts.

Once enabled, Oracle SQL Firewall inspects all incoming SQL statements. Any unexpected SQL can be logged to a violations list and/or blocked from executing. Though the names are similar, Oracle SQL Firewall is much simpler architecturally than the longstanding Oracle Database Firewall (Audit Vault and Database Firewall or AVDF) system. You can configure the new SQL firewall at the root level or the pluggable database (PDB) level.
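The capture-then-enforce workflow described above can be illustrated conceptually in plain Python. To be clear, this is not the Oracle API (the real feature is administered inside the database, via PL/SQL); the class, method names, and SQL statements below are all hypothetical, and the sketch only mimics the allow-list logic to show the shape of the workflow.

```python
# Conceptual illustration of SQL Firewall's capture-then-enforce model.
# This mimics only the allow-list logic; the real feature is configured
# inside the Oracle database, not in application code.

class AllowListFirewall:
    def __init__(self):
        self.allow_list = set()
        self.violations = []

    @staticmethod
    def normalize(sql):
        # Collapse whitespace and case so equivalent statements match.
        return " ".join(sql.lower().split())

    def observe(self, sql):
        """Capture phase: record SQL a typical application user runs."""
        self.allow_list.add(self.normalize(sql))

    def check(self, sql):
        """Enforcement phase: log and block anything not on the list."""
        if self.normalize(sql) in self.allow_list:
            return "allowed"
        self.violations.append(sql)   # the violations list
        return "blocked"

fw = AllowListFirewall()
fw.observe("SELECT name FROM customers WHERE id = :1")   # capture
print(fw.check("select name from customers where id = :1"))  # allowed
print(fw.check("SELECT * FROM users; DROP TABLE users"))     # blocked
```

The key idea the sketch conveys is that enforcement is a membership test against behavior captured from legitimate use, not a scan for known-bad patterns, which is why injected SQL that was never captured cannot slip through.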

Is there a downside to using Oracle SQL Firewall?

In part because it is still so new, Oracle SQL Firewall performance data is not widely reported online. Transaction throughput is vitally important for many applications, so even modest overhead from SQL Firewall could be unacceptable for some workloads. The good news is that “before and after” performance testing in your environment should be straightforward using best-practice testing techniques.

Oracle SQL Firewall administrative security is robust and logically integrated with other Oracle Database admin security, so it does not introduce new security risks. For example, only the SQL_FIREWALL_ADMIN role can administer the tool or query the views associated with it. SQL Firewall metadata is stored in dictionary tables in the SYS schema, which rely on dictionary protection like other such tables in SYS.

Who should use Oracle SQL Firewall?

For any business that needs to improve application security, such as for compliance with US government supply chain regulations or as part of a Zero Trust initiative, Oracle SQL Firewall could be a good choice. It could prove especially useful in DevOps environments due to its minimal impact on application development and testing timelines.

What’s next?

A goal of this blog post is to encourage organizations using Oracle 23c to implement SQL Firewall. It is a low-effort way to improve application and database security and significantly reduce the information security risk associated with the sensitive data it protects.

To speak with an expert on how Oracle SQL Firewall could improve your database security, and how it might fit with your overall security goals and challenges, contact Buda Consulting.




Navigating Database Cloud Migration: How to Choose the Best Cloud Migration Services

Thinking of moving your database from your data center to a cloud or managed hosting provider? There are lots of options, and choosing the right cloud migration services for your workload takes research and planning. To get the most business value from your move to the cloud, you need a strategy that minimizes both time to benefit and business risk.

Why move a database to the cloud?

Common reasons for undertaking a cloud database migration include:

  • Reduced operating costs. In the cloud, the cloud service provider (CSP) bears the cost of maintaining, securing, and supporting the physical and virtual infrastructure your databases will run on.
  • Simplified remote access. The public cloud makes it easy to provide database access to remote workers and services.
  • Less security responsibility. Leading public clouds offer comprehensive, multi-layered security controls like data encryption, network protection for remote workers, user activity monitoring (UAM), and threat monitoring/intelligence.
  • Improved scalability. Most clouds can automatically scale data storage and workloads on demand, reducing the overhead associated with manually scaling your infrastructure. 

But the process of migrating databases to the cloud can often exceed time and cost estimates and even lead to security and compliance issues if badly executed. Choosing the right cloud migration services can help streamline key steps and make progress easier to track and manage.

What public cloud should you move to?

A primary consideration that largely dictates what cloud migration services you can pick from is the cloud environment you want to move to.

In some cases, this choice is effectively predetermined. For example, if you are running Microsoft SQL Server workloads and want to keep them in the Microsoft ecosystem, you’ll want to move to Microsoft Azure.  

Similarly, if you use Oracle Database and want to take advantage of the sophisticated cloud migration services that Oracle offers its customers, the best cloud for your workloads might be Oracle Cloud Infrastructure (OCI).

Or maybe you want to use Amazon Web Services with its rich landscape of services. If so, you might benefit from expert guidance from a trusted partner on how to structure your Amazon environment, including networking, storage, and server components. For example, not every business is ready to fully leverage the ephemeral nature of some AWS constructs. The best approach might be to move your database workloads to their own individual instances in Amazon EC2. Or for workloads that don’t require their own instances, Amazon RDS can be a good option.

Finally, if a powerful range of cloud migration services is a deciding factor in your choice of a public cloud, consider Google Cloud. Google Cloud offers multiple approaches for migrating Oracle, SQL Server, and other database workloads. Google’s highly rated cloud migration services use AI to help automate repeatable tasks, saving time and reducing the risk of errors.

What is your database migration strategy?

Another factor in which cloud migration services to use is your database migration strategy. Which strategy you pick will depend on related issues, such as whether you plan to clean up your data or institute new data governance processes as part of the migration.

The three basic database migration strategies are:

  1. Big bang—where you transfer all your data from the source database to the target environment in one “all hands on deck” operation, usually timed to coincide with a period of low database usage, like over a weekend. The advantage of a big bang migration is its simplicity. The downside is that downtime will occur, making this approach unsuitable for databases that require 24×7 availability.
  2. Zero-downtime—where you replicate data from the source to the target. This allows you to use the source database during the migration, making it ideal for critical data. This choice can be fast, overall cost-effective, and generally non-disruptive to the business. The downside of the zero-downtime option is the added complexity of setting up replication, and the risk of possible data loss or hiccups in the data movement if something goes wrong.
  3. Trickle—where you break the migration down into bite-sized sub-migrations, each with its own scope and deadlines. This approach makes it easier to confirm success at each phase. If problems occur, at least their scope is limited. Plus, teams can learn as they go and improve from phase to phase. The problem with a trickle migration is it takes more time and also more resources, since you have to operate two systems until completion.
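The trickle strategy in particular lends itself to a simple control loop: move bounded batches, verify each one before continuing, so any failure is contained to a single batch. The sketch below is a minimal illustration of that loop; the row counts, batch size, and `copy_batch` callback are hypothetical, and a real migration would use the target platform's bulk-load tooling.

```python
# Sketch of a "trickle" migration driver: move source data in bounded
# batches, verifying each batch before continuing. Details are
# hypothetical stand-ins for real bulk-load tooling.

def trickle_migrate(source_rows, copy_batch, batch_size=100):
    """Copy source_rows to the target via copy_batch(batch).
    Returns the number of batches completed."""
    batches = 0
    for start in range(0, len(source_rows), batch_size):
        batch = source_rows[start:start + batch_size]
        copied = copy_batch(batch)
        if copied != len(batch):        # verify before moving on
            raise RuntimeError(f"batch starting at row {start} incomplete")
        batches += 1
    return batches

target = []
def copy_batch(batch):
    """Stand-in for a real bulk copy into the target database."""
    target.extend(batch)
    return len(batch)

rows = list(range(250))                 # stand-in for source table rows
done = trickle_migrate(rows, copy_batch, batch_size=100)
print(f"{done} batches, {len(target)} rows migrated")
```

The per-batch verification is what makes the trickle approach forgiving: a failure stops the loop at a known row offset rather than invalidating the entire transfer.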

Cloud migration services examples

Once you’ve identified your target cloud environment and your migration strategy, you can start choosing cloud migration services options.

For example, say you plan to move a business-critical Oracle database to Oracle Cloud Infrastructure using a zero-downtime strategy. One of the best cloud migration services options in this case is Oracle Cloud Zero Downtime Migration (ZDM).

A great feature of ZDM is the ability to fall back if necessary. This is Oracle’s preferred automated tool for migrating a database to OCI with no changes to the database type or version. Using a “controlled switchover” approach that includes creating a standby database, ZDM can dynamically move database services to a new virtual or bare metal environment, synchronize the two databases, and then make the target database the primary database.

At the opposite end of the cloud migration services spectrum from Oracle Cloud ZDM is Oracle Cloud Infrastructure Database Migration—a fully managed service that gives customers a self-service experience for migrating databases to OCI. Oracle Cloud Database Migration runs as a managed cloud service separate from the customer’s OCI tenancy and associated resources. Businesses can choose a simple offline migration option (similar to a “big bang” migration) or an enterprise-scale logical migration with minimal downtime (similar to a “trickle” migration). Teams can pause and resume a migration job as needed, such as to conform to a planned maintenance window.

If you want to move your Oracle, SQL Server, or other database workloads to AWS, Amazon offers a comprehensive set of cloud migration services to help automate the process. However, these tools are complex and powerful, and best used by experienced technologists. Be sure to confirm that AWS database sizing and capacity growth parameters meet your needs. You’ll also need to decide whether to use Amazon Relational Database Service (RDS) or RDS Custom, depending on the kinds of applications your database supports.

Next steps

While moving databases to the cloud offers many benefits, a high percentage of cloud database migrations falter or fail due to inadequate planning and/or a lack of specific expertise. The top public cloud environments offer purpose-built cloud migration services to streamline the process, but these are not always easy to use. The largest CSPs also support millions of users, so your business may struggle to get the individual attention you need in a timely way.

Whether your databases reside in a major public cloud or a smaller cloud or managed hosting environment, Buda Consulting is always the first point of contact for our clients. Personalized service by someone who knows your business is guaranteed. If there is ever a problem, you call us and we take it from there. 

Contact Buda Consulting to discuss how our cloud and managed hosting migration services can help your business get maximum value from moving to the cloud.  


A Focus on Oracle Container Databases

Oracle 12c introduced a major architectural change called Oracle container databases, also known as Oracle multitenant architecture. With this change, an Oracle database can act as a multitenant container database (CDB).

A CDB, in turn, can house zero or more pluggable databases (PDBs), each consisting of schemas and objects that function just like familiar “normal” (pre-Oracle 12c) databases from the viewpoint of applications or SQL IDEs.

Contents of CDBs and PDBs

In the Oracle container database model, the CDB contains most of the working components every Oracle DBA knows, e.g., controlfiles, datafiles, tempfiles, undo, redo logs, etc. The CDB also contains the data dictionary for objects owned by the root container and those visible to all PDBs in the CDB.

Since the CDB contains most of the key parts of the database, each PDB need only contain information that is specific to itself and its schemas and schema objects, like datafiles and tempfiles. A PDB also has its own data dictionary, which includes information about objects specific to that PDB. A PDB can also have its own local undo tablespace. Each PDB has a unique ID and name. To an Oracle Net client, a PDB looks like a separate database.

Besides PDBs, a CDB can also contain zero or more application containers. These are user-created CDB components that store data and metadata for one or more application backends.

Finally, by default every CDB has one root container (named CDB$ROOT) and one seed PDB container (named PDB$SEED). The former stores Oracle metadata and common users. The latter is a template used to create new PDBs.

Deprecation and desupport of non-CDB databases

Beginning with Oracle Database 12c, Oracle deprecated the non-CDB (non-container) database architecture, and desupported it in Oracle Database 21c. This means that the Oracle Universal Installer and DBCA can no longer be used to create non-CDB instances of Oracle databases.

Desupport also means that an upgrade to Oracle Database 21c includes a migration to the multitenant architecture. This can be a significant consideration as it can change your approach to database administration.

Benefits of Oracle container database architecture

Is a move to Oracle container database architecture worth the learning curve? Why not just continue to create distinct individual databases or virtual machines (VMs)?

The benefits of moving to the CDB architecture often outweigh the “pain of change” because it can streamline your use of database resources and save you considerable operational and administrative time and costs. Pluggable databases are also easy to move between CDBs, which can increase the agility of your DBA services.

Some specific benefits of the Oracle container database model include:

  • The ability to consolidate code and data without changing existing schemas or applications.
  • Consolidating databases means you can also consolidate IT infrastructure and utilize computing resources more efficiently.
  • Consolidated IT infrastructure, in turn, can simplify monitoring and management of the database environment—including faster backups and patching. Performance tuning can also be easier with the Oracle container database model.
  • Because PDBs look like non-multitenant databases to Oracle Net clients, changes for developers working with Oracle databases are often not dramatic. Developers may notice little difference connecting to a multitenant scenario except that the connection strings have a different format.
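That connection-string difference can be shown concretely. Pre-multitenant clients often connected by SID, while a PDB is exposed as a database service, so clients connect by service name instead (the EZConnect `host:port/service` form). The host and service names in the sketch below are hypothetical, and the colon-separated SID form shown is the convention used by many drivers rather than a universal standard.

```python
# Illustration of the connection-string difference developers see.
# Host, port, SID, and service names below are hypothetical.

def sid_dsn(host, port, sid):
    """Older SID-style descriptor (colon-separated in many drivers)."""
    return f"{host}:{port}:{sid}"

def ezconnect(host, port, service_name):
    """EZConnect string for a service (the form used for PDBs)."""
    return f"{host}:{port}/{service_name}"

print(sid_dsn("dbhost", 1521, "ORCL"))        # non-CDB style
print(ezconnect("dbhost", 1521, "salespdb"))  # PDB service style
```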

Pluggability in the Oracle container database model

One of the top advantages of the Oracle container database model or multitenant option is the ability to unplug a pluggable database (PDB) from one CDB and plug it into a different CDB. This makes it easy to move databases, and can also be used to patch and upgrade database versions. Basically, you just unplug the PDB, move it to the CDB you plan to upgrade, and it will be patched/upgraded automatically along with the CDB.

The Oracle multitenant model also allows you to relocate a PDB to a new CDB or application container even more easily than going the unplugging/plug-in route, with near-zero downtime. During relocation, the source PDB can be open in read/write mode and fully usable.

More about application containers

Along with the Oracle container database model comes the concept of application containers. Similar to a root CDB container, you can use an application container to centralize or “containerize” one or more applications, each consisting of shared configuration, metadata and objects. These are then used by the application PDBs within the application container.

Next steps

The Oracle container database architecture can seem confusing even to experienced DBAs. But it’s more intuitive than it sounds once you’ve had a chance to work with it. The advantages of multitenancy generally far offset the learning curve for many DBAs and their companies.

To speak with an Oracle expert about leveraging the Oracle container database model in your environment, contact Buda Consulting.

How Much Does Database Disaster Recovery Cost? “It Depends”

“It depends” is a sometimes frustrating response that we hear frequently when we ask a question. To some, it feels like a dodge: maybe the person we are asking does not know, or would rather not give an opinion or share their knowledge.

But when I hear someone respond “It depends,” I tend to think they are seriously considering the question, and I expect the answer to be thoughtful and considered. In fact, few questions really deserve an automatic response. Most issues are nuanced, and when someone says “It depends,” it does not mean they are dodging the question.

A common question new clients ask is how much it will cost to implement Disaster Recovery (D/R) for their database environments. My answer always starts the same way: “It depends.”

Database Disaster Recovery vs High Availability

Disaster Recovery is sometimes considered distinct from High Availability. For the purposes of this article, I think of them as two parts of the same whole. The objective of both is to keep your database available to your users when they need it. And when designing a solution that meets those objectives, both types of tools may be implemented. 

I think of Disaster Recovery in terms of things like backup and recovery tools and passive standby databases. The idea is to have a straightforward way of recovering and resuming operations if the primary server fails.  And I think of High Availability in terms of things like clustering, geographically distributed availability groups, and active-standby databases. The idea here is to prevent the system from ever failing in the first place.

When it comes to keeping the database available as needed, all of these tools need to be considered.

The Cost of Downtime

There are many factors to consider when thinking about Disaster Recovery. Perhaps the most important, and I think the first that should be asked, is: what is the cost of downtime? Determining the cost of downtime to our own organizations requires asking what would happen if we were down for 1 minute, 1 hour, 1 day, or other appropriate intervals, and we must consider all departments and stakeholders. For example, in a manufacturing operation (this list of considerations is not exhaustive):

  • How many orders are typically placed in one minute, hour, or day? What is the dollar value of those orders? What percentage will likely be lost forever versus merely delayed?
  • How many items are received during those intervals, and what is the downstream impact on production if items cannot be received into the system?
  • How many items are produced during those intervals, and what is the downstream financial impact if they are not produced and shipped?
  • How many orders are labeled during those intervals, and how many shipped? What is the downstream impact of delays in labeling or shipping?
  • What are the upstream production impacts of not being able to produce, label, ship, or record order information (inventory space, etc.)?
  • What is the liability cost of not getting products or services to vendors or end customers within contractual guidelines?

These are not simple questions to answer, but the true cost of downtime can only be determined by such an exercise. 
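The exercise above can be reduced to simple arithmetic once you have estimates for each factor. The sketch below is a minimal, hypothetical model: every figure (order rate, order value, loss percentage, production and penalty costs) is an illustrative assumption, not data from any real operation, and a real model would include far more of the factors listed above.

```python
# Rough downtime-cost estimate for a manufacturing operation.
# All figures are hypothetical placeholders; substitute your own numbers.

def downtime_cost(minutes_down,
                  orders_per_minute=2.0,             # average order rate
                  avg_order_value=150.0,             # dollars per order
                  pct_orders_lost=0.30,              # share lost forever vs. delayed
                  production_cost_per_minute=40.0,   # idle line, labor, etc.
                  contractual_penalty_per_minute=10.0):
    """Return an estimated dollar cost of an outage of the given length."""
    lost_order_revenue = (minutes_down * orders_per_minute
                          * avg_order_value * pct_orders_lost)
    production_impact = minutes_down * production_cost_per_minute
    penalties = minutes_down * contractual_penalty_per_minute
    return lost_order_revenue + production_impact + penalties

# Cost at the intervals discussed above: 1 minute, 1 hour, 1 day.
for minutes in (1, 60, 60 * 24):
    print(f"{minutes:>5} min down: ${downtime_cost(minutes):,.2f}")
```

Even a crude model like this makes the later cost/benefit conversation concrete: it turns “downtime is bad” into a dollar figure per interval.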

What is Acceptable Database Disaster Recovery?

Once we know the cost of downtime, we can determine what level of disaster recovery is required in order to prevent unacceptable costs to the organization, which, of course, is the main reason to have a disaster recovery plan in the first place.  At the end of the day, the question is how much data loss or downtime is acceptable.

Of course, we would always like to say zero. Zero downtime, zero data loss, no matter what. However, implementing true zero-loss Disaster Recovery may be cost-prohibitive for your organization. And moving from a zero-loss posture to a very-small-loss posture can reduce implementation costs significantly. So it makes sense to determine what the costs are and therefore what is acceptable to the organization.

Once we know the cost for an interval of downtime, we can do a cost/benefit analysis regarding the cost of implementing D/R. 

Factors That Drive The Cost of Implementation

The implementation cost of Database Disaster Recovery depends mainly on two key factors.

  • The amount of data loss that is acceptable (known as recovery point objective or RPO)
  • The amount of downtime that is acceptable (known as recovery time objective or RTO)

For both of these factors, the lower the acceptable loss, the higher the cost, with the cost and complexity of driving down downtime generally greater than that of driving down the amount of data loss.
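Combining the downtime-cost figure with RPO and RTO targets lets you compare candidate architectures on expected annual cost. The sketch below is an illustrative model under stated assumptions: the option names, prices, outage frequency, and per-minute cost are all hypothetical, and it crudely treats RPO data loss as equivalent to that many minutes of downtime.

```python
# Compare candidate D/R architectures by total expected annual cost:
# implementation cost plus the expected cost of the downtime and data
# loss each one still allows. All numbers are illustrative assumptions.

COST_PER_MINUTE_DOWN = 140.0    # assumed, from a downtime-cost exercise
EXPECTED_OUTAGES_PER_YEAR = 2   # assumed failure rate

def expected_annual_cost(impl_cost, rto_minutes, rpo_minutes):
    """Implementation cost plus expected outage cost for one year."""
    # Crude simplification: RPO loss costs as much as equivalent downtime.
    per_outage = (rto_minutes + rpo_minutes) * COST_PER_MINUTE_DOWN
    return impl_cost + EXPECTED_OUTAGES_PER_YEAR * per_outage

options = [
    # (name, annual implementation cost, RTO minutes, RPO minutes)
    ("Nightly backups only",             5_000, 24 * 60, 24 * 60),
    ("Warm standby database",           25_000,      10,      10),
    ("Full active-active redundancy",  150_000,       0,       0),
]

for name, cost, rto, rpo in options:
    print(f"{name:32s} ${expected_annual_cost(cost, rto, rpo):>12,.2f}")
```

With these particular assumptions, the near-zero-loss standby beats both extremes, which is exactly the compromise discussed below: a small acceptable loss can be far cheaper than all zeroes, yet far safer than backups alone.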

Implementing a Disaster Recovery scenario with zero possibility of data loss and zero downtime can be very expensive. This approach essentially requires full live redundancy across multiple geographic regions and the complexity that goes along with ensuring a seamless automatic transition of all applications from one environment to another and real-time synchronization between them.  

For many organizations, this full redundancy approach will be cost-prohibitive. And for most organizations, the cost of a small amount of downtime, and a small possibility of a very small amount of data loss, is acceptable and will not cause significant damage to the operation (or to profit). This compromise can mean the difference between being able to afford a Disaster Recovery solution and not being able to do so. Having any Disaster Recovery solution, even one without all zeroes, is much better than having none.

The Bottom Line

When someone asks me how much it will cost to implement a Disaster Recovery Solution, I always say “It Depends”.  And then I ask a lot of questions. Contact us today for a consultation.

Need Continuous Database Protection across Oracle and SQL Server? Consider Dbvisit Standby MultiPlatform.

Availability of your database environment is business-critical: without continuous database protection, you can’t ensure business continuity. And it’s only a matter of time before you experience a failure. When (not if) that happens, will you be ready?

When it comes to disaster recovery, many businesses rely on conventional backup/restore procedures to protect their database from risks like operational failures, cyber-attack, disaster impacts, and data corruption. But restoring from traditional backups can be slow, taking hours or even days. Restoring from backups is also notoriously failure-prone because testing and validation are usually infrequent. Plus, depending on how frequently backups occur, you could lose hours’ worth of the most recent changes to your data.

If your organization requires rapid, resilient disaster recovery and business continuity capabilities and/or cannot tolerate data loss, you may want to consider a standby database configuration. A standby database is a copy of the primary database, usually at a remote location. It updates continuously to minimize data loss and can quickly “failover” to support ongoing operations if the primary database goes down or is corrupted.

Why use a standby database for disaster recovery and continuous database protection?

A standby server has several important advantages over traditional backup/restore tools for disaster recovery and data loss prevention:

  • It is always operational and available in seconds, not hours or days, so you can recover more quickly.
  • It minimizes potential data loss by updating continuously with minimal time lag.
  • Its operational readiness is constantly verified, which guarantees database integrity after failover.
  • It enables you to test your disaster recovery plan much more easily, with minimal risk or impact to your primary database and the applications that rely on it.
  • It can be offsite, geographically distant, and running on separate infrastructure from your primary database, which reduces disaster risk in the event of operational failure at your production site.
  • You can enjoy peace of mind knowing that your database is always backed up and can be restored or recovered at any time with no surprises.

In short, a standby database can be an ideal solution for organizations that want to ensure continuous database protection to minimize downtime, data loss, and business risk. The following figure illustrates a standby database configuration.
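One way to see how a standby configuration supports an RPO target is to monitor apply lag: how far the standby’s last-applied change trails the primary’s last commit. The sketch below is a generic illustration of that check, not the Dbvisit API; the function name, timestamps, and 10-minute threshold (chosen to echo the figure quoted later in this post) are all assumptions.

```python
# Generic standby-lag check: alert when the standby's last-applied
# timestamp trails the primary beyond the RPO target. Illustrative only;
# real tools (e.g. Dbvisit Standby) perform this monitoring for you.
from datetime import datetime, timedelta

RPO = timedelta(minutes=10)  # maximum tolerable data loss

def standby_within_rpo(primary_last_commit: datetime,
                       standby_last_applied: datetime) -> bool:
    """True if the standby's apply lag is within the RPO target."""
    return (primary_last_commit - standby_last_applied) <= RPO

now = datetime(2022, 6, 1, 3, 0)
print(standby_within_rpo(now, now - timedelta(minutes=4)))   # within RPO
print(standby_within_rpo(now, now - timedelta(minutes=25)))  # lag breaches RPO
```

In practice this check runs continuously, and a breach triggers an alert or investigation long before a disaster forces a failover.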

Meet Dbvisit, Buda Consulting’s standby database partner

Buda Consulting has considerable experience helping organizations implement backup/restore, high availability and disaster recovery solutions for their databases on Oracle, Microsoft SQL Server and open-source platforms. We have found our longtime partner Dbvisit to be a world-class standby database solution provider whose solutions are easy to use, cost-effective and backed by great customer service. Our customers of all sizes love Dbvisit, which is why we’re sharing this blog post.

We’re especially excited to share with our client base that Dbvisit now offers the industry’s first multiplatform option. Called StandbyMP, it enables you to manage standby databases for Oracle and SQL Server through a single pane of glass. Imagine confronting an outage and being able to failover all your databases automatically or with a single click! PostgreSQL support is also coming soon in 2022.

Another big advantage of Dbvisit solutions is you can deploy them on-premises, in a public cloud or on hybrid cloud. Supported public clouds include Amazon Web Services (AWS), Microsoft Azure and Oracle Cloud.

Gold Standard Disaster Recovery and Continuous Database Protection

The folks at Dbvisit are disaster recovery specialists, with thousands of customers in 120 countries and offices in North America, Europe and Asia Pacific. While they serve some of the world’s leading enterprises, including Verizon, Barclays, 7-Eleven, the US Navy, Volkswagen, PWC and CBS, Dbvisit’s exceptional support and industry-leading total cost of ownership (TCO) make them a great choice for small to midsized businesses (SMBs) as well.

According to Neil Barton, CTO of Dbvisit, “Dbvisit Standby guarantees database continuity through a verified standby database that is always available and ready to take over at the moment you need it.” Even if your most trusted DBA is on vacation when an emergency occurs at 3AM, your database(s) will be protected from contingencies ranging from human error to hardware failure to hurricanes to hackers.

Dbvisit Standby solutions for Oracle and/or SQL Server promise minimal data loss (a maximum of approximately 10 minutes) and fast database recovery/failover (within a few minutes). Continuous exercising and testing maintain and validate the integrity of your standby database 24×7. This is what Dbvisit calls “Gold Standard Disaster Recovery.” It offers the following value propositions:

  • Database integrity with a verified standby database that is identical to the primary database and fully operational to ensure successful failover
  • Resilience to meet your recovery requirements across all outage and disaster scenarios
  • Automated and intuitive to eliminate manual processes, opportunities for error and dependence on highly skilled staff
  • Decision simplification to “de-stress DR”
  • Near-zero data loss
  • Cost-efficiency and low risk

Dbvisit lives up to its motto: “We believe nothing should stand in the way of your business moving forward.”

Dbvisit StandbyMP: Enterprise-class DR for multiple database platforms

Using different disaster recovery tools and processes across multiple database types has always been complex. Dbvisit’s new StandbyMP offering promises to reduce this complexity and for the first time allow customers to manage DR processes for SQL Server and Oracle SE databases through a single console. We are very excited about the multi-platform concept and are looking forward to the addition of PostgreSQL and other popular databases soon.

Prioritizing risk reduction, disaster resiliency, recovery speed and ease of use, StandbyMP delivers rapid time-to-benefit, ease of administration and automated, on-demand failover. Dbvisit guarantees database continuity and radically reduces database risk with a consistent, “Gold Standard” approach to protecting both Oracle and SQL Server databases.

“Our software costs the equivalent of two minutes’ downtime,” said Tim Marshall, Product Marketing Manager, in a recent Dbvisit blog post. “Great doesn’t have to be expensive.”

Dbvisit highlights these key value propositions for its StandbyMP solution:

  • Simplify – Control your Oracle and SQL Server disaster recovery configurations from a single central console
  • Speed up – Multi/concurrent database actions accelerate recovery across both Oracle and SQL Server
  • Risk down – Automation removes manual processes, hard-to-maintain scripts, and opportunities for error
  • Level up – Simplify your disaster recovery plans and ensure best practices are implemented across all your databases

Next steps

An industry-leading standby database solution like Dbvisit StandbyMP can be the perfect way to continuously protect your critical data—but it’s not right for every database. To connect with an expert on whether a standby database makes sense for your business, contact Buda Consulting to schedule a 15-minute conversation.

For more information on Dbvisit solutions and services, check out