Database Encryption: What You Need to Know

These days organizations are storing, accessing, and analyzing more data than ever, both on-premises and in the cloud. As this trend accelerates, the need for effective database security grows right along with it.

Basic security controls like login/password credentials aren’t adequate to safeguard sensitive data from today’s increasingly sophisticated external and internal attacks. To reduce cyber risk, comply with regulations and give customers and other stakeholders peace of mind, many organizations need a holistic security approach that includes database encryption.

The view that database encryption comes with burdensome costs, added IT complexity and degraded performance is outdated. With today’s solutions, database encryption can be among the easiest, most affordable, and most effective security steps you can take. Multiple database encryption approaches are available, so choosing the right one for your needs is essential.

How does database encryption work?

When you encrypt all or part of a database, an encryption algorithm (there are many) converts the data from human-readable characters to ciphertext, which completely obscures the content and renders it useless to attackers. To decrypt the data and use it, you need the correct key, which the encryption solution generates.
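The round trip described above can be sketched in a few lines. This is a toy illustration only, using a one-time-pad-style XOR in place of a real algorithm such as AES; the point is the plaintext-to-ciphertext-and-back flow, not production cryptography.

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Toy "cipher": XOR each byte with the corresponding key byte.
    return bytes(b ^ k for b, k in zip(data, key))

plaintext = b"card_number=4111111111111111"
key = secrets.token_bytes(len(plaintext))   # the encryption solution generates the key

ciphertext = xor_bytes(plaintext, key)      # unreadable, useless to an attacker
assert ciphertext != plaintext

recovered = xor_bytes(ciphertext, key)      # only the correct key decrypts
assert recovered == plaintext
```

Any other key would produce garbage on decryption, which is exactly why key management (discussed below) matters as much as the encryption itself.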

Unlike many other security controls, like firewalls or anti-malware tools, most database encryption operates directly on the data where it is stored, often termed “data at rest.” At-rest encryption keeps your data secure if your network or database server is compromised, or if a malicious insider or cybercriminal with privileged access attempts to exfiltrate your data. Only users who have the right key can make use of the encrypted data.

What types of encryption are available?

To balance your users’ needs for access and performance with the value of your data and the risks it faces, you can choose from a range of database encryption options. These include:

  • Full database encryption, where all the data in the database is encrypted.
  • Column-/field-/table-level database encryption, where the most sensitive data elements are encrypted but others are not. This option can improve application performance and reduce system overhead, since only queries against the encrypted data incur encryption overhead.
  • Client-side encryption encrypts the data on a user’s system before it is stored in the database. This approach puts the computational overhead of encryption on the client system, which often has cycles to spare. A further advantage is that data encrypted in this way is safe even from malicious code running on the server or within the RDBMS environment.
  • Homomorphic encryption uses complex mathematical computations to analyze encrypted data in various ways without decrypting it. This approach preserves privacy for sensitive data like health or educational records. It allows cloud service providers (CSPs), remote database administrators (DBAs), and other third parties to process data while maintaining regulatory compliance and full security. 
  • Hardware encryption, where the encryption mechanism is built into the hardware (e.g., a disk drive) where the database resides. The primary benefit of hardware encryption is that if the database environment or server is compromised, the data will remain inaccessible to attackers. 
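To make the column-level option above concrete, here is a minimal sketch of a row in which only the sensitive field is encrypted. The table layout and the XOR stand-in cipher are illustrative assumptions, not any vendor's implementation; a real system would use AES or similar.

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for a real cipher such as AES.
    return bytes(b ^ k for b, k in zip(data, key))

key = secrets.token_bytes(32)

def encrypt_field(value: str) -> bytes:
    return xor_bytes(value.encode(), key)

row = {
    "customer_id": 1001,                    # stored in the clear; queries stay fast
    "city": "Princeton",                    # stored in the clear
    "ssn": encrypt_field("123-45-6789"),    # the sensitive column, encrypted at rest
}

# Queries filtering on customer_id or city pay no decryption cost;
# only access to the ssn column requires the key.
assert row["ssn"] != b"123-45-6789"
assert xor_bytes(row["ssn"], key).decode() == "123-45-6789"
```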

How does Oracle handle encryption?

Oracle has long supported a feature called Transparent Data Encryption (TDE), which is both effective and straightforward to implement. TDE lets you encrypt the whole database, or only specific tables or columns.

Oracle can store the database encryption keys in a separate Oracle Key Vault, which helps you govern your keys so that they’re secure from unauthorized access and available automatically (“transparently”) to authorized users and systems. This makes TDE a great option where you need to protect data from attacks that compromise your database servers and/or Oracle RDBMS, or where hackers gain access to the physical storage media.

How does Microsoft SQL Server handle encryption?

Like Oracle, Microsoft SQL Server also has the capability to encrypt data at rest, which it calls Transparent Data Encryption (TDE). SQL Server’s TDE offers many of the same data encryption capabilities as Oracle’s TDE. But its default key storage is different. Instead of a separate vault, SQL Server stores the database encryption key (DEK) in the database boot record for availability during recovery.

SQL Server also offers the Always Encrypted feature, which lets you encrypt highly sensitive data inside client applications and never reveal the encryption keys to the database engine. Because it segregates those who own the data from those who view or manage it, Always Encrypted is ideal for maintaining security and compliance for high-value data in cloud environments, for instance.

How do open-source databases handle database encryption?

MySQL, PostgreSQL, and most other popular open-source databases support encryption through add-on modules and third-party libraries, such as PostgreSQL’s pgcrypto extension or MyDiamo for MySQL. There are also open-source toolkits for specialized types of database encryption like homomorphic encryption.

Database operations can also call on encryption functions available at the file system level in Windows, Linux, macOS, etc. With this type of encryption, the server encrypts entire files as they are stored, potentially adding to system overhead but saving the cost of a separate solution.

Important considerations with managing encryption keys

Your encrypted data is only as secure as your encryption keys. Since they control access to encrypted data, you should store your keys separately from the database when possible. For example, both Microsoft Azure and IBM Cloud offer a “key vault” service that stores encryption keys in a hardware security module for an extra layer of protection.

Your key management also needs to factor in backups, because backed-up encrypted data is unrecoverable if the associated keys are lost. One option is to consolidate your database encryption keys in a centralized key manager solution, and back them up from there.
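The separate-vault idea above is often implemented as envelope encryption: the data is encrypted with a data encryption key (DEK), and the DEK itself is encrypted ("wrapped") with a master key that never leaves the vault. A toy sketch, again using XOR as a stand-in cipher:

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for a real cipher.
    return bytes(b ^ k for b, k in zip(data, key))

master_key = secrets.token_bytes(32)      # lives in the key vault, never in the database
dek = secrets.token_bytes(32)             # data encryption key

record = xor_bytes(b"salary=90000", dek)           # data encrypted with the DEK
wrapped_dek = xor_bytes(dek, master_key)           # DEK encrypted with the master key;
                                                   # safe to back up alongside the data

# Recovery: unwrap the DEK with the vault's master key, then decrypt the data.
assert xor_bytes(record, xor_bytes(wrapped_dek, master_key)) == b"salary=90000"
```

Because the wrapped DEK is useless without the vault's master key, backups that include it remain both secure and recoverable.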

Another consideration with encryption keys is their length. Different encryption methods rely on different key types, and longer keys, like longer passwords, are generally more secure. For example, 128-bit encryption schemes use a 128-bit key, which is deemed virtually impossible to break using today’s (non-quantum) computer systems.
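The arithmetic behind "virtually impossible" is simple: every added key bit doubles the brute-force search space. A back-of-the-envelope calculation (the guesses-per-second rate is an illustrative assumption):

```python
# Brute-force search space for a 128-bit key.
keyspace_128 = 2 ** 128          # about 3.4 x 10^38 possible keys
assert keyspace_128 > 3.4e38

# Even at an (extremely generous) 10^12 guesses per second, searching
# half the keyspace on average would take an astronomical time:
years = (keyspace_128 / 2) / 1e12 / (60 * 60 * 24 * 365)
assert years > 5e18              # billions of billions of years
```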

The downside of longer keys—and potentially encryption overall—is higher overhead and reduced data throughput, as well as increased storage needs for the database. However, applying best practices in your implementation can reduce or eliminate many undesirable impacts.

Next steps

In response to customer demands and/or emerging security and privacy compliance requirements, more businesses are encrypting more databases in more ways than ever before. But the more data you encrypt, the more encryption keys you need to manage and the more you need to be concerned with performance impacts.

For optimal benefit to your organization, your database encryption strategy should reflect a holistic view of present and projected business needs, including cybersecurity and compliance risks, plus expert knowledge of best practices and technology options. A security risk assessment to identify weak spots is a great place to start.

If you are considering database encryption or want to optimize your current encryption approach, Buda Consulting can help you secure your business-critical data, comply with regulations and address database performance and cost issues.

Contact us to connect with an expert about a security risk assessment and related services.


A Step-By-Step Guide to Upgrading Oracle 11g to 12c

Is your company’s IT infrastructure an out-of-date eyesore? The 11g support clock has been ticking since 2013, when Oracle released 12c, the next major version. As in any software cycle, Oracle has been phasing out support for version 11g.

If you are still using version 11g, chances are your database infrastructure is outdated and needs an update. However, you will want to do this carefully so you don’t compromise your stored data. The most recent upgrade offered by Oracle is their 19c version; if you’re on 11g, you are far behind. Fortunately, we’ve got you covered. Here’s our step-by-step guide to manually upgrading Oracle 11g to 12c. You can then use that foundation to upgrade Oracle 12c to 19c if you want.

Step 1) Check Release and Version

Make sure you are using Linux.

The syntax to check your version is usually:

SELECT * FROM v$version;

You will want to make sure you are using

  • Oracle Database 11gR2 (64-bit)
  • Running on Oracle Enterprise SUSE Linux

This tutorial will teach you how to upgrade Oracle 11g to 12c using DBUA (the Database Upgrade Assistant).

This is probably the simplest way to upgrade your database version.

Step 2) Check Prerequisites

There are a few prerequisites that you should check before upgrading, or else the process may fail.

Here are some things that you should check.

Gather Dictionary Statistics

You will be able to do this using an SQL prompt.

Input the following:

SQL> EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
For Container Databases

If your Oracle database uses containers, you will want to use the following command to perform the actions in all containers.

This will gather your dictionary statistics across all your containers.

$ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl -l /tmp -b gather_dict_stats -- --x"EXEC DBMS_STATS.gather_dictionary_stats"

Purge Recycle Bin

You will want to purge your recycle bin as well. This can be done using an SQL prompt:

SQL> PURGE DBA_RECYCLEBIN;
Compile Invalid Objects

Again, this can be done from an SQL prompt:

SQL> @?/rdbms/admin/utlrp.sql
Compile for Container Databases

This is the prompt you will want to use to compile invalid objects for container databases. This will execute the action across all your containers.

$ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl -l /tmp -b comp_invalid_objs -- --x"@?/rdbms/admin/utlrp.sql"

Step 3) Stop the Database and Listener

Next, you want to stop the database from running.

Input the following into the command prompt.

oracle@hostname> sqlplus / as sysdba

SQL> shutdown immediate;

oracle@hostname> lsnrctl stop

This will stop the database as well as the listener.

Step 4) Check Version and Components Again

Use the previous step to check the version of Oracle, and then check the components in $ORACLE_HOME.

You will also want to set the new ORACLE_HOME. 

You can do this by typing in: 

hostname> export ORACLE_HOME=oracle/SID/12.1.0

If you have completed Steps 1-4, you should now be ready to run the 12c installer.

Step 5) 12c Installer

Next, you’ll want to go to the 12c media directory and run the following command. 

oracle@hostname> ./runInstaller

This should pull up the installer.

  1. Once you’ve navigated to the installer window, click Next
  2. In the next window, click Next again and select the “Skip software updates” option
  3. Click Next again and select “Upgrade an existing database.”
  4. What this should do is automatically launch the DBUA to help you upgrade your existing database
  5. You will want to select a language (e.g. English) and click Next
  6. You should select the “Enterprise Edition” and click Next
  7. Click Next again at “Specify Installation Location”
  8. Click Next again at “Privileged Operating System groups”
  9. The installer will then run the prerequisite checks again; click Next
  10. On the final page, click Install

Once the progress bar reaches 100%, the Oracle 12c software should be installed. This is not all there is to it, though. You will still have to upgrade your database from 11g to 12c.

Step 6) Use DBUA to Upgrade 11g to 12c

The next few steps will guide you through the process of using the installer to upgrade your database. 

The DBUA supports an 11g to 12c upgrade because the direct upgrade is supported according to Oracle’s upgrade matrix. 

You can follow the instructions in the DBUA to interactively install the upgrade or simply follow this guide.

  1. On the first page, “Select Operation,” choose the upgrade operation and click Next
  2. Click Next again at “Select Database”
  3. At “Prerequisite Checks,” ignore and hit Next again
  4. Hit Next again
  5. At “Upgrade Options,” click Next again
  6. At “Management Options,” you will want to select the “Configure Enterprise Management” button and click Next again
  7. At “Move Database Files,” hit Next
  8. At “Network Configuration,” hit Next
  9. At “Recovery Options,” select the option “I have my own backup” and click Next

Lastly, on the summary page, click Finish. This should again show a progress bar. Once this progress bar is at 100%, the database should be updated to version 12c successfully.

Check that the version has been upgraded by inputting:

SQL> select * from v$version;                                                                                                                                                   

This should show that you are on Oracle Database 12c Enterprise Edition Release – 64bit Production.

Upgrade Oracle 12c to 19c

From here, you can also upgrade to Oracle 19c. We will not cover the steps in this guide, but the process is similar.

Keep in mind that you want an upgrade timeline of 6 months to a year if you are still using Oracle 11g. The latest Oracle release is 21c; not upgrading your IT infrastructure could be costly for your company down the line, as older versions of Oracle become de-supported.

Have Any Questions?

While upgrading databases is complex, your company should aim to do so in a reasonable time frame.

Oracle supports direct upgrades from 11g to 12c, which can be done with their Database Upgrade Assistant (DBUA). From there you can upgrade Oracle 12c to 19c.

Hopefully, this guide helped you get your IT up-to-date.

Please contact us at Buda Consulting if you have any questions about updating your Oracle product. We are Oracle experts and are here to help you at any time.

What Is CUI Data? | An Expert’s Explanation

Did you know there are 125 categories of controlled unclassified information (CUI)? With so much data that is now under the umbrella of CUI, ensuring your business recognizes which data to protect is essential. But what exactly is CUI data? Read on to learn about this type of data, how to recognize if you use it in your business, and how you can protect it.

What Is CUI Data?

CUI, or controlled unclassified information, is information that needs safeguarding. It is data that needs to be disseminated in a manner that follows the laws and regulations the government has in place, but that does not fit under Executive Order 13526 “Classified National Security Information”.

CUI is part of a government program that strives to standardize this type of data and ensure it is protected. CUI replaces the old For Official Use Only (FOUO) programs and offers more efficient and consistent policies. If a document had a label of “Proprietary” or “For Official Use Only” in the past, now it needs the CUI label.

CUI is a term that encompasses other kinds of data: Covered Defense Information (CDI) and Controlled Technical Information (CTI). They refer to technical information that applies to a military or space context and which has a distribution statement. The data can be labeled as CUI Basic or CUI Specified, which is more restrictive in its uses and the safeguards it needs.

Examples of CUI Data

Within the 125 categories of data that fit into the CUI label, you can find many subsets of information that need to be protected, but are not classified. The CUI Registry has a list of what type of data must be safeguarded following government policies, laws, and regulations. Some examples include:

  • Personally Identifiable Information (PII), which is information that can identify a particular person
  • Sensitive Personally Identifiable Information (SPII), which is information that if disclosed without permission could substantially harm or embarrass the person
  • Unclassified Controlled Technical Information (UCTI), which refers to data that has a military or space application
  • Sensitive But Unclassified (SBU), which is information that does not meet the standards for National Security classification
  • Law Enforcement Sensitive (LES), which is data that if disseminated without permission could cause harm to law enforcement procedures

There are many more forms of CUI, and you can expect everything from health records, intellectual property, technical drawings and blueprints, and much more to fall under the label of CUI data.

Identifying CUI Data

If you are an IT professional or are a government contractor of any kind, you will likely have CUI data to worry about. Most of the time, the Department of Defense will label data as CTI or CDI, as needed, but there are instances when the contractor will be creating this kind of data as they complete a project. How do you identify it, then?

Let us look at some of the things to watch for.


Government Contracts

Does your site have a US government contract, or does it supply a US federal contract? If it does, then you most likely have CUI data you will need to safeguard.

Labeled Information

Some data will have a CUI label on it already or will be easy to identify. If you see “Export Control”, which includes information that needs monitoring, such as Export Administration Regulations (EAR) or International Traffic in Arms Regulations (ITAR), then you can expect CUI data. Labeled information refers to non-classified data that has legacy or agency designations, and that is CUI.

Defense Projects

Many Defense Federal Acquisition Regulations (DFAR) deal with CUI. If projects related to aerospace manufacturing have details that are noncommercial and technical, they are CUI. Technical information can refer to engineering and research data. It can also be engineering drawings and plans, technical orders, process sheets, manuals, datasets, studies, and much more. For defense projects that have technical information related to a military or space application, you need the label of CUI.

Non-Defense Projects

Whether there is CUI data in a non-defense federal project depends on the specifics of the project and of the contract. Federal contract information, which is CUI, is information that the government does not want released to the public, and that has been created for the government or provided by the government during a contract.

Protecting CUI Data

There are government policies and guidelines to help you protect CUI data. You have to physically protect the data using key card access or other similar locks. The data and all its backups need labeling and securing when not in use.

At the network layer, the data also needs protection. Firewalls, switches, and routers all have to guard against unauthorized access, with controls in place at OSI layers two through four. You have to have session controls in place as well. The data has to be protected with authentication and authorization mechanisms, all under the control of the data owner. There are also infrastructure controls that can help secure CUI data, including virtual machines, storage area networks, physical servers, and backup systems.

You will need to have a risk assessment completed, and network scans must be done periodically. Any configuration change to a system that provides access to CUI needs a documented review and approval process. Any logs need a third-party audit on a regular basis.

Keep CUI Secure

If you work with CUI data and need the best security, we can help. At Buda Consulting, we deliver secure and reliable database systems, ensuring even the most sensitive data is safe. Contact us now to speak with an expert!


Relational Database Design: It’s All About The Set

The Lost Science Of Relational Algebra And Set Theory

I originally wrote this post in 2011. Much has changed in the database technology landscape since then: big data technologies such as Hadoop have gone mainstream, and cloud technology is changing how and where we think about hosting our databases.

But relational databases are still relied upon as the best option for rich transactional data.

So, since this technology is still the foundation of our mission critical systems, we should understand how to take advantage of one of the foundational elements of relational technology: The Set.

The SQL language (Structured Query Language) was built upon relational algebra. This rigorous approach to query definition is largely about set theory. This post is not a detailed technical discussion of relational algebra or set theory; instead, it is about the way that relational databases are often misused.

The purpose of this article is to discuss the central theme of relational database technology and one of its greatest strengths, one that is often overlooked by those practicing Oracle or SQL Server database design and development: set theory. Relational databases like Oracle and SQL Server are built and optimized to process sets of rows, as opposed to individual rows. Many application developers, even those who use these relational tools, struggle to think beyond the individual row. That is why the major relational database vendors have created very powerful procedural languages such as PL/SQL and T-SQL.

In many cases, developers use these tools to step row by row through a dataset (by using cursors) because they may not understand how the set operators work. This approach leads to unnecessary development and higher maintenance costs, as well as poor performance.

There are definitely times when a procedural process is necessary. But often there are set-based alternatives that would be more efficient and much easier to develop.

In this post, I will focus on three core set operators: Union, Intersect, and Difference.

First some definitions:

Given two sets, Set A and Set B

Union:  All records from set A and all records from Set B.  If a record exists in both sets, it will only appear once in the Union. (Areas A, B, and C in figure 1).

Intersection: The unique set of records that exist in both set A and set B (Area C in figure 1).

Difference: The difference between Set A and Set B is all the records in Set A except those that also exist in Set B. (Area A in figure 1).
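Python’s built-in sets implement exactly these three definitions, which makes them a handy way to check your intuition before writing the SQL:

```python
# The three operators from figure 1, expressed with Python sets.
A = {"r1", "r2", "r3"}    # Set A
B = {"r3", "r4"}          # Set B

assert A | B == {"r1", "r2", "r3", "r4"}   # Union: r3 appears only once
assert A & B == {"r3"}                     # Intersection: records in both sets
assert A - B == {"r1", "r2"}               # Difference: A minus B
```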

Vendor Differences

Relational databases implement these operators in different ways, but they all provide a relatively simple way to combine and compare similar sets of data. Oracle has the Union, Intersect, and Minus operators. SQL Server has Union, Intersect, and Except operators.

MySQL can perform these operations as well, but it is more complex. For example, in order to do a difference operation, you must use a NOT EXISTS or NOT IN operator, resulting in a more complex SQL statement.
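Here is what that NOT IN workaround looks like in practice. SQLite (bundled with Python) stands in for MySQL here; the table names and data are illustrative:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE a(name TEXT);
    CREATE TABLE b(name TEXT);
    INSERT INTO a VALUES ('r1'), ('r2'), ('r3');
    INSERT INTO b VALUES ('r3');
""")

# Difference (a minus b) without a MINUS/EXCEPT operator:
rows = con.execute(
    "SELECT name FROM a WHERE name NOT IN (SELECT name FROM b) ORDER BY name"
).fetchall()
assert rows == [("r1",), ("r2",)]
```

The subquery-based form works, but it buries the set operation inside a WHERE clause, which is exactly the readability cost the text describes.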


Let’s examine how Oracle implements each of these set operations with a simple example.

This post is intended to discuss the concepts so I did not include the data and the actual query results in the post. But you can download the script to create and populate the tables with test data and run the queries here: set_tables_sql

Suppose you collect bank account events (debits, credits) from multiple sources. You place them into one common table, but also maintain the original source records in separate tables for historical purposes.  The original source records never change, but the events in the common table can be modified as necessary by the users.

Now suppose that occasionally you need to compare the transactional data in the common table to the original source data to see which rows have been changed. This is very easy using set operators.

The tables we will use for this example follow. I used different column names in each table to illustrate that the column names do not need to be the same in each set you are comparing. However, the number of columns and the data types in each query must be the same.

Table Definitions

CREATE TABLE Event (
Event_Id NUMBER,
Event_Name VARCHAR2(30),
Event_Description VARCHAR2(255),
Data_Source_location VARCHAR2(30),
Event_Date DATE
);

CREATE TABLE Event_Source_1 (
Event_Id_Orig NUMBER,
Event_Name_Orig VARCHAR2(30),
Event_Description_Orig VARCHAR2(255),
Data_Source_location_Orig VARCHAR2(30),
Event_Date_Orig DATE
);

CREATE TABLE Event_Source_2 (
Event_Id_Orig NUMBER,
Event_Name_Orig VARCHAR2(30),
Event_Description_Orig VARCHAR2(255),
Data_Source_location_Orig VARCHAR2(30),
Event_Date_Orig DATE
);

Example 1 — Union: Now suppose you needed to display all event names that appear in Event Source 1 and Event Source 2. The Union operator will display records from both tables, but records appearing in both tables will only appear once (unless the union all operator is specified, in which case duplicates will be displayed).

SELECT Event_Name_Orig FROM Event_Source_1
UNION
SELECT Event_Name_Orig FROM Event_Source_2;

Example 2 — Intersection: Now suppose you needed to display only events from Source 1 that have remained unchanged in the Event table. This can be done with an intersection between Event and Event_Source_1.

SELECT Event_Name,Event_Description,Data_Source_Location FROM Event
INTERSECT
SELECT Event_Name_Orig,Event_Description_Orig,Data_Source_Location_Orig FROM Event_Source_1;

Example 3  —  Difference: Now suppose you want to know all Data Source Locations that appear in the original Data Source 2 data but not in the original Data Source 1 data. This can be done by using the difference operation, implemented with the Minus operator by Oracle. This will take all the records from one set and subtract those that also exist in another set.

SELECT Data_Source_Location_Orig FROM Event_Source_2
MINUS
SELECT Data_Source_Location_Orig FROM Event_Source_1;
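The same operators can be tried interactively using SQLite (bundled with Python), which uses EXCEPT where Oracle uses MINUS. The sample event names below are made up for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Event_Source_1(Event_Name_Orig TEXT);
    CREATE TABLE Event_Source_2(Event_Name_Orig TEXT);
    INSERT INTO Event_Source_1 VALUES ('deposit'), ('withdrawal');
    INSERT INTO Event_Source_2 VALUES ('withdrawal'), ('transfer');
""")

# Union: rows from both tables, duplicates appearing once.
union = con.execute("""
    SELECT Event_Name_Orig FROM Event_Source_1
    UNION
    SELECT Event_Name_Orig FROM Event_Source_2
    ORDER BY 1""").fetchall()
assert union == [("deposit",), ("transfer",), ("withdrawal",)]

# Difference: in Source 2 but not in Source 1 (Oracle: MINUS).
diff = con.execute("""
    SELECT Event_Name_Orig FROM Event_Source_2
    EXCEPT
    SELECT Event_Name_Orig FROM Event_Source_1""").fetchall()
assert diff == [("transfer",)]
```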

Database Design Considerations

These powerful operators can be used to reduce or eliminate the need for cursors in many cases. The usefulness of these operators is dependent on sound database design and a well-normalized table structure. For example, a table that has repeating columns designating the same data element (as opposed to using multiple rows) will render these operators much less useful.


With careful database design and a good understanding of the set management tools provided by the relational vendors, we can simplify and speed development and reduce maintenance costs. Let’s think in terms of sets and get the most out of our relational database investment!

If you would like to discuss set theory or relational database design, please give me a call at (888) 809-4803 x 700 and if you have further thoughts on the topic, please add comments!

If you enjoyed this article please like and share!

Advantages, Disadvantages, & Risks Of Virtualization For Your Database — And How To Get It Right!

Types Of Virtualization

There are many types of virtualization: storage, network, server, and database. For the purposes of this article I will discuss server virtualization, with a special focus on servers that will house databases.

Server virtualization essentially refers to abstracting the services that make up a computer server from the underlying hardware resources. Database virtualization on the other hand refers to abstracting the services that make up a database system from the servers that provide those services. It is essentially another layer of abstraction.

I am choosing to write about server virtualization in this article because it has been more widely adopted so far than database virtualization and is implemented in a database-agnostic way. In other words, when you virtualize your database servers, the advantages and disadvantages will apply to any database that you are using on that server.

Advantages Of Virtualization

There are many advantages to server virtualization.  I will discuss two key advantages here.

Rapid Provisioning

Probably the greatest advantage of server virtualization is rapid provisioning. Virtualization platforms like VMware enable us to build new servers in seconds based on existing servers or server templates, a major improvement over configuring each server individually. This saves time and money and, perhaps most importantly, improves consistency and can be used to enforce policies if administered and controlled properly. By narrowly configuring servers to handle one database or a collection of related databases, we can extend the benefits of rapid provisioning to the database, facilitating rapid creation or refresh of test, dev, or QA database environments, for example.

Resource Utilization

Another advantage of virtualization is greater resource utilization. Servers on many virtualization platforms can be configured to use resources such as memory on an as-needed basis. This minimizes the amount of resources that must be maintained for burst times, assuming that all servers do not burst at the same time. Of course, taking advantage of this capability requires careful planning and an understanding of the resource usage profiles of your servers.

Disadvantages/Risks Of Virtualization

While there are many advantages to virtualization, there are also key disadvantages, which come mostly in the form of risk. These disadvantages are not inherent problems with virtualization; instead, they can result from a lack of strict planning and management of a virtual environment.

Management and Accountability

Rapid and simple provisioning comes with a cost. The ease and speed of spinning up new servers tends to promote server and database sprawl, causing management and accountability problems. When virtualizing, strict policies and procedures must be implemented and enforced to avoid future problems, especially in environments with multiple system managers.

Hardware Cost

In addition to management and accountability problems, actual costs can spin out of control. In a cloud environment like AWS (one type of virtualization), cloud provider costs that seem small on a server-by-server basis quickly add up as server sprawl kicks in. Similarly, with in-house virtualization infrastructure, easily created servers eventually overwhelm the resources in the system and more hardware must be purchased, often with difficulty tracking those costs to specific projects or departments.

Skill Requirements

Database and system management involves a number of skills. There are the hard technical skills, like knowing what commands to execute to download and install a Linux distribution or an Oracle patch. And for every one of those hard skills, there are a hundred soft skills, like knowing what downstream impact a Linux patch may have, what the likely security implications are of granting access to a folder required by a piece of software that needs to be installed, or how to determine the most efficient way to configure resources for Oracle. A huge risk in a virtualized environment is that the ease and speed of provisioning may give the false impression that the need for highly skilled system and database managers has diminished. On the contrary, I think the ease and speed of provisioning increases the need for those skill sets, because the potential to propagate a poor configuration throughout the system is much greater in a virtualized environment. And fixing 10 servers later is much more expensive than provisioning the first server properly in the beginning.


Database Licensing Cost

Major database vendors price their software based on the underlying resources of the machine it runs on. In a virtual environment, we can assign a certain amount of compute power to a database server, and that can be a small fraction of the total compute power of the virtualization cluster. But the vendors don't see it that way. Oracle, for example, bases the cost on the total CPU power across the whole cluster, regardless of how much power we assign to a given server, unless we use Oracle's own virtualization platform, which does honor the platform's resource partitioning. Misunderstanding this licensing model has caused many companies to be unexpectedly charged very large back licensing fees. A potential solution is to create a separate virtualization cluster for the database environment, but this limits some of the advantages described above.


Security

Security is always a concern when provisioning a server or database. There are many configuration settings, folder access restrictions, and OS and database users that need to be deactivated, removed, or restricted. Rapid cloning, combined with the perceived lower skill requirement for provisioning new servers, can take a small security problem and quickly propagate it throughout the environment. So while operating in a virtual environment may not introduce new security vulnerabilities by itself, as with all of the other disadvantages and risks I mentioned, the risks are magnified in a rapid-provisioning environment.

How To Get It Right?

I spoke to two experts who are responsible for virtualization platforms, either for their own organizations or for client organizations that run mission-critical applications. I wanted to find out the keys to success in building and maintaining a solid virtualization platform. Here are some of the takeaways.

I first spoke to the CIO of a financial institution that runs its entire shop on virtualized servers. He said that in-house server virtualization is a mature technology and that risks are low for an organization with a relatively stable application mix and resource load, and with a small system management staff. He also feels that compromises, like dedicating a whole blade server to a specific application, are sometimes worth it to limit the risk of resource contention, even though they may reduce the benefit of efficient resource utilization.

I also spoke with Rocco Guerriero, CEO of Contour Data Solutions. Rocco says that having the right policies and procedures in place is the key to ensuring a trouble-free environment. For example, he points out that if you are implementing a mission-critical database server with a standby database, it is important to ensure that the virtual server holding the primary database is never migrated to the same physical server that holds the standby. This can be done with rules specified in the virtualization tools.

Rocco also cautions that a good rule of thumb is to have enough resources in each cluster so that you don't exceed 50% usage during typical load. This ensures that servers can acquire the resources they need when demand spikes.
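As a rough illustration, that 50% rule of thumb amounts to a simple headroom check. The core counts below are hypothetical, and real capacity planning would also consider memory and storage:

```python
def cluster_headroom_ok(total_cores: int, allocated_cores: int,
                        threshold: float = 0.5) -> bool:
    """Return True if typical allocation stays at or below the threshold."""
    return allocated_cores / total_cores <= threshold

# A 128-core cluster with 60 cores allocated under typical load is fine.
print(cluster_headroom_ok(128, 60))   # True
# At 80 allocated cores, the same cluster has exceeded the 50% guideline.
print(cluster_headroom_ok(128, 80))   # False
```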

And finally, he recommends carefully assigning priorities to virtual machines if you need to ensure that certain servers always get the resources they need even at the expense of others. 


Server virtualization can be an effective way to reduce costs and speed up provisioning of hardware and software for our IT projects. But we must mitigate the risks from the beginning. Here are a few steps that I believe should be part of the management plan for any virtualized environment.

  • Establish policies and procedures that must be followed for all servers to be provisioned. Based on my conversations with these experts, and on my experience administering databases in virtual environments, I think policies should require the following: evaluation and implementation of the necessary virtualization rules described above; change control; security review; database license review; before-and-after cluster resource capacity review; and a record for each server stating what application(s) it will be used for, its resource cost estimate, provenance tracking of the server image, backup and restore requirements, and other data that will assist with management.
  • Ensure that only highly skilled system and database administrators are responsible for provisioning and configuring new servers and databases. Resist the temptation to enable regular users or developers who do not have system management experience to do this.

At the end of the day, the thing to remember is that provisioning servers is very easy, and very easy to get wrong.

Watch Your FRA!


When it comes to Oracle database administration, one of the most critical parts of your database structure is the fast recovery area (FRA). This is an Oracle-managed area where DBAs usually store some of the following files:

  • Redo logs
  • Archive logs
  • Flashback logs
  • Control files
  • RMAN backups

The purpose of the Oracle FRA is to simplify database recovery. The Oracle database process automatically manages the files stored in the FRA and deletes those that are no longer needed.
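For context, the FRA's location and size are controlled by two instance parameters. A minimal sketch, with a placeholder path and size that you would adjust for your own retention requirements:

```sql
-- Placeholder values; size the FRA to hold everything your
-- retention settings require.
ALTER SYSTEM SET db_recovery_file_dest_size = 200G SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest = '/u01/app/oracle/fra' SCOPE=BOTH;
```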

Oracle FRA Settings

Now the Oracle FRA may sound like a magical area that should never run into storage-related issues, but that could not be further from the truth. Even though the Oracle database process will manage items and attempt to delete files that aren't needed, the DBA also has to be aware of instance settings that may block Oracle from removing those files. Some settings that can cause storage issues with your FRA include:

  • RMAN backup retention setting – If you set your backup retention to store two weeks’ worth of RMAN backups, but your FRA fills up to 100% before any backups can be purged, this will cause your database to halt.
  • RMAN archive log deletion policy – If you set the deletion policy to delete archive logs after they are applied to all standby databases, but haven’t noticed that your primary and standby databases have been out of sync for a long period of time, your FRA can fill to 100% and cause your database to halt.
  • RMAN archive log backup copies setting – By default, backup copies are set to 1. But what if you want your backups to contain more copies of your archive logs in case one of your incremental backups becomes corrupted? When you set this higher than 1, an archive log cannot be deleted until it has been backed up as many times as the setting specifies. So if you set this option to 3, you will need at least three backups of each archive log before that log can be deleted from your system. If you store archive logs in your FRA, this can fill the FRA to 100% and cause your database to halt.
  • db_flashback_retention_target setting – If you have enabled the Flashback Database option, flashback logs are stored in the FRA by default. As with the archive logs, depending on the time value of this setting, Oracle keeps all the flashback logs needed to guarantee that you can flash back your database that far. If you set this to a high value, it can fill the FRA to 100% and cause your database to halt.
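The settings above map to RMAN and SQL*Plus commands along these lines. The retention window, copy count, and retention target shown are illustrative values, not recommendations:

```sql
-- RMAN: keep two weeks of backups (illustrative)
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 14 DAYS;

-- RMAN: only allow archive log deletion once applied on all standbys
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;

-- RMAN: require three backup copies of each archive log (illustrative)
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 3;

-- SQL*Plus: flashback retention target in minutes (1440 = 24 hours)
ALTER SYSTEM SET db_flashback_retention_target = 1440;
```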

Those are just a handful of the many ways you can accidentally fill your Oracle FRA, which is why you need to make sure that your FRA is adequately sized to store all files as per all retention settings. You should also create a script that queries v$recovery_area_usage and emails the result to all DBAs; this view tells you how much of your FRA is used and what in particular is taking up the space.
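A minimal sketch of such a monitoring query, run as a privileged user (the wrapping script and email delivery are left to your environment):

```sql
-- Per-file-type usage inside the FRA
SELECT file_type,
       percent_space_used,
       percent_space_reclaimable,
       number_of_files
  FROM v$recovery_area_usage;

-- Overall FRA limit and current usage
SELECT name,
       ROUND(space_limit / 1024 / 1024 / 1024, 2) AS limit_gb,
       ROUND(space_used  / 1024 / 1024 / 1024, 2) AS used_gb
  FROM v$recovery_file_dest;
```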

For remote and onsite DBA support to help keep your databases running smoothly, including 24×7 live support, contact Buda Consulting.