New options for Oracle DBAs – combine database clustering, virtualization, and Open Convergence

With DVX 4.0 you can now run Oracle RAC (Real Application Clusters) solutions on VMware vSphere virtualization on the Datrium DVX Open Converged private cloud platform.

This combination takes advantage of the real-time resiliency built into Oracle RAC configurations, the improved management flexibility of virtualizing the underlying servers, and the performance and protection provided by the Open Converged system from Datrium.

Running the Oracle database in a virtual machine brings the flexibility and mobility of virtualized server hardware, making management and administration easier. Clustering those VMs across the physical virtualization hosts provides even greater resiliency and uptime for the application in the face of hardware issues.

The Datrium DVX system delivers IO performance to the database(s) from local host flash storage for reduced latencies and better overall performance. At the same time, DVX provides a well-protected flash-based data environment with:

  • built-in erasure coding
  • data reduction – compression and deduplication
  • end-to-end encryption
  • snapshot and replication capabilities extending beyond the private cloud data center to the public (AWS) cloud

Datrium partnered with House of Brick consultants to review, test, and document the basics of a supported four-node Oracle RAC solution on DVX. The details can be found in this technical report.

For more information about running your virtualized Oracle databases (clustered or not) on Datrium DVX, please visit the Oracle Solutions area of the Datrium website.


Simulating Workloads – Getting to the End User Experience!

When IT organizations look to virtualize user desktop systems, the end user experience is often the toughest challenge to meet. Moving traditional physical systems into a more centrally controlled solution in the data center still has to provide the right functionality, and simulating hundreds of users to make sure all aspects are working well can be a bit of a challenge.

I’ve been working with VDI (Virtual Desktop Infrastructure) or EUC (End User Computing) solutions for several years now, across different infrastructure solutions and versions of software. During this time, one thing that has been common across these efforts is the use of Login VSI to help test the configurations. Without a really good, commonly used industry reference point, it would be almost impossible to develop reference architectures for our customers.

If you are not familiar with Login VSI, I encourage you to visit their site for more information and possibly get an evaluation of the testing tool. In a nutshell, it provides a simple and reliable framework for running almost any number of user desktop workloads that actually do in-desktop, user-specific work: opening documents, updating spreadsheets, and yes, even watching videos. Having a repeatable tool is critical in my line of work, but it also has advantages in normal customer VDI environments when making almost any type of change to the infrastructure or solution.

On the vendor side of this testing, Login VSI has been extremely useful to me for more than just capturing results that we can publish for others to reference; that’s the beauty of having a repeatable set of measures and baselines to work from.

We also use this tool in our regular QA processes, as user workloads simulated through Login VSI provide stress and insight into the workings of complete configurations – software, server, network, storage – that might not be evident in other point-testing or benchmarking tools.

On top of that, running hundreds of simulated user desktop sessions alongside other application testing workloads in a mixed-use scenario, and still passing all the expected test levels, provides real confidence in the final configurations.

If you are interested in more detail on how we use this tool, I have recently posted a Reference Architecture document for Horizon 7 on Datrium DVX using Login VSI. The original report is posted on the Datrium site under Resources -> Technical White Papers -> Horizon 7 RA.

We worked closely with VMware on the review of the results, and the solution is also listed on the VMware Solutions Exchange (VSX) under Datrium DVX for Horizon. I also posted a blog on the VMware EUC site with some of the highlights of the work done and the results observed.

And finally, our good friends at Login VSI have posted the document on their site under Login VSI -> Resources -> Reference Architecture.

I’m certain we will continue to use this tool and this methodology for testing and publishing results as infrastructure elements change over time – software versions, compute horsepower and even storage enhancements.

Performance Testing Taken to New Levels

In my Technical Marketing role here at Datrium, as well as previous companies I’ve worked for, there always seems to be an element of performance analysis and benchmarking involved with our day to day work – especially with new technologies capable of disrupting the status quo of existing solutions.

In my early days at Datrium (over two years ago now) I wrote about performance and scalability considerations here in Insane Mode and Scalability. Admittedly, these were smaller-scale exercises, primarily to understand the true potential of the DVX system. They did, however, give me good insight into what is really possible.

At that time (2016), DVX scalability was limited to a single Data Node and up to 32 Compute Node hosts – pretty impressive at the time. Things have changed significantly in the past year or so, and scalability and performance are just part of that growth.

I was sitting through a recent internal update from our engineering and performance teams on the scale-level testing we did in conjunction with DELL and a couple of trusted 3rd party organizations at the end of last year (2017). This was some serious testing and a truly collaborative effort. Building a system that scaled out to our current maximum of 10 Data Nodes and 128 Compute Nodes across multiple data center racks, plus the requisite switching, was a major investment for everyone involved. Serious professional work; kudos to everyone connected to the project.

I could go into the details of that testing here but I’ll leave that to the people that were directly involved in the effort and the outcome. In this case I was able to watch from the sidelines with that silly “I knew this was possible” grin on my face. These results are definitely worth the read.

From our colleagues at Evaluator Group, we have the announcement and the benchmark report.

And from the team over at Storage Review, we have this report.

If you make it to our website blog, there is a good summary post from Andre Leibovici.

Good reading!



Storage Workload Testing with fio

In my day to day activities in Technical Marketing, I get asked a lot to help with storage performance benchmarking and workload testing. I’ve been doing this for over a decade at multiple companies and found that there are a variety of tools and techniques for doing this, as well as several standards and standards organizations contributing to the subject.

The wide variety of solutions available can actually create complexity when trying to achieve the simple task of estimating “how much” a particular platform or configuration might be capable of providing.

I like to keep things simple. That is why, most of the time, I choose the fio toolkit to run through any basic storage testing I find interesting. There are a number of resources out there to help you get started.

To make this a bit easier, a couple of us at Datrium have configured a simple CentOS 7 Linux VM to help with workload generation on target storage devices and wrapped it up in an easy-to-deploy OVA file. For now, please contact your nearest Datrium team member for access to this OVA.

The workload VM has been configured with a simple setup to run the fio workload tool using predefined scripts in the /home/datrium directory. Log in as root with the password “datrium#1”.

There are two easy ways to run this workload VM:

  1. automatically through crontab
  2. manually with CLI

Method 1

To run the workload VM on a continuous basis, simply verify (or add) this line in the /etc/crontab file, where the initial “*/5” field indicates a restart every 5 minutes. Longer or shorter runs are possible with easy edits to the crontab entry and the corresponding runtime script values described below.

*/5 *  *  *  * root /home/datrium/do-work

With the automatic crontab-driven approach, simply start the VM and wait (up to 5 minutes) for the first IO pass to start. The VM will then run the script over and over until powered off.
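The “verify (or add)” step can be done in one idempotent command. The sketch below is illustrative and runs against a copy of /etc/crontab so it is safe to try anywhere; on the actual VM you would target /etc/crontab itself.

```shell
# work on a copy of /etc/crontab so this example never touches the real file
cp /etc/crontab /tmp/crontab.test 2>/dev/null || : > /tmp/crontab.test

# append the workload entry only if it is not already present
grep -q 'do-work' /tmp/crontab.test || \
  echo '*/5 *  *  *  * root /home/datrium/do-work' >> /tmp/crontab.test

# show the resulting entry
grep 'do-work' /tmp/crontab.test
```

Note the system crontab format includes the user field ("root") between the time fields and the command.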

Method 2

To run the workload manually, first remove or comment out the datrium specific entry in /etc/crontab. This will keep any secondary fio jobs from running during the manually invoked runs.

From the root user home directory, simply type the following command:

/home/datrium/do-work
This will run the control script, which in turn calls the fio tool with the prescribed workload. Note that there will not be any output to the CLI until the job has finished. The job is run with the “--minimal” option to produce terse output in the file fio.txt, which can be post-processed with the simple perl script listed later.
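Based on the description above, the control script likely amounts to little more than a wrapper around fio. The sketch below is an assumption of what do-work might contain (the actual script ships in the VM); it writes the sketch out with a heredoc so it can be inspected without invoking fio.

```shell
# hypothetical sketch of the do-work control script (names and paths assumed
# from the crontab entry and the worker-config job file described in this post)
cat > /tmp/do-work <<'EOF'
#!/bin/sh
# run fio against the predefined job file and capture terse output in fio.txt
cd /home/datrium
fio --minimal worker-config > fio.txt
EOF
chmod +x /tmp/do-work
cat /tmp/do-work
```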

The fio script – “worker-config” – details are shown here:

# general setup parameters - typically unchanged

# data variation control
# using something other than the defaults

# these parameters can be easily modified

# this needs to be less than the size of the 
# vmdk attached to the VM

# greater queue depth may lead to higher latencies

# match this to crontab interval for automatic runs
# or to the desired test length if run manually

# IO workload profile parameters
# match these to your test objective
bs=8K          # blocksize
rwmixread=70   # read percentage
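The listing above preserves the comment scaffolding along with the two profile parameters. For illustration, here is one way the full job file might be fleshed out. This is only a sketch: apart from bs=8K and rwmixread=70, every option and value below (target path, sizes, queue depth, run length, and the data-variation settings) is an assumption, not the configuration actually shipped in the VM.

```ini
# general setup parameters - typically unchanged
[worker]
ioengine=libaio          # Linux async IO engine
direct=1                 # bypass the page cache
filename=/dev/sdb        # assumed target device

# data variation control
# using something other than the defaults
refill_buffers                # new buffer contents on every submit
buffer_compress_percentage=50 # roughly 50% compressible data
dedupe_percentage=30          # roughly 30% dedupable data

# these parameters can be easily modified

# this needs to be less than the size of the
# vmdk attached to the VM
size=20g

# greater queue depth may lead to higher latencies
iodepth=16

# match this to crontab interval for automatic runs
# or to the desired test length if run manually
runtime=290
time_based

# IO workload profile parameters
# match these to your test objective
rw=randrw
bs=8K          # blocksize
rwmixread=70   # read percentage
```

All of the option names are standard fio job-file parameters; only the values are placeholders to adjust for your own environment.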

The perl script below is included as an example of post-processing the fio output.

# simple perl script to extract IOPS, throughput (MB/s) and latency values from fio minimal output data
# Mike McLaughlin (@storageidealist) 12/17

use strict;

# row counter for csv output control and calculations
my $row=1;

# this script takes a file name,
# opens the file, and
# prints the selected contents in csv format
if($#ARGV != 0) {
 print STDERR "You must specify exactly one argument.\n";
 exit 4;
}

# Open the file
open(INFILE, $ARGV[0]) or die "Cannot open $ARGV[0]: $!.\n";

# print csv header row
print "Job name, MB/s, IOPS, R-lat(ms), W-lat(ms), Read BW(KB/s), Read IOPS, Read C-lat(usec), Write BW(KB/s), Write IOPS, Write C-lat(usec)\n";

while(my $line = <INFILE>) {
 # split the line into fields
 my @fields = split /;/, $line;

 # increment row counter used in cell calculations
 $row++;

 # notes:
 # converting KB/s to MB/s for total throughput sum
 # converting usec to ms time unit for latency display
 print "$fields[2],=\(\(f$row\/1024\)+\(i$row\/1024\)\),=\(g$row+j$row\),=h$row\/1000,=k$row\/1000, $fields[6], $fields[7], $fields[15], $fields[47], $fields[48], $fields[56]\n";
}

close INFILE;
# end of script
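For a quick sanity check of those field positions, a fabricated terse-style record can be pulled apart with awk. The values below are illustrative only (not real results), and note that awk fields are 1-based while the perl indices above are 0-based:

```shell
# fabricated terse-style record, only to show positions:
# field 3 = job name, field 7 = read BW (KB/s), field 8 = read IOPS (1-based)
line='3;fio-3.1;worker;0;0;480000;8000;1000'
echo "$line" | awk -F';' '{print "job=" $3 ", readBW=" $7 "KB/s, readIOPS=" $8}'
# prints: job=worker, readBW=8000KB/s, readIOPS=1000
```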