AROW Performance Testing

Data Diodes perform an essential function by preventing data from falling into the wrong hands.

However, a Data Diode is yet another element in an already complex network. What is its effect on performance, and how can that effect be verified?

First, a recap of the Data Diode principles.

We divide the network into ‘low’ and ‘high’, representing an insecure network and a high-security network. The only connection between the two is the Data Diode.

There is no return path across a network Data Diode, so the low side cannot know that data has been successfully received by the high side. One strategy to overcome this is to send the data multiple times: so-called redundant transmission. This limits the aggregate data rate across the diode: if the data is transmitted twice, the maximum achievable data rate is half the maximum flow rate, and so on. In low-traffic networks this can be tolerated, but in high-traffic networks it is essential that this maximum flow rate be as high as possible, preferably so high that the diode is effectively transparent to the rest of the network.
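The trade-off described above is simple arithmetic. A minimal sketch (`effective_rate_mbps` is an illustrative helper, not part of the AROW tooling):

```python
# Effective throughput under N-fold redundant transmission (illustrative).
# With no return path, the low side sends each file `copies` times, so the
# usable one-way rate is the raw link rate divided by the number of copies.

def effective_rate_mbps(link_rate_mbps: float, copies: int) -> float:
    """Usable one-way throughput when every file is sent `copies` times."""
    if copies < 1:
        raise ValueError("at least one copy must be sent")
    return link_rate_mbps / copies

# On a 1 Gbps (1000 Mbps) link, sending everything twice halves throughput:
print(effective_rate_mbps(1000, 2))  # 500.0
```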

In the case of the AROW Data Diode, that means Gigabit Ethernet speeds. AROW is an entirely hardware-based product, so its GbE connections are very fast, and the internal flow rate of AROW is more than twice the maximum GbE rate. Coupled with a large, fast hardware buffer, AROW provides an exceptionally quick route between the low and high networks.

To illustrate this, we use the Python-scripted file management tool supplied with AROW. It provides a layer of file management that includes sectioning files for transmission, calculating and applying CRC and header management information, and real-time statistical measurement of performance parameters such as flow rate and transfer time.
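As an illustration of what sectioning with per-section CRCs might look like, here is a hypothetical sketch. It is not the actual AROW tool; the `section_file` name and the 64 kB chunk size are assumptions for the example:

```python
import zlib
from typing import Iterator, Tuple

CHUNK_SIZE = 64 * 1024  # illustrative section size; the real tool's value may differ

def section_file(path: str, chunk_size: int = CHUNK_SIZE) -> Iterator[Tuple[int, int, bytes]]:
    """Yield (sequence_number, crc32, payload) for each section of a file.

    The CRC lets the high side detect corrupted sections, and the sequence
    number lets it detect gaps, since no acknowledgement can flow back
    across the diode.
    """
    with open(path, "rb") as f:
        seq = 0
        while True:
            payload = f.read(chunk_size)
            if not payload:
                break
            yield seq, zlib.crc32(payload), payload
            seq += 1
```

The high side would recompute each CRC on receipt and compare sequence numbers to spot missing sections.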

In this example, two files are transmitted: one quite small (35 kB) and one large (460 MB). The script is set to transmit the files twice, with a 100-second interval between transmissions. In practical terms, this means that if the low-side file changes, the high-side copy is never more than 100 seconds behind it, i.e. the latency is 100 seconds.

The screenshot shows a real data transfer taking place from a Linux server to the Data Diode. The first line shows the name of the file being sent. This is the very small one, so it takes an almost immeasurable amount of time. The effect of the buffer can be seen here, with the burst data rate exceeding 1.6 Gbps. A more realistic rate is shown by the second file transmission. This 460 MB file (actually 461,750,272 bytes) took 3.884 seconds to cross the diode, giving a very respectable flow rate of 952,259 kbps, almost 1 Gbps!

Later on, with a little more network traffic from other applications, the files are re-transmitted; the larger file takes 3.879 seconds, a flow rate of 953,380 kbps.
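These figures follow from the basic flow-rate formula. A quick sanity check (the tool's own counters may measure over a slightly different window, so its exact figures can differ by a fraction of a percent):

```python
def flow_rate_kbps(size_bytes: int, seconds: float) -> float:
    """Flow rate in kbps (1 kbps = 1000 bits per second)."""
    return size_bytes * 8 / seconds / 1000

# The 461,750,272-byte file crossing the diode in ~3.88 s works out to
# roughly 950,000 kbps, i.e. just under 1 Gbps:
print(round(flow_rate_kbps(461_750_272, 3.884)))
```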

The screenshot also illustrates the sequence of events that takes place during a data transfer:
a) Backup recovery file – a special file is created that can be used to perform post-transmission recovery in the event of a network breakdown that causes files to be lost during transmission.
b) The entire transmission tree is scanned to determine which files are present and their status.
c) Deleted files are processed – you can choose to tell the high side that files previously in the tree have been deleted; their counterparts on the high side will then be deleted too, maintaining perfect synchronism between the low-side and high-side files.
d) The tree is checked for new files, i.e. files that have been added to the low-side tree structure for transmission to the high side.
e) The tree is checked for modifications to files already in the low-side tree, for example emails added to a .pst file or updated operating-system files added to a repository.
f) Unchanged files are skipped – there is no point in re-transmitting identical files.
g) Finally, some management information is created and the files selected for transmission are sent. The whole process is repeated at the interval parameter entered on application start.
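The scan-and-classify steps b) to f) above can be sketched as follows. This is a simplified illustration using file size and modification time as the change signal; it is not the actual AROW script, and `scan_tree` and `plan_transfer` are hypothetical names:

```python
import os

def scan_tree(root: str) -> dict:
    """Map each relative path under root to its (size, mtime) status."""
    state = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            rel = os.path.relpath(full, root)
            st = os.stat(full)
            state[rel] = (st.st_size, st.st_mtime)
    return state

def plan_transfer(previous: dict, current: dict):
    """Classify files as deleted, new, or modified since the last pass."""
    deleted = sorted(set(previous) - set(current))
    new = sorted(set(current) - set(previous))
    modified = sorted(p for p in current
                      if p in previous and current[p] != previous[p])
    # Unchanged files appear in none of the lists and are simply skipped.
    return deleted, new, modified
```

A driver loop would call `scan_tree`, compare against the previous pass with `plan_transfer`, send the new and modified files (plus delete notices) across the diode, then sleep for the configured interval and repeat.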

Bandwidth Hogging

Of course, there will be situations where system administrators want to prevent one network route from taking all the available bandwidth to the exclusion of other traffic, so the application also includes a user-settable rate control, added simply as a command-line parameter.
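One common way to implement such a rate control is to pause the sender whenever its average transmit rate climbs above the configured ceiling. A minimal sketch (the `send_rate_limited` function and its interface are hypothetical, not the AROW tool's actual parameter):

```python
import time

def send_rate_limited(chunks, send, rate_limit_bps: float):
    """Send chunks one-way, sleeping as needed to stay under rate_limit_bps."""
    start = time.monotonic()
    sent_bits = 0
    for chunk in chunks:
        send(chunk)
        sent_bits += len(chunk) * 8
        # If we are ahead of budget, pause until the average rate
        # falls back to the configured ceiling.
        earliest_next = start + sent_bits / rate_limit_bps
        delay = earliest_next - time.monotonic()
        if delay > 0:
            time.sleep(delay)
```

Because the cap is applied on the low side, the diode itself stays passive; the rest of the network simply sees one flow that never exceeds its configured share.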