On 01/19/2014 04:42 AM, Eliezer Croitoru wrote:
> While working here and there I have seen that ZFS is a very robust FS.
> I will not compare it to any others because there is no need for that.
>
> OK, so ZFS, ext3, ext4 and others are filesystems that sit on spinning disks or flash drives.
> The SATA and SAS interfaces are serial links limited by the standard to 3 or 6 Gbps.
> So SATA maxes out at 6 Gbps, while a NIC can have a bandwidth of 10 Gbps.
> Are there any real reasons to not use a 10Gbps line?
>
> For example, if I have 10 SAS or SATA disks (SSD or spinning) in a machine with 128 GB of RAM, it is possible to sustain a flow of data that is "faster" than a single drive, even an SSD.
>
> A machine with dual 10 Gbps NICs can potentially be faster in many respects than a local disk.
>
> I do not have the answer, but a NAS might sometimes be the right choice as cache storage.
>
> Indeed there are overheads for each and every TCP connection, and many aspects need to be tested and verified, but I still suspect there are assumptions that need checking; a SAN/NAS may be worth more than it is assumed to be.
>
> Eliezer
The raw transfer speed of a disk is only interesting when an application does
very large sequential I/Os, and Squid does not do that.
Squid writes a lot to disk and reads relatively little. Since the average object
size is often around 13 KB, this is also the average I/O size.
A better performance metric for disks is I/Os per second (IOPS).
Average latency is also interesting, but usually IOPS is the more
important figure.
The following numbers indicate the speed of disk systems for random 16 KB I/O:
- individual disk: 75-200 IOPS
- individual SSD: 1,000-60,000 IOPS
- internal RAID disk array with 12 disks and battery-backed cache: 600-2,000 IOPS
- high-end SAN or NAS with RAID: 600-20,000+ IOPS
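To put those figures in perspective, here is a back-of-the-envelope sketch
(Python; the IOPS values are the upper ends of the ranges above and the
13 KB I/O size is the average object size mentioned earlier):

# Effective throughput of random-I/O disk systems at Squid's typical I/O size.
AVG_IO_BYTES = 13 * 1024  # ~13 KB average object / I/O size

systems = {
    "individual disk":     200,     # upper end of 75-200 IOPS
    "individual SSD":      60000,   # upper end of 1,000-60,000 IOPS
    "12-disk RAID array":  2000,    # upper end of 600-2,000 IOPS
    "high-end SAN or NAS": 20000,   # upper end of 600-20,000+ IOPS
}

for name, iops in systems.items():
    mbytes = iops * AVG_IO_BYTES / 1e6
    gbits = iops * AVG_IO_BYTES * 8 / 1e9
    print(f"{name:20s} {iops:6d} IOPS -> {mbytes:7.1f} MB/s ({gbits:5.2f} Gbit/s)")

Even the best case here (~6.4 Gbit/s from a fast SSD) fits on a single
10 Gbit/s NIC, so for a Squid workload the IOPS of the disk system, not the
SATA or network link speed, is the real limit.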
Other parameters like the Squid memory cache size are also important, since it
determines how many cached objects are in the memory cache and hence the
percentage of reads vs. writes on the disk system.
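As a rough illustration of that read/write split (a minimal sketch; the
request rate and all ratios below are assumptions for illustration, not
measurements):

# Estimate the disk I/O rate a Squid cache generates.
req_per_s     = 500    # client requests per second (assumed)
hit_ratio     = 0.35   # fraction of requests served from cache (assumed)
mem_hit_ratio = 0.60   # fraction of hits served from the memory cache (assumed)
store_ratio   = 0.80   # fraction of misses written to the disk cache (assumed)

disk_reads  = req_per_s * hit_ratio * (1 - mem_hit_ratio)   # hits not in memory
disk_writes = req_per_s * (1 - hit_ratio) * store_ratio     # cacheable misses

print(f"disk reads : {disk_reads:5.0f}/s")   # ~70/s
print(f"disk writes: {disk_writes:5.0f}/s")  # ~260/s

With these assumed numbers the disk sees far more writes than reads; a larger
memory cache raises mem_hit_ratio and shifts the mix even further toward
writes, matching the write-heavy pattern described above.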
Also, the choice between the aufs and rock store types produces different write
patterns on the disk system and hence different performance characteristics.
For NAS and SAN, performance varies because multiple hosts use the storage array
and configuration parameters matter a lot. On a disk array one can also build
a virtual disk across 60 or more physical disks, so IOPS can be very high
(see the sketch below). Some disk arrays also support SSDs, which results in
even higher IOPS.
With disk arrays the rule of thumb is that the more money you spend, the faster
they are.
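A minimal sketch of that scaling, assuming random I/O spreads evenly over the
spindles and ignoring RAID write penalties:

# Aggregate IOPS of a virtual disk striped over many physical disks.
physical_disks = 60     # as in the example above
iops_per_disk  = 150    # mid-range spinning disk (assumed)

print(f"~{physical_disks * iops_per_disk} IOPS")  # ~9,000 IOPS from spinning disks alone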
For Squid caches with very high disk performance requirements, there are
several implementation options:
- split the cache in multiple caches
- get an (expensive) NAS/SAN
- do not cache on disk, which may be faster if the memory cache is large and
the internet pipe is big enough
Marcus