I am happy to release the new RPM for Squid version 3.3.10 (links at the
bottom of the article).
The new release includes the big addition of the *rock* cache_dir type.
Big thanks to Alex Rousskov for his work on rock, ssl-bump and the many
other small and big things that make Squid what it is!
What is the *rock* cache_dir type? What does it give me?
Speed, and SMP support for cache_dir.
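For anyone who wants a first taste, here is a minimal sketch of what a
rock setup can look like in squid.conf (the path, size, object limit and
worker count are only placeholders, not recommendations):

  # a small rock cache_dir; in 3.3 rock is aimed at small objects (up to 32KB)
  cache_dir rock /var/spool/squid/rock 1024 max-size=32768
  # a rock cache_dir can be shared between SMP workers
  workers 3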
A small introduction to FileSystems and Squid:
Squid has been using UFS/AUFS cache directories for a very long time, in
a very nice way, to work around the limits of the OS and the filesystem
and allow millions of objects/files to be cached.
The UFS type can be used on top of ReiserFS, ext4 or any other FS you
can think of that is supported by the OS.
Each and every FS has its limits; ReiserFS, for example, was designed to
work with lots of small/tiny files and does that in a very nice way.
But no matter how perfected a FS is, it is still a general-purpose
*FileSystem*, and its design directly affects its performance.
One example of this: creating a file can be quite cheap on one FS, while
erasing a file can be a very CPU- and I/O-intensive task on another.
If you are interested in understanding a bit more about FS complexity,
you can watch Ric Wheeler's video and presentation:
* video:
http://video.linux.com/videos/one-billion-files-pushing-scalability-limits-of-linux-file-systems
* or: http://www1.ngtech.co.il/squid/videos/37.webm
* pdf:
http://www.redhat.com/summit/2011/presentations/summit/decoding_the_code/thursday/wheeler_t_0310_billion_files_2011.pdf
* or:
http://www1.ngtech.co.il/squid/fs/wheeler_t_0310_billion_files_2011.pdf
What heavy lifting do the FS and Squid need to handle?
UFS/AUFS actually uses the filesystem to store the objects. Take, for
example, 200 requests per second of which 50 are not even cacheable:
that leaves 150 requests per second to be placed as files on the OS
filesystem.
60 seconds times 60 minutes times 100 requests per second (yes, I
rounded it down..) means the creation of about 360,000 files on the FS
per hour for a tiny small-office Squid instance.
And some Squid systems sit on a very big machine with more than one
instance, each handling more than 500 requests per second; at a combined
rate of, say, 4,000 requests per second the growth can be about
14,400,000 files per hour.
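Spelled out, the rough numbers above look like this (the combined rate
is just an example):

  100 cacheable requests/sec * 3,600 sec   = ~360,000 new files per hour
  4,000 requests/sec * 3,600 sec           = 14,400,000 new files per hour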
It does sound like a very big number, but a MegaByte is about 1 million
bytes and today we are talking about speeds that exceed 10Gbps..
So a different design is needed in order to store all these HTTP
objects, and that is what rock comes to unleash.
In the next release I will try to describe it in more depth.
* Note that the examples above are rough and only meant to demonstrate
the ideas.
The RPMs are at:
http://www1.ngtech.co.il/rpm/centos/6/x86_64/
The package includes 3 RPMs: one for the Squid core and helpers, one for
debugging and one for the init script.
http://www1.ngtech.co.il/rpm/centos/6/x86_64/squid-3.3.10-1.el6.x86_64.rpm
http://www1.ngtech.co.il/rpm/centos/6/x86_64/squid-sysvinit-3.3.10-1.el6.x86_64.rpm
http://www1.ngtech.co.il/rpm/centos/6/x86_64/squid-debuginfo-3.3.10-1.el6.x86_64.rpm
Each one of them has an asc file which contains a PGP signature and MD5,
SHA1, SHA2, SHA256, SHA384 and SHA512 hashes.
I also released the SRPM, which is very simple, at:
http://www1.ngtech.co.il/rpm/centos/6/x86_64/SRPM/squid-3.3.10-1.el6.src.rpm
* I hope to release in the next few weeks an RPM of a 3.HEAD build for
ALPHA testers of the newest bug fixes and Squid improvements.
* Sorry that the i686 release is not out yet, but since I do not have an
i686 OS running at hand it will be added to the repo later.
Eliezer