Hello everyone,
I currently have a server that stores many terabytes of fairly static
files, each one tens of gigabytes in size. Right now these files are
only accessed over a local connection, but that is going to change
soon. One option to keep access times acceptable is to deploy new
servers at the sites that access these files the most. Each new server
would keep a copy of the most frequently accessed files so that only a
LAN connection is needed, instead of wasting bandwidth on external access.
I'm open to almost any solution for these new hosts, and one option is
simply using a caching tool like Squid to speed up the downloads. Since
I haven't seen anyone caching files this large, I would like to know
which problems I might run into if I adopt this kind of solution.
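To make it concrete, something like the squid.conf sketch below is what
I have in mind; the paths and sizes are just placeholder guesses on my
side, and I don't know yet whether they make sense for objects this large:

    # raise the cap on cacheable object size -- by default Squid only
    # caches objects up to a few megabytes
    maximum_object_size 50 GB

    # large on-disk cache; the size parameter is in megabytes (~2 TB here)
    cache_dir ufs /var/spool/squid 2000000 16 256

    # keep only small, hot objects in memory, not the huge files
    cache_mem 1024 MB
    maximum_object_size_in_memory 8 MB

    # fetch the whole object even for Range requests so that partial
    # downloads still populate the cache (at the cost of extra upstream
    # traffic the first time)
    range_offset_limit -1

If anything there is wrong or misses an important directive for huge
objects, that is exactly the kind of problem I'm trying to anticipate.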
The alternatives I've considered so far include using a distributed
file system such as Hadoop's HDFS, deploying a private cloud storage
system for the servers to communicate with each other, or even using
BitTorrent to share the files among the servers. Any comments on these
alternatives would be welcome too.
Thank you all,
Thiago Moraes - EnC 07 - UFSCar