Hi,
No, this box is not a router; it is working as a transparent cache,
hence the need for IP forwarding.
Yeah, you're right, I think the inode-max and file-max values are a bit
on the high side. Could that be the reason the amount of buffers/cache
keeps increasing? I'm using kernel 2.2.19.
Right now free -m shows me:

                      total   used   free  shared  buffers  cached
    Mem:                505    502      2      11      174      52
    -/+ buffers/cache:         275    230
    Swap:              4000      0   4000

and the "used" figure in the -/+ buffers/cache row keeps increasing...
Are my huge values for inode-max and file-max responsible for that?
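Whether those limits are actually being stressed can be checked directly: 2.2.x kernels report current file- and inode-table usage under /proc. A rough check (the exact paths are an assumption based on stock 2.2 kernels):

```shell
# file-nr reports three numbers: allocated handles, free handles, maximum
cat /proc/sys/fs/file-nr
# inode-nr reports: allocated inodes, free inodes
cat /proc/sys/fs/inode-nr
```

If the allocated counts stay far below the maximums, the limits are merely oversized; that wastes some kernel memory but does not by itself grow the buffer cache.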
I had compiled the kernel with:

    FD_SETSIZE = 16384
    fs.h:    NR_FILE changed to 65536
             NR_RESERVED_FILES 128
    tasks.h: NR_TASKS 4000
             MAX_TASKS_PER_USER 2000
------------------
Apart from this I get a lot of connections in SYN_RECV, about 6000
outstanding at any time, so I have increased tcp_max_syn_backlog to
10000 and also set tcp_syncookies to 1.
Also, at about 50 req/sec Squid eats up to 80% CPU.
Is this due to the excessive SYNs received?
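The count of half-open connections can be confirmed at any moment with netstat (assuming the net-tools netstat; the state name on Linux is SYN_RECV):

```shell
# count TCP sockets currently sitting in SYN_RECV
netstat -tan | grep -c SYN_RECV
```

Watching this number while the box is under load shows whether the raised backlog is actually being filled.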
Please do get back to me; I don't want my box to go into swapping.
Apart from the Squid process there is absolutely nothing else running
on the system, except for ipchains doing port redirection for
transparent proxying.
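For completeness, the ipchains transparent-proxy setup mentioned above usually amounts to something like this sketch (eth0 as the client-facing interface and Squid listening on port 3128 are assumptions; adapt to the actual setup):

```shell
# accept packets not addressed to this box, so they can be redirected
echo 1 > /proc/sys/net/ipv4/ip_forward
# redirect incoming HTTP (port 80) traffic to the local Squid port
ipchains -A input -i eth0 -p tcp -s 0/0 -d 0/0 80 -j REDIRECT 3128
```

(Squid 2.x itself also needs the httpd_accel_* directives set in squid.conf for transparent mode to work.)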
--- "Chemolli Francesco (USI)" <ChemolliF@GruppoCredit.it> wrote:
> <I am forwarding this to the mailing list too, others might be
> interested>
>
> > Thanks.
> > I have a Compaq Prolinea 5500 with 512 MB RAM and six Ultra2 SCSI
> > hard disks, 10000 rpm, 9 GB each, but I have already been using the
> > following startup scripts:
>
> > echo 1 >/proc/sys/net/ipv4/ip_forward
>
> Is this box a router too?
>
> > ulimit -HSn 16384
> > echo 64000 >/proc/sys/fs/file-max
> > echo 128000 >/proc/sys/fs/inode-max
>
> Whoa! High! Wasted kernel RAM!
>
> > echo 8192 >/proc/sys/net/ipv4/tcp_max_syn_backlog
> > echo 1024 65000 >/proc/sys/net/ipv4/ip_local_port_range
> > echo 100 > /proc/sys/net/ipv4/tcp_fin_timeout
> > echo 300 > /proc/sys/net/ipv4/tcp_keepalive_time
>
> Uhm, this will add some overhead..
>
> > echo 0 > /proc/sys/net/ipv4/tcp_sack
> > echo 0 > /proc/sys/net/ipv4/tcp_timestamps
> >
> > I am not very sure about your
> > > echo 60 1000 128 256 500 3000 500 1884 2 >/proc/sys/vm/bdflush
> > > echo '256 512 1024' >/proc/sys/vm/freepages
>
> IIRC those make dirty-buffer flushing lazier, making it more bursty.
>
> > > If you use the SysVinit startup scripts, make sure to add to it
> > > ulimit -H -n 4096 >/dev/null
> > > ulimit -n 4096 >/dev/null
> > > ulimit -H -c unlimited >/dev/null
> > > ulimit -c unlimited >/dev/null
> > >
> > Can you explain them, please?
>
> The first two allow up to 4096 file descriptors per process (still
> not many, but they will do); the latter two allow Squid to dump core.
>
> > I am using an async-I/O build of Squid with 40 threads; at about
> > 40 req/sec Squid takes about 80% CPU.
>
> Looks high. On my system (more RAM than yours but less CPU) I get
> that load at 120-150 reqs/s. Maybe you have too many threads? Is the
> box doing other things?
>
> --
> /kinkie
Received on Wed Sep 12 2001 - 22:51:27 MDT
This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 17:02:09 MST