On Fri, Dec 05, 2003 at 05:48:22PM +0200, Victor Ivanov wrote:
> On Fri, Dec 05, 2003 at 03:26:26PM +0100, Henrik Nordstrom wrote:
> > On Fri, 5 Dec 2003, Victor Ivanov wrote:
> >
...cut...
>
> Being able to work for some months with the default settings had fooled
> me into thinking it would still work after the load and requests per second
> increased. It doesn't. I'll fix the values for the message queues. The
> automatic assignment for shared memory seems fine, though.
>
Alright now, I started it with the following limits:
kern.ipc.msgmax: 131072
kern.ipc.msgmni: 40
kern.ipc.msgmnb: 2048
kern.ipc.msgtql: 40
kern.ipc.msgssz: 64
kern.ipc.msgseg: 2048
kern.ipc.shmmax: 33554432
kern.ipc.shmmin: 1
kern.ipc.shmmni: 192
kern.ipc.shmseg: 128
kern.ipc.shmall: 8192
(the shm* values are defaults; the msg* values, except msgmax, were set by me)
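In case it helps anyone trying to reproduce this: on FreeBSD the msg* limits
are, as far as I know, boot-time tunables, so if they refuse to change at
runtime then /boot/loader.conf is where they would have to be raised. Something
along these lines, where the numbers only illustrate the direction (larger
msgmnb and msgtql) and are not tested recommendations:

# /boot/loader.conf
kern.ipc.msgmnb="8192"   # max bytes per queue; 2048 looks tight for diskd
kern.ipc.msgtql="2048"   # max messages queued system-wide; 40 is very low
kern.ipc.msgssz="64"     # size of a message segment
kern.ipc.msgseg="4096"   # number of message segments
kern.ipc.msgmni="40"     # number of message queue identifiers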
First of all, with diskd the store rebuild is much slower than with ufs.
with ufs:
2003/12/05 15:53:29| Took 340.8 seconds (13477.7 objects/sec).
with diskd:
2003/12/05 18:12:07| Took 1014.7 seconds (4529.7 objects/sec).
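(Both runs cover roughly the same store: 340.8 s x 13477.7 obj/s is about
4.59 million objects and 1014.7 s x 4529.7 obj/s is about 4.60 million, which
matches the 4596471 entries validated below, so diskd is roughly three times
slower on identical data.)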
After validation, this happens using diskd:
2003/12/05 18:13:33| Completed Validation Procedure
2003/12/05 18:13:33| Validated 4596471 Entries
2003/12/05 18:13:33| store_swap_size = 80268874k
2003/12/05 18:13:36| storeDiskdSend: msgsnd: (35) Resource temporarily unavailable
2003/12/05 18:13:36| storeDiskdSend UNLINK: (35) Resource temporarily unavailable
2003/12/05 18:13:36| storeDiskdSend: msgsnd: (35) Resource temporarily unavailable
2003/12/05 18:13:36| storeDiskdSend UNLINK: (35) Resource temporarily unavailable
...etc etc...
2003/12/05 18:13:37| storeDiskdSend WRITE: (35) Resource temporarily unavailable
2003/12/05 18:13:37| storeSwapOutFileClosed: dirno 0, swapfile 00023EEE, errflag=-1
(35) Resource temporarily unavailable
...and...
2003/12/05 18:13:37| storeDiskdSend OPEN: (35) Resource temporarily unavailable
2003/12/05 18:13:37| ctx: enter level 0: 'http://www.government.bg/photos/ico/mail.gif'
...finally...
2003/12/05 18:13:38| assertion failed: diskd/store_io_diskd.c:494: "++send_errors < 100"
(and restart)
2003/12/05 18:13:39| Starting Squid Cache version 2.5.STABLE3 for i386-portbld-freebsd5.1...
And the whole thing repeats. I switched back to ufs and it seems fine now.
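For what it's worth, my reading of those errors (a sketch of the mechanism
only, not the actual store_io_diskd.c code) is that squid talks to the diskd
helper over a SysV message queue and sends with IPC_NOWAIT, so when the queue
hits its byte limit (msgmnb) or the system-wide message limit (msgtql),
msgsnd() fails immediately with errno 35 (EAGAIN, "Resource temporarily
unavailable"), and after enough consecutive failures the "++send_errors < 100"
assertion aborts the process. Roughly:

#include <assert.h>
#include <errno.h>
#include <sys/msg.h>

/* Hypothetical sketch of the diskd send path; the names and message layout
 * are made up, only the IPC_NOWAIT/EAGAIN/assert behaviour is the point. */
struct diomsg {
    long mtype;          /* SysV message type, must be > 0 */
    int  payload[8];     /* placeholder for the real request record */
};

static int send_errors = 0;

static int
diskd_send(int msqid, struct diomsg *m)
{
    /* With IPC_NOWAIT, msgsnd() does not block when the queue already holds
     * msgmnb bytes or the msgtql system limit is reached; it fails at once
     * with EAGAIN (errno 35 on FreeBSD), which is exactly the
     * "Resource temporarily unavailable" lines seen in cache.log. */
    if (msgsnd(msqid, m, sizeof(*m) - sizeof(long), IPC_NOWAIT) < 0) {
        if (errno == EAGAIN)
            assert(++send_errors < 100);  /* 100 failures -> abort and restart */
        return -1;
    }
    send_errors = 0;  /* assumption: the counter resets on a successful send */
    return 0;
}

So even with plenty of shared memory, queues limited to msgmnb=2048 bytes and
msgtql=40 messages would fill up almost immediately under load, which would
explain why the rebuild limps along and then dies.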
BTW, there's a hanging diskd process:
nobody 503 0.0 0.0 2168 100 ?? Is 5:55PM 0:00.59 diskd 494592 494593 494594
It still has some IPC resources allocated:
q 65536 494592 --rwa------ nobody nobody
q 65537 494593 --rwa------ nobody nobody
m 65536 494594 --rw------- nobody nobody
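Assuming those really are orphaned (nothing attached to them any more), they
can be removed by id with ipcrm, e.g.

ipcrm -q 494592 -q 494593 -m 494594

and the leftover diskd process killed by hand; I have not checked whether
squid would clean them up itself the next time diskd starts.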
Does cache_mem affect store rebuild speed? With a smaller cache_mem the
rebuild seems to be slower.