Are you seeing high IO-wait CPU usage, or high wait times on the IO operations themselves?
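
For example, something like this (a rough sketch, assuming the sysstat
tools are installed and that /dev/sdb is the cache device):

mpstat -P ALL 1        # %iowait column, per CPU
iostat -x /dev/sdb 1   # await = avg wait per request (ms), %util = device saturation

High %iowait together with a low await would suggest the bottleneck is not
the disk itself.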
Adrian
2009/8/2 smaugadi <adi_at_binat.net.il>:
>
> Dear Adrian,
> Well, my conclusion that this is an IO problem came from the fact that I
> see huge IO waits (with tools such as mpstat) as the volume of traffic
> increases; when using a ramdisk there is no such issue.
> I have configured the SSD drive with ext2 (no journal, noatime) and used
> the “noop” I/O scheduler.
> In /etc/fstab
> /dev/sdb1 /cache ext2 defaults,noatime 1 2
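>
> (For reference, a sketch of how the noop scheduler is typically selected
> per device; "sdb" here is assumed from the fstab entry above:)
>
> echo noop > /sys/block/sdb/queue/scheduler
> cat /sys/block/sdb/queue/scheduler   # selected scheduler is shown in brackets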
>
> hdparm results:
> hdparm -t /dev/sdb1
>
> /dev/sdb1:
> Timing buffered disk reads: 304 MB in 3.01 seconds = 100.93 MB/sec
> ----
> hdparm -T /dev/sdb1
>
> /dev/sdb1:
> Timing cached reads: 4192 MB in 2.00 seconds = 2096.58 MB/sec
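>
> (Worth noting: hdparm -t measures large sequential reads, while a Squid
> cache directory is mostly small random IO. A rough random-read check,
> assuming fio is installed and /cache has room for the test file:)
>
> fio --name=randread --filename=/cache/fio.test --rw=randread \
>     --bs=4k --size=512M --direct=1 --runtime=30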
>
> Any ideas?
>
> Regards.
>
>
>
> Adrian Chadd-3 wrote:
>>
>> 2009/8/2 smaugadi <adi_at_binat.net.il>:
>>>
>>> Dear Adrian,
>>> During the implementation we encountered issues with all kinds of
>>> variables, such as:
>>> the limit of file descriptors (Squid is now using 204800);
>>> the TCP port range, which was too low (increased to 1024 65535);
>>> the TCP timers (changed them);
>>> the ip_conntrack limit and hash size, which were too low (now 524288 and
>>> 262144 respectively); these changes are sketched below.
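>>>
>>> (A sketch of how those settings are usually applied; the exact /proc
>>> paths vary with the kernel and conntrack module version:)
>>>
>>> ulimit -n 204800
>>> echo "1024 65535" > /proc/sys/net/ipv4/ip_local_port_range
>>> echo 524288 > /proc/sys/net/ipv4/netfilter/ip_conntrack_max
>>> # hash size is a module parameter, e.g. modprobe ip_conntrack hashsize=262144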
>>>
>>> Now we are at a point where IO is the only issue.
>>
>> What profiling have you done to support that? For example, one of the
>> issues I had that looked like an IO performance problem was actually the
>> controller being completely unhappy. Upgrading the firmware on the
>> controller card significantly increased performance.
>>
>> But I think you need to post some further information about the
>> problem. "IO" can be rooted in a lot of issues. :)
>>
>>
>> Adrian
>>
>>
>