Hi Eliezer/All,
Please suggest how to go ahead with this.
Thanks & Regards,
Bhagwat
On Mon, Nov 25, 2013 at 2:54 PM, Bhagwat Yadav
<yadav.bhagwat87_at_gmail.com> wrote:
> Hi,
>
> Please find below a packet dump taken at one of the intermediate
> machines on the network while processing the request.
>
> 108 1.186439 10.0.11.22 165.254.58.18 TCP 76 38682 > http [SYN] Seq=0
> Win=14360 Len=0 MSS=1436 SACK_PERM=1 TSval=123422360 TSecr=0 WS=64
> 109 1.254231 165.254.58.18 10.0.11.22 TCP 76 http > 38682 [SYN, ACK]
> Seq=0 Ack=1 Win=14480 Len=0 MSS=1460 SACK_PERM=1 TSval=1476780015
> TSecr=123422360 WS=2
> 110 1.254765 10.0.11.22 165.254.58.18 TCP 68 38682 > http [ACK] Seq=1
> Ack=1 Win=14400 Len=0 TSval=123422377 TSecr=1476780015
> 111 1.273058 10.0.11.22 165.254.58.18 HTTP 302 GET / HTTP/1.1
> 112 1.273696 165.254.58.18 10.0.11.22 HTTP 1121 HTTP/1.1 503 Service
> Unavailable (text/html)
> 113 1.273706 165.254.58.18 10.0.11.22 TCP 56 http > 38682 [FIN, ACK]
> Seq=1066 Ack=235 Win=450 Len=0
> 114 1.274300 10.0.11.22 165.254.58.18 TCP 68 38682 > http [ACK]
> Seq=235 Ack=1066 Win=17216 Len=0 TSval=123422382 TSecr=1476780015
> 115 1.275840 10.0.11.22 165.254.58.18 TCP 68 38682 > http [FIN, ACK]
> Seq=235 Ack=1067 Win=17216 Len=0 TSval=123422382 TSecr=1476780015
> 116 1.275949 165.254.58.18 10.0.11.22 TCP 56 http > 38682 [RST]
> Seq=1067 Win=538 Len=0
> 938 10.795716 10.0.11.22 165.254.58.18 TCP 76 [TCP Port numbers
> reused] 38682 > http [SYN] Seq=0 Win=14360 Len=0 MSS=1436 SACK_PERM=1
> TSval=123424762 TSecr=0 WS=64
>
> From the dump, Squid hangs at the same moment the "TCP Port numbers
> reused" message appears.
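One number worth pulling out of the dump above is the gap between the server's RST (frame 116) and the retried SYN that Wireshark flagged as a reused port (frame 938). A minimal sketch, using the timestamps copied from the dump:

```python
# Frame number -> capture timestamp in seconds, copied from the dump above.
frames = {116: 1.275949, 938: 10.795716}

# Gap between the RST that tore down the first connection and the
# "TCP Port numbers reused" SYN retried from the same source port 38682.
gap_seconds = frames[938] - frames[116]
print(round(gap_seconds, 1))
```

So the client sat idle roughly 9.5 seconds before retrying from the same source port, which is the window in which Squid appears hung.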
>
> Please help us on this.
>
> Thanks,
> Bhagwat
>
> On Mon, Nov 25, 2013 at 12:36 PM, Bhagwat Yadav
> <yadav.bhagwat87_at_gmail.com> wrote:
>> Hi,
>>
>> I upgraded Squid to 3.1.20-2.2 from debian.org. The issue still persists.
>> Note: I have not disabled the stats collection as mentioned in earlier mails.
>>
>> Please suggest how to resolve this.
>>
>> Thanks,
>> Bhagwat
>>
>> On Fri, Nov 22, 2013 at 4:47 PM, Eliezer Croitoru <eliezer_at_ngtech.co.il> wrote:
>>> And what is the content of this 503 page?
>>> I do not know what the issue at hand is, but there are a couple of
>>> things to test first before running a full debug or trying to fix
>>> issues that might not exist.
>>>
>>> The version upgrade is there for a reason.
>>> I know why an upgrade might not solve the issues, but still, if you
>>> have a testing environment, try to verify the results with the latest
>>> 3.1.x branch, which should be 3.1.21 if I am not wrong.
>>>
>>> It is very critical for you to test it.
>>> Since Squid can run on many OSes and many hardware specs, your logs
>>> are nice but not enough to understand the whole issue.
>>>
>>> There are many bugs that were fixed over the life of the 3.1 branch,
>>> but I have used it for a very long time.
>>>
>>> If you need help installing 3.1.21 or any newer version, I can try to
>>> assist you.
>>> It can also be installed alongside another version.
>>>
>>> Best Regards,
>>> Eliezer
>>>
>>>
>>> On 21/11/13 09:38, Bhagwat Yadav wrote:
>>>>
>>>> Hi Eliezer/All,
>>>>
>>>> Thanks for your help.
>>>>
>>>> Please find attached the log snippets.
>>>> Log1.txt contains sample 1 of cache.log, in which you can see the
>>>> time gap.
>>>> Log2.txt contains sample 2 of the client output and cache.log, also
>>>> showing the time gap.
>>>>
>>>> It seems that some in-memory operation, "StatHistCopy", is causing
>>>> this issue, though I am not sure.
>>>>
>>>> Squid version is: Squid Cache: Version 3.1.6.
>>>>
>>>> Please let me know if these logs are helpful.
>>>>
>>>>
>>>> Thanks & Regards,
>>>>
>>>> On Wed, Nov 20, 2013 at 6:11 PM, Eliezer Croitoru <eliezer_at_ngtech.co.il>
>>>> wrote:
>>>>>
>>>>> Hey,
>>>>>
>>>>> Can you try another test?
>>>>> It is very nice to use wget, but there are a couple of options that
>>>>> need to be considered.
>>>>> Just to help you, if it was not there until now, add --delete-after
>>>>> to the wget command line.
>>>>>
>>>>> It is not related to Squid, but it helps a lot.
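For what it's worth, the suggested invocation could be assembled like this; building the argument list in Python keeps it easy to script the 500-request loop described earlier in the thread (the 30-second timeout is an assumption of mine, not a value from the thread):

```python
import shlex

url = "http://www.naukri.com/"

# --delete-after drops the downloaded index.html instead of keeping
# hundreds of copies of it; -T caps how long one request may hang
# (the 30-second value is an assumption, not from the thread).
cmd = ["wget", "--delete-after", "-T", "30", url]
print(shlex.join(cmd))
```

The printed line is what you would run 500 times in the loop, e.g. via subprocess.run(cmd) per iteration.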
>>>>> Now, if you are up to it, I would be happy to see the machine specs
>>>>> and OS.
>>>>> Also, what is the output of "squid -v"?
>>>>>
>>>>> Can you ping the machine at the time it gets stuck? What about a
>>>>> tcp-ping, or "nc -v squid_ip port"?
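The tcp-ping check described here can also be scripted; a minimal sketch in Python, assuming the proxy listens on Squid's default port 3128 (adjust host and port to your setup):

```python
import socket

def tcp_ping(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port completes within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run this while Squid appears hung: if ICMP ping still answers but this
# returns False, the box is up and the proxy itself has stopped accepting.
print(tcp_ping("127.0.0.1", 3128))
```

This separates "the machine is down" from "Squid's listening socket has stopped accepting", which is exactly the distinction the question above is after.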
>>>>> We also need to verify in the access logs that it is not naukri.com
>>>>> deciding that your client is trying to turn it into a DDoS target.
>>>>> What about trying to access other resources?
>>>>> What is written in this 503 response page?
>>>>>
>>>>> Eliezer
>>>>>
>>>>>
>>>>> On 20/11/13 12:35, Bhagwat Yadav wrote:
>>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> I enabled the logging but didn't find any conclusive or decisive
>>>>>> logs that I could forward to you.
>>>>>>
>>>>>> In my testing, I am accessing the same URL 500 times in a loop from
>>>>>> the client using wget.
>>>>>> Squid hung sometimes after 120 requests, sometimes after 150
>>>>>> requests, as follows:
>>>>>>
>>>>>> rm: cannot remove `index.html': No such file or directory
>>>>>> --2013-11-20 03:52:37--  http://www.naukri.com/
>>>>>> Resolving www.naukri.com... 23.72.136.235, 23.72.136.216
>>>>>> Connecting to www.naukri.com|23.72.136.235|:80... connected.
>>>>>>
>>>>>> HTTP request sent, awaiting response... 503 Service Unavailable
>>>>>> 2013-11-20 03:53:39 ERROR 503: Service Unavailable.
>>>>>>
>>>>>>
>>>>>> Whenever it hangs, it resumes after about 1 minute; e.g., in the
>>>>>> above example, the request sent at 03:52:37 got its response at
>>>>>> 03:53:39.
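The stall in that wget output can be measured directly from the two timestamps; a quick sketch:

```python
from datetime import datetime

sent = datetime.strptime("03:52:37", "%H:%M:%S")      # request sent
answered = datetime.strptime("03:53:39", "%H:%M:%S")  # 503 finally returned
print((answered - sent).seconds)
```

That is a 62-second stall, suspiciously close to a round 60-second timeout plus handshake overhead, which may be worth checking against the timeout directives in squid.conf.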
>>>>>>
>>>>>> Please provide more help.
>>>>>>
>>>>>> Many Thanks,
>>>>>> Bhagwat
Received on Thu Nov 28 2013 - 05:51:40 MST
This archive was generated by hypermail 2.2.0 : Thu Nov 28 2013 - 12:00:06 MST