On 6/01/2012 3:37 p.m., Eliezer Croitoru wrote:
> I put Squid into debug sections 89 (to follow TPROXY) and 17 (to see
> what is going on in the other stuff), and found out this:
>
> Section 89 looks fine, not showing anything about the client IP
> 192.168.102.100 being used:
>
> 2012/01/06 04:23:54.072| IpIntercept.cc(381) NatLookup: address BEGIN:
> me= 212.179.154.226:80, client= 212.179.154.226:80, dst=
> 192.168.102.100:1063, peer= 192.168.102.100:1063
These are the IPs as they arrived in the OS socket API, and what that API
calls them. As you can see, the Squid listening IP (me=), the remote-end
IP (client=), where the packet was headed (dst=) and the Squid port it
was delivered to (peer=) are all mixed up. This is normal for TPROXY (and
for some NAT packets too), and is the reason why recent Squid releases
have such strict guidelines about mixing traffic types.
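The usual guideline is to keep each traffic type on its own dedicated
port, for example (the port numbers here are only illustrative):

  # browsers explicitly configured to use the proxy
  http_port 3128
  # only packets diverted by the TPROXY netfilter rules
  http_port 3129 tproxy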
> 2012/01/06 04:23:54.074| IpIntercept.cc(166) NetfilterTransparent:
> address TPROXY: me= 212.179.154.226:80, client= 192.168.102.100
>
This is after the TPROXY deciphering algorithm has switched the IPs back
to "correct". We see client=192.168.102.100:1063 and
dst=212.179.154.226:80, which is correct for TPROXY. Squid-3.1 still uses
the me= variable for the connection's local endpoint as seen in the
packet. Operation is correct despite the fuzzy wording.
>
> Section 17 shows something abnormal:
> (the outgoing address to the web server is the client address, not one
> of the Squid machine's own addresses)
>
> 2012/01/06 04:28:36.782| store_client::copy:
> 7DEA6A0583B90AB461F576C6AEE4AA50, from 0, for length 4096, cb 1,
> cbdata 0x882b5b8
> 2012/01/06 04:28:36.783| storeClientCopy2:
> 7DEA6A0583B90AB461F576C6AEE4AA50
> 2012/01/06 04:28:36.784| store_client::doCopy: Waiting for more
> 2012/01/06 04:28:36.785| FwdState::start() 'http://link
> 2012/01/06 04:28:36.787| fwdStartComplete: http://link
> 2012/01/06 04:28:36.789| fwdConnectStart: http://1link
> 2012/01/06 04:28:36.791| fwdConnectStart: got outgoing addr
> 192.168.102.100, tos 0
"outgoing addr" is the address Squid assigns to its end of the
squid->server connection. This appears to be correct for TPROXY.
> 2012/01/06 04:28:36.791| fwdConnectStart: got TCP FD 13
>
>
> So the main problem is that the request coming from Squid is not using
> the right address in TPROXY mode.
>
> Thanks
> Eliezer
>
>
>
>
> On 05/01/2012 17:20, Eliezer Croitoru wrote:
>> I made a Squid url_rewriter for caching purposes, and it works on Ubuntu
>> and on Fedora 16 (i686).
>> It also works on Fedora 15 with the 3.2.0.12 RPM from the Fedora 16 repo.
>> The problem is that when the rewriter replies with the address to
>> Squid, the session that Squid creates is from the client to the
>> server instead of from the Squid machine to the web server.
>> What I see using ss is (tproxy is port 8081):
>> SYN-SENT 0 1 192.168.102.100:38660 192.168.102.3:tproxy
I'm unclear what this is and what you mean by "squid session". I assume
those are the details Squid sent to the helper?
If so, that is a second strong sign of the loop mentioned below.
>>
>> But using 3.2.0.12, and on the other systems, I see:
>> 192.168.102.3:high_port_number 192.168.102.3:tproxy
>> or
>> 127.0.0.1:high_port_number 127.0.0.1:tproxy
>>
>> and everything works fine.
Er, this looks like TPROXY looping the traffic, like so:
 Client --(TPROXY)> ... Internet
   .                     |
    \..... Squid --> Rewriter --(TPROXY)> ... Internet
             |    \-------<<--------|
             |
              \---> Internet
This happens because the re-writer is not sending its background fetch
requests out of a socket the kernel has marked as Squid's for TPROXY.
Either Squid or the rewriter needs to bypass the rewriter fetch when the
request is coming from the re-writer on a Squid IP (or localhost), or
TPROXY needs to be bypassed globally for traffic generated internally by
the Squid box.
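For example, a minimal squid.conf sketch of the first option (the
127.0.0.1 and 192.168.102.3 addresses are taken from your ss output;
adjust to the real Squid box IPs):

  # skip the rewriter for requests originating on the Squid box itself
  acl fromSquidBox src 127.0.0.1 192.168.102.3
  url_rewrite_access deny fromSquidBox
  url_rewrite_access allow all

The equivalent on the netfilter side would be an ACCEPT rule matching the
box's own source IPs, inserted ahead of the TPROXY rule in the mangle
table.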
>>
>> The rewriter has a logging function built in, and Squid only does this
>> when it is redirecting and running with tproxy.
>> On a regular forward proxy everything works fine.
>>
>> My config is the basic one, with the exception of tproxy and the
>> rewriter:
>>
>> #start lines added
>> http_port 3129 tproxy
>> url_rewrite_program /opt/nginx.cache.rb
>> url_rewrite_host_header off
If the domain is being changed in the URL, leaving Host: header
re-writing ON is critical when the traffic is going back through a tproxy
or intercept port. Given that the loop above is likely, this could be the
problem.
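In the config above that means dropping the "url_rewrite_host_header off"
line (on is the default), or setting it explicitly:

  url_rewrite_host_header on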
>> #end lines added
>>
>> So: with the 3.2 branch it works, but not on 3.1 (3.1.10-3.1.18).
>>
>> Also, I can't compile the 3.2 branch on Fedora 15 because it always
>> ends up with some error.
>> I need to know the list of dependencies for compilation.
Your guess is as good as mine. It is specific to the features you are
building. The official Fedora RPM or its documentation should be a good
guideline for what Fedora packages are related or needed.
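One shortcut, assuming yum-utils and the Fedora source repositories are
available on your box, is to let yum pull in whatever the official
package declares as build dependencies:

  # install the build dependencies of the Fedora squid package
  yum install yum-utils
  yum-builddep squid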
>> I had some SASL problem and installed the SASL dev libs, but now it's
>> stuck on an ftp.cc error:
>> g++: warning: switch '-fhuge-objects' is no longer supported
>> ftp.cc: In function 'void ftpReadEPSV(FtpStateData*)':
>> ftp.cc:2371:9: error: variable 'n' set but not used
>> [-Werror=unused-but-set-variable]
>> cc1plus: all warnings being treated as errors
Aha. That was fixed as part of a later update. There was a missing
condition in the if() statement around line 2440. The code there should
contain the following, with the definition of "int n;" from line 2371
moved down as shown:
    char h1, h2, h3, h4;
    unsigned short port;
    int n = sscanf(buf, "(%c%c%c%hu%c)", &h1, &h2, &h3, &port, &h4);

    if (n < 4 || h1 != h2 || h1 != h3 || h1 != h4) {
        debugs(9, DBG_IMPORTANT, "Invalid EPSV reply from " <<
Amos