On 30/03/2012 9:06 a.m., pr0xyguy wrote:
> Hi guys, hope you can help me here,
>
> Setup:
>
> Intranet = inside firewall
> userA --> 8080:dansguardian --> 3128: http_port transparent --> Internet
This is broken. DG sending traffic to Squid port 3128 is an explicit
client (DG) configuration, *not* interception. Upcoming Squid releases,
which validate the received NAT data, will reject this traffic.
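As a rough sketch of the split I mean (assuming DG and Squid share the
10.0.10.100 box; port 3130 is just an illustration, any free port will do):
give DG a plain http_port for its explicitly-configured traffic and keep a
separate port flagged for interception only:

  # plain port for explicitly-configured clients such as DG
  http_port 10.0.10.100:3128

  # intercept port, receiving only traffic NATed on this box
  http_port 10.0.10.100:3130 intercept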
>
> Extranet = outside firewall (mobile / remote users)
> userB --> SOHO router --> corp firewall:80 --> 8080:dansguardian --> 3128:
> http_port transparent --> Internet
Same again.
> userB --> SOHO router --> corp firewall:443 --> 3129: https_port transparent
> ssl-bump cert=... key=... --> Internet
Is the firewall doing port forwarding? (same problem as mentioned for
DG). NAT ('forwarding') *must* be done on the Squid box where Squid can
grab the kernel NAT records from.
Or is it doing proper policy routing? (with NAT on the Squid box for the
intercept)
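If the corp firewall is a separate box, the interception NAT still has to
happen locally. Roughly something like this on the Squid/DG machine
(interface name is a guess and ports are taken from your description; a
sketch only, not a drop-in ruleset):

  # on the Squid/DG box itself, not on the corp firewall
  iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
      -j REDIRECT --to-ports 8080
  iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 \
      -j REDIRECT --to-ports 3129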
>
> The Issue:
>
> userB requests Google, which we convert from HTTPS to HTTP using DNS trickery
> (set up by Google for schools/corps, i.e. explicit.google.com=0.0.0.0 to
> prevent encrypted searches). So far, so good.
Sort of. I have found Google systems sometimes still automatically
redirect HTTP to HTTPS anyway once they have decided that TLS is
mandatory for that service.
If you are going to use SSL intercept trickery anyway, I suggest using
that alone instead of stacking the two types of trickery together.
NOTE: If you have control over userB's DNS lookups, why are you not simply
setting their DNS WPAD records and using a PAC file?
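i.e. publish a wpad.yourdomain record pointing at a web server that serves
/wpad.dat containing something as small as the below (proxy name and port
are placeholders for however the users reach your DG/Squid chain):

  function FindProxyForURL(url, host) {
      // everything via the proxy, falling back to direct if it is unreachable
      return "PROXY proxy.example.com:8080; DIRECT";
  }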
> However HTTPS coming from
> userB (outside our firewall) is not CONNECT, but straight SSL. Hence the
> ssl-bump setup, which is working (with invalid cert warnings, which is OK
> for us), but the google/calendar site gets stuck in a loop. From my access.log:
>
> *173.162.48.224* TCP_MISS/302 890 GET http://www.google.com/calendar? -
> DIRECT/216.239.32.20 text/html
> *173.162.48.224 *TCP_MISS/302 1198 GET
> http://www.google.com/calendar/render? - DIRECT/216.239.32.20 text/html
> *10.0.10.171 *TCP_MISS/302 845 GET http://www.google.com/calendar/render? -
> DIRECT/216.239.32.20 text/html
> *173.162.48.224 *TCP_MISS/302 717 GET http://www.google.com/calendar/render?
> - DIRECT/216.239.32.20 text/html
> *173.162.48.224* TCP_MISS/302 717 GET http://www.google.com/calendar/render?
> - DIRECT/216.239.32.20 text/html
> *173.162.48.224* TCP_MISS/302 717 GET http://www.google.com/calendar/render?
> - DIRECT/216.239.32.20 text/html
>
>
> ...and then the session times out with the agent usually returning a "page
> isn’t redirecting properly" warning. IE will try forever of course, and
> eventually crash the system.
This looks like Calendar is one of the services they have not properly
rolled that DNS trickery support into.
>
> squid.conf:
>
> acl manager proto cache_object
> acl localhost src 127.0.0.1/32 ::1
> acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
> access_log /var/log/access.log
> acl localnet src all
So the entire Internet is part of your LAN? wow.
> http_access allow localnet
Then you bypass all security for that huge LAN. Ouch.
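The usual shape is more like the below; the exact prefixes are placeholders,
so use whatever your internal ranges really are:

  acl localnet src 10.0.0.0/8     # RFC 1918 ranges - adjust to your LANs
  acl localnet src 172.16.0.0/12
  acl localnet src 192.168.0.0/16
  http_access allow localnet
  http_access deny all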
> acl SSL_ports port 443
> acl Safe_ports port 80 # http
> acl Safe_ports port 21 # ftp
> acl Safe_ports port 443 # https
> acl Safe_ports port 70 # gopher
> acl Safe_ports port 210 # wais
> acl Safe_ports port 1025-65535 # unregistered ports
> acl Safe_ports port 280 # http-mgmt
> acl Safe_ports port 488 # gss-http
> acl Safe_ports port 591 # filemaker
> acl Safe_ports port 777 # multiling http
> acl CONNECT method CONNECT
> http_access allow manager localhost
> http_access deny manager
> cache_effective_user squid3
> http_access allow localnet
> http_access deny all
> ssl_bump allow all
> http_port 10.0.10.100:3128 intercept
> https_port 10.0.10.100:3129 intercept cert=/www.sample.com.pem
> key=/www.sample.com.pem
This is a fixed certificate, the same one presented for every domain on
the Internet. To intercept SSL you *need* the dynamic certificate
generation feature in Squid-3.2. And you also need the external users to
trust your local certificate generator's signing CA.
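In Squid-3.2 that looks roughly like the following (the ssl_crtd helper
path varies per distro, and the certificate database has to be initialised
once with "ssl_crtd -c" before first use; the CA file is whatever you
generate and push out to the clients; the https_port directive is all one
line):

  https_port 10.0.10.100:3129 intercept ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB cert=/etc/squid3/myCA.pem key=/etc/squid3/myCA.pem
  sslcrtd_program /usr/lib/squid3/ssl_crtd -s /var/lib/squid3/ssl_db -M 4MB
  sslcrtd_children 5
  ssl_bump allow all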
> hierarchy_stoplist cgi-bin ?
> coredump_dir /var/cache
> refresh_pattern ^ftp: 1440 20% 10080
> refresh_pattern ^gopher: 1440 0% 1440
> refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
> refresh_pattern . 0 20% 4320
> dns_nameservers 8.8.8.8
>
>
> Thanks guys!!! Right now I'm shooting in the dark, trying this, trying that.
> I have a ton of work in this setup; if we can't resolve this I must find
> another solution for our external users.
>
> Scott
>
>