NOTE: this is an updated version of a mail that didn't go into the mailing list.
Hi all,
I'm currently experimenting with Squid. The idea is the following: during office hours clients can only go to certain sites; outside those hours they can browse to whichever site they want. This applies to specific IPs or IP ranges/subnets, with no client-side configuration. This is how the network looks:
client ---+--- router/firewall --- internet
          |
     squid proxy
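For the office-hours policy itself I'm assuming time-based ACLs in squid.conf would do it; just a sketch (the IP, hours and domains are placeholders):

acl office_lan src 192.168.1.32            # or a range/subnet
acl office_hours time MTWHF 08:00-18:00
acl allowed_sites dstdomain .example.com .example.org
http_access allow office_lan office_hours allowed_sites
http_access deny office_lan office_hours   # during office hours: only the allowed sites
http_access allow office_lan               # outside office hours: everything
http_access deny all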
The router routes packets for the client to the Squid proxy, and the client has the Squid proxy as its gateway. The proxy runs FreeBSD 9.2, with Squid 3.4 compiled from source with the --with-nat-devpf and --enable-pf-transparent options. The firewall is PF.
Somewhere in the process I got confused with the different ways to configure Squid and PF. I need intercept or tproxy as the http_port option. From what I understand, intercept will see packets that are destined for a webserver on the internet on port 80. The Squid proxy will then set up its own connection to the webserver using its own IP.
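For intercept I assume the relevant squid.conf line is simply:

http_port 3128 intercept    # plain HTTP interception on the port my PF rules redirect to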
Therefore this won't work with HTTPS, as Squid can't see the packet contents unless you use SslBump. With SslBump the client creates a secure connection with the proxy server, which in turn creates an SSL connection to the webserver, so this is a man-in-the-middle. Because of this, and because you need to install a root certificate in the client browser, I would like to use another method.
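Just so it's clear what I mean: as far as I understand, the SslBump setup I'd like to avoid would look roughly like this in squid.conf (certificate and helper paths are only placeholders):

https_port 3129 intercept ssl-bump generate-host-certificates=on cert=/usr/local/etc/squid/proxyCA.pem
sslcrtd_program /usr/local/libexec/squid/ssl_crtd -s /var/squid/ssl_db -M 4MB
ssl_bump server-first all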
From what I understand, TPROXY can accomplish this (however, I'm not sure how Squid would then check which domain name is accessed). If I understand it correctly, TPROXY acts as the client, sending IP packets to the webserver with the client's IP. It's not clear to me whether the client will still have an SSL connection to the proxy server or not (I would think it still has, as otherwise the client would have to be aware that it has to send its https:// requests in cleartext to the proxy, which is not secure, and doesn't seem very "transparent" anymore). Does TPROXY still need SslBump configured? As there is no man-in-the-middle here, I would think domain names in requests can't be seen and Squid can't react accordingly. If the latter is the case, what are the pros of using TPROXY? In the end the packets still have to go through your Squid server, where they can be intercepted.
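For reference, the tproxy variant of the port directive would, I assume, just be:

http_port 3128 tproxy       # spoof the client's IP towards the webserver
# for port 443 I'm not sure yet whether a tproxy'd https_port without SslBump even makes sense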
Anyway, I configured PF. I read that I need ipdivert.ko, which is available in my kernel, and that I need divert-to rules for using TPROXY.
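For completeness, this is roughly how I made sure divert(4) and forwarding are available on the FreeBSD 9.2 box:

kldload ipdivert                  # or ipdivert_load="YES" in /boot/loader.conf
sysctl net.inet.ip.forwarding=1   # the box has to forward packets anyway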
I got Squid running in intercept mode using these rdr rules:
rdr pass on $lan_if inet proto tcp from 192.168.1.32 to any port 443 -> 127.0.0.1 port 3129
rdr pass on $lan_if inet proto tcp from 192.168.1.32 to any port 80 -> 127.0.0.1 port 3128
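Since Squid was built with --with-nat-devpf, I assume it also needs access to /dev/pf to look up the original destination of the redirected connections (the group name is just a guess, whatever user Squid runs as):

chgrp squid /dev/pf
chmod g+rw /dev/pf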
However, for TPROXY I should use divert-to, right?
pass in quick log on em0 proto tcp from 192.168.1.32 to any port 80 divert-to localhost port 3128
pass in quick log on em0 proto tcp from 192.168.1.32 to any port 443 divert-to localhost port 3129
From what I understand, divert-to doesn't actually change anything in the packets; it just delivers the packet on another port. The rdr rules rewrite the destination IP, so the HTTP/HTTPS request has to be read anyway to open a connection to the destination webserver.
Might it be crucial to have two interfaces to make TPROXY (or intercept) work at all? As you can see in the drawing, the proxy sits on the network like a "probe" and is not between the internet and the clients; there is only one physical interface, and the first tests were with one interface in the VM. What I currently have is two interfaces in the VM (on the same physical interface on the host), each in a different subnet. I checked with tcpdump that traffic is going in the right directions (through the Squid proxy).
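Roughly the kind of check I did, for example on em0:

tcpdump -ni em0 host 192.168.1.32 and tcp port 80    # and the same on the second interface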
Please correct me if I'm wrong at any point. It would greatly increase my understanding of the matter and probably help in solving issues. Which way should I go here to reach the goal described above?
Thanks a lot in advance for any help!