The attached patch adds a bit more leniency to Host: header
validation without re-opening Squid to the cache-poisoning risks
involved. As Squid-3.2 becomes more widely used, issues have been
cropping up more regularly with websites using geo-based DNS results
and/or short DNS TTLs for load balancing.
NOTE: this patch is looking for more testing. So far I have done only
a few checks.
It adds a host_verify_loose directive which allows requests that fail
Host: validation to continue through processing. The default is OFF for
now to encourage safety. Enabling this does re-open clients to some
minor aspects of same-origin bypass. BUT it ...
 * blocks caching of the response, to protect all other network clients
   against one compromised client spreading infections.
 * forces the original (untrusted) destination IP to be used instead of
   any alternative Squid might find, preventing Squid or peer DNS
   lookups from being the point of vulnerability for the same-origin
   bypass. For any client to be vulnerable, it must be vulnerable inside
   the browser agent where the original TCP connection is established.
It also adds a new error template, ERR_CONFLICT_HOST, which replaces
the confusing invalid-request message with a clear explanation of the
problem and some client workarounds.
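Assuming the patch is applied, enabling the new behaviour might look like
this in squid.conf (the directive name and OFF default come from the patch
description above; the on/off syntax is an assumption):

```
# squid.conf
# Default is off: requests failing Host: validation are rejected
# and answered with the new ERR_CONFLICT_HOST page.
host_verify_loose on
```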
FUTURE WORK:
 * adapt processing to allow these requests to be safely passed to peers.
 * adapt caching to permit safe sharing between clients making identical
   requests to the same sources.
Amos
This archive was generated by hypermail 2.2.0 : Sun Jan 22 2012 - 12:00:13 MST