Hi Amos,
thank you very much for taking the time to help me out!
I had no idea that this was supposed to work by default.
This explains why I wasn't able to find anything :D
I used the default "privacy" config from the wiki (explicitly allowing 
some request_headers and denying the rest).
The problem was caused by not allowing the "Range" request header 
("Request-Range" was allowed already).
Now it works :)
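For reference, the relevant lines now look roughly like this (a sketch 
based on the wiki privacy example; my real config allows a few more 
headers):

    request_header_access Range allow all
    request_header_access Request-Range allow all
    request_header_access All deny all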
 > NOTE: download managers which open parallel connections are *degrading*
 > the TCP congestion controls and reducing available network resources
 > across the Internet. Reducing their parallel requests to a single fetch
 > is actually a good thing.
 >
Understood.
I only use this squid proxy for my tiny home network; it's just that it 
can get really annoying when all I need is to download a single file 
and the maximum I get with one connection is a few kB/s. So I'm unable 
to resist the temptation ;)
Thanks again for all the info. I was on the wrong trail the whole time.
Kind regards,
Max
On 01/01/14 06:27, Amos Jeffries wrote:
 > On 1/01/2014 12:15 a.m., mxx_at_mailspot.at wrote:
 >> Hi,
 >>
 >> Maybe because most of the time squid is used differently I'm having
 >> troubles finding an answer to this question.
 >> It would be very nice if someone could help me out with this :)
 >>
 >> I only use it to filter ads and to redirect traffic to some domains
 >> through different uplinks. I don't really need the caching.
 >>
 >> Squid 3.4 does all of that perfectly (Linux 3.12) in intercept mode.
 >> But download managers using multiple connections concurrently to
 >> download 1 file are only able to use 1 connection/destination anymore.
 >
 > Squid does not impose any such limitation, unless you have explicitly
 > configured the maxconn ACL to prohibit more than one connection per
 > client IP.
 >
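 > For example, such a rule (a sketch only, not taken from any real
 > config) would look like:
 >
 >     acl manyconn maxconn 1
 >     http_access deny manyconn
 >
 > If nothing of that kind is in squid.conf, no per-client limit applies.
 >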
 > The default behaviour of a non-caching Squid should be exactly what you
 > are requesting to happen.
 >
 >>
 >> What I've found so far are only options like range_offset_limit in
 >> regards to cache management.
 >
 > If you have configured that range limit or the related *_abort settings
 > then they may cause a behaviour similar to what you describe. Not
 > exactly a prohibition, but Squid downloading the entire object from
 > start until the requested range is reached. Doing that N times in
 > parallel can slow down the 2..N+1 transactions until they appear to be
 > one-at-a-time occurrences.
 >
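 > For reference, the directive in question (per the squid.conf
 > documentation; 0 is the default and means never fetch more than the
 > client requested):
 >
 >     range_offset_limit 0 KB
 >
 > Setting it to "none" (or -1) makes Squid always fetch the whole object
 > from the start.
 >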
 >>
 >> Is it possible in any way to let squid pass through and simply ignore
 >> all connection requests to destinations with certain Content-Types so a
 >> client could connect multiple times to the destination concurrently?
 >
 > Content-Type is not known until after the request has been made and
 > the reply received back. What you ask for is like deciding whether to
 > make an investment now based on next year's stock exchange prices (the
 > URL can give hints of likelihood, but is not very reliable).
 >
 > Amos
 >
Received on Thu Jan 02 2014 - 21:38:59 MST
This archive was generated by hypermail 2.2.0 : Fri Jan 03 2014 - 12:00:03 MST