On Sat, Oct 10, 1998 at 04:00:08PM -0600, Alex Rousskov wrote:
> Not necessarily. A more important question is how much
> bandwidth/response time one would save by violating HTTP?
More than 0. In truth, I don't know; this only came up because of
some inconsistencies I saw while debugging an application quirk:
different machines were getting different copies of the same document
out of the cache - because of mixed-case issues.
And I would argue that, done properly, it's not necessarily a
violation. For it to be a violation, the servers would have to treat
FILE.blah and file.BLAH differently - which many servers don't.
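To make the "done properly" part concrete, here is a minimal Python
sketch (purely illustrative, not Squid's actual behavior) of a
cache-key function that case-folds the path only for origins an
operator has marked as safe; the CASE_INSENSITIVE_HOSTS set and the
function name are hypothetical:

```python
from urllib.parse import urlsplit

# Hypothetical operator-maintained list of origins known to serve
# FILE.blah and file.BLAH identically (e.g. case-insensitive filesystems).
CASE_INSENSITIVE_HOSTS = {"www.example.com"}

def cache_key(url: str) -> str:
    """Build a cache key for a URL.

    Scheme and host are always folded to lower case (they are defined
    as case-insensitive), but the path is folded only for whitelisted
    hosts, so case-sensitive servers are never violated.
    """
    parts = urlsplit(url)
    host = parts.netloc.lower()
    path = parts.path
    if host in CASE_INSENSITIVE_HOSTS:
        path = path.lower()
    # Query strings stay untouched: they are interpreted by CGIs and
    # are usually case-sensitive regardless of the filesystem.
    key = f"{parts.scheme.lower()}://{host}{path}"
    if parts.query:
        key += "?" + parts.query
    return key
```

With this, both mixed-case spellings of a document on a whitelisted
host map to one cached copy, while any other host keeps distinct
entries and HTTP semantics are preserved.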
> Since most URLs are encoded in HTML documents rather than typed by
> users, I doubt the benefits would be noticeable.
The assumption that page authors are less likely to create typos than
most users is probably correct, but I think you'll still see it now
and then, and because URLs embedded in HTML documents tend to get
more use than some random URL typed into the location bar or
whatever, they might count for more.
Then again, none of these errors may be significant - the only reason
I brought it up is that I got bitten by it.
> The biggest problem is that a user may not be able to purge _all_
> copies of an outdated object by pressing "Reload".
If, for all URLs, all parts of the URL are treated as case-sensitive
regardless of the response, then this has to be considered a
different issue, more akin to the first one I brought up about
invalidating objects based on a regex.
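As a sketch of what regex-based invalidation might look like (purely
illustrative; the in-memory store and purge_matching name are
hypothetical, not an existing cache API), the cache could drop every
entry whose URL matches a pattern, which lets one purge catch all
mixed-case duplicates of a document:

```python
import re

# Hypothetical in-memory store mapping URL -> cached body.
store = {
    "http://www.example.com/FILE.blah": b"old copy",
    "http://www.example.com/file.BLAH": b"old copy",
    "http://www.example.com/other.html": b"keep me",
}

def purge_matching(pattern: str) -> int:
    """Remove every cached object whose URL matches the regex.

    Case-insensitive matching means a single purge removes all
    mixed-case copies of the same document.
    """
    rx = re.compile(pattern, re.IGNORECASE)
    victims = [url for url in store if rx.search(url)]
    for url in victims:
        del store[url]
    return len(victims)
```

Here purge_matching(r"/file\.blah$") would remove both mixed-case
copies in one shot, which a plain "Reload" of one spelling cannot do.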
> So let's not encourage that behavior. :)
I could stand in the middle of the road, slit my wrists and scream;
it won't stop people installing certainly buggy OSs and crap
software. Part of my day-to-day life is trying to make systems more
idiot-proof, but I've decided it's not a solvable problem - a handful
of programmers are no match for a god who can create better idiots
by the millions, and these idiots breed...
-cw
Received on Tue Jul 29 2003 - 13:15:54 MDT