> This would be very cool. Some aspects of this already exist, but a clear
> cachemgr page would be most excellent. I presume you are thinking of
> something like:
>
> Mean elapsed time to:
> permit or deny request: 0.02 ms
> redirect request: 0.1 ms
> establish a connection to a server: 45.03 ms
--- Something like that. One of the things I'm focusing on lately (for better or worse) is getting to know WinXP. I stayed with Win98 through most of my time working in IRIX and Linux over the past several years, and started using VMware maybe 3 years ago to run the Office apps I needed so I didn't have to exit Linux. Needing voice recognition and a friendlier UI for my RSI, I had to go to Win as my main OS. Linux is nice, but definitely not as much point and click. But I'm studying Win tech as much as possible while I'm at it.

It's been really frustrating trying to find information on a closed-source system. Not like, say, a few months ago, when I was playing around with some SCSI disks and noticed I could get maybe a gig or more of extra space if I formatted an 18G drive with a 4k block size vs. 512 bytes. Unfortunately, I'm living on the edge with XFS, and XFS croaked on the 4k block size. It was easy to verify -- yep, there they were, hard-coded values for 512-byte sectors....You-gly (ugly). At least I knew where not to beat my head against the wall. With Win, you can search for stuff for days and come up with nothin'. The folks at MS often don't know the answer either. It's more like pathfinding in an unmapped jungle -- and in some cases, the information just isn't there.

One of the things NT has, besides CAPP/C2-compliant auditing and a great ability to 'backstep' when you mess things up, is its performance subsystem. Built into many subsystems are tons of performance counters -- something like 2500 of them for various parts of the OS. I can think of many things they don't provide, but another feature is the ability for users to supply their own counters, which can then be displayed with the system tools, plus a programmatic way for users to use their own tools to pull information out of the counters and play with it as they wish. You don't like the level of job (process + its descendants) accounting detail? You can write a program in user space that collects the info.

Internally, parts of the OS use the counters to determine what order blocks load in from a program, allowing a couple of things: 1) developers can use the info and feed it into the linker to order the way their routines are laid out in memory, to optimize application responsiveness and read-in time; 2) the OS can determine what blocks from what files are loaded during, say, boot, and keeps a selectable (numbers in the registry) number of boot traces (default, 8). Then every couple of days, the OS automatically reorganizes the files on disk to optimize for fast boot times.

One could argue that optimizing for boot could save time at the expense of other operations -- say, opening a database while a compile is going on. But! Waiting 30-45 seconds for a boot is a far cry better than the 2-3 minutes my Linux dual-processor machine takes. MS did studies and found that long boot times were the thing that stood out in users' minds as the single biggest point of frustration (likely because it blue-screened so much! :-)). So addressing user perception, even if it might come at the expense of later-on processing, was considered a priority.

In addition to boot optimization, the OS also keeps track of application usage. Frequently used directories are hashed and stored in the registry, and the hash and file names are stored in a pre-fetch queue based on what files users access during a given session. Again -- the last 8 traces are saved.
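(Speaking of those counters -- reading one programmatically is only a few lines via the PDH interface. Just a sketch; the counter path here is an arbitrary well-known one, and user-registered counters are read the same way:)

    /*
     * Minimal sketch: sample one performance counter with the PDH API.
     * Compile and link against pdh.lib.
     */
    #include <windows.h>
    #include <pdh.h>
    #include <stdio.h>

    int main(void)
    {
        HQUERY query;
        HCOUNTER counter;
        PDH_FMT_COUNTERVALUE value;

        if (PdhOpenQuery(NULL, 0, &query) != ERROR_SUCCESS)
            return 1;
        if (PdhAddCounter(query, "\\Processor(_Total)\\% Processor Time",
                          0, &counter) != ERROR_SUCCESS)
            return 1;

        /* rate counters need two samples to compute a delta */
        PdhCollectQueryData(query);
        Sleep(1000);
        PdhCollectQueryData(query);

        if (PdhGetFormattedCounterValue(counter, PDH_FMT_DOUBLE,
                                        NULL, &value) == ERROR_SUCCESS)
            printf("CPU: %.2f%%\n", value.doubleValue);

        PdhCloseQuery(query);
        return 0;
    }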
Along with boot optimization, the OS automatically reorganizes to place the files you use the most toward the front of the disk, contiguously. There's a file that tells the organizer "here's a list of 4000-5000 files, and here is the order they should be in based on the user's usage". Dunno about the current optimizer, but previous optimizers actually padded the executables so that individual segments could land on disk boundaries, so raw I/O could be done directly into user memory -- no going through a disk-cache buffer. Of course there is a performance counter to find out how many reads had to be broken apart due to the data not being contiguous on disk, or the lockable-memory limit not being set high enough to allow an entire 128M file to be read in one large read -- the lockable I/O page limit, another tunable.

Anyway...maybe you get the picture. WinNT OS's (like XP), with all of their bad points, have some good points that other OS's could learn from. Too often, people want to throw out the baby with the bathwater, so to speak.

> Another good idea. There are several log alteration projects around, we
> are looking for a sponsor to make the most generic one 'happen', which
> is very similar to what you suggest (in fact, it's a superset in all but
> one aspect - the recording of all data).

--- Well, I have no idea how much time I'll have. Even though I'm off work on disability (on top of having been laid off), I still seem to find not enough hours in the day to do all the things I want done -- and I need to spend more time in the real world doing things as well. Computers are an infinite timesink. The disability comes from RSI problems which haven't gone away -- any time I actually approach doing serious programming, I start getting more symptoms -- so I have to watch the intensity level and try to stay in a healing mode whilst waiting for the insurance company to get the ergo equipment suggested by my ex-company's ergo person 2 months before I was laid off (they didn't want to foot the bill for the equipment either, preferring to show me the door instead).

Not having the built-in recording has been a thorn in my side for almost a year now, since I bought my Sonic Blue recorder. To use it, I have to allow it free access to Sonic Blue's servers, and I've not been willing to do that. So it's sat on a shelf -- since I'd like to be able to record the initial conversations. Another project on a back burner. Some of my projects take a long time to get done....*sigh*....if I can just figure out a way to do without the need for sleep....:-)

> Yep. We use that with diskd. I'm currently rewriting the IPC support
> routines to be more modular, but that shouldn't interact badly with what
> you mention.

--- Ah, very cool. It seems like one of the things Linux has lost along the way is the pluggable modularity that characterized its initial power -- the chaining together of disparate utilities to come up with new uses, allowing users to use more powerful basic building blocks to come up with new paradigms for using and processing information. With Linux, if you want to do anything, you really pretty much have to be a system programmer with knowledge of C/C++. No object-oriented Visual Basic or JavaScript interfaces to the OS or utils here. With Win, tons of utils have interfaces -- hooks, so they can be called programmatically -- the registry interface is a prime example.
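(For instance, the whole registry is readable through a handful of calls that haven't changed since NT 3.1. A toy example -- the key and value names are just well-known ones picked for illustration:)

    /*
     * Minimal sketch: read a string value out of the registry.
     * This interface has been stable across NT/2000/XP.
     */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HKEY key;
        char buf[256];
        DWORD size = sizeof(buf);
        DWORD type;

        if (RegOpenKeyEx(HKEY_LOCAL_MACHINE,
                         "SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion",
                         0, KEY_READ, &key) != ERROR_SUCCESS)
            return 1;

        if (RegQueryValueEx(key, "ProductName", NULL, &type,
                            (LPBYTE)buf, &size) == ERROR_SUCCESS
                && type == REG_SZ)
            printf("Running on: %s\n", buf);

        RegCloseKey(key);
        return 0;
    }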
Try developing a user-space program that allows you to tweak your screen and OS internals and that has been around for 8 years. The basic interface has remained the same -- some of the variables have *slowly* changed -- but with Linux, you'd be lucky if your interface program lasted a year before it needed to be rewritten to comply with the latest Linus creation. Etc. etc. etc....

With squid, I've used it for over a year, maybe 2, and before that, Netscape's proxy. Always wanted to accelerate my internet here at home. I'm always wondering where some of the delays are -- especially for some things I know should be in the cache. I also play with DNS/named to look for tweaks there, since many times it's just a 5-second delay looking up the site that I notice...oh well...

-linda