INFO-VAX	Tue, 06 Jun 2006	Volume 2006 : Issue 312

Contents:
  Re: .txt to .doc
  Re: .txt to .doc
  Can't deassign
  Re: Can't deassign
  Re: Can't deassign
  Re: Can't deassign
  Re: Can't deassign
  HP to cut down on telecommuting
  Re: InfoServer 100 trouble
  KVM switch for rx2600, DS20E?
  need to determin shortened C symbolname at runtime
  Re: Personal note
  Re: Personal note
  Re: Personal note
  Re: Problem INITializing SCSI drives over 8Gb??
  Re: Problem INITializing SCSI drives over 8Gb??
  Re: Problem INITializing SCSI drives over 8Gb??
  rrd47-aa vs. rrd47-vc
  Re: rrd47-aa vs. rrd47-vc
  Re: SA 5300A initial configuration: changing system disk
  Re: SimH 3.6-0
  Re: SimH 3.6-0
  Re: SimH 3.6-0
  Re: SimH 3.6-0
  Re: SimH 3.6-0
  Re: SimH 3.6-0
  Re: Unix runs faster, maybe
  Re: Unix runs faster, maybe
  RE: Unix runs faster, maybe (was: Re: Educating potential VMS users)
  RE: Unix runs faster, maybe (was: Re: Educating potential VMS users)
  Re: Unix runs faster, maybe (was: Re: Educating potential VMS users)
  RE: Unix runs faster, maybe (was: Re: Educating potential VMS users)

----------------------------------------------------------------------

Date: Mon, 05 Jun 2006 14:16:06 -0400
From: JF Mezei <jfmezei.spamnot@teksavvy.com>
Subject: Re: .txt to .doc
Message-ID: <448474E5.51B232AE@teksavvy.com>

himansu114@gmail.com wrote:
>
> JF Mezei:
>
> Can you direct me to a site where I can download the CDA converters?

http://www.hp.com/go/vms -- look down in the OpenVMS resources and
you'll find the freeware. You can browse the freeware via HTTP, and
you'll find one of the disks that has the CDA converters.

CDACVTLIB022 I think is the kit. If you can't find it, I have the VAX
kits I could provide via FTP, but I don't have the Alpha kits.

------------------------------

Date: Mon, 05 Jun 2006 19:40:55 -0500
From: "Craig A. Berry" <craigberry@mac.com.spamfooler>
Subject: Re: .txt to .doc
Message-ID: <craigberry-52410E.19405505062006@free.teranews.com>

In article <1149516618.985873.36540@h76g2000cwa.googlegroups.com>,
 himansu114@gmail.com wrote:

> Your suggestion worked, but here's another issue:
>
> 1. I do need special formatting.
> 2. I do have a "command file" in DCL that generates a ".rtf" file. But
>    as you know ".doc" is the standard for end-users. Any ideas on how
>    to get the special formatting done on the Alpha?

You don't say much about where your data are coming from (database, RMS
file) or what tools you are familiar with or have standardized on. So
here's what I do, which may or may not work for you. I have
successfully generated MS Word tables on VMS using the Perl extension
RTF::Writer:

http://search.cpan.org/~sburke/RTF-Writer-1.11/

Although I don't recall using it myself, the Perl extension
Spreadsheet::WriteExcel might be suitable for tabular reports and the
like:

http://search.cpan.org/~jmcnamara/Spreadsheet-WriteExcel-2.17/

There are also various text-to-PDF converters around, and various tools
for generating PDF, XML, HTML, and other formats.

--
Posted via a free Usenet account from http://www.teranews.com

------------------------------

Date: Mon, 05 Jun 2006 11:11:57 -0700
From: "Tom Linden" <tom@kednos.com>
Subject: Can't deassign
Message-ID: <op.taop57mfzgicya@hyrrokkin>

Why am I getting the following?

FREJA> sho log/all MX_SITE_CLIENT_ACCESS_CHECK
   "MX_SITE_CLIENT_ACCESS_CHECK" = "MX_EXE:ACCESS_CHECK.EXE"
   (LNM$SYSTEM_TABLE)
FREJA> deassign/system MX_SITE_CLIENT_ACCESS_CHECK
%SYSTEM-F-NOLOGNAM, no logical name match

------------------------------

Date: Mon, 05 Jun 2006 18:20:55 GMT
From: Ryan Moore <rmoore@rmoore.dyndns.org>
Subject: Re: Can't deassign
Message-ID: <Pine.LNX.4.64.0606051119480.22705@jaipur.local>

It's probably in Exec mode.
Do a SHOW LOGICAL/FULL to find out. Then
you'd have to DEASSIGN/SYS/EXEC the logical.

On Mon, 5 Jun 2006, Tom Linden wrote:
> Why am I getting the following?
>
> FREJA> sho log/all MX_SITE_CLIENT_ACCESS_CHECK
>    "MX_SITE_CLIENT_ACCESS_CHECK" = "MX_EXE:ACCESS_CHECK.EXE"
>    (LNM$SYSTEM_TABLE)
> FREJA> deassign/system MX_SITE_CLIENT_ACCESS_CHECK
> %SYSTEM-F-NOLOGNAM, no logical name match

------------------------------

Date: Mon, 5 Jun 2006 14:33:40 -0400
From: "Ken Robinson" <kenrbnsn@gmail.com>
Subject: Re: Can't deassign
Message-ID: <7dd80f60606051133x15db5a83n7e230c94fb051534@mail.gmail.com>

On 6/5/06, Tom Linden <tom@kednos.com> wrote:
> Why am I getting the following?
>
> FREJA> sho log/all MX_SITE_CLIENT_ACCESS_CHECK
>    "MX_SITE_CLIENT_ACCESS_CHECK" = "MX_EXE:ACCESS_CHECK.EXE"
>    (LNM$SYSTEM_TABLE)
> FREJA> deassign/system MX_SITE_CLIENT_ACCESS_CHECK
> %SYSTEM-F-NOLOGNAM, no logical name match
>

What does SHO LOG/FU MX_SITE_CLIENT_ACCESS_CHECK show? If the logical
name is an exec-mode logical, you would need to specify "/exec" on the
DEASSIGN command.

Ken

------------------------------

Date: 5 Jun 2006 14:47:37 -0500
From: briggs@encompasserve.org
Subject: Re: Can't deassign
Message-ID: <A1tmSTdE3Ia7@eisner.encompasserve.org>

In article <op.taop57mfzgicya@hyrrokkin>, "Tom Linden" <tom@kednos.com> writes:
> Why am I getting the following?
>
> FREJA> sho log/all MX_SITE_CLIENT_ACCESS_CHECK
>    "MX_SITE_CLIENT_ACCESS_CHECK" = "MX_EXE:ACCESS_CHECK.EXE"
>    (LNM$SYSTEM_TABLE)
> FREJA> deassign/system MX_SITE_CLIENT_ACCESS_CHECK
> %SYSTEM-F-NOLOGNAM, no logical name match

The proper command is $ SHOW LOG /FULL

[The /ALL qualifier does nothing much of any importance unless you've
mucked about with the LNM$DCL_LOGICAL name]

What you will probably discover is that the logical name in question
is assigned in executive mode.
This requires that you do the deassign in executive mode as well.

$ DEASSIGN /SYSTEM /EXEC MX_SITE_CLIENT_ACCESS_CHECK

------------------------------

Date: Mon, 05 Jun 2006 13:22:05 -0700
From: "Tom Linden" <tom@kednos.com>
Subject: Re: Can't deassign
Message-ID: <op.taov63mvzgicya@hyrrokkin>

On Mon, 05 Jun 2006 12:47:37 -0700, <briggs@encompasserve.org> wrote:

> In article <op.taop57mfzgicya@hyrrokkin>, "Tom Linden" <tom@kednos.com>
> writes:
>> Why am I getting the following?
>>
>> FREJA> sho log/all MX_SITE_CLIENT_ACCESS_CHECK
>>    "MX_SITE_CLIENT_ACCESS_CHECK" = "MX_EXE:ACCESS_CHECK.EXE"
>>    (LNM$SYSTEM_TABLE)
>> FREJA> deassign/system MX_SITE_CLIENT_ACCESS_CHECK
>> %SYSTEM-F-NOLOGNAM, no logical name match
>
> The proper command is $ SHOW LOG /FULL
>
> [The /ALL qualifier does nothing much of any importance unless you've
> mucked about with the LNM$DCL_LOGICAL name]
>
> What you will probably discover is that the logical name in question
> is assigned in executive mode.  This requires that you do the deassign
> in executive mode as well.
>
> $ DEASSIGN /SYSTEM /EXEC MX_SITE_CLIENT_ACCESS_CHECK

Thanks.

------------------------------

Date: Mon, 05 Jun 2006 16:44:09 -0400
From: JF Mezei <jfmezei.spamnot@teksavvy.com>
Subject: HP to cut down on telecommuting
Message-ID: <4484978C.674A9F97@teksavvy.com>

http://www.mercurynews.com/mld/mercurynews/news/14732974.htm

About 1000 employees in the IT division will no longer be able to work
from home. Those who won't agree to work in one of 25 designated
offices will be let go without severance.

It is not known whether this will spread to other divisions.

HP had been a world leader in flexible work rules, starting with the
introduction of flextime back in 1967. Last July, they hired an
ex-Walmart IT director, and it seems he doesn't believe in telecommuting.

------------------------------

Date: Mon, 5 Jun 2006 14:13:52 -0400
From: "Richard Tomkins" <tomkinsr@istop.com>
Subject: Re: InfoServer 100 trouble
Message-ID: <448466f4$0$26754$88260bb3@free.teranews.com>

I believe engineering used VAXeln to code the InfoServer OS.

As far as figuring out what an OS is looking at (CPU ID, system ID, memory
structures), it's been so long I cannot remember, but of course the support
in the ROM is minimalist. I seem to recall it went after a hardcoded
filename for booting, but alas, my memory is now failing me so many years
later.

--
Posted via a free Usenet account from http://www.teranews.com

------------------------------

Date: Mon, 05 Jun 2006 22:00:04 GMT
From: winston@SSRL.SLAC.STANFORD.EDU (Alan Winston - SSRL Central Computing)
Subject: KVM switch for rx2600, DS20E?
Message-ID: <00A56C39.30A938E2@SSRL.SLAC.STANFORD.EDU>

I think this is actually a hardware question rather than a VMS question,
but what the heck.

Anyway, is anybody out there successfully using a KVM switch to run keyboard,
mouse, and video for both a DS20E and an rx2600? If so, what make/model of
each piece are you using? (There are various problems with our setup, and it
seems like the USB keyboard support to the rx2600 will work for only a little
while and then quit.)

Thanks,

-- Alan

------------------------------

Date: Tue, 06 Jun 2006 00:08:28 +0200
From: Martin Corino <mcorino@remedy.nl>
Subject: need to determin shortened C symbolname at runtime
Message-ID: <4484ab5c$0$31646$e4fe514c@news.xs4all.nl>

Can anyone provide (a link to) C/C++ source code to calculate the
mangled C symbol name that the OpenVMS C compiler creates when
using /NAMES=SHORT (i.e.
<first 23 char>+<7 char CRC>+'$')?

I know LIB$CRC with AUTODIN II is involved, but I need to be able to
reproduce exactly what the compiler is doing at compile time.

regards,
Martin.

------------------------------

Date: 5 Jun 2006 13:35:48 -0700
From: "Sue" <susan_skonetski@hotmail.com>
Subject: Re: Personal note
Message-ID: <1149539748.181392.278040@f6g2000cwb.googlegroups.com>

Just an update: this morning, a few hours before surgery, they moved it
to next Tuesday at a different hospital. Hurry up and wait.

Sue
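[Returning to Martin Corino's /NAMES=SHORT question above: the scheme he
describes, <first 23 chars> + <7 hex digits of CRC> + '$', can be sketched in
Python. This is only an illustrative, unverified sketch: AUTODIN II is the
same CRC-32 polynomial that zlib uses, but the initial value, bit order,
which 7 of the 8 hex digits are kept, and their letter case are assumptions
that must be checked against actual compiler map/listing output.]

```python
import zlib

def short_symbol_name(name: str, max_len: int = 31, prefix_len: int = 23) -> str:
    """Sketch of OpenVMS C /NAMES=SHORT mangling.

    Names of max_len characters or fewer are assumed to pass through
    unchanged; longer names become <first 23 chars> + <7 hex CRC> + '$'.

    Assumptions (unverified against the real compiler): the CRC is the
    AUTODIN II CRC-32 as computed by zlib.crc32 over the full name, and
    the 7 digits kept are the low 28 bits, rendered in upper case.
    """
    if len(name) <= max_len:
        return name
    crc = zlib.crc32(name.encode("ascii")) & 0xFFFFFFFF
    # Keep 7 hex digits so the result is exactly 31 characters long.
    return name[:prefix_len] + format(crc & 0x0FFFFFFF, "07X") + "$"
```

A runtime consumer would apply the same transformation to the full symbol
name before looking it up (e.g. via LIB$FIND_IMAGE_SYMBOL); the point is
that both sides must agree on the exact CRC convention, which is why
comparing against a compiler-generated map listing is essential.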
Sue wrote:
> Dear Newsgroup,
>
> Thank you for your kind words. I am fortunate to be just one member of
> this team, of which you are all a part.
>
> See you soon.
>
> sue
>
>
> JF Mezei wrote:
> > Sue wrote:
> > > the office. I will be having surgery for a total knee replacement on
> > > Monday.
> >
> > As they say to wish you luck: BREAK A LEG! :-) :-) :-) :-) :-)

------------------------------

Date: Mon, 05 Jun 2006 16:47:01 -0400
From: JF Mezei <jfmezei.spamnot@teksavvy.com>
Subject: Re: Personal note
Message-ID: <44849837.467FD2E9@teksavvy.com>
Sue wrote:
>
> Just an update: this morning, a few hours before surgery, they moved it
> to next Tuesday at a different hospital. Hurry up and wait.

Are you back at work until next week, then?

And there I was, thinking about designing a get-well card for you...
I'll have to postpone that too :-)

------------------------------

Date: 5 Jun 2006 16:16:24 -0700
From: "Sue" <susan_skonetski@hotmail.com>
Subject: Re: Personal note
Message-ID: <1149549384.660157.159130@y43g2000cwc.googlegroups.com>

Yep, at work until Friday this week.

sue

JF Mezei wrote:
> Sue wrote:
> >
> > Just an update: this morning, a few hours before surgery, they moved it
> > to next Tuesday at a different hospital. Hurry up and wait.
>
> Are you back at work until next week, then?
>
> And there I was, thinking about designing a get-well card for you...
> I'll have to postpone that too :-)

------------------------------

Date: Mon, 5 Jun 2006 19:38:39 +0000 (UTC)
From: gartmann@nonsense.immunbio.mpg.de (Christoph Gartmann)
Subject: Re: Problem INITializing SCSI drives over 8Gb??
Message-ID: <e6217v$a88$1@news.BelWue.DE>

In article <1149522443.312795.5040@h76g2000cwa.googlegroups.com>, "Bob Armstrong" <bob@jfcl.com> writes:
>
>  I've got a VS4000/90 with OVMS 7.3 and two ST19171WC SCSI drives.
> These drives are supposed to be 9100Mb, but whenever and however I
> initialize them they always end up with 16777216 total blocks. This is
> exactly 8*2^30 bytes, or 8192Mb. Seems suspicious...
>
>  The funny thing is, there's also an 18Gb SCSI drive and a 36Gb SCSI
> drive on the same system, same OS, and they initialized just fine at
> the size I'd expect.
>
>  I know there used to be some bugs in the SCSI disk driver for drives
> over 8Gb, and there was even an ECO for it at one time, but I thought
> that had all been incorporated into the standard distribution by VMS
> 7.3. Am I wrong?
>
>  The only thing unusual about these two 9Gb drives is that they were
> previously used in another VAXstation with OVMS 7.1, which had the
> aforementioned 8Gb bug. It's almost like it remembers that, but that
> shouldn't be when I'm re-initializing the drives with the INITIALIZE
> command.
>
>  Any advice?

You could try to low-level format them. It is a single SCSI command. Usually
there is some command available at the boot prompt to do that.

Regards,

Christoph Gartmann

--
 Max-Planck-Institut fuer      Phone   : +49-761-5108-464   Fax: -452
 Immunbiologie
 Postfach 1169                 Internet: gartmann@immunbio dot mpg dot de
 D-79011  Freiburg, Germany
                http://www.immunbio.mpg.de/home/menue.html

------------------------------

Date: Mon, 05 Jun 2006 17:56:18 -0400
From: Dave Froble <davef@tsoft-inc.com>
Subject: Re: Problem INITializing SCSI drives over 8Gb??
Message-ID: <ceqdnaRAcfv6OhnZnZ2dnUVZ_oCdnZ2d@libcom.com>

Christoph Gartmann wrote:
> In article <1149522443.312795.5040@h76g2000cwa.googlegroups.com>, "Bob Armstrong" <bob@jfcl.com> writes:
>>  I've got a VS4000/90 with OVMS 7.3 and two ST19171WC SCSI drives.
>> These drives are supposed to be 9100Mb, but whenever and however I
>> initialize them they always end up with 16777216 total blocks. This is
>> exactly 8*2^30 bytes, or 8192Mb. Seems suspicious...
>>
>>  The funny thing is, there's also an 18Gb SCSI drive and a 36Gb SCSI
>> drive on the same system, same OS, and they initialized just fine at
>> the size I'd expect.
>>
>>  I know there used to be some bugs in the SCSI disk driver for drives
>> over 8Gb, and there was even an ECO for it at one time, but I thought
>> that had all been incorporated into the standard distribution by VMS
>> 7.3. Am I wrong?
>>
>>  The only thing unusual about these two 9Gb drives is that they were
>> previously used in another VAXstation with OVMS 7.1, which had the
>> aforementioned 8Gb bug. It's almost like it remembers that, but that
>> shouldn't be when I'm re-initializing the drives with the INITIALIZE
>> command.
>>
>>  Any advice?
>
> You could try to low-level format them. It is a single SCSI command. Usually
> there is some command available at the boot prompt to do that.
> Regards,
>    Christoph Gartmann
>

Yeah, exactly what I'm thinking. Someone knew they had an 8GB max, and
formatted the drives so they were only 8GB, ignoring the rest of the
drive. Actually, the proper thing to do for the other system.

--
David Froble                       Tel: 724-529-0450
Dave Froble Enterprises, Inc.      E-Mail: davef@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA  15486

------------------------------

Date: Tue, 06 Jun 2006 00:28:06 -0400
From: JF Mezei <jfmezei.spamnot@teksavvy.com>
Subject: Re: Problem INITializing SCSI drives over 8Gb??
Message-ID: <4485042B.CB442AF@teksavvy.com>

healyzh@aracnet.com wrote:
> You're re-INITing drives that have already been INITIALIZEd as 8GB drives, right?

INIT doesn't change a drive's geometry/capacity.

You can change its capacity with some SCSI tools that change some of the
"pages" on the drive. Then, when the OS asks the drive for its size, the
drive responds with the new size that was entered (number of cylinders,
sectors, and total size).

A low-level format doesn't change the pages that contain the drive
geometry. You really need to reset those numbers yourself.

VMS has an RZDISK utility, I think in SYS$EXAMPLES, that lets you change
the stuff inside the drive.

When a drive runs out of spare blocks, does it automatically reduce its
capacity when it finds new bad blocks?

------------------------------

Date: 5 Jun 2006 12:12:40 -0700
From: "syslost" <wm.reynolds@gmail.com>
Subject: rrd47-aa vs. rrd47-vc
Message-ID: <1149534760.633374.183670@f6g2000cwb.googlegroups.com>

Anyone know the differences between the rrd47-aa and rrd47-vc?

I need one for an AlphaServer 1000a 5/333, VMS 7.3

------------------------------

Date: Mon, 05 Jun 2006 21:24:46 GMT
From: hoffman@xdelta.zko.dec.nospam ()
Subject: Re: rrd47-aa vs. rrd47-vc
Message-ID: <yi1hg.1513$Xr1.912@news.cpqcorp.net>

In article <1149534760.633374.183670@f6g2000cwb.googlegroups.com>,
"syslost" <wm.reynolds@gmail.com> writes:
|> Anyone know the differences between the rrd47-aa and rrd47-vc?
|>
|> I need one for an AlphaServer 1000a 5/333, VMS 7.3

  If this drive follows typical practice, the RRD47-VC is packaged,
documented and configured for installation into the AlphaServer 1000A.
If you can use a screwdriver and a few common spare parts, and aren't
overly concerned about cosmetics and color schemes, you can probably
get most any "recent" RRD to fit and to work in this or in most other
AlphaServer series systems.

  I haven't looked at the bill of materials to see the exact difference,
and all of the RRD47 series drives I know of have the same kernel.
(The usual differences are in cable lengths and bezels and doc.)

------------------------------

Date: 5 Jun 2006 15:42:39 -0700
From: "Rich Jordan" <jordan@ccs4vms.com>
Subject: Re: SA 5300A initial configuration: changing system disk
Message-ID: <1149547359.838872.121330@c74g2000cwc.googlegroups.com>

Rich Jordan wrote:
> This is getting very annoying. I built VMS on a single logical disk
> (console: DYA0; VMS: DKB0) provided by the SA5300A via ORCA. Got ACU
> XE running, and can configure the remaining disks in the array, but
> can't resize/reduce the one being used for the system disk. I also
> finally got a spare universal disk, so installed that on the normal
> SCSI bus as DKA0, and made an image backup of the system disk to it.
>
> When I booted the new disk, SSL failed (hardcoded DKB0 in the
> startup!!!). Fixed that, rebooted, and management agents work but ACU
> doesn't show up. Found several more hardcoded DKB0 entries in .com and
> .dat files under WBEM, so I added logical definitions for DKB0 and
> node$DKB0 pointing to node$DKA0: (/exec/trans=(conceal,term)) and
> rebooted again.
> ACU still will not show.
>
> I booted DKB0 again to check and everything works there. So there's
> apparently something hardcoded in there that a logical won't get
> around.
>

More fun. I got ACU XE working on the scratch disk by removing and
reinstalling it, then rebooting. I was able to configure the arrays
and logical disks I needed. I then imaged the system disk back to the
new system logical drive, changed the SSL root logical back to DKB0,
and booted the array drive.

Everything except ACU XE is working. Management agents come up,
queues, networks, etc. When I try to start ACU XE it immediately
crashes with an access violation (per the log). For the heck of it I
removed/reinstalled/rebooted, same result. I then removed it again and
ran the ACUXE_CLEANUP.COM script as required for an upgrade (as opposed
to a fresh installation) and rebooted again. Same thing; ACUXE crashes
with an access violation as soon as I start it.

Any thoughts? The logged info is:

 .
 .
 .
Remote connection enabled!
%SYSTEM-F-ACCVIO, access violation, reason mask=00, virtual
address=0000000000000000, PC=0000000001F15978, PS=0000001B
%TRACE-F-TRACEBACK, symbolic stack dump follows
  image       module    routine            line     rel PC           abs PC
 LIBCPQIMGR  CXX$EMPTY25LSTP15DVCATTR2841BNI  Empty
                                          7578 0000000000000068 0000000001F15978
 LIBCPQIMGR  CXX$DT25LSTP15DVCATTRBTX048B15V  ~List
                                          7194 0000000000000038 0000000001F15828
 LIBCPQIMGR  CPATH     ~CPath             9677 00000000000011A4 0000000001EFC1B4
 LIBCPQIMGR  CARRAYHOSTCONTROLLER  IMBX_GetObjectInfo
                                         10666 0000000000000954 0000000001F0B774
 LLPI                                        0 0000000000039B44 0000000001E71B44
 LLPI                                        0 0000000000039E94 0000000001E71E94
 ACUXEBIN                                    0 0000000000931AA4 0000000000931AA4
 ACUXEBIN                                    0 0000000000931B30 0000000000931B30
 ACUXEBIN                                    0 0000000000931C38 0000000000931C38
 ACUXEBIN                                    0 0000000000930A1C 0000000000930A1C
 ACUXEBIN                                    0 0000000000930324 0000000000930324
 ACUXEBIN                                    0 0000000000930640 0000000000930640
 ACUXEBIN                                    0 00000000008E9FE8 00000000008E9FE8
 ACUXEBIN                                    0 00000000008E7AA0 00000000008E7AA0
 ACUXEBIN                                    0 00000000008E1470 00000000008E1470
 ACUXEBIN                                    0 0000000000574AEC 0000000000574AEC
 ACUXEBIN                                    0 0000000000574580 0000000000574580
 PTHREAD$RTL                                 0 000000000005601C 000000007BD3E01C
 PTHREAD$RTL                                 0 0000000000042E10 000000007BD2AE10
                                             0 0000000000000000 0000000000000000
 PTHREAD$RTL                                                ?        ?
                                             0 FFFFFFFF80269ED4 FFFFFFFF80269ED4

------------------------------

Date: Mon, 05 Jun 2006 18:39:58 GMT
From: VAXman- @SendSpamHere.ORG
Subject: Re: SimH 3.6-0
Message-ID: <00A56C36.6203DD26@SendSpamHere.ORG>

In article <mno882pi9ggabkn92b5m9si1babs69uijm@4ax.com>, Bob Supnik <bob.supnik@sicortex.nospam.com> writes:
>
> SimH 3.6-0 has been released.
>
> This release includes:
>
> - first release of the IBM 7094 simulator, with IBSYS demonstration
>   software
> - major update to the VAX-11/780 simulator, fixing more than 30 bugs
> - new feature to limit simulated tape capacity to a specific size
> - numerous other bug fixes and improvements
>
> SimH is available at http://simh.trailing-edge.com
>
> /Bob Supnik

A Mac OS X version of SimH (VAX) with Network capability,
A Mac OS X version of SimH (VAX) with Network capability,
My Kingdom for
A Mac OS X version of SimH (VAX) with Network capability!

--
VAXman- A Bored Certified VMS Kernel Mode Hacker   VAXman(at)TMESIS(dot)COM

  "Well my son, life is like a beanstalk, isn't it?"

------------------------------

Date: 5 Jun 2006 15:45:06 -0700
From: "David B Sneddon" <dbsneddon@bigpond.com>
Subject: Re: SimH 3.6-0
Message-ID: <1149547506.418151.86670@u72g2000cwu.googlegroups.com>

VAXman-@SendSpamHere.ORG wrote:
>
> A Mac OS X version of SimH (VAX) with Network capability,
> A Mac OS X version of SimH (VAX) with Network capability,
> My Kingdom for
> A Mac OS X version of SimH (VAX) with Network capability!
>
> --
> VAXman- A Bored Certified VMS Kernel Mode Hacker   VAXman(at)TMESIS(dot)COM

I finally got V3.5-2 going with network capability.
What problem are you having?

Dave

------------------------------

Date: Mon, 05 Jun 2006 23:32:53 GMT
From: VAXman- @SendSpamHere.ORG
Subject: Re: SimH 3.6-0
Message-ID: <00A56C5F.4D3DA2AC@SendSpamHere.ORG>

In article <1149547506.418151.86670@u72g2000cwu.googlegroups.com>, "David B Sneddon" <dbsneddon@bigpond.com> writes:
>
> VAXman-@SendSpamHere.ORG wrote:
>>
>> A Mac OS X version of SimH (VAX) with Network capability,
>> A Mac OS X version of SimH (VAX) with Network capability,
>> My Kingdom for
>> A Mac OS X version of SimH (VAX) with Network capability!
>>
>> --
>> VAXman- A Bored Certified VMS Kernel Mode Hacker   VAXman(at)TMESIS(dot)COM
>
> I finally got V3.5-2 going with network capability.
> What problem are you having?

I gave up on the early versions because it was more a curio than anything
else. Without network, it was quite useless.

Are you willing to share your .EXEs?

Power and Intel OS X would be great if you have them.

--
VAXman- A Bored Certified VMS Kernel Mode Hacker   VAXman(at)TMESIS(dot)COM

  "Well my son, life is like a beanstalk, isn't it?"

------------------------------

Date: 5 Jun 2006 18:56:21 -0700
From: "David B Sneddon" <dbsneddon@bigpond.com>
Subject: Re: SimH 3.6-0
Message-ID: <1149558981.853600.285980@f6g2000cwb.googlegroups.com>

VAXman-@SendSpamHere.ORG wrote:
>
> I gave up on the early versions because it was more a curio than anything
> else. Without network, it was quite useless.
>
> Are you willing to share your .EXEs?

I will zip up the executables and let you know where you can get them...

>
> Power and Intel OS X would be great if you have them.

Sorry, only Power...

> --
> VAXman- A Bored Certified VMS Kernel Mode Hacker   VAXman(at)TMESIS(dot)COM
>
>   "Well my son, life is like a beanstalk, isn't it?"

Dave

------------------------------

Date: Tue, 06 Jun 2006 00:09:43 -0500
From: Dan Foster <usenet@evilphb.org>
Subject: Re: SimH 3.6-0
Message-ID: <slrne8a3gn.3uh.usenet@zappy.catbert.org>

In article <1149558981.853600.285980@f6g2000cwb.googlegroups.com>, David B Sneddon <dbsneddon@bigpond.com> wrote:
>
> VAXman-@SendSpamHere.ORG wrote:
>>
>> I gave up on the early versions because it was more a curio than anything
>> else.
>> Without network, it was quite useless.
>>
>> Are you willing to share your .EXEs?
>
> I will zip up the executables and let you know where
> you can get them...

How did you build them?

-Dan

------------------------------

Date: 5 Jun 2006 22:15:21 -0700
From: "David B Sneddon" <dbsneddon@bigpond.com>
Subject: Re: SimH 3.6-0
Message-ID: <1149570921.173951.284600@y43g2000cwc.googlegroups.com>

Dan Foster wrote:
>
> How did you build them?
>
> -Dan

I first installed libpcap 0.9.4, then I followed
the instructions for building SimH, remembering
to specify the option to include networking.
I don't recall the exact commands, but whatever
was in the instructions worked.

Dave

------------------------------

Date: Tue, 06 Jun 2006 00:33:55 GMT
From: Rick Jones <rick.jones2@hp.com>
Subject: Re: Unix runs faster, maybe
Message-ID: <T34hg.1525$ws1.108@news.cpqcorp.net>

Bill Todd <billtodd@metrocast.net> wrote:
> I'll ask you to provide specific examples, especially in the case of
> Unix: my impression is that their defaults are rather
> performance-oriented (to the degree that 'significant' enhancement
> might be somewhat difficult to attain with only 'basic tweaking').

Just about any SPECweb submittal will have lots of tweaks. SPECsfs
may have quite a few too (it has been a while since I've looked
closely). Knuth only knows what one could find in a TPC FDR.
rick jones
--
The computing industry isn't as much a game of "Follow The Leader" as
it is one of "Ring Around the Rosy" or perhaps "Duck Duck Goose."
                                                    - Rick Jones
these opinions are mine, all mine; HP might not want them anyway... :)
feel free to post, OR email to rick.jones2 in hp.com but NOT BOTH...

------------------------------

Date: Mon, 05 Jun 2006 20:41:55 -0400
From: Bill Todd <billtodd@metrocast.net>
Subject: Re: Unix runs faster, maybe
Message-ID: <B4SdnfBRSZDJUhnZnZ2dnUVZ_oqdnZ2d@metrocastcablevision.com>

Rick Jones wrote:
> Bill Todd <billtodd@metrocast.net> wrote:
>> I'll ask you to provide specific examples, especially in the case of
>> Unix: my impression is that their defaults are rather
>> performance-oriented (to the degree that 'significant' enhancement
>> might be somewhat difficult to attain with only 'basic tweaking').
>
> Just about any SPECweb submittal will have lots of tweaks. SPECsfs
> may have quite a few too (it has been a while since I've looked
> closely).
> Knuth only knows what one could find in a TPC FDR.

Er, Kerry's post was referring to 'basic tweaks' applied in typical
production environments, not to vendor benchmark submissions massaged to
within an inch of their lives for a single purpose.

- bill

------------------------------

Date: Mon, 5 Jun 2006 17:18:30 -0400
From: "Main, Kerry" <Kerry.Main@hp.com>
Subject: RE: Unix runs faster, maybe (was: Re: Educating potential VMS users)
Message-ID: <FA60F2C4B72A584DBFC6091F6A2B86840150AAA7@tayexc19.americas.cpqcorp.net>

> -----Original Message-----
> From: Bill Todd [mailto:billtodd@metrocast.net]
> Sent: June 4, 2006 5:42 PM
> To: Info-VAX@Mvb.Saic.Com
> Subject: Re: Unix runs faster, maybe (was: Re: Educating
> potential VMS users)
>
> Main, Kerry wrote:
>
> ...
>
> > Point is that when both UNIX and VMS are set up and tuned
> > appropriately on the same HW, there is not much difference in
> > performance.
>
> The point, of course, is that while virtually *any*
> half-competently-implemented OS will be
> performance-competitive with any
> other *when set up and tuned*, *most* use of OSs is in installations
> which are *not* 'set up and tuned appropriately', but rather
> either just
> booted up and run or, at most, given some very general tweaking that
> won't compromise any of the many applications they're likely to run.
>
> And under those circumstances, VMS comes out rather poorly
> performance-wise when compared to Unix, while providing
> rather limited
> increased reliability of a statistical sort - e.g., that it's *less
> likely* by X% that the data you naively thought you wrote won't be on
> the disk after, say, a power failure, rather than of the far more
> significant variety that if the data you naively thought you
> wrote isn't
> on the disk after a power failure, you should file a bug report.
>
> 'A bit better
> reliability with a lot slower performance'
> really isn't a
> *default* trade-off one can legitimately boast about, I'm
> afraid: one
> should instead pick something closer to one extreme or the
> other as the
> default so that not *all* customers need tweak the system
> significantly
> to get something reasonable - especially since applications
> with special
> needs have the ability to attain them *regardless* of how the system
> defaults are set up, so really don't derive much aid from half-way
> default measures anyway.
>
> The bottom line is that Unix provides significantly better
> file-system
> performance out of the box than VMS does, and this is an entirely
> legitimate knock on VMS - not so much on VMS's capabilities
> per se as in
> its choices of defaults, but the effect on real-world
> perceptions is not
> markedly different.
>
> - bill
>

Well, my experience is that, regardless of what platform is used, most
production shops in medium-to-large companies will always tune their
environment to some basic level.

Even UNIX and Windows server performance can be significantly enhanced
with some basic tuning as compared to "out-of-the-box" parameters.

I am sure most UNIX admins would not simply install their UNIX OS and
then start loading apps for testing without adjusting OS and/or kernel
parameters to make the target apps work better.

Having stated this, the performance numbers for the majority (70-80%) of
Windows servers today are 10% to 15% busy at peak time, and the majority of
UNIX servers are only slightly better at 20-30% busy at peak times.

CIOs have recently become a tad disenchanted with all vendor
performance discussions as, in their view, it is a case of "oh great,
with this new xyz server, I can have even more wasted cycles than I
currently have ..."

Course, that's not the case for all environments, but it's
certainlyF something that is becoming a big factor these days ... And by the way,E one of the reasons why virtualization is such a hot topic with CIO's.    Regards   
Kerry Main
Senior Consultant
HP Services Canada
Voice: 613-592-4660
Fax: 613-591-4477
kerryDOTmainAThpDOTcom
(remove the DOT's and AT)

OpenVMS - the secure, multi-site OS that just works.

------------------------------

Date: Tue, 06 Jun 2006 00:12:07 GMT
From: John Santos <john.santos@post.harvard.edu>
Subject: RE: Unix runs faster, maybe (was: Re: Educating potential VMS users)
Message-ID: <MPG.1eee926019d500a998971b@news.bellatlantic.net>

In article <FA60F2C4B72A584DBFC6091F6A2B86840150AAA7@tayexc19.americas.cpqcorp.net>,
Kerry.Main@hp.com says...
>
> > -----Original Message-----
> > From: Bill Todd [mailto:billtodd@metrocast.net]
> > Sent: June 4, 2006 5:42 PM
> > [...]
>
> Well, my experience is that, regardless of what platform is used, most
> production shops in med-large environments will always tune their
> environment to some basic level.
>
> Even UNIX and Windows server performance can be significantly enhanced
> with some basic tuning as compared to "out-of-the-box" parameters.
>
> I am sure most UNIX admins would not simply install their UNIX OS and
> then start loading app's for testing without adjusting OS and/or kernel
> parameters to make the target apps work better.
>
> Having stated this, the performance numbers for the majority (70-80%) of
> Windows servers today are 10% to 15% busy in peak time, and the majority
> of UNIX servers are only slightly better at 20-30% busy in peak times.
>

I think you have shifted gears here to talking about CPU performance.
I think Bill (and this thread in general) was referring to I/O.

Now if you can easily trade some of those free CPU cycles for better I/O
performance, then you're off to college.  (This is an old expression,
meaning about the same thing as "now we're cooking with gas", according
to my 6-year-old neighbor, who also claims to have made it up.)

> CIO's have recently become a tad disenchanted with all vendor
> performance discussions as in their view, it is a case of "oh great,
> with this new xyz server, I can have even more wasted cycles than I
> currently have ..".
>
> Course, that's not the case for all environments, but it is certainly
> something that is becoming a big factor these days ...
> And by the way, it is
> one of the reasons why virtualization is such a hot topic with CIO's.
>
> Regards
>
> Kerry Main
> [...]
>
> OpenVMS - the secure, multi-site OS that just works.

--
John

------------------------------

Date: Mon, 05 Jun 2006 20:09:24 -0400
From: Bill Todd <billtodd@metrocast.net>
Subject: Re: Unix runs faster, maybe (was: Re: Educating potential VMS users)
Message-ID: <BaGdncHz_qn0WhnZnZ2dnUVZ_vidnZ2d@metrocastcablevision.com>

Main, Kerry wrote:
>> -----Original Message-----
>> From: Bill Todd [mailto:billtodd@metrocast.net]
>> Sent: June 4, 2006 5:42 PM
>> [...]
>
> Well, my experience is that, regardless of what platform is used, most
> production shops in med-large environments will always tune their
> environment to some basic level.

1)  It's not clear exactly how that statement differs from what I said
just above ("some very general tweaking that won't compromise any of the
many applications they're likely to run").  Tuning an entire server for
one specific application may occur more frequently in Windows
environments (where admins are frequently scared to run more than a
single application on the server), but far less so in the Unix and VMS
environments under discussion here.

2)  One might also ask just how much of that experience was focused
solely on VMS, which *definitely* requires performance tuning when
performance is required, to a significantly greater degree than Unix (or
even Windows, God forbid).

> Even UNIX and Windows server performance can be significantly enhanced
> with some basic tuning as compared to "out-of-the-box" parameters.

I'll ask you to provide specific examples, especially in the case of
Unix:  my impression is that their defaults are rather
performance-oriented (to the degree that 'significant' enhancement might
be somewhat difficult to attain with only 'basic tweaking').

> I am sure most UNIX admins would not simply install their UNIX OS and
> then start loading app's for testing without adjusting OS and/or kernel
> parameters to make the target apps work better.

I'm not so sure (especially by comparison with what may typically be
required on VMS, given that some of its defaults - e.g., RMS buffer
sizes - aren't particularly desirable for *any* situations, having been
established back when RAM was added in units of KB rather than MB or
GB), but let's let people with more experience in that area than either
of us has chime in here.

> Having stated this, the performance numbers for the majority (70-80%) of
> Windows servers today are 10% to 15% busy in peak time, and the majority
> of UNIX servers are only slightly better at 20-30% busy in peak times.

Well, duh:  what part of the fact that even with well-optimized file
access waiting for disks *still* dominates a lot of workloads has
managed to escape you?
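[For readers following along: the RMS defaults mentioned above can be
examined and raised from DCL with SET RMS_DEFAULT.  A minimal sketch of
that sort of 'basic tweaking' - the counts shown are purely illustrative,
and appropriate values depend on the workload and available memory:]

```dcl
$! Display the current process and system RMS defaults
$ SHOW RMS_DEFAULT
$!
$! Raise the sequential-file multiblock and multibuffer counts for
$! this process only (values illustrative, not a recommendation):
$ SET RMS_DEFAULT /BLOCK_COUNT=32 /BUFFER_COUNT=8
$!
$! The same qualifiers with /SYSTEM change the system-wide defaults
$! (requires suitable privilege):
$ SET RMS_DEFAULT /SYSTEM /BLOCK_COUNT=32 /BUFFER_COUNT=8
```

[Applications that set their own FAB/RAB options override these defaults,
which is why well-tuned applications are less sensitive to them.]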
For quite a few years now it has become
difficult to purchase a processor *weak* enough to be challenged by such
workloads (save in situations where they scale up sufficiently to allow
that single processor or small MP system to service many, many
actively-working disks at once).

Note that in such environments having an efficient file system *still*
makes a significant difference to system throughput, though - even if
the processor is still loafing a lot of the time.  And that advantage
remains even under the virtual consolidation scenarios which you're
touting (though diverging from the topic under discussion by doing so).

- bill

------------------------------

Date: Mon, 5 Jun 2006 23:58:49 -0400
From: "Main, Kerry" <Kerry.Main@hp.com>
Subject: RE: Unix runs faster, maybe (was: Re: Educating potential VMS users)
Message-ID: <FA60F2C4B72A584DBFC6091F6A2B86840150AB4B@tayexc19.americas.cpqcorp.net>

> -----Original Message-----
> From: Bill Todd [mailto:billtodd@metrocast.net]
> Sent: June 5, 2006 8:09 PM
> To: Info-VAX@Mvb.Saic.Com
> Subject: Re: Unix runs faster, maybe (was: Re: Educating
> potential VMS users)
>
> [...]
>
> Well, duh:  what part of the fact that even with well-optimized file
> access waiting for disks *still* dominates a lot of workloads has
> managed to escape you?  For quite a few years now it has become
> difficult to purchase a processor *weak* enough to be challenged by such
> workloads (save in situations where they scale up sufficiently to allow
> that single processor or small MP system to service many, many
> actively-working disks at once).
>
> Note that in such environments having an efficient file system *still*
> makes a significant difference to system throughput, though - even if
> the processor is still loafing a lot of the time.  And that advantage
> remains even under the virtual consolidation scenarios which you're
> touting (though diverging from the topic under discussion by doing so).
>
> - bill

The environments I am talking about in the majority of Windows/UNIX
servers today are not CPU heavy, but disk IO heavy.  They are just not
utilized that much at all in peak periods - period.

Part of this might be attributed to the one-app, one-server philosophy,
as repeated refreshes make for a much faster server at lower costs, but
if the workload does not increase that much, then the overall
utilization goes down.  Multiply this by hundreds of x86 servers in many
environments and you have one of the basic reasons why CIO's are so
concerned about their Windows environments.

Repeated one-app, one-server refreshes will mean even lower utilization
rates in the future.

And btw, a server with low CPU utilization but heavy IO makes for a
poor candidate for virtualization.  Remember that virtualization adds
another level of overhead for both IO and CPU loads.  It is a trade-off -
not everything about virtualization is a good thing.

Regards

Kerry Main
Senior Consultant
HP Services Canada
Voice: 613-592-4660
Fax: 613-591-4477
kerryDOTmainAThpDOTcom
(remove the DOT's and AT)

OpenVMS - the secure, multi-site OS that just works.

------------------------------

End of INFO-VAX 2006.312
************************