INFO-VAX	Mon, 09 Jan 2006	Volume 2006 : Issue 17

Contents:
  Re: Adopting 3 stray Vaxen :-)
  Re: DAT drive compatability
  Re: DESTA Memory hog
  Re: DESTA Memory hog
  Re: DVNETRTG (Phase IV routing) Hobbyist license?
  Free: AlphaServer 2100 5/250
  Re: HSZTerm under newer VMS versions
  Java 1.4.2 installation problem
  Re: MCI is not more. Curly gets 40 million to kill yet another company
  Re: Need model number for Infoserver-1000
  Re: old EZ disks with latest VMS?
  Re: R400X: converting DSSI shelves to SCSI
  Re: R400X: converting DSSI shelves to SCSI
  Re: R400X: converting DSSI shelves to SCSI
  Re: R400X: converting DSSI shelves to SCSI
  Re: Round internal SCSI cables ??
  UNIX shm* functions for VMS?
  Re: VLC & SCSI Drive Help Needed
  Re: WVNETcluster uptime reaches 10 years...
  Re: WVNETcluster uptime reaches 10 years...

----------------------------------------------------------------------

Date: Sun, 08 Jan 2006 14:21:03 -0600
From: Bob Blunt <RobertDOTblunt@digitalDOTcom>
Subject: Re: Adopting 3 stray Vaxen :-)
Message-ID: <-rqdnQMidK0P7lzenZ2dnUVZ_s2dnZ2d@comcast.com>

JF Mezei wrote:
> Yesterday was a big day. I adopted 3 stray VAXes, a 4000-500a (which
> says it is a -600a when booted), 2 4000-200s, and a DSSI disk cabinet.
>
> However, I didn't get the keys for the cabinets... I'll have to hire
> a locksmith (or just use my screwdriver to turn the locking knob :-)
>
> At one point, all 4 boxes were on a snowy sidewalk in downtown
> Montreal, awaiting transport, and it was quite interesting to watch
> the faces of people as they walked past them. One old lady asked me
> what these boxes were.
>
> Basically 3 BA440s, and 1 younger/smaller sibling, the BA215 with
> only 4 Q-bus slots. To lift one of these into a minivan, you need 2
> normal people. But all 4 fit in.
> Fitting a 5th one would have been quite a challenge, so I guess I was
> lucky.
>
> While from the outside they look "plastic", these guys are still
> solidly built with a heavy metal cage. And hey! they were made in
> Canada (Kanata).  Certainly quite a big step since the era of
> MicroVAX IIs in terms of cabinet design, but still quite some ways
> from current Dell 1U rackmountable units. The Dell boxes also show
> the shape of things to come: servers won't have expansion cards,
> they'll just use gigabit ethernet to talk to other devices.
>
> And since it was very cold yesterday, it was perhaps the first time
> in my life where I realised I really had to wait for the boxes to
> warm back up before doing anything with them. (Bringing cold
> equipment indoors causes humidity to condense on the cold machine.)
>
> Ok, this is where "hobbyist" really differs from "business". What's
> the first thing one does with a new toy? You take it apart, blow the
> dust out (having a compressor really helps!), wash the cabinet, check
> out the cabinet design etc., and put it back together. So far, I've
> only done this with the 4000-500A.
>
> One of the 4000-200s has the same config as the ones I had handled
> in a business environment 13 years ago (KLESI and a DPV11 for
> synchronous comms).  But you never get to play with such beasts in a
> mission critical environment!
>
> About interfacing with the all mighty MicroVAX II:
>
> The 4000-500/600 definitely cannot interface. The memory boards and
> CPU have very different connectors from Q-bus.
>
> However, for the 4000-200, the memory and CPU boards appear
> compatible (at least physically) with the Q-bus boards on the MVII.
>
> I am not entirely sure yet how I will integrate all of this. However,
> as time progresses, the idea of disposing of my 18 year old all
> mighty MicroVAX II is no longer so harshly rejected.
> I would likely re-use the big Q5 cabinet as a 19" rack to hold those
> machines. (You can remove the sides of the Q5 cabinet to make it into
> a slimmer cabinet.)
>
> However, this leaves me with the issue of how to power and connect my
> SCSI drives, as well as how to hold the serial ports.
>
> What is nice with this arrangement is that I can play a lot on this
> to see what can be done without affecting my running apps/web server
> etc., since once you're above 2 nodes on a cluster it becomes a lot
> easier to manage quorum. And if I plan this right, I may be able to
> do a full transplant to the new systems without any downtime.
>
> And I'll have to find names for each disk and new node... that's the
> hard part!
>
> Question: could I use the Q-bus expander card that links the two
> BA23s in the Q5 cabinet of the MVII, change the cable for a longer
> one, and use a BA23 as a Q-bus expansion where I could have my SCSI
> drives and SCSI card etc.? How long a ribbon cable could I have?
>
> The problem with those BA440s is that they don't provide standard
> power connectors for disks, CDROM etc.

12-17119-01 is the key's part number.  It does look very similar to
the blue ones that were used with the MDS01, but I've never tried to
exchange them.  The same key is used on the VAX 6000 console.

The Q-bus expansion should probably work between the BA4xx and BA23
if you want to give it a whirl.  Naturally, you'll be missing the
nicer and more (physically) stable S-Box bulkhead type, but it should
work.  I've never seen them with ribbon cables, honestly.  All the
Q-bus expanders I've seen have used fairly thick, sometimes long,
round cables with angled dense connectors at each end.  Usually the
shortest ones have been just long enough to connect from the far left
of one Q-bus to the far right of the next Q-bus.

For Q-bus to CD or tape, get a KZQSA-SA.  A word of caution: these
were not REALLY intended for SCSI disk access.  You can install SCSI
devices into the drive bays of the BA4xx chassis, but note that there
are definite differences between the equivalent DSSI mounting
hardware and the SCSI mounting hardware.  You'll need to run a short
SCSI cable from the KZQSA to the SCSI connector on the left of the
BA4xx cabinet to "enable" the use of those SCSI-specific backplane
connectors (which look deceptively like their DSSI brethren).  I've
installed SCSI tapes into the proper SCSI mounting bracketry, so I
expect that you could do something cute with a supported SCSI CDROM
(or two) like the RRD42, RRD43 or that genre of CD reader.
If you're looking for old and interesting, you could use a KRQ50-SA
that has S-box handles.  You'd have to find the RRD50 CD reader or the
right cable so you could connect to an RRD40 instead.  You might also
want to look for either a HSD05-JA or HSD10-JA.  I believe there was
one of those that allowed you to install the HSD directly into the
Q-bus backplane for power so you could connect its SCSI into those
backplane SCSI drive bays somehow.  I know they were salable items,
but I've never been able to scrounge one to actually lay hands on and
get one working.

The side vents on the Q5 cabinet aren't as important with the
top-to-bottom flowing BA215 or BA4xx cabinet, but mounting the BA215
might be "interesting."

bob

------------------------------

Date: Sun, 08 Jan 2006 16:12:18 -0600
From: Bob Blunt <RobertDOTblunt@digitalDOTcom>
Subject: Re: DAT drive compatability
Message-ID: <TKadnU80lvo4EFzeRVn-iQ@comcast.com>

Bob Koehler wrote:
>    After being out of maintenance for a while, we had HP come in and
>    repair some problems prior to going back on maintenance.
>
>    We had a DEC TLZ06 in a BA350 and I was using /media=compaction to
>    store my backups.  Searching for DEC TLZ06 I find references that
>    it was a 2/4GB drive accepting 120m tapes.
>
>    The replacement drive shows up (via show dev/full) as an "ARCHIVE
>    Python 28454-XXX", and support for compaction is not listed.
>
>    Searches have found references that it's a 2.5GB drive accepting
>    90m tapes.
>
>    Is it true that I've lost 30m and compaction?  (IIRC TLZ04 were
>    60m drives and could be damaged by using a 90m or 120m tape.)
>
>    Will VMS quietly ignore the /media=compaction in my backup
>    scripts, or must I change them?  (My tests seem to indicate no
>    errors when I use /media=compaction; I've no idea what the drive
>    is really doing.)

First, the TLZ06 was a fairly early DAT drive.
Granted, but it SHOULD be capable of writing 2GB native on 90m DDS-1
cartridges and 4GB compressed.  If you regularly were using 120m
carts, I'm surprised they were working correctly 100% of the time.

So you're saying that you had the TLZ06 replaced by HP?  If the drive
isn't working properly and you've paid for it to be replaced and NOW
it's under support, call 'em back.  If you had that TLZ06 replaced
with a tape drive of the same part number and it's not showing itself
as a TLZ06 like it should, the drive is either an incorrect
replacement or its switches are not properly set up inside the SBB
container.  DON'T open it to check them; call HP and 'splain what's
up.  You replaced a TLZ06-VA; the replacement NEEDS to report itself
as a TLZ06-VA.  If you can't get one that works correctly, ask your
local unit manager to replace it with something that does work.  But
at this point it would be imprudent to say if you've lost your data on
tape or not.  The only way to check is with a properly working
replacement.

------------------------------

Date: Sun, 08 Jan 2006 14:45:07 -0600
From: Bob Blunt <RobertDOTblunt@digitalDOTcom>
Subject: Re: DESTA Memory hog
Message-ID: <HIednecKZK6r5FzeRVn-gA@comcast.com>

Dr. Dweeb wrote:

>> comp.os.vms@hotmail.com wrote:
>>
>>> From a DS25 4GB machine.
>>>
>>> $ pipe show sys | sear sys$input desta
>>> 0000045D DESTA Director  HIB      6  1729374   0 00:11:02.78    189857
 >>>34869 M >>>  >>>This seems a tad excessive. >> > Thanks for that info.  > N > Actually, I am the DBA not the SYSADMIN, so this is not my area, but I care M > about memory, bigtime, because the machines run business critical database  K > applications.  If these machines stop, then the  clock ticks very, very   > fast.  > J > Luckily, these are not clusters - so the event you cite will likely not  > occur. > N > I am actually clueless as to what WEBES and co. does or if it is needed.  I  > just want the memory back :-)  >  > Dr. Dweeb   ? WEBES and the DESTA Director are tools that provide active and  D predictive error analysis on the fly for the newer Alpha (generally H speaking DS/ES/GS systems) and IA64 systems.  DECevent was the previous D solution for most "older" Alpha systems and also had a "predictive" H active analysis mode.  Before that we used ANALYZE/ERROR and tools like  SIM and SPEAR.  D Suggest to your system manager that the sizes of ERRLOG.SYS be kept ' small.  Maybe try renaming them weekly.    bob    ------------------------------  % Date: Sun, 08 Jan 2006 22:06:21 -0600 2 From: David J Dachtera <djesys.nospam@comcast.net> Subject: Re: DESTA Memory hog + Message-ID: <43C1E13D.75D643C6@comcast.net>    Bob Blunt wrote: >  > WEBES and the DESTA Director  5 ..., both Java programs and, therefore, CPU hogs, ...   # > are tools that provide active and E > predictive error analysis on the fly for the newer Alpha (generally I > speaking DS/ES/GS systems) and IA64 systems.  DECevent was the previous E > solution for most "older" Alpha systems and also had a "predictive" I > active analysis mode.  Before that we used ANALYZE/ERROR and tools like  > SIM and SPEAR. > E > Suggest to your system manager that the sizes of ERRLOG.SYS be kept ) > small.  Maybe try renaming them weekly.    Even better: daily.   E It is a known limitation of DESTA and WEBES that they may choke on an F ERRLOG.SYS bigger than 1MB (2048 blocks). 
If you can spare the machine cycles, you may even want to monitor it
and start a new one when the size approaches 1MB.

--
David J Dachtera
dba DJE Systems
http://www.djesys.com/

Unofficial OpenVMS Hobbyist Support Page:
http://www.djesys.com/vms/support/

Unofficial Affordable OpenVMS Home Page:
http://www.djesys.com/vms/soho/

Unofficial OpenVMS-IA32 Home Page:
http://www.djesys.com/vms/ia32/

Coming soon:
Unofficial OpenVMS Marketing Home Page

------------------------------

Date: Sun, 08 Jan 2006 13:26:01 -0600
From: Bob Blunt <RobertDOTblunt@digitalDOTcom>
Subject: Re: DVNETRTG (Phase IV routing) Hobbyist license?
Message-ID: <GLOdnXzrv4Mx-1zeRVn-iQ@comcast.com>

Bob Armstrong wrote:
>> As an alternative, if all the nodes are connected on the same or to
>> a bridged LAN and you have just one circuit and one line configured
>> ...
>
>   Thanks, but that doesn't help in this case.  Bridging
> geographically separated LANs over the Internet requires specialized
> routers (e.g. a Cisco - not exactly rare, but not something every
> hobbyist has).
>
>> You need either the DECnet routing license or the appropriate NAS
>> license to enable it,
>
>   You had mentioned the possibility of adding the NAS license to the
> Alpha hobbyist PAKs, but we'd actually want to add it to both VAX and
> Alpha.
>
> Thanks again,
> Bob

Since inception, full-function DECnet Phase IV routing was NOT
possible on Alpha, simply put.  The intent was to only have one Phase
IV line and circuit pair.  Some folk have had trouble when they've
taken the defaults and allowed the configuration command procedure for
Phase IV to enable ALL their lines and circuits for Phase IV use.  I
generally recommend that you have just one line and circuit configured
AND enabled on Alpha at a time.

Full-function routing on VAX is still viable with the correct license.
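[Editor's note: to see which lines and circuits the configuration
procedure actually turned on, the standard Phase IV NCP show commands
are enough; the output of course varies by system.]

```dcl
$ MCR NCP
NCP> SHOW KNOWN LINES               ! every Phase IV line defined
NCP> SHOW KNOWN CIRCUITS            ! and the circuits riding on them
NCP> SHOW EXECUTOR CHARACTERISTICS  ! node type: routing vs nonrouting
NCP> EXIT
```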
 G This could be enabled with either DVNETRTG or one of the NAS licenses.  G Having the correct license, either one, won't make enable the Alpha to  C use full-function routing.  All you can do is, as mentioned, is to   enable cluster aliasing.   bob    ------------------------------  # Date: Sun, 08 Jan 2006 22:29:52 GMT 2 From: Scott Squires <squires@zargon.hobbesnet.org>% Subject: Free: AlphaServer 2100 5/250 1 Message-ID: <umf893-pm8.ln1@zargon.hobbesnet.org>   C I have moved into a studio apartment and no longer have room for my G AlphaServer.  It is very difficult to part with, but I don't see a time F in the near future that I will be able to use it.  I hope someone will have a better home for it.  = If you are interested, email me: squires@zargon.hobbesnet.org    Location: Milwaukee, WI  Delivery: Come pick it up , Photo:    http://www.hobbesnet.org/spiff.jpg   AlphaServer 2100 5/250   * Processors/     o CPU 0 - Alpha 21164 250 MHz - Board B2040 /     o CPU 1 - Alpha 21164 250 MHz - Board B2040      o CPU 2 - Not installed      o CPU 3 - Not installed 
  * Memory
      o MEM 0 - 128 MB
      o MEM 1 - Not installed
      o MEM 2 - Not installed
      o MEM 3 - Not installed
  * Video board
  * StorageWorks
      o Bank 0
        + ID 0 - 2.1 GB
        + ID 1 - 2.1 GB
      o Bank 1
        + Not installed
  * Power
      o PSU 0 - 600 Watts
      o PSU 1 - Not installed

I'll provide a DEC keyboard and a DEC 14" color monitor.  I also have
an extra terminal I can part with, if it is desired.  I think it is a
VT420, but I could be wrong on the model number.

For those unfamiliar with the 2100, it is very large and very heavy.
However, its power consumption is comparable to a modern PC.

Regards,
Scott

------------------------------

Date: Sun, 08 Jan 2006 22:12:53 -0600
From: David J Dachtera <djesys.nospam@comcast.net>
Subject: Re: HSZTerm under newer VMS versions
Message-ID: <43C1E2C5.F8C0C186@comcast.net>

Malcolm Dunnett wrote:
> [snip]
>    Is there a place I can get HSZterm from?

See http://www.djesys.com/freeware/vms/

> Does it work with an HSG80?

Yes. Caveat: Unsupported.

--
David J Dachtera
dba DJE Systems
http://www.djesys.com/

Unofficial OpenVMS Hobbyist Support Page:
http://www.djesys.com/vms/support/

Unofficial Affordable OpenVMS Home Page:
http://www.djesys.com/vms/soho/

Unofficial OpenVMS-IA32 Home Page:
http://www.djesys.com/vms/ia32/

Coming soon:
Unofficial OpenVMS Marketing Home Page

------------------------------

Date: Sun, 08 Jan 2006 21:49:05 -0500
From: "Stanley F. Quayle" <squayle@insight.rr.com>
Subject: Java 1.4.2 installation problem
Message-ID: <43C188D1.5734.106AA896@localhost>

I have just installed the latest Java from the HP web site (the
142p-2 kit) and always get the following error:

Case 1
------
$ @SYS$COMMON:[SYSMGR]JAVA$142_SETUP FAST
$ java -version
Error: no `classic' JVM at `java$jvm_shr'.

Case 2
------
$ @SYS$COMMON:[SYSMGR]JAVA$142_SETUP CLASSIC
$ java -version
Error: no `classic' JVM at `java$jvm_shr'.

Any suggestions?
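[Editor's note: the error means the java foreign command could not
resolve the JVM shareable image.  One first diagnostic step might be
the following sketch; the exact logical names and directory layout the
1.4.2 kit defines are assumptions here, not taken from the posting.]

```dcl
$ SHOW LOGICAL JAVA$*        ! list what the setup procedure defined
$ SHOW LOGICAL JAVA$JVM_SHR  ! the image the error says is missing
$ DIRECTORY SYS$COMMON:[000000...]JAVA$JVM_SHR.EXE  ! does it exist?
```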
--Stan Quayle
Quayle Consulting Inc.
-----------
Stanley F. Quayle, P.E.  N8SQ  +1 614-868-1363
8572 North Spring Ct., Pickerington, OH  43147  USA
stan-at-stanq-dot-com    http://www.stanq.com
"OpenVMS, when downtime is not an option"

------------------------------

Date: Sun, 08 Jan 2006 15:09:09 -0500
From: JF Mezei <jfmezei.spamnot@teksavvy.com>
Subject: Re: MCI is not more. Curly gets 40 million to kill yet another company
Message-ID: <43C1715D.31612AC0@teksavvy.com>

Phillip Helbig---remove CLOTHES to reply wrote:
> I'm not saying that all of this is good for society, or the
> employees, or VMS or whatever.  I'm just saying that once you decide
> to play the game, you can't complain about the rules.

Yep. But it is wrong to state that Worldcom shareholders got their
money back. Worldcom was fully recapitalised, with a totally new stock
offering given to the creditors in exchange for wiping the debts
clean. Original shareholders have 0 shares in the new recapitalised
company. They are not happy puppies; they lost everything.

Essentially, the creditors stole the equity from the shareholders, and
this is legal under bankruptcy protection law. While Curly didn't
declare the bankruptcy, he is the one who agreed to give all of the
company to the creditors and leave shareholders out to dry.

Shareholders contend that Worldcom's bankruptcy was engineered and not
necessary. While its debt load was high, it was mostly long term debt
not due for 10-20 years. But they missed a payment, and instead of
negotiating with creditors, immediately sought bankruptcy protection
and then announced accounting irregularities in the past.
(Which don't affect your current cash position or ability to pay
creditors.)

Shareholders contend that Worldcom would have been viable and
profitable if it had recapitalised only 50% of the equity, which would
have left shareholders with 50% ownership of the company, the
creditors 50%, and the company still having a manageable debt level.
(The Capellas deal wiped the debts clean.)

Capellas did NOT act in the shareholders' best interests.

So when Verizon decides to spend megabucks to buy the corpse of
Worldcom, the original shareholders aren't the ones getting the bulk
of the money.

In the end, the ultimate responsibility goes back to the MCI
shareholders who agreed to let a virtual .com company (Worldcom) lure
them into exchanging MCI stock for Worldcom stock. Had they refused,
they would be much better off today. But short term profits often go
against long term success. And they lost everything.

Curly did the hospital equivalent of putting a nice dressing on the
wound and finding another hospital willing to take the patient and
really heal it. And let's not kid ourselves: when Worldcom emerged
from bankruptcy protection, Capellas was already talking with Verizon,
so his public statements about MCI surviving as an independent company
and revamping his network were just bullshit.

And Curly has been lucky in that the media keep on talking of him as a
"corporate rescue artist" helping companies come back from the dead.
The problem is that in the case of Compaq, he is the one who killed
it.
Just because Pfeiffer took a little longer to integrate Digital into
Compaq doesn't mean that he was that bad.

------------------------------

Date: Sun, 08 Jan 2006 13:12:26 -0600
From: Bob Blunt <RobertDOTblunt@digitalDOTcom>
Subject: Re: Need model number for Infoserver-1000
Message-ID: <2qSdnUKIyOzh_lzenZ2dnUVZ_tWdnZ2d@comcast.com>

Bob Koehler wrote:
>    I'm trying to get a hardware maintenance contract from HP.  One of
>    my systems, an Infoserver-1000, has the following model number on
>    its tag:
>
>       70-30343-03. A02
>
>    HP claims that "703034303A" is not a valid model number.
>
>    Is this a punctuation problem, or is HP looking for some other
>    number?

Look at the bottom of your Infoserver 1000.  Hopefully it should have
a sticker with TWO entries for the model number, where the 70-30343-03
A02 is the bottom one, in parentheses.  As others have stated, the
number you need will be of the format SEADx-xx.  The 70-30343-03 is
really the "option" part number of the daughter card that plugs onto
the Infoserver motherboard and provides your console, ethernet and
power connections (in the case of the 70-30343-03, this is the AUI
variant).  There are also variants of those daughter card options that
are ThinWire and one that plugs into the Infotower.

70-30343-01  SEADB, standalone ThinWire
70-30343-03  SEADC, standalone AUI
70-30343-04  Infotower

------------------------------

Date: Sun, 08 Jan 2006 15:49:05 -0600
From: Bob Blunt <RobertDOTblunt@digitalDOTcom>
Subject: Re: old EZ disks with latest VMS?
Message-ID: <pcSdnTmaorqpFVzeRVn-vQ@comcast.com>

Phillip Helbig---remove CLOTHES to reply wrote:
> In article <43B30456.6090697F@comcast.net>, David J Dachtera
> <djesys.nospam@comcast.net> writes:
>
>> Potential dumb question: is the failure rate of SSDs (Solid State
>> Disks) such that you can afford to not shadow them (since you're
>> only using them for page/swap space)?
>
> Like most stuff I have, these were picked up for free.  Presumably
> it's a matter of "less to go wrong, less to repair" and the MTBF
> should be less than for a real disk, right?  Probably, everything
> else will fail before they do.  (Probably when they fail, it will be
> the batteries, right?  Can they be (easily) replaced?)
> On the other hand, IF a page or swap disk fails, won't that hang that
> process (and, if it's an important process, potentially hang the
> entire machine)?  Although things might be different in the future,
> at the moment the machine is 500 km away, so I like to keep it
> operating in a "hands-off" mode, i.e. I don't want to rely on someone
> being there to pull a disk, reboot it or whatever.

When all this happens, are you getting ANYTHING in the errorlog?
Also, knowing what TYPE of disk it is can be very helpful.  Some of
those SSDs have ways you can poke them to find out if you're hitting
some internal limitation or problem.

For instance, I have two EZ54s that were in lab systems that can be
"tricked" into working well enough, but when pressed hard or if the
power fails, you have to go through many gyrations to get them working
again.  In a nutshell, they have a built-in battery and an RZ25 for
long-term retention backup, but the batteries have died and expired.
All sorts of weird things happen, and the replacement battery packs
are VERY expensive.  BUT if I had those, those EZ54s would be "right
as rain."  When they're working, they seem fine.  Once they encounter
their problems, I'm back to talking to the internal processor on the
drives, re-internally formatting, re-tricking the batteries, etc.
And, of course, that means the data's long gone...  A massive pain and
not really worth the 500MB or supposed "added performance" of the SSD.

bob

------------------------------

Date: Sun, 08 Jan 2006 15:19:49 -0600
From: Bob Blunt <RobertDOTblunt@digitalDOTcom>
Subject: Re: R400X: converting DSSI shelves to SCSI
Message-ID: <_6KdnZhggN_EHFzeRVn-vQ@comcast.com>

JF Mezei wrote:
> This week, I adopted a stray R400X with 5 drives (4*2gig, 1*1gig).
>
> Those drives are noisy and generate heat and eat electricity.
> (They are the big 5.25" full height format.)
>
> The specs for the R400X mention that the backplane is actually 3
> layers: power, SCSI and DSSI.  They also mention that it is possible
> to insert SCSI drives next to DSSI drives, with the SCSI drives
> picking up the SCSI signals and the DSSI drives the DSSI signals.
> Neat design.
>
> Tough question: if I were to cannibalise the 1 gig DSSI drive, could
> I reuse the mounting apparatus and get it to feed me SCSI signals?
> (i.e. does the connector into the backplane get all signals and then
> just feed the DSSI stuff to the drive?)
>
> I know that the mounting bracket provides power to the drive in a
> standard power plug.
>
> Failing this, I could always string my own ribbon cable in the
> backplane and use that, but it wouldn't be as "neat".
>
> Also, are DSSI drives hot removable/replaceable from the cabinet, or
> must it be powered down?
>
> My goal to keep my cluster up and web server running while I move to
> my newly acquired machines is proving more difficult than I thought.
>
> Also, does anyone know if there are QBUS SCSI interfaces which allow
> clustering/dual access to drives?
>
> Right now, the advantage of the DSSI drives is that I can get 2 VAXes
> to access them directly.  Just need to make sure that the DSSI
> controllers have different bus IDs.  (And I think some SCS traffic
> travels through the DSSI, right?)
>
> Any chance I could do this with 1992 vintage SCSI controllers on the
> QBUS?  (2 VAXes accessing the same drives.)

Answered this in another DSSI/SCSI question you posted.  If you peek
into the R400X where the drives go, you'll probably see (and I have
NOT checked this myself, just remembering how the BA440 is set up) TWO
long, vertical connectors for each drive bay slot on the backplane.
The SCSI devices that could be purchased at the time the R400X was new
were probably 5.25" drives: RZ56, RZ57, RZ58, RZ73 and RZ74.  The ones
that could be bought to install into an R400X would have the SCSI
variant of the mounting bracketry and connectors, so a drive would
"just" slip into the drive bay and plug into the correct SCSI
connector.  Documentation indicates that the R400X supports up to
seven DSSI OR SCSI devices.

I don't know if the R400X SCSI interface was an option.  From the
documentation it looks like you should have two 50-pin, low density
SCSI plugs next to the power supply on the expansion box.  As I stated
in my previous response (different note), the KZQSA-SA is really only
good for tape and CD.  If you're able to find a HSD05 or HSD10 that
goes into the Q-bus (for power only, I think this is the -JA variant),
then you could run regular SCSI disks.  The majority of the ones I
think you'll find that fit directly into the R400X are going to be
5.25" drives.  If you find other SCSI interfaces from 3rd parties that
use the same type of cabling, you might be able to get by with those
instead of the KZQSA, but support on modern VMS versions might be
interesting.

The HSD solution would provide you with SCSI drives and native
multihost capability.  The down side to this configuration would be
that if the machine with the HSDxx-JA were to be powered off, you'd
lose the DSSI's access to the disks anyway.  A BA350 shelf with a
HSD05 or HSD10 would provide a way to continue access from machine A
even if machine B was down and powered off.
That would also give you access to a wider range of 3.5" StorageWorks
SBB drives than those you could find in a 5.25" form factor with the
"unique" R400X mounting bracketry.

bob

------------------------------

Date: Sun, 08 Jan 2006 19:53:39 -0500
From: JF Mezei <jfmezei.spamnot@teksavvy.com>
Subject: Re: R400X: converting DSSI shelves to SCSI
Message-ID: <43C1B3F8.9B6DCDFF@teksavvy.com>

Bob Blunt wrote:
> Answered this in another DSSI/SCSI question you posted.  If you peek
> into the R400X where the drives go you'll probably see (and I have
> NOT checked this myself, just remembering how the BA440 is setup)
> TWO long, vertical connectors for each drive bay slot on the
> backplane.

Nope. There is only one. But the documentation states that it can
supply both DSSI and SCSI signals depending on what is inserted in it.

The problem is that I have no way to know what exact parts are needed
to plug a SCSI disk into that proprietary connector. I don't even know

> could be bought to install into a R400X would have the SCSI variant
> of the mounting bracketry and connectors so it would "just" slip into
> the drive bay and plug into the correct SCSI connector.

Yep, but the problem is to know the exact model number for an ISE SCSI
disk that I could hunt for on the internet.  It would then give me the
necessary hardware to retrofit a more modern SCSI disk into the
mounting bracket.

> I don't know if the R400X SCSI interface was an option.  From the
> documentation it looks like you should have two 50-pin, low density
> SCSI plugs next to the power supply on the expansion box.

Yes. But that doesn't give me anything. It requires I put a QBUS-SCSI
controller in one machine and string a cable to the R400X, and then
use their proprietary plugs/mounting brackets to put normal SCSI
disks in.
And that doesn't give me any of the DSSI advantages of dual hosting.

What I thought was available was some DSSI ISE form factor HSDxx-yy
gizmo that would bridge the DSSI to SCSI chasm.  This would give me a
dual hosted DSSI controller to which I could connect my SCSI disks.
This way, the disks can be independently accessed from either machine.

> previous response (different note), the KZQSA-SA is really only good
> for tape and CD.

I have a DILOG SQ739 which is more versatile, but it is QBUS and
SCSI-1.  And that would mean no dual hosting of disks.  And it would
mean I would have to find identical SCSI drives and another SQ739 for
the second VAX and use volume shadowing.  Not as elegant as a DSSI
interconnect.

> The HSD solution would provide you with SCSI drives and native multihost
> capability.

If the HSD is plugged into the QBUS, does this mean I would need to
find 2 of them so that each host has its own HSD?  Can the SCSI
controller ID be changed so that you would have SCSI ID 7 for the
first HSD and SCSI ID 6 for the second HSD?

On my all mighty MicroVAX II, I have 11 gigs in 2 cavities with
standard 50 pin ribbon cable.  On my new systems, I have 9 gigs in an
R400X, taking up much more space and much more electricity, and
generating far more heat and noise.

I've begun to take apart my MV2 Q5 cabinet in which I planned to put
the 4000-500 and R400X.  But now, it seems that the R400X will be
useless to me and I might as well just install the 4000-500 and
4000-200 in the cabinet (back to back), and use the R400X as a
temporary solution until I can retrofit the storage bays in the VAXes
to be SCSI.  Stringing the ribbon cable isn't a problem; the problem
is getting power with the standard connectors.

------------------------------

Date: Sun, 08 Jan 2006 21:59:49 -0600
From: Bob Blunt <RobertDOTblunt@digitalDOTcom>
Subject: Re: R400X: converting DSSI shelves to SCSI
Message-ID: <reWdnWXVRI-HQlzeRVn-jA@comcast.com>

JF Mezei wrote:
> Bob Blunt wrote:
>
>> Answered this in another DSSI/SCSI question you posted.  If you peek
>> into the R400X where the drives go you'll probably see (and I have
>> NOT checked this myself, just remembering how the BA440 is setup)
>> TWO long, vertical connectors for each drive bay slot on the
>> backplane.
>
> Nope. There is only one. But the documentation states that it can
> supply both DSSI and SCSI signals depending on what is inserted in
> it.
>
> The problem is that I have no way to know what exact parts are needed
> to plug a SCSI disk into that proprietary connector.
> I don't even know
>
>> could be bought to install into a R400X would have the SCSI variant of
>> the mounting bracketry and connectors so it would "just" slip into the
>> drive bay and plug into the correct SCSI connector.
>
> Yep, but the problem is to know the exact model number for an ISE SCSI
> disk that I could hunt for on the Internet. It would then give me the
> necessary hardware to retrofit a more modern SCSI disk into the
> mounting bracket.
>
>> I don't know if the R400X SCSI interface was an option.  From the
>> documentation it looks like you should have two 50-pin, low density SCSI
>> plugs next to the power supply on the expansion box.
>
> Yes. But that doesn't give me anything. It requires I put in the
> QBUS-SCSI controller on one machine and string a cable to the R400X, and
> then use their proprietary plugs/mounting brackets to put normal SCSI
> disks in. And that doesn't give me any of the DSSI advantages of dual
> hosting.
>
> What I thought was available was some DSSI ISE form factor HSDxx-yy gizmo
> that would bridge the DSSI-to-SCSI chasm. This would give me a dual-hosted
> DSSI controller to which I could connect my SCSI disks. This way,
> the disks can be independently accessed from either machine.
>
>> previous response (different note), the KZQSA-SA is really only good for
>> tape and CD.
>
> I have a DILOG SQ739 which is more versatile, but it is QBUS and SCSI-1.
> That would mean no dual hosting of disks, and it would mean I would
> have to find identical SCSI drives and another SQ739 for the second VAX
> and use volume shadowing. Not as elegant as a DSSI interconnect.
>
>> The HSD solution would provide you with SCSI drives and native multihost
>> capability.
>
> If the HSD is plugged into the QBUS, does this mean I would need to
> find two of them so that each host has its own HSD?  Can the SCSI
> controller ID be changed so that you would have SCSI ID 7 for the first
> HSD and SCSI ID 6 for the second HSD?
>
> On my almighty Microvax II, I have 11 gigs in 2 cavities with standard
> 50-pin ribbon cable.  On my new systems, I have 9 gigs in an R400X,
> taking up much more space, using much more electricity, and generating
> far more heat and noise.
>
> I've begun to take apart my MV 2 Q5 cabinet in which I planned to put the
> 4000-500 and R400X. But now, it seems that the R400X will be useless to
> me and I might as well just install the 4000-500 and 4000-200 in the
> cabinet (back to back), and use the R400X as a temporary solution until
> I can retrofit the storage bays in the VAXes to be SCSI. Stringing the
> ribbon cable isn't a problem; the problem is getting power with the
> standard connectors.

HSD05-JA or HSD05-JF (the former is factory installed, the latter field
installed).  Can you or should you have two of them on the same SCSI bus,
like an HSD30/50?  No.  You can only put one HSD05 or HSD10 into a single
BA35x shelf, and the hardware isn't changed significantly from the units
that plug into StorageWorks.  I don't know if you can change the SCSI ID
or set them up as "dual redundant."  I presume not, as I've never seen two
in a single shelf.  They're just basic DSSI-to-SCSI interfaces that were
intended to give folks with DSSI a smaller incremental step into the
StorageWorks/SBB arena.
The next step is to a full-blown HSD30 or
"bigger", which requires controller shelves, dual controllers (for dual
redundancy or whatever), multiple device shelves, cables, etc.

SCSI ISEs were available as follows:

HSD05-JF     HSD05 option field installable in BA430/BA440 enclosures.
HSD05-JH     HSD05 option field installable in BA441 enclosures.
RZ26L-AF     Single drive RZ26L-E ISE field installable.
RZ262-AF     Dual drive RZ26L-E ISE field installable.
RZ28E-AF     Single drive RZ28-E ISE field installable.
RZ282-AF     Dual drive RZ28-E ISE field installable.
RZ73E-AF     RZ73-E ISE field installable.
RZ74E-AF     RZ74-E ISE field installable.
TZ86-JF      TZ86-BA ISE field installable.
TZ87-JF      TZ87-BA ISE field installable.
DL-TLZ07-JF  TLZ07-BA/RRD43-AA dual ISE field installable.
TLZ07-JF     TLZ07-BA ISE field installable.
RRD43-JF     RRD43-AB ISE field installable.

If the R400X isn't what you expected or wanted, it wasn't intended to be
a one-size-fits-all storage solution.  It was intended to give customers
some flexibility to select SCSI or DSSI and to work within particular
DSSI configurations.  SCSI was a bonus, but will not give you the
flexibility of other configurations.  The VAX family, including the
VAX 4000s, was not designed for shared SCSI (or SCSI "clustering")
configurations.  DSSI is probably the easiest, most reliable and most
predictable "shared storage and clustering" solution you can configure
on the VAX4000s.
Check for other free hardware instead:

HSD30        3 SCSI channels to DSSI, can be set up for dual redundancy
HSD50        6 SCSI channels to DSSI, can be set up for dual redundancy
BN21K-01/02  Cables from HSD30/50 to BA350 storage shelves, need 3 or 6
BA350s       Need at least three
RZxx-VA      Need at least one per shelf

And various power supplies, power cables, DSSI cables, etc.

Sorry you weren't satisfied with my responses earlier.

------------------------------

Date: Mon, 09 Jan 2006 00:20:38 -0500
From: JF Mezei <jfmezei.spamnot@teksavvy.com>
Subject: Re: R400X: converting DSSI shelves to SCSI
Message-ID: <43C1F27A.820D2BD6@teksavvy.com>

Bob Blunt wrote:
> HSD05-JA or HSD05-JF (the former is factory installed, the latter field
> installed).

Thanks. So the HSD stuff is a CPU-based SCSI adaptor. When I had first
looked at the manual, I was confused because I had been expecting an ISE
form factor plugging into an ISE box to bridge the DSSI-SCSI buses. But
it really appears to be just a glorified host-based SCSI card.

Had it been an ISE form factor, I could have put it in the R400X, where
it would benefit from DSSI dual hosting and use the R400X's power supply.

> RZ26L-AF Single drive RZ26L-E ISE field installable.
> RZ262-AF Dual drive RZ26L-E ISE field installable.
> RZ28E-AF Single drive RZ28-E ISE field installable.
> RZ282-AF Dual drive RZ28-E ISE field installable.
> RZ73E-AF RZ73-E ISE field installable.
> RZ74E-AF RZ74-E ISE field installable.

Thanks. I guess I have to find any RZ*-AF to get the enclosure and put
a more modern disk in it. Then I can just put a cable between my
existing QBUS SCSI controller and the R400X cabinet and I would then
have a SCSI drive in it.

The problem is that there aren't really any systems options catalogues
for that time period available, so it is very hard for me to know what
the exact model numbers are for what I would need. And when you buy from
a used parts retailer, you're never sure of getting the right cables
with it.
(And that is especially true of the HSD05, which needs very
specific cables to bridge the DSSI in-cabinet, as well as the SCSI cable
between the HSD05 and the SCSI bus in the backplane.)

> If the R400X isn't what you expected or wanted, it wasn't intended to be
> a one-size-fits-all storage solution.

I got it at a very good price :-( :-(  But in the end, its hardware
design is less flexible than the cavities in the almighty Microvax II,
because the MVII at least provided non-proprietary connectors for power
and it was easy to add a SCSI ribbon cable.

And of course nowadays one cannot get cheap drive enclosures, since they
are all part of fancy expensive disk arrays, not something which is
compatible with hobbyists.

> configurations.  DSSI is probably the easiest, most reliable and
> predictable "shared storage and clustering" solution you can configure
> on the VAX4000s.  Check for other free hardware instead:

I was able to put 10 gig drives in my almighty Microvax II. I cannot
do that easily in the VAX 4000 and R400X because of the lack of standard
power connectors.  I'll look into cannibalising an RF drive ISE, which
does have a converter from the proprietary connector to the standard
power connector that goes into drives. I can then string a SCSI ribbon
cable.

> Sorry you weren't satisfied with my responses earlier.

Oh, don't take it this way. I am just trying to convert an R400X cabinet
into a SCSI array. But perhaps it isn't really worth the hassle. (The
4000-500 and 200 have room for 4 ISEs on top, so I can have enough disk
storage if I can get the power connectors to the devices.)

I will probably keep one DSSI drive per machine for paging and keep the
machines DSSI-connected for clustering.
Right now, the cabinet takes
more room for 9 gigs of disk than my Microvax II takes in the Q5 cabinet
(and it has 11 gigs plus the CPU and boards).

------------------------------

Date: Sun, 08 Jan 2006 16:20:50 -0600
From: Bob Blunt <RobertDOTblunt@digitalDOTcom>
Subject: Re: Round internal SCSI cables ??
Message-ID: <jeqdnUTFBKI5ElzeRVn-jg@comcast.com>

Jilly wrote:
> Anyone ever seen round internal SCSI cables like what you can find for
> internal IDE cables?  I need to replace the 17-04474-01 & 17-04009-01
> cables on my PWS 600au so that I can fit a few more options inside the
> case.
>
> Jilly

Jilly, I got some longer/denser SCSI cables from a local PC
reseller/scrapper.  In my case I was looking for the internal types that
have 50-pin HIGH density connectors with more "drops", and they had some.
Our cables aren't magic, and the differences between round and ribbon are
negligible as long as some odd pin-out isn't thrown in.

How many plugs do you need on the cable, and what type of connectors?  I
may have something in my festering pile of Lab scraps that I can spare.

If you'd like to email me thru CSC32 or outlurk, you got my number.

bob

------------------------------

Date: Sun, 8 Jan 2006 17:52:23 -0600 (CST)
From: sms@antinode.org (Steven M. Schweda)
Subject: UNIX shm* functions for VMS?
Message-ID: <06010817522345_20331674@antinode.org>

   I seek VMS-compatible code (ideally, already written and working) for
the UNIX shared memory functions, like shmat(), shmctl(), shmget(), and
friends.

   I found:

      http://vmsone.com/~decuslib/vmssig/vmslt02a/vu/gnv-crtlsup.htm

which looked good but offered only a broken link:

      http://cvs.sourceforge.net/cgi-bin/cvsweb.cgi/crtlsup/?cvsroot=gnv

   Anyone with more than that is welcome to inform me.

------------------------------------------------------------------------

   Steven M.
Schweda               (+1) 651-699-9818
   382 South Warwick Street        sms@antinode-org
   Saint Paul  MN  55105-2547

------------------------------

Date: Sun, 08 Jan 2006 22:09:50 -0600
From: David J Dachtera <djesys.nospam@comcast.net>
Subject: Re: VLC & SCSI Drive Help Needed
Message-ID: <43C1E20E.FE2C4F98@comcast.net>

Mike Rechtman wrote:
>
> VLC User wrote:
>
> > I guess I didn't make myself clear (my error).
> >
> > On the RZ24L-E, there are jumpers (in addition to the SCSI ID) labeled
> > SS (off), EP (on) and WS (on).  I could not find info on these jumpers
> > to know how to translate them to the ST32171N.
> >
> > For example, assuming the EP jumper on the RZ24L-E is for Enable
> > Parity, does the "on" actually enable (logical assumption) or disable?
> > This is the info I could not find (I guess because the drive is so
> > old).
> >
> > I'm a bit of a novice user when it comes to VAX hardware, so I'm just
> > trying to be careful about how I set things up so I don't fry anything
> > or whatever.
> >
> > >VLC User wrote:
> > >
> > >> I believe the RZ24-L drive in my VLC is going to die soon (I've been
> > >> getting mount verification messages and the error count has been 2 or
> > >> more lately when I do a SHO DEV), so I'd like to replace it with a
> > >> spare Seagate ST32171N I have sitting here.
> > >>
> > >> So, I have two questions ...
> > >>
> > >> A) Will a Seagate ST32171N work in a VLC, and
> > >> B) If so, what jumper settings on the Seagate ST32171N do I need to
> > >> be aware of (besides SCSI ID, of course)?
> > >>
> > >> I've poked around the Internet trying to find this info, but wasn't
> > >> able to find anything.
> > >>
> > >> Thanks in advance!
> > >
> > >http://www.seagate.com/support/disc/scsi/st32171n.html
> > >
> > >Gawd...
> > >use google with the info you already have!
>
> Just to remind you:
> The VLC is supposed to use an RZ23L (approx 1/8 GB) internally and is
> limited to 24 MB of memory, which makes it difficult to use TCP/IP in
> any version above V4.2.
>   To use a more normal disk (by today's standards), connect it to the
> external SCSI. I've used RZ25s booting VMS V6.2, which IIRC gave no
> problems.

Shop eBay and other auctions for those BA353 "pizza boxes" and BA350
storage shelves. They make GREAT external storage for VLCs and other
small VAXes and Alphas.

--
David J Dachtera
dba DJE Systems
http://www.djesys.com/

Unofficial OpenVMS Hobbyist Support Page:
http://www.djesys.com/vms/support/

Unofficial Affordable OpenVMS Home Page:
http://www.djesys.com/vms/soho/

Unofficial OpenVMS-IA32 Home Page:
http://www.djesys.com/vms/ia32/

Coming soon:
Unofficial OpenVMS Marketing Home Page

------------------------------

Date: 8 Jan 2006 23:46:07 GMT
From: bill@cs.uofs.edu (Bill Gunshannon)
Subject: Re: WVNETcluster uptime reaches 10 years...
Message-ID: <42dmhvF1hgjbdU1@individual.net>

In article <FD827B33AB0D9C4E92EACEEFEE2BA2FB773AC3@tayexc19.americas.cpqcorp.net>,
	"Main, Kerry" <Kerry.Main@hp.com> writes:
>
>> -----Original Message-----
>> From: bill@triangle.cs.uofs.edu
>> [mailto:bill@triangle.cs.uofs.edu] On Behalf Of Bill Gunshannon
>> Sent: January 7, 2006 6:39 PM
>> To: Info-VAX@Mvb.Saic.Com
>> Subject: Re: WVNETcluster uptime reaches 10 years...
>>
>> In article <eWHVHo9INQti@eisner.encompasserve.org>,
>> 	Kilgallen@SpamCop.net (Larry Kilgallen) writes:
>> > In article <43BF5649.51D600A6@teksavvy.com>, JF Mezei
>> > <jfmezei.spamnot@teksavvy.com> writes:
>> >> Kenneth Farmer wrote:
>> >>> I'm sure it's correct.  I'm wondering if other OS fanatics could
>> >>> argue it can be rigged.
>> >>
>> >> This rigging isn't really the question.
>> >>
>> >> Comparing apples to oranges is. Is it fair to compare the uptime of a
>> >> VMS cluster against that of individual machines?
>> >
>> > The purpose of computers is not to keep a particular green light on
>> > the machine lit, but rather to provide some service to humans.
>> > If VMS is able to provide that service on a continuous basis, that is
>> > what counts.
>> >
>> > The involvement of multiple CPUs, threads of execution, power supplies
>> > etc. is just so much geeky technical trivia not germane to the issue
>> > of whether the service was provided to humans.
>> >
>> >> If you have 2 Solaris machines that provide HTTP/WEB servers and use
>> >> a router to distribute traffic and stop sending traffic to a node
>> >> that is down, the uptimes project won't report a "cluster" uptime but
>> >> individual nodes' uptimes, even though functionally those Solaris
>> >> boxes would offer about the same uptime as a VMS cluster.
>> >
>> > That service provided would not be the same if the web site involved
>> > updating.  For read-only applications I have an even more reliable
>> > technology pre-dating VMS called a "book".
>>
>> And why could the above mentioned Solaris system not involve updating?
>> I have multiple servers with shared file systems so that any update on
>> any system is universal.  I can (and do) do rolling updates so that
>> system availability is continuous.  There are only two things missing:
>> a "cluster uptime" value and thinking it mattered enough to care.
>>
>
> And could you let us know what happens to the incoming writes when the
> system hosting the writes for other systems via the network file sharing
> you are talking about has to be rebooted, or just plain halts, or is
> powered off?

As long as we're not talking Linux, they just wait till it comes back up.
What happens to the incoming writes on my VAX cluster when the HSJ serving
all my disks dies?  There are failure modes for everything.  I have never
said VMS wasn't good at this; I just don't see the purpose of obsessing
over this uptime thing.
I remember when we were being fed the line that there
was a "system" (a VAX no less) that had been up for 15 years.  That not
being possible, it eventually turned out to be a cluster, which is a
different animal entirely.

>
> Or perhaps you could expand on how each system can do direct IOs to the
> storage sub-system without the writes taking the long trek over the
> network?  Most folks think a DLM is required to do direct IOs from each
> system.
>
> Or perhaps you could expand on how you would shut one entire site down
> without telling the end users in a multi-site config and not impact
> application availability?
>
> Here is a pointer to a whitepaper that can refresh readers on the
> benefits of clustering and different UNIX implementations as compared to
> OpenVMS:
> http://www.tru64unix.compaq.com/unix/illuminata_dt_unix_research_note.pdf
>
> Thanks,

You just keep on obsessing.  Eventually you may figure out that in the
real world most places just aren't that concerned.

Oh, by the way, how many VMS systems in New Orleans have uptimes of more
than a year?  Every system in the world is subject to being taken down by
factors outside the control of the ones running the system.  Keep telling
people about the advantages of VMS (God knows your employers won't!) but
stop thinking that the results on some web site that tracks "uptime" are
going to make people dance in the streets.

bill

--
Bill Gunshannon          |  de-moc-ra-cy (di mok' ra see) n.  Three wolves
bill@cs.scranton.edu     |  and a sheep voting on what's for dinner.
University of Scranton   |
Scranton, Pennsylvania   |         #include <std.disclaimer.h>

------------------------------

Date: Mon, 09 Jan 2006 00:55:29 -0500
From: Bill Todd <billtodd@metrocast.net>
Subject: Re: WVNETcluster uptime reaches 10 years...
Message-ID: <gOCdna4xu8xPZ1zeRVn-tw@metrocastcablevision.com>

Main, Kerry wrote:
>> -----Original Message-----
>> From: bill@triangle.cs.uofs.edu

   ...

>> I have multiple servers with shared file systems so that any update on
>> any system is universal.  I can (and do) do rolling updates so that
>> system availability is continuous.  There are only two things missing:
>> a "cluster uptime" value and thinking it mattered enough to care.
>>
>
> And could you let us know what happens to the incoming writes when the
> system hosting the writes for other systems via the network file sharing
> you are talking about has to be rebooted, or just plain halts, or is
> powered off?

Why should he bother?
I let you know multiple times several years ago,
back when NFS was the *only* cluster file system supported in Solaris
(the client times out waiting for the server's response, resubmits the
write, which is of course idempotent, and hits the fail-over partner
which has taken over the IP address previously used by the failed server,
seeing no problem save for what appears to have been a random connection
hiccup which the retry worked around), but you just sailed on obliviously
spewing the same incompetent drivel, and are obviously still doing so.

> Or perhaps you could expand on how each system can do direct IOs to the
> storage sub-system without the writes taking the long trek over the
> network?

Perhaps you should ask the author of the white paper which you yourself
cite below: it states clearly that by the time Sun Cluster 3.0 appeared
they had "a cluster file system with shared-write access to volumes from
multiple nodes" (and to the best of my recollection that means exactly
what one might suspect it does).

Not, of course, that such access is all that necessary to most uses of a
cluster, as demonstrated by the eminent viability of VMS clusters that
don't have any directly-shared disks.  But it is kind of nice if you
want to share a hardware RAID device or need low-latency/high-bandwidth
access to some data from multiple nodes at once (more nodes than you'd
care to replicate it at, or for high-bandwidth update-intense loads).

> Most folks think a DLM is required to do direct IOs from each
> system.

Most folks would be wrong, then: a central lock manager can handle that
just fine (and does in quite a few commercially successful systems,
SANergy having been one of the earlier ones).

> Or perhaps you could expand on how you would shut one entire site down
> without telling the end users in a multi-site config and not impact
> application availability?

Applications (and application groups) in Solaris have for several years
been able to be bound to fail-overable IP addresses just as file systems
have, again something which I've explained to you in the past.

> Here is a pointer to a whitepaper that can refresh readers on the
> benefits of clustering and different UNIX implementations as compared to
> OpenVMS:
> http://www.tru64unix.compaq.com/unix/illuminata_dt_unix_research_note.pdf

A paper almost entirely devoted to discussing disaster tolerance rather
than clustering per se, and offering rather little insight into the
details of the latter on the various systems.  If you're going to
encourage people to spend time looking at material, you should at least
have made the effort to ensure that it has some actual relevance to the
subject at hand.

>
> Thanks,

You're welcome.

- bill

------------------------------

End of INFO-VAX 2006.017
************************