INFO-VAX	Fri, 18 Aug 2006	Volume 2006 : Issue 458

Contents:
  Re: Alpha remembrance day
  Hello Guys, Please let me know if you need any Cisco
  Re: Ken Olsen VAX
  Re: Open Source Hardware? (SPARC)
  Re: Samba / CIFS and ACLs
  Re: Speaking of Clusters:
  Re: Speaking of Clusters:
  Re: Speaking of Clusters:
  Re: Speaking of Clusters:
  RE: Speaking of Clusters:
  Re: Speaking of Clusters:

----------------------------------------------------------------------

Date: Thu, 17 Aug 2006 21:46:27 -0400
From: JF Mezei <jfmezei.spamnot@teksavvy.com>
Subject: Re: Alpha remembrance day
Message-ID: <44E51BF2.F3677A88@teksavvy.com>
Andrew wrote:
> Lets see, Palmer inherited a loss making company, a failing VAX 9000
> project, Alpha the soup to nuts "Industry Standard" 64 bit processor,
> the Hudson and UK FABS, a UNIX strategy that was in tatters, and Phase
> V DECNET.

The 9000 was not a "bet the company" product, nor a strategy product.  It
was an attempt at one technology and it didn't pan out.  Its failure to
impress and sell did not hurt Digital in the long term.

Alpha showed great promise and was well respected for its design and
technical potential.  The business aspects happened all during Palmer's
tenure.

The FABs were, at the time, a good idea, and had Digital been able to
commercialise them, they could have been a success.  When you look at
DEC's disk drive business, they were starting to catch on, but since
Compaq wasn't interested in the disk drive business, DEC was told by
Pfeiffer to sell it.  Same with the networking division.

Nobody is saying that Palmer inherited a healthy company.  But at that
time, it was still quite possible to turn DEC around and make it
competitive and, most importantly, fix the problems instead of selling
any limb that had a problem.

In terms of Phase V, while it is today quite moot, remember that at the
time it was launched, governments were mandating OSI compatibility, and
DEC was one of the first ones to market.  Where DEC failed is in quickly
switching to TCP/IP when the latter replaced OSI as the "internetworking"
stack of choice.  So yeah, DECNET 5 was a big waste of money, but other
companies also wasted money on trying to comply with government
mandates to have OSI stacks.
The difference is that the other guys
already had good TCP/IP connectivity; VMS didn't.

In fact, DEC's biggest mistake when it became obvious that OSI and X.400
wouldn't pan out was to compete against Multinet instead of just
adopting Multinet as the de facto standard stack on VMS and truly
integrating it with VMS through some sort of joint venture or outright
purchase of TGV.  (TGV later went to Cisco, so it would have been to
DEC's advantage to buy it.)

> changes in the Computing market, the rise of the PC and the rise of the
> UNIX Server/Workstation.

In terms of the rise of Unix, DEC failed to use marketing to really
point out the fact that Unix wasn't "standard", that it was quite
different from vendor to vendor, and that VMS was just as "compatible",
and to push that fact to negate the negative advertising made about
"proprietary systems".  It could have been done; it wasn't done.  VMS
floundered because of that.

> remotely close to a clean sheet made his chances of success highly
> unlikely.

At the time Olsen was asked to retire/leave, the board should have moved
to find a REAL leader capable of turning Digital around.  I can't wait to
receive my copy of "DEC Is Dead, Long Live DEC" (it's on its way :-) to
see if there is any explanation of the choice of Palmer.  In the
"Elephants Can Dance" book, Gerstner spends a lot of time explaining how
IBM courted him for a long while.

IBM was much worse off and made far more radical changes to its
philosophy than DEC did.
Had there been a truly competent leader instead
of Palmer, I think Digital might have bought Compaq instead of the
other way around.

------------------------------

Date: 17 Aug 2006 13:27:54 -0700
From: phimhongkongcom@gmail.com
Subject: Hello Guys, Please let me know if you need any Cisco
Message-ID: <1155846474.799779.59910@m79g2000cwm.googlegroups.com>

Hello Guys, Please let me know if you need any Cisco

WS-X6704-10GE $8390 (CFC)
WS-X6724-SFP $7300 (CFC)
PA-MC-8TE1+ $5295
PA-MC-STM-1SMI $12900
4GE-SFP-LC $21995
WS-F6700-DFC3BXL $7400

60 days warranty

For Quick Quote. Please visit here:
http://www.linkwaves.com/requestquote.asp

LinkWaves Corp
29980 Technology Drive, Suite 6
Murrieta, CA 92563
Tel: 909-725-9143
Fax: 707-221-3762

------------------------------

Date: Thu, 17 Aug 2006 22:50:54 -0400
From: "Neil Rieck" <n.rieck@sympatico.ca>
Subject: Re: Ken Olsen VAX
Message-ID: <44e529d8$0$24206$9a6e19ea@news.newshosting.com>

"Hoff Hoffman" <hoff-remove-this@hp.com> wrote in message
news:%f_Eg.67$EU4.24@news.cpqcorp.net...
> Russ Leathe wrote:
[...snip...]
>
>   I can usually be standing near the Guillotine Gate in less than two
> hours from receiving a call, and yes, I know my way around the Mill.
> (Well, I used to be able to find my way around.  I'd expect more than a
> few walls and halls and partitions have changed over the years.)  Building
> 12?
>
[...snip...]
>
>   There are hardware configuration details (still) missing.
>
>   Copying the files onto or via a Microsoft Windows or Unix or Linux box
> can mess up the OpenVMS file attributes, particularly if you're not using
> the correct tools and the right command options.  Just last week, I was
> cleaning up a similarly-triggered corruption in a source archive someone
> had (mis)created.
> It's generally better and safer and easier to use
> OpenVMS tools to preserve OpenVMS files.
>
>   If you wish assistance, send me mail and we can discuss this off-line --
> and as I stated, I can be in Maynard in a couple of hours.  I can also
> bring along additional I/O hardware and media, if and as needed for the
> safe-keeping of the data for the transfer of the system to its
> destination.
>

Hoff. Please pass along all our best wishes to Ken Olsen.
Neil Rieck
Kitchener/Waterloo/Cambridge, Ontario, Canada.
http://www3.sympatico.ca/n.rieck/links/cool_openvms.html

------------------------------

Date: 17 Aug 2006 22:53:02 GMT
From: healyzh@aracnet.com
Subject: Re: Open Source Hardware? (SPARC)
Message-ID: <ec2s0e031fh@enews3.newsguy.com>

Neil Rieck <n.rieck@sympatico.ca> wrote:
> OpenSPARC T1 - the open source version of Sun's UltraSPARC T1 processor.
> http://www.theinquirer.net/default.aspx?article=33749
> Neil @ home

It's not the first "open source" CPU.  IIRC, it's not even Sun's first.
However, due to the complexity of a modern CPU, I have to wonder just how
much disk space is needed if you want to download what they're providing.

http://opensparc-t1.sunsource.net/index.html

The FAQ is especially worth reading, as it talks about the commercial tools
that you will need in order to do anything with what they're providing.
These kinds of tools aren't cheap.  Also, how many people have access to a
FAB in order to manufacture the chips?

			Zane

------------------------------

Date: Thu, 17 Aug 2006 23:16:27 GMT
From: Jack Patteeuw <jack.patteeuw@nospam.net>
Subject: Re: Samba / CIFS and ACLs
Message-ID: <fN6Fg.8709$%j7.156@newssvr29.news.prodigy.net>

WOW !  Thanks for a detailed response !

John Malmberg wrote:
>
> Currently from a CIFS client, you can not display or change the ACLs
> that are on a file served by CIFS on OpenVMS.
>
> There are two types of ACEs that can be in ACLs.  Advanced Server ACEs
> and OpenVMS ACEs.
>
> Advanced Server ACEs are interpreted in the same way that Microsoft
> group access is handled by Advanced Server.  OpenVMS CIFS currently
> ignores those ACEs.
>
> OpenVMS ACEs are interpreted differently.  While Advanced Server will
> honor the ACEs because RMS does, those ACEs are not visible to Advanced
> Server clients, and can not be changed by them.

Samba and its clients appear to work this way also.

>
> SAMBA expects a draft POSIX ACL implementation, or some specific private
> UNIX ACL implementation.

I have heard that NFS V4 includes ACLs of some type.  Is the POSIX
Is the POSIX  . standard compatible with the NFS V4 standard ?   ------------------------------  % Date: Thu, 17 Aug 2006 14:46:06 -0400 ( From: Bill Todd <billtodd@metrocast.net>" Subject: Re: Speaking of Clusters:G Message-ID: <aaSdnZsdYIHzJHnZnZ2dnUVZ_tidnZ2d@metrocastcablevision.com>    Main, Kerry wrote:   ...   G > For software based fault tolerant solutions with OpenVMS (a committed D > transaction is never lost - even with site or major router network= > failures), you would use RTR + OpenVMS multi-site clusters.   F Had you actually bothered to visit the reference which Keith provided F rather than simply spun out your marketing drivel as usual, you would @ have learned that the above comes nowhere near satisfying IBM's H definition of fault-tolerance (the point under discussion here) either: H   not only is RTR-aided fail-over far from instantaneous (it's merely a G layered adjunct to the underlying fail-over facilities and hence can't  E exceed their performance, though it can make life easier for them by  E eliminating the need for them to maintain certain elements of shared  I state), but it's nothing like the special hardware-based mechanisms that  H IBM's paper explicitly describes as constituting 'fault tolerance' that F can detect and fence off *any* single fault rather than merely faults C that normal software or hardware validation checks happen to catch   before they can do any harm.   - bill   ------------------------------  % Date: Thu, 17 Aug 2006 14:55:34 -0400 ( From: Bill Todd <billtodd@metrocast.net>" Subject: Re: Speaking of Clusters:G Message-ID: <aaSdnZodYIE7JnnZnZ2dnUVZ_tidnZ2d@metrocastcablevision.com>    Bill Todd wrote:   ...   ,   the special hardware-based mechanisms thatJ > IBM's paper explicitly describes as constituting 'fault tolerance' that H > can detect and fence off *any* single fault rather than merely faults E > that normal software or hardware validation checks happen to catch   > before they can do any 
> harm.

That should be 'any single *hardware* fault', of course:  hardware
validation can't catch software faults.

- bill

------------------------------

Date: 17 Aug 2006 13:32:43 -0600
From: hoffman@hp.nospam ()
Subject: Re: Speaking of Clusters:
Message-ID: <44e4c45b$1@usenet01.boi.hp.com>

  Fault Tolerance is a continuum, and you can "park your stakes" where ever
you want, and the price of the products depends on where you park the stakes.
The more FT you need, the higher the price.

  Real Time is a continuum, too, and you can again "park your stakes" where
ever you want, and prices similarly vary by capability.  The tighter your
RT requirements, the higher the price.

  We'd either all be running [pick a low-end product] or [pick a high-end
product] otherwise.  Not everyone can afford LowEnd (eg: missing FT or RT
or other such features can be required for a particular environment), nor
can everyone afford (or even need) HighEnd.  We can certainly argue about
pushing toward LowEnd price, or pushing toward HighEnd features, too.

  Some folks can and will continue to use a twenty-year-old 8086 box,
some can use x86/IA-32 or x86-64/IA-32e, Itanium/IA-64 is appropriate
for others, and hardware FT is a prerequisite for some.  And hot-spares
and clusters and other approaches can (hopefully) increase FT -- while
holding or decreasing costs.
  TANSTAAFL.

------------------------------

Date: Thu, 17 Aug 2006 19:09:47 -0400
From: Bill Todd <billtodd@metrocast.net>
Subject: Re: Speaking of Clusters:
Message-ID: <RqSdnQywN_uhannZnZ2dnUVZ_qKdnZ2d@metrocastcablevision.com>

hoffman@hp.nospam wrote:
>
>   Fault Tolerance is a continuum, and you can "park your stakes" where ever
> you want

You appear to have missed the whole point of the preceding discussion:
it was not about *your* (or any other arbitrary) definition of 'fault
tolerance', but very specifically about *IBM's* definition of fault
tolerance as expressed in the citation which Keith provided but,
obviously, didn't read very carefully.

And that definition is precisely what I represented it to be, in
contrast to what Keith and Kerry suggested.

- bill

------------------------------

Date: Thu, 17 Aug 2006 21:42:04 -0400
From: "Main, Kerry" <Kerry.Main@hp.com>
Subject: RE: Speaking of Clusters:
Message-ID: <FA60F2C4B72A584DBFC6091F6A2B868401912728@tayexc19.americas.cpqcorp.net>

> -----Original Message-----
> From: Bill Todd [mailto:billtodd@metrocast.net]
> Sent: August 17, 2006 2:46 PM
> To: Info-VAX@Mvb.Saic.Com
> Subject: Re: Speaking of Clusters:
>
> Main, Kerry wrote:
>
> ...
>
> > For software based fault tolerant solutions with OpenVMS (a committed
> > transaction is never lost - even with site or major router network
> > failures), you would use RTR + OpenVMS multi-site clusters.
>
> Had you actually bothered to visit the reference which Keith provided
> rather than simply spun out your marketing drivel as usual, you would
> have learned that the above comes nowhere near satisfying IBM's
> definition of fault-tolerance (the point under discussion here) either:
> not only is RTR-aided fail-over far from instantaneous (it's merely a
> layered adjunct to the underlying fail-over facilities and hence can't
> exceed their performance, though it can make life easier for them by
> eliminating the need for them to maintain certain elements of shared
> state), but it's nothing like the special hardware-based mechanisms that
> IBM's paper explicitly describes as constituting 'fault tolerance' that
> can detect and fence off *any*
> single fault rather than merely faults
> that normal software or hardware validation checks happen to catch
> before they can do any harm.
>
> - bill
>

Regardless of what IBM states in this one reference, it is talking about
single-system HW fault tolerance, while I am talking about solution
fault tolerance.  Big difference, as I know you know.

Case in point - the specific IBM reference stated "no service
interruption".  However, what happens when that FT system goes offline by
[insert favourite disaster (flood, fire etc) which takes out site]?

If this were to happen, do you not think the IBM (or any other FT system
for that matter) FT system service would be interrupted?

Both are examples of different approaches to solving the issue of not
losing any committed transaction and maintaining continuous service
availability.  There are cases where either may be more appropriate than
the other.

Imho, it really boils down to $s, performance (load balancing to
minimize hot spots) and just how available the entire solution needs to
be vs. the requirements for the individual system being highly
 available. =20    Regards   
Kerry Main
Senior Consultant
HP Services Canada
Voice: 613-592-4660
Fax: 613-591-4477
kerryDOTmainAThpDOTcom (remove the DOT's and AT)

OpenVMS - the secure, multi-site OS that just works.

------------------------------

Date: Thu, 17 Aug 2006 23:27:33 -0400
From: Bill Todd <billtodd@metrocast.net>
Subject: Re: Speaking of Clusters:
Message-ID: <Dt-dnVkpK4M7rnjZnZ2dnUVZ_t6dnZ2d@metrocastcablevision.com>

Main, Kerry wrote:
>
>> -----Original Message-----
>> From: Bill Todd [mailto:billtodd@metrocast.net]
>> Sent: August 17, 2006 2:46 PM
>> To: Info-VAX@Mvb.Saic.Com
>> Subject: Re: Speaking of Clusters:
>>
>> Main, Kerry wrote:
>>
>> ...
>>
>>> For software based fault tolerant solutions with OpenVMS (a
>> committed
>>> transaction is never lost - even with site or major router network
>>> failures), you would use RTR + OpenVMS multi-site clusters.
>> Had you actually bothered to visit the reference which Keith provided
>> rather than simply spun out your marketing drivel as usual, you would
>> have learned that the above comes nowhere near satisfying IBM's
>> definition of fault-tolerance (the point under discussion
>> here) either:
>>   not only is RTR-aided fail-over far from instantaneous
>> (it's merely a
>> layered adjunct to the underlying fail-over facilities and
>> hence can't
>> exceed their performance, though it can make life easier for them by
>> eliminating the need for them to maintain certain elements of shared
>> state), but it's nothing like the special hardware-based
>> mechanisms that
>> IBM's paper explicitly describes as constituting 'fault
>> tolerance' that
>> can detect and fence off *any* single fault rather than merely faults
>> that normal software or hardware validation checks happen to catch
>> before they can do any harm.
>>
>> - bill
>>
>
> Regardless of what IBM states in this one reference, it is talking about single system HW
> fault tolerance, while I am talking about solution fault tolerance.

Then you should be talking about it in your own thread rather than here,
because both Andrew's post (to which you were directly responding) and
my own (to which he was responding) specifically and explicitly were
about *IBM's* definition of fault-tolerance and Keith's misunderstanding
of it.

...

> Both are examples of different approaches to solving the issue
> of not losing any committed transaction and maintaining continuous
> service availability.

Horseshit.  One is an approach to surviving *any* single hardware
failure (by using redundant hardware executing in parallel and
cross-checking results) while maintaining *correct* execution, while the
other is about increasing system and application availability using
variations of fail-over techniques (but guaranteeing *nothing* like the
same level of hardware-failure tolerance or, for that matter, speed of
fail-over, only that if a hardware failure happens to be caught -
hopefully before having damaged other state in the system - then
something better will take over).

In other words, the first is first and foremost about *reliability*
(getting the *right* answer regardless of a failure while continuing to
operate through single-component failures), while the second is
primarily about *availability* (getting *an* answer despite anything up
to and including a whole-site disaster, which will usually but not
always be correct).  Only by combining the two by using a fail-over pair
of hardware-fault-tolerant systems do you get both.

- bill

------------------------------

End of INFO-VAX 2006.458
************************
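[Editor's note] Bill Todd's reliability-versus-availability distinction in the final message can be sketched in a few lines. This is an editorial illustration, not anything from the thread or any vendor's mechanism; the names `dual_modular` and `failover` are invented for the sketch, which only models the control flow he describes.

```python
# Sketch of the two approaches Bill Todd contrasts.
# All names here are illustrative; no vendor API is implied.

def dual_modular(replica_a, replica_b, x):
    """Reliability: run the same work on two replicas and cross-check.
    A mismatch means a fault was detected and the result is fenced off
    (modeled here by raising) rather than allowed to do harm."""
    a, b = replica_a(x), replica_b(x)
    if a != b:
        raise RuntimeError("replica mismatch: fault detected and fenced")
    return a

def failover(primary, secondary, x):
    """Availability: use the primary; only if it fails *visibly* does
    the secondary take over.  A wrong-but-plausible answer from the
    primary is returned unchecked -- the gap Todd points out."""
    try:
        return primary(x)
    except Exception:
        return secondary(x)
```

Combining the two, as the post concludes, means putting a self-checking (dual-modular) system on each side of a fail-over pair: the cross-check supplies correctness, the fail-over supplies survival of whole-site loss.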