1 INFO-VAX	Thu, 05 Jul 2001	Volume 2001 : Issue 369       Contents: Re: 4 mm tape drive  Re: C-Kermit 8.0 Beta  Re: C-Kermit 8.0 Beta 1 Creating ODBC Connections to RDB databases on vms 5 Re: Creating ODBC Connections to RDB databases on vms 
 Datatrieve  Re: Datatrieve; using a COM file Re: DEC Net and TCP/IP) Re: Difference between an 8820 and 8820-N & Re: Disk Cluster Size for Oracle Disks Re: FreeVMS " Re: I didn't stick it upside down! Re: IA64 Rocks My World  Re: Nodenames in cluster Re: Nodenames in cluster Re: Nodenames in cluster Re: Nodenames in cluster Re: Nodenames in cluster# Re: PointSecure site was hacked ??? # Re: PointSecure site was hacked ??? # Re: PointSecure site was hacked ??? # Re: PointSecure site was hacked ??? $ SSH client, FISH, -- sources anyone? Re: The Alpha/IA64 Hybrid  Re: The Alpha/IA64 Hybrid  Re: The Alpha/IA64 Hybrid  Re: The Alpha/IA64 Hybrid  Re: The Alpha/IA64 Hybrid  Re: The Alpha/IA64 Hybrid  Re: The Alpha/IA64 Hybrid  Re: The Alpha/IA64 Hybrid  Re: The Alpha/IA64 Hybrid  Re: The Alpha/IA64 Hybrid  Re: The Alpha/IA64 Hybrid  Re: The Alpha/IA64 Hybrid  Re: The Alpha/IA64 Hybrid  Re: The Alpha/IA64 Hybrid  Re: The Alpha/IA64 Hybrid  Re: vax 4000/90  RE: VAX-11/780 boot disk needed  Re: VAX-11/780 boot disk needed  Re: VAX-11/780 boot disk needed ; Re: Wailing and moaning.... (was: Compilers go to Intel...)  Yahoo and OpenVMS  Re: Yahoo and OpenVMS   F ----------------------------------------------------------------------  # Date: Wed, 04 Jul 2001 19:53:29 GMT # From: ualski <ualski@earthlink.net>  Subject: Re: 4 mm tape drive- Message-ID: <3B43742D.317C322D@earthlink.net>    Jack Peacock wrote:  > A > "Robert Deininger" <rdeininger@mindspring.com> wrote in message H > news:rdeininger-0407011120110001@user-2ive7io.dialup.mindspring.com...G > > When I get my hobbyist GS320, I'm hoping it comes with a paper tape 	 > reader.  > > # > You don't back up to punch cards?  >   Jack Peacock  B Us klutzes consider paper tape to be a really long punch card that# only gets tangled when you drop it.    -- Aaron Sliwinski   ------------------------------  $ Date: Thu, 5 Jul 2001 00:14:35 +01001 From: "Chris Townley" <news@townleyc.demon.co.uk>  Subject: Re: C-Kermit 8.0 BetaA Message-ID: <994288963.21359.0.nnrp-13.d4e45fa5@news.demon.co.uk>   = "Frank da Cruz" <fdc@watsun.cc.columbia.edu> wrote in message / news:9hoc61$qit$1@newsmaster.cc.columbia.edu... A > In article <994029010.909.0.nnrp-08.d4e45fa5@news.demon.co.uk>, 2 > Chris Townley <news@townleyc.demon.co.uk> wrote:G > : I will have a go on my hobbyist alpha (Compaq C V6.2-008 on OpenVMS  Alpha 	 > : V7.3)  > : J > : However I could not get the download - got a message that IE could not4 > : download - server returned extended information. > : % > Oops, typo in the link!  Fixed now:  > ; >   ftp://kermit.columbia.edu/kermit/test/tar/ckv200b02.zip  >  > Thanks for noticing. > J > : I will hook it down at work and have a go on Monday or Tuesday. Do you want > : the executables? > : G > Yes, please.  Here are instructions for VMS C-Kermit binary builders:  > < > Here's how to build VMS C-Kermit 8.0.200 Beta.02 binaries.< > Note: Replace all "200b02" below with whatever the current2 > edit number and Alpha/Beta designation might be. > % >  . Download the VMS source archive: > >      ftp://kermit.columbia.edu/kermit/test/tar/ckv200b02.zip > $ >  . Unzip it into a fresh directory3 >    (Let me know if you don't have VMS ZIP/UNZIP).  >  > On each VMS computer:  >  >  . Build the network version:  > 5 >    @ckvker  (or use "@ckvker m" if "@ckvker" fails)  > H >    This makes a WERMIT.EXE file, which you can run and use as a Telnet& >    client, etc.  Copy WERMIT.EXE to: > $ >      ckv200b02-PPP-vmsVV-ucxUU.exe >  >    where:  > ' >      PPP the platform: "vax" or "axp" A >      VV is the VMS version (two digits, no period) e.g. 62, 73. A >      VV is the UCX version (two digits, no period) e.g. 41, 51.  >  >    For example:  > $ >      ckv200b02-axp-vms72-ucx51.exe$ >      ckv200b02-vax-vms62-ucx41.exe > $ >  . Build the non-networks version: > : >    @ckvker n  (or use "@ckvker mn" if "@ckvker n" fails) > G >    Rename the non-nets WERMIT.EXE as above, except put "nonet" in the  network  >    field, for example: > $ >      ckv200b02-vax-vms62-nonet.exe$ >      ckv200b02-vax-vms73-nonet.exe > C >  . Upload all the ckv200b02-*-vms*-*.exe files in binary mode to:  > 1 >      ftp://kermit.columbia.edu/kermit/incoming/  > < >  . Delete all the ckv200b02-*-vms*-*.exe files if desired.    L Just uploaded the VMS7.3 (plus UCX 5.1) executables - no problems compiling," except for the documented warning.E Hope the file format is OK, had to go via Win box for now ;(  - would < normally zip "-W"... especially with alpha executable sizes!   -- Chris    ------------------------------   Date: 4 Jul 2001 23:36:16 GMT 0 From: fdc@watsun.cc.columbia.edu (Frank da Cruz) Subject: Re: C-Kermit 8.0 Beta5 Message-ID: <9i099g$lps$1@newsmaster.cc.columbia.edu>   A In article <994288963.21359.0.nnrp-13.d4e45fa5@news.demon.co.uk>, 0 Chris Townley <news@townleyc.demon.co.uk> wrote:N : Just uploaded the VMS7.3 (plus UCX 5.1) executables - no problems compiling,K : except for the documented warning.  Hope the file format is OK, had to go K : via Win box for now ;( - would normally zip "-W"... especially with alpha  : executable sizes!  : % Yes, RISC = fa(s)t programs.  Thanks!    - Frank    ------------------------------  % Date: Thu, 05 Jul 2001 15:43:43 +1200 & From: A Bonaveidogo <Asena@fsc.com.fj>: Subject: Creating ODBC Connections to RDB databases on vms? Message-ID: <000001c10504$b1378b60$100a640a@Patrick.fsc.com.fj>   L I have rdb databases on VMS Alpha Machines. The version of rdb is Oracle Rdb' V7.1-2 and the version of VMS is V7.1-2 D Now what I want to do is to connect to the rdb database using a odbc" connection from a windows machine.   Zena   ------------------------------  % Date: Wed, 04 Jul 2001 22:56:54 -0500 1 From: "David J. Dachtera" <djesys.nospam@fsi.net> > Subject: Re: Creating ODBC Connections to RDB databases on vms' Message-ID: <3B43E586.28C339FD@fsi.net>    A Bonaveidogo wrote: > N > I have rdb databases on VMS Alpha Machines. The version of rdb is Oracle Rdb) > V7.1-2 and the version of VMS is V7.1-2 F > Now what I want to do is to connect to the rdb database using a odbc$ > connection from a windows machine.   I believe this is in the FAQ:   5 http://www.openvms.compaq.com/wizard/openvms_faq.html    --   David J. Dachtera  dba DJE Systems  http://www.djesys.com/  : Unofficial Affordable OpenVMS Home Page and Message Board: http://www.djesys.com/vms/soho/   F This *IS* an OpenVMS-related newsgroup. So, a certain bias in postings is to be expected.  @ Feel free to exercise your rights of free speech and expression.  F However, attacks against individual posters, or groups of posters, are strongly discouraged.    ------------------------------  $ Date: Wed, 4 Jul 2001 10:57:20 -02301 From: "Dan Kennedy" <dkennedy@marine-atlantic.ca>  Subject: Datatrieve $ Message-ID: <3b4319af.0@209.128.1.3>  H I have an application that gives me a COM file that should be able to beL used as an input file into Datatrieve. I have not used Datatrieve before andH can not find out how to imput the file into Datatrieve. Any help on this matter will be appreciated.    ------------------------------  % Date: Wed, 04 Jul 2001 20:26:11 -0400 / From: "Joe H. Gallagher" <dtrwiz@ix.netcom.com> ) Subject: Re: Datatrieve; using a COM file - Message-ID: <3B43B420.B0CA98B2@ix.netcom.com>    Dan Kennedy wrote: > J > I have an application that gives me a COM file that should be able to beN > used as an input file into Datatrieve. I have not used Datatrieve before andJ > can not find out how to imput the file into Datatrieve. Any help on this > matter will be appreciated.   N DATATRIEVE works just like all the other well behaved VMS programs.  From the  DATATRIEVE prompt,   	DTR> @yourcomfile  N where yourcomfile.com is the name of the COM file created by your application;G it (DATATRIEVE) will assume the .COM extension unless you specifically  ' override it by specifying an extension.   I If you are having difficulty starting DATATRIEVE, then try something like    	$ DTR :== $sys$system:dtr32 	$ DTR 	. . . 	DTR>   I The application which creates the COM file may or may not contain all the ? commands and statements you need to accomplish your objectives.   G If you have further specific questions on DATATRIEVE, you may E-mail me  directly at the address below.  
 Good luck.   Joe H. Gallagher, Ph. D.$ Former SIG Chair & Newsletter Editor) DATATRIEVE/4GL SIG of DECUS\\\\\Encompass  dtrwiz at ix dot netcom dot com " See "The DATATRIEVE Programmer" at2 http://www.geocities.com/SiliconValley/Pines/8958/   ------------------------------  % Date: Wed, 04 Jul 2001 21:01:58 +0100 + From: "antonio.carlini" <arcarlini@iee.org>  Subject: Re: DEC Net and TCP/IP ' Message-ID: <3B437636.9A41843C@iee.org>    Andy Proctor wrote: J > Kit: PWS433au/VMS7.1-1h2/TCPIP5.0 I have applied ECO 2 for tcpip and theK > required patches for VMS. Also i run the new Apache based web server from N > the compaq web site, and have applied the ECO's accordingly. DECnet phase V.$ > Loads of disk space and 128MB RAM. > 
 > Problem:L > I can SET HOST to any other VMS unit on our network (we don't cluster) butM > cannot set host to the unit in question. I can FTP to/from it, I can telnet K > to/from it. I use LPR/LPD and that is ok. Although FTP access is slow for   8 The IP connectivity works so you don't have a(n obvious) hardware issue.   2 You can SET HOST out so a fair bit of stuff works., You just cannot SET HOST from somewhere else to this box.  1 What are your other nodes (Phase IV or Phase IV)? , Are they all in the same area? (What are the( node numbers - don't list them all, just& pick one failing one and list that and your Phase V node).   & Try $ REPLY/ENABLE on you Phase V node, and see what messages (if any) are displayed. on the console when an incoming attempt fails.  & How do you set host from anywhere else to the box in question? Try 
 	SET HOST x.y + where x.y is the node number of the Phase V $ box (this skirts around any possible' problems with namespaces when initially % trying to get to the bottom of this).   ( If that works, then the other boxes have' the wqrong idea about your Phase V box  % name=>number translation. (A bit like % having a messed up dns entry for IP).    Phase IV boxes run only NSP.! Phase V boxes can run one or both % of NSP and OSI Transport (aka OSITP). & The usual mistake is to have a Phase V" running both but trying to talk to' a Phase IV box. The Phase V box decides * whether to use OSITP or NSP for it's first attempt base on the transport 
 precedence (check with 3 	$ MC NCL SHOW SESSION CONTROL TRANSPORT PRECEDENCE 9 you can abbreviate much of this if you don't like typing)   8 The default was (and may still be) to use OSITP followed/ by NSP. Cue 3 fruitless attempts at using OSITP . which a Phase IV box won't understand, each of0 which takes 30 seconds to time out. The CSC call1 usually gets logged before NSP gets a look in :-)   # Now if your other boxes are Phase V 8 boxes then a variant of this scenario is still possible.- If they are Phase IV boxes, then its a little - harder to see how this could happen (you must + be running NSP since you can talk to them).    One other thing to check is $  $ MC NCL SHOW ROUTING CIRCUIT * ALL2 and verify that you have "Enable Phase IV address"# set to True ... otherwise you won't ' get very far talking to Phase IV nodes.    Antonio        --     --------------- - Antonio Carlini             arcarlini@iee.org    ------------------------------  % Date: Wed, 04 Jul 2001 20:38:24 +0100 + From: "antonio.carlini" <arcarlini@iee.org> 2 Subject: Re: Difference between an 8820 and 8820-N' Message-ID: <3B4370B0.A9AAD21F@iee.org>    dittman@dittman.net wrote: > : > Is the only difference between an 8820 and an 8820-N the< > console system?  From the document the 8820 has a MicroVAX< > II and the 8820-N has a Pro380.  What card was used as the& > system interface in the MicroVAX II?  0 I've never been near either of these systems, so/ take with an appropriate pinch of salt, but ...   ' The VAX 8820-N was a 2 CPU box with no  0 expansion potential (it was really a VAX 8800). $ The VAX 8820 was a 2 CPU that could 0 (with the addition of a few more boxes I think)  become a 3 CPU or 4 CPU box.  / I *guess* it was all in the expansion potential . that the box came with. I'm not 100% sure that/ the 8820-N was just a new name for the VAX 8800 . (and with the number of bits and bobs in those2 things you'd have a long parts list to check ...).   Antonio    --     --------------- - Antonio Carlini             arcarlini@iee.org    ------------------------------   Date: 4 Jul 2001 12:23:30 -0700 ( From: kparris@my-deja.com (Keith Parris)/ Subject: Re: Disk Cluster Size for Oracle Disks = Message-ID: <cb85fed2.0107041123.481c44ea@posting.google.com>   s Dave Harrold <DRHarrold.nospam@earthlink.net> wrote in message news:<ihsjjtg28kc17q8a4innt8ain5nltkegrg@4ax.com>... ; > So, what tools can I use to measure the size of the I/Os?   @ If you have host-based RAID software, the RAID SHOW/FULL commandE displays a really nice table of I/O sizes and I/O rates for each size  range.  B If you have HS-series controllers, you can run the VTDPY utility. @ Several of its displays show average I/O rates/second along withD Kbytes/second transferred.  By dividing the I/O rate into the KbytesE value, you get average I/O size in Kbytes.  Multiply this result by 2 / to get the average I/O size in 512-byte blocks.    Keith    ------------------------------  $ Date: Wed, 4 Jul 2001 20:12:35 -0400( From: Bill Gunshannon <bill@cs.uofs.edu> Subject: Re: FreeVMSK Message-ID: <Pine.LNX.4.10.10107042005130.3327-100000@triangle.cs.uofs.edu>   6 On Tue, 3 Jul 2001, Roar [iso-8859-1] Thron=E6s wrote:  4 > Bill Gunshannon <bill@triangle.cs.uofs.edu> wrote: >=20I > : Actually, considering their purpose, I would be very surprised if you H > : could acquire these listings without having a "normal" (there's thatJ > : word again!) VMS license.  And you definitely couldn't use them as theJ > : basis for a competing OS, free or not.  These listings are unpublished >=20D > The listings are on the contrary very much published, but on a bit > closed circuit.   D Published vs. un-published are legal terms and not dictionary terms.E The agreement under which these listings are available is restrictive C and the works are, from a legal standpoint, are un-published works.    >=20D > : trade secret information with a very strong license behind them. >=20( > Such code is not even in the listings.  I All of it is trade secret.  I havee a feeling you have no idea whatsoever 0 about the legal standing of commercial software.   >=20I > : Disassembling VMS and using that information in any way would violate  >=20. > That would still be legal in some countries.  C Yeah, the same countries that allow bootlegging music CDs and DVDs. C All countries who are signatories of the Berne Convention recognize H the legal copyrights of other member countries.  That pretty much covers the civilized world.   >=20H > : any license agreement you signed.  And if you didn't sign a license,5 > : then you have stolen software in your possession.  >=20D > : FreeVMS would have to be developed in a cleanroom environment inF > : order to ever be free.  Otherwise, it is just a pirated version ofG > : VMS and is very likely to attract the attention of some very hungry  > : bottom feeders.  >=20G > You might mostly use just the VMS Internals and Data Structures book? G > It corresponds well with the code listings (so it seems so far, but I   > have not done much comparing).F > I assume both the book and code are based on the same design papers?  K If you used only published works, then your new OS might pass a legal test. F Of course, you would probably be bankrupted proving it, so it ends outF being a Phyrric victory.  All of this assumes no one working on it hasD ever seen the VMS sources, as they could lead to a court ruling that the new OS is polluted anyway.   bill =20l --=20vJ Bill Gunshannon          |  de-moc-ra-cy (di mok' ra see) n.  Three wolvesD bill@cs.scranton.edu     |  and a sheep voting on what's for dinner. University of Scranton   |C Scranton, Pennsylvania   |         #include <std.disclaimer.h>  =20l   ------------------------------  % Date: Thu, 05 Jul 2001 07:56:31 +0200  From: zessin@decus.dec+ Subject: Re: I didn't stick it upside down!m+ Message-ID: <009FE8B0.777A0C96.20@decus.de>c   Glad you liked it...     Scott Vieth wrote: > That's insane!!r >A= > How did you get the SBB in there?  Use a big rubber mallet?o  2 How do you know? Did you hear me pushing it in ;-)  H > Somebody must have pulled the front cover off the SBB and re-installed > it the wrong way.u  G Nope. I did unpack the carrier. The carton and the anti-static bag werec untampered.t  B > I bet this drive is good for a few laughs when the field service) > engineer comes out to look at things....  8 I won't waste my time to call field service - see below.  ,                                  --- --- ---   Robert Deininger wrote:b > Scott Smith wrote:6 >> Did anyone else notice that the sticker is missing? >pG > Yes, I noticed.  I wonde what's really in that canister.  Perhaps thet3 > missing plans for the Death Star, or Alpha EV8...h  D The HSZ-80 beleives it is a 36.4 GByte disk. There is also a sticker at the bottom - see below.  ,                                  --- --- ---   Roberta Sutter wrote:o3 > That's obviously the Southern Hemisphere variant.    You must be right!   The sticker at the bottem says:c  5 |-\      \        | /    ----    |        /\     /\/\a6 |  \      \       |/     \       |       /  \   /    \6 |   |   /--  ---  |--\    \      |  --- |    |  |    |6 |  /    \         |   |    \   \ |      |    |  |    |6 |-/      \        |--/   ---    \|      |    |  |    |   All others say:e  6 |-\      /        |--\   ---    /|      |    |  |    |6 |  \    /         |   |    /   / |      |    |  |    |6 |   |   \__  ---  |--/    /      |  --- |    |  |    |6 |  /      /       |\     /       |       \  /   \    /5 |-/      /        | \    ----    |        \/     \/\/c    ,                                  --- --- ---   Nigel Arnot wrote:F > ROFL. Was this a manufacturing, a field circus, or an in-house jape?   Original disk.  N > Actually, it's not hard and should be every bit as reliable as a normal one. >o< > All you do is ping the front off the storageworks canisterI > ( not easy!) and then if the connections are wires rather than flexiblehC > printed circuit stuff, rotate it 180 degrees and push it back on.t  = No problem. I have the correct tool to open the SBB carriers.p$ I have just rotated the front plate.  H > You might succeed with the flexi printed circuit stuff as well, but an/ > engineer would call that seriously bad taste.   ! I am afraid I didn't get that :-(w  A The carrier uses a (brown) flexible whatever-that-is-called-todayVB (it's 15+ years since I did my last printed circuit board design -? on a VAX-8600 - my, things have changed since then...) that can  easily stand a twist or two.  D > Disclaimer: if you try this trick and it doesn't turn out well for- > you, I accept absolutely no responsibility.d  @ Does that mean if I blew it I won't get a new drive from you ;-)     --  
 Uwe Zessin   ------------------------------  % Date: Wed, 04 Jul 2001 13:22:22 -0500e1 From: "David J. Dachtera" <djesys.nospam@fsi.net>o  Subject: Re: IA64 Rocks My World' Message-ID: <3B435EDE.B1D88653@fsi.net>    ia64 dog wrote:  > 	 > Hey!!!!- > C >      You know in many ways it is much better than the fate of the-G > abandoned dog I saw this morning. Its owner took this old dog removedrF > the collar and left it on the streets. I am sure Animal shelter will > take good care of it.7 > B >      At least Compaq had the decency to take it to Intel. It mayD > survive and maybe even thrive. I feel more sorry for the dog. Stop
 > whining!  C Clearly, your professional situation has no ties to the fortunes ofw either OpenVMS or Compaq.t   -- i David J. Dachtera  dba DJE Systems  http://www.djesys.com/  : Unofficial Affordable OpenVMS Home Page and Message Board: http://www.djesys.com/vms/soho/t  F This *IS* an OpenVMS-related newsgroup. So, a certain bias in postings is to be expected.  @ Feel free to exercise your rights of free speech and expression.  F However, attacks against individual posters, or groups of posters, are strongly discouraged.    ------------------------------  $ Date: Wed, 4 Jul 2001 14:40:58 -0400, From: "J. Scott Greig" <jsgreig@geminaq.com>! Subject: Re: Nodenames in clustere4 Message-ID: <9nJ07.35800$Mb7.1185794@brie.direct.ca>   You'll want to do    $ Help Lex F$CSID   J This will return (one at a time), the cluster ID of each available node in the@K cluster.  Use the returned value as then the third argument to a subsequent  call to F$GETSYI.d   Scott1  1 <"Ingemar Olson"@dairyworld.com> wrote in messagea) news:01K5J5QNUE7O90NMI5@dairyworld.com...eJ > I'd like to be able to determine the names of the nodes that exist / are& > available in the cluster (from DCL). > E > If I use f$getsyi I can use the cluster_nodes argument to determinea whetherlK > I'm in a cluster. But if I want to know what the other nodes are, then it K > seems I've got to have a list of nodenames before I can check whether anyy of' > them are part of the present cluster.b >sH > Seems there should be a way of finding the other node names if I don't alreadya > know what they are.  >t > What am I missing here?g >t > TIAa   ------------------------------  + Date: Wed, 04 Jul 2001 12:39:21 -0800 (PST)aE From: "Ingemar Olson, Sperling (604)444-7367" <IOLSON@dairyworld.com>.! Subject: Re: Nodenames in cluster-/ Message-ID: <01K5J7R2FXYC90NMI5@dairyworld.com>a   Bingo!  Thanks   >You'll want to do >o >$ Help Lex F$CSID >6K >This will return (one at a time), the cluster ID of each available node incF >the cluster.  Use the returned value as then the third argument to a  >subsequent call to F$GETSYI.w >o >Scott  ( ><"Ingemar Olson"@dairyworld.com> wrote K >> I'd like to be able to determine the names of the nodes that exist / areo' >> available in the cluster (from DCL).a >>F >> If I use f$getsyi I can use the cluster_nodes argument to determine
 >> whetherL >> I'm in a cluster. But if I want to know what the other nodes are, then itL >> seems I've got to have a list of nodenames before I can check whether any+ >> of them are part of the present cluster.t >>I >> Seems there should be a way of finding the other node names if I don't  >> already know what they are. >> >> What am I missing here? >> >> TIA   ------------------------------  % Date: Wed, 04 Jul 2001 13:52:01 -0500s1 From: "David J. Dachtera" <djesys.nospam@fsi.net>w! Subject: Re: Nodenames in cluster & Message-ID: <3B4365D1.B93BA0D@fsi.net>   "J. Scott Greig" wrote:w >  > You'll want to dom >  > $ Help Lex F$CSID  > L > This will return (one at a time), the cluster ID of each available node in > therM > cluster.  Use the returned value as then the third argument to a subsequentu > call to F$GETSYI.m  0 Example, lifted from the on-line DCl dictionary:  D $ IF F$GETSYI("CLUSTER_MEMBER") .EQS. "FALSE" THEN GOTO NOT_CLUSTER  $ CONTEXT = "" s $START:  $   id = F$CSID (CONTEXT)  $   IF id .EQS. "" THEN EXIT l) $   nodename = F$GETSYI ("NODENAME",,id) e $   WRITE SYS$OUTPUT nodename  $   GOTO start e $NOT_CLUSTER: 0 $ WRITE SYS$OUTPUT "Not a member of a cluster."  $ EXIT e    Verified on V7.2-1.e   --   David J. Dachterao dba DJE Systems  http://www.djesys.com/  : Unofficial Affordable OpenVMS Home Page and Message Board: http://www.djesys.com/vms/soho/a  F This *IS* an OpenVMS-related newsgroup. So, a certain bias in postings is to be expected.  @ Feel free to exercise your rights of free speech and expression.  F However, attacks against individual posters, or groups of posters, are strongly discouraged.    ------------------------------  + Date: Wed, 04 Jul 2001 13:06:34 -0800 (PST)M From: "Ingemar Olson " <>a! Subject: Re: Nodenames in clusterT/ Message-ID: <01K5J92BKZTG90NMI5@dairyworld.com>n   I Bingo'd too soon.r  E The other thing I need to be able to find out is whether the node is cL actually up and running. (and I can't shut anything down while I experiment)  I Does the f$csid only return info for nodes that are "up", or will it alsoPA include (eg) nodes that it knows have been up but now may not be?   G If it's the latter, which argument to f$getsyi will let me distinguish 4/ between the functional vs non-functional nodes?o    K btw: the reason I want this is that we have a menuing system and some app'saM are only supposed to run on a specific node. Unless that node is unavailable,h in which case any node is ok.e   ------------------------------  % Date: Wed, 04 Jul 2001 16:52:02 -0500-1 From: "David J. Dachtera" <djesys.nospam@fsi.net>h! Subject: Re: Nodenames in cluster/& Message-ID: <3B439002.EE04700@fsi.net>   Ingemar Olson wrote: >  > I Bingo'd too soon.b > F > The other thing I need to be able to find out is whether the node isN > actually up and running. (and I can't shut anything down while I experiment) > K > Does the f$csid only return info for nodes that are "up", or will it also C > include (eg) nodes that it knows have been up but now may not be?  > H > If it's the latter, which argument to f$getsyi will let me distinguish1 > between the functional vs non-functional nodes?  > M > btw: the reason I want this is that we have a menuing system and some app's-O > are only supposed to run on a specific node. Unless that node is unavailable,  > in which case any node is ok.u  G AFAIK, a node which is not currently a cluster member (i.e., not booted E and running as a cluster member) will not appear in the list of CSIDs.D returned by the lexical function. I've no way to test that, however.   -- p David J. Dachtera  dba DJE Systems  http://www.djesys.com/  : Unofficial Affordable OpenVMS Home Page and Message Board: http://www.djesys.com/vms/soho/k  F This *IS* an OpenVMS-related newsgroup. So, a certain bias in postings is to be expected.  @ Feel free to exercise your rights of free speech and expression.  F However, attacks against individual posters, or groups of posters, are strongly discouraged..   ------------------------------  % Date: Wed, 04 Jul 2001 13:25:43 -050031 From: "David J. Dachtera" <djesys.nospam@fsi.net>c, Subject: Re: PointSecure site was hacked ???' Message-ID: <3B435FA7.EC2355B9@fsi.net>u  * fabio_compaq@ep-bc.petrobras.com.br wrote: >  > It is working fine now ! > @ > This  worried me because they sell OpenVMS security products !  G ...but what do they run their website on? What security holes remain in ) the web server software on that platform.-   -- m David J. Dachterad dba DJE Systemsd http://www.djesys.com/  : Unofficial Affordable OpenVMS Home Page and Message Board: http://www.djesys.com/vms/soho/d  F This *IS* an OpenVMS-related newsgroup. So, a certain bias in postings is to be expected.  @ Feel free to exercise your rights of free speech and expression.  F However, attacks against individual posters, or groups of posters, are strongly discouraged.e   ------------------------------  % Date: Wed, 04 Jul 2001 15:39:24 -0300 ) From: fabio_compaq@ep-bc.petrobras.com.br , Subject: Re: PointSecure site was hacked ???L Message-ID: <OF61615F05.4843853B-ON03256A7F.00666274@ep-bc.petrobras.com.br>   David/  ; It is just a question of " good product and bad marketing".y5 And the site up means "good marketing" in the case of  a scurity company.   Regards    FC        B "David J. Dachtera" <djesys.nospam@fsi.net> em 04/07/2001 15:25:43  = Favor responder a "David J. Dachtera" <djesys.nospam@fsi.net>n             Info-VAX@Mvb.Saic.Comn      , Assunto: Re: PointSecure site was hacked ???    * fabio_compaq@ep-bc.petrobras.com.br wrote: >P > It is working fine now ! > @ > This  worried me because they sell OpenVMS security products !  F ..but what do they run their website on? What security holes remain in) the web server software on that platform.p   -- David J. Dachterac dba DJE Systems2 http://www.djesys.com/  : Unofficial Affordable OpenVMS Home Page and Message Board: http://www.djesys.com/vms/soho/   F This *IS* an OpenVMS-related newsgroup. So, a certain bias in postings is to be expected.  @ Feel free to exercise your rights of free speech and expression.  F However, attacks against individual posters, or groups of posters, are strongly discouraged.    ------------------------------  # Date: Wed, 04 Jul 2001 20:01:36 GMTl4 From: LESLIE@209-16-45-102.insync.net (Jerry Leslie), Subject: Re: PointSecure site was hacked ???) Message-ID: <ACK07.4993$%L5.64231@insync>f  0 David J. Dachtera (djesys.nospam@fsi.net) wrote:, : fabio_compaq@ep-bc.petrobras.com.br wrote: : >  : > It is working fine now ! : > B : > This  worried me because they sell OpenVMS security products ! : I : ...but what do they run their website on? What security holes remain inC+ : the web server software on that platform.- :  Per wwww.netcraft.com:  B   "The site www.pointsecure.com is running Apache/1.2.6 on Linux."  % --Jerry Leslie   leslie@clio.rice.eduo/                  leslie@209-16-45-97.insync.netM;                  leslie@209-16-45-102.insync.net is invalidx   ------------------------------  $ Date: Wed, 4 Jul 2001 22:06:53 +02006 From: "Paul Blenderman" <Paul.Blenderman@micronas.com>, Subject: Re: PointSecure site was hacked ???- Message-ID: <9hvt11$4dl$1@seebuck.freinet.de>r  L Unless the DNS servers are VMS-based, this is OT. www.pointsecure.com is OK.  K The problem is that you have a 50% chance of getting the wrong address from  DNS.) (IE, Netscape, etc., etc. is irrelevant.)1  B Two of the four DNS servers responsible for pointsecure.com are in
 inetlabs.com.h0 They supply the correct address: 65.104.226.166.  ? The other two are at gobase2.com. They supply the wrong addressr 63.149.157.92.H This address is in the same subnet as www.gobase2.com. The SOA record is also) completely different and doesn't point to0  D gobase2.com, phatchip.com and pointsecure.com all belong to the same	 companiesCJ at the same address that is also listed at the "real" www.pointsecure.com.F Unless this was a very thorough hacker, I'd say an admin probably just deliberately or  undeliberately screwed up.  2 "Alan Greig" <a.greig@virgin.net> wrote in message2 news:0ac6ktofnqcecofnndrf8nqpch186ai2cq@4ax.com...% > On Wed, 04 Jul 2001 10:29:51 -0300,., > fabio_compaq@ep-bc.petrobras.com.br wrote: >o8 > >I am connecting to the homepage and it is appearing : > >cA > > Name                    Last modified       Size  Descriptionr >lL >--------------------------------------------------------------------------- ----- 4 > > Parent Directory        15-Jun-2000 01:46      -4 > > Get out of here/        20-Jun-2001 17:01      - > >g > >f >eL >--------------------------------------------------------------------------- -----  > >r8 > >Apache/1.3.14 Server at phatchat.phatchip.com Port 80 >- > FC,a > C > I see the same as you but I suspect config problems rather than aaA > hacker. If you click on get out of here it does bring up a page. >> >  > -- > Alan   ------------------------------   Date: 5 Jul 2001 03:39:50 GMT 5 From: ccburgess@uqstu.jdstory.uq.edu.au (Ian Burgess)v- Subject: SSH client, FISH, -- sources anyone? . Message-ID: <9i0ni6$d25$1@bunyip.cc.uq.edu.au>  > I've just scanned the web for the source of a good SSH client. A recurring message is...0M "The OpenVMS SSH1 client is done by Christer Weinigel and Richard Levitte andu. is available at  http://www.free.lp.se/fish/."  ; It seems unanimous that FiSH is the way to go, but there istK a sad story -- a disk head crash has put LP (www.free.lp.se) out of action.l  ? If someone has the distribution from there I would be grateful.   % I think Richard Levitte would be too.r   Ian Burgess  University of Queensland I.Burgess[at]its.uq.edu.au www.its.uq.edu.aut   ------------------------------  $ Date: Thu, 5 Jul 2001 07:45:38 +1000- From: "Peter Mayne" <Peter.Mayne@au1.ibm.com>n" Subject: Re: The Alpha/IA64 Hybrid+ Message-ID: <9i02qf$rik$1@news.btv.ibm.com>   2 "Bill Todd" <billtodd@foo.mv.com> wrote in message" news:9htso9$go4$1@pyrite.mv.net... >kH > "Larry Kilgallen" <Kilgallen@eisner.decus.org.nospam> wrote in message/ > news:xCFJA14cPQtJ@eisner.encompasserve.org...| >c > ...  >n >   Absent RMS or an= > > equivalent, programs have to be specially written to make-$ > > use of the cluster capabilities. >0H > Trouble is, at least some of those other cluster facilities (SUN's and- > Tru64, for two) *do* have file systems that  >dE > a) allow concurrent access to the same data from multiple nodes and   5 Depending on your exact definition of "concurrent"...   H NFS allows concurrent access to the same data from multiple nodes, but IC wouldn't call a collection of NFS clients/servers a cluster (but myw0 definition of cluster is rather VMS-biased. 8-).  K Last time I worked with Tru64 (V5.1 I think), the Cluster File System (CFS)aK worked by having one of the nodes in the cluster serve the underlying AdvFS.J file system to the other nodes over the cluster memory channel. Therefore,L only that node could directly access the underlying file system, and it onlyG looked like the other nodes had direct access, with CFS server failoverIK happening as necessary. This is somewhat different to VMS, where nodes in alB cluster can access the file system concurrently and independently.   See4L http://tru64unix.compaq.com/faqs/publications/cluster_doc/cluster_51/HTML/AR HGVCTE/CHPSTRGN.HTM#sect-cfs     PJDM -- Peter Maynee IBM GSAt Canberra, ACT, Australia My own opinions.   ------------------------------  $ Date: Wed, 4 Jul 2001 18:56:42 -0400' From: "Bill Todd" <billtodd@foo.mv.com>/" Subject: Re: The Alpha/IA64 Hybrid( Message-ID: <9i06o8$3jk$1@pyrite.mv.net>  8 "Peter Mayne" <Peter.Mayne@au1.ibm.com> wrote in message% news:9i02qf$rik$1@news.btv.ibm.com... 4 > "Bill Todd" <billtodd@foo.mv.com> wrote in message$ > news:9htso9$go4$1@pyrite.mv.net...   ...-  J > > Trouble is, at least some of those other cluster facilities (SUN's and/ > > Tru64, for two) *do* have file systems that  > > G > > a) allow concurrent access to the same data from multiple nodes and  > 7 > Depending on your exact definition of "concurrent"...   E Perhaps.  Using an exacting definition, *no* existing system providesmH concurrent access to data at its persistent location (or that location's7 cache):  accesses are always serialized in some manner.l   >aJ > NFS allows concurrent access to the same data from multiple nodes, but IE > wouldn't call a collection of NFS clients/servers a cluster (but mye2 > definition of cluster is rather VMS-biased. 8-).  D That's because you're looking at implementation rather than external	 function.r  K Do you consider a VMS cluster that uses only node-private storage (exportedeJ for cluster-wide use) not to be a cluster for that reason?  Such a clusterG uses node-to-node communication to access data managed by another node,aJ but - especially when combined with distributed volume-shadowing - it sure9 works just about the same way as a cluster based on HSxs.y  I Or do you really feel that the distinction between node-to-node access at H the file level vs. node-to-node access at the device level is important?K The only difference I can see is that the latter potentially allows greater H scalability - but is there some lower limit (greater than 1) on how many- nodes it takes to make something a 'cluster'?2   >0G > Last time I worked with Tru64 (V5.1 I think), the Cluster File System6 (CFS) G > worked by having one of the nodes in the cluster serve the underlying  AdvFS!L > file system to the other nodes over the cluster memory channel. Therefore,I > only that node could directly access the underlying file system, and it  only0 > looked like the other nodes had direct access,   So what?  See above discussion.     with CFS server failoverpK > happening as necessary. This is somewhat different to VMS, where nodes inv aID > cluster can access the file system concurrently and independently.  D Different in implementation, certainly.  Different in user-perceivedJ function?  Hardly at all.  VMS does have faster recovery from node-failureL than current fail-over mechanisms provide, but there's no *intrinsic* reasonL that fail-over - especially planned fail-over - need take more than a second or two.n  F And as I said above, shared-disk clusters (and even pseudo-shared-diskB clusters using exported node-private drives) can potentially scaleF homogeneously to larger sizes than data-partitioned Unix clusters can.G OTOH, where data *is* readily partitionable, partitioned approaches can<I scale far larger than the VMS approach can (you and even I may think that7L using 'mount points' to cobble many base filesystems together into one largeJ one feels kind of kludgey, but Unix people seem to like it a lot, and theyH do have some credible reasons for this).  So that's something of a wash.  H In closing, I haven't yet found anything in your response that calls anyF statement (or implication) I made into question.  If you feel I missed something, please elucidate.   - bill   >  > Seeh >eL http://tru64unix.compaq.com/faqs/publications/cluster_doc/cluster_51/HTML/AR > HGVCTE/CHPSTRGN.HTM#sect-cfs >t >e > PJDM > --
 > Peter Maynea	 > IBM GSAu > Canberra, ACT, Australia > My own opinions. >i >    ------------------------------  % Date: Wed, 04 Jul 2001 20:58:26 -0400y- From: Jack Patteeuw <jjpatteeuw@peoplepc.com> " Subject: Re: The Alpha/IA64 Hybrid, Message-ID: <3B43BBB2.27ABEBAF@peoplepc.com>   Bill Todd wrote: .a .  .tG > b) include robust access and distributed (byte-range) lock management-N > facilities for that data that survive the loss of any node without requiringL > client applications to do anything but wait for storage fail-over and lockI > state re-build to occur (though it's reportedly not as fast as in a VMSlK > environment - something on the order of a minute, unless they've improvedt > the speed lately).    N A few years ago (5 -10) in one of the Un*x "rags" of the time (Un*x Review ?) L an author stated that a "distributed lock manager" was a theory that no one P had yet implemented !!  When a knowledgeable VMSer friend of mine sent a letter O to the editor explaining the VMS distributed lock manager the author dismissed I- it because it wasn't implemented on Un*x !!!!e  R Sigh, and most Un*x and MS admins still have never heard of a ISAM file structure Q and are happily paying millions to Oracle, MS, etc., etc. because no Un*x vendor  O would **DARE** to draw the ire of one of their major vendors by including ISAM oP support "native".  Other programmers are happy with their jobs security of "roll your own" db's !!!!r    
 Jack Patteeuw-   Burroughs CANDE-! GCOS (not the field name, the OS)1 Multics0 TOPS-10m TOPS-20J VMSh OSF/1d Digital Unix Tru64w Solaris:    What's next ?  Windows XP or ...   ------------------------------  % Date: Wed, 04 Jul 2001 21:00:05 -0400n- From: Jack Patteeuw <jjpatteeuw@peoplepc.com>w" Subject: Re: The Alpha/IA64 Hybrid, Message-ID: <3B43BC15.84C166AA@peoplepc.com>   Peter Mayne wrote: >vJ > NFS allows concurrent access to the same data from multiple nodes, but IE > wouldn't call a collection of NFS clients/servers a cluster (but myR2 > definition of cluster is rather VMS-biased. 8-).  4 The very thought that I live in fear of everyday !!!    
 Jack Patteeuw    ------------------------------  % Date: Wed, 04 Jul 2001 21:13:32 -0400d- From: Jack Patteeuw <jjpatteeuw@peoplepc.com>T" Subject: Re: The Alpha/IA64 Hybrid, Message-ID: <3B43BF3C.705CD606@peoplepc.com>   Larry Kilgallen wrote: > > > So far as I know, the term was first used by VMS about 1985,A > for what they offered then (not materially different from todaya4 > with regard to the capabilities under discussion). > 6 > Was there a prior use of "cluster" in this context ? > > > If not, then lesser copycat efforts are not really clusters.; > Greater copycat efforts are, but I have not heard of any,g9 > including Tru64.  The net effect includes the operating * > system and the software that runs on it.  K As a former VMSer (and now Tru64 admin not by choice) I agree with you 100%2	 Larry !!!@  J I can not imagine why any owner of a "group" of NFS mounted systems would J want to allow **SILENT** multiple write accesses to the same file !!!  It L sends shudders down my spine knowing that what "Joe" just wrote, "Suzie" canJ overwrite in a blink, and with out file version numbering, there is no way- that the original data can be save/recovered.a  H So why does CDFS (maybe with RRE) have file version numbering (on a fileK system that is typically written once) and no other Un*x file system does ?h  L There are pluses and minuses in most modern operating systems (except those M from MS, which are just made in the image and likeness of their god), so why -M aren't these being taught in most colleges and why hasn't the world of "open JL source" borrowed a few other "good ideas" ?  Could it be that penguin might ! get a smile like a Cheshire Cat ?     
 Jack Patteeuwt   ------------------------------    Date: 05 Jul 2001 03:05:03 +0100$ From: Rich Walker <rw@shadow.org.uk>" Subject: Re: The Alpha/IA64 Hybrid0 Message-ID: <m38zi4w2pc.fsf@lin-pc.shadow.local>  ) "Bill Todd" <billtodd@foo.mv.com> writes:a  : > "Peter Mayne" <Peter.Mayne@au1.ibm.com> wrote in message' > news:9i02qf$rik$1@news.btv.ibm.com... 6 > > "Bill Todd" <billtodd@foo.mv.com> wrote in message& > > news:9htso9$go4$1@pyrite.mv.net... >    [concurrent vs serialised]  L > > NFS allows concurrent access to the same data from multiple nodes, but IG > > wouldn't call a collection of NFS clients/servers a cluster (but myl4 > > definition of cluster is rather VMS-biased. 8-). > F > That's because you're looking at implementation rather than external > function.m > M > Do you consider a VMS cluster that uses only node-private storage (exportedeL > for cluster-wide use) not to be a cluster for that reason?  Such a clusterI > uses node-to-node communication to access data managed by another node,gL > but - especially when combined with distributed volume-shadowing - it sure; > works just about the same way as a cluster based on HSxs.   B Excuse me for butting in from a position of ignorance, but are youH describing a system where each "box" in the system has local CPU + RAM +G mass storage, and exports this to the other "box"es in the system, such D that as far as any user is concerned there is one (or more) pools of> storage of size bigger than that attached to any given "box"?   : Where such storage has mirroring properties, such that the0 catastrophic failure of any "box" is irrelevant?  C And the available storage bandwidth (read/write to mass storage) ist3 greater than the amount possessed by any one "box"?e  E If so, is there a *single* document available anywhere describing how. such a system is      K > Or do you really feel that the distinction between node-to-node access ateJ > the file level vs. node-to-node access at the device level is important?M > The only difference I can see is that the latter potentially allows greateroJ > scalability - but is there some lower limit (greater than 1) on how many/ > nodes it takes to make something a 'cluster'?e >  > >yI > > Last time I worked with Tru64 (V5.1 I think), the Cluster File System1 > (CFS)0I > > worked by having one of the nodes in the cluster serve the underlyings > AdvFSaN > > file system to the other nodes over the cluster memory channel. Therefore,K > > only that node could directly access the underlying file system, and it" > only2 > > looked like the other nodes had direct access, > ! > So what?  See above discussion.   F If the system is "interesting" then there cannot be a single node that? serves the filesystem to all other nodes, even if that node has G replication properties to make it fault-tolerant. To be interesting theaG filesystem must be organised from more than one node, with no notion of  a "master" node serving it.s   [snippety rest]e   -- eH rich walker | technical person | Shadow Robot Company | rw@shadow.org.uk7 front-of-tshirt space to let     251 Liverpool Road   |nH                                  London  N1 1LX       | +UK 20 7700 2487   ------------------------------  $ Date: Wed, 4 Jul 2001 22:24:07 -0400' From: "Bill Todd" <billtodd@foo.mv.com>*" Subject: Re: The Alpha/IA64 Hybrid( Message-ID: <9i0it5$hhk$1@pyrite.mv.net>  1 "Rich Walker" <rw@shadow.org.uk> wrote in messagep* news:m38zi4w2pc.fsf@lin-pc.shadow.local...+ > "Bill Todd" <billtodd@foo.mv.com> writes:,   ...t  E > > Do you consider a VMS cluster that uses only node-private storageu	 (exported1F > > for cluster-wide use) not to be a cluster for that reason?  Such a clustertK > > uses node-to-node communication to access data managed by another node,iI > > but - especially when combined with distributed volume-shadowing - its sure= > > works just about the same way as a cluster based on HSxs.o >eD > Excuse me for butting in from a position of ignorance, but are youJ > describing a system where each "box" in the system has local CPU + RAM +I > mass storage, and exports this to the other "box"es in the system, suchwF > that as far as any user is concerned there is one (or more) pools of? > storage of size bigger than that attached to any given "box"?n >e< > Where such storage has mirroring properties, such that the2 > catastrophic failure of any "box" is irrelevant? >lE > And the available storage bandwidth (read/write to mass storage) is.5 > greater than the amount possessed by any one "box"?s   Yup.   >1G > If so, is there a *single* document available anywhere describing hows > such a system is  ! Not one that I know of, off-hand.   H > If the system is "interesting" then there cannot be a single node thatA > serves the filesystem to all other nodes, even if that node hasnI > replication properties to make it fault-tolerant. To be interesting thedI > filesystem must be organised from more than one node, with no notion of  > a "master" node serving it.   L While what you consider 'interesting' is not of obvious relevance, and whileL a single *large* SMP node (that can fail over to a partner) with significantG available I/O bandwidth can in fact do a pretty good job of large-scaletL file-serving, and while failover itself is sufficiently complex (at least ifK you include the file-system design required to make it reasonably fast) foryD some people to find it 'interesting', what you describe in the aboveE paragraph is not what I described in the post to which you responded.    - bill   >. > [snippety rest]- >p > --J > rich walker | technical person | Shadow Robot Company | rw@shadow.org.uk9 > front-of-tshirt space to let     251 Liverpool Road   |iJ >                                  London  N1 1LX       | +UK 20 7700 2487   ------------------------------  $ Date: Wed, 4 Jul 2001 22:44:12 -0400' From: "Bill Todd" <billtodd@foo.mv.com>0" Subject: Re: The Alpha/IA64 Hybrid( Message-ID: <9i0k2s$i8u$1@pyrite.mv.net>  : "Jack Patteeuw" <jjpatteeuw@peoplepc.com> wrote in message& news:3B43BBB2.27ABEBAF@peoplepc.com...   ...0  L > A few years ago (5 -10) in one of the Un*x "rags" of the time (Un*x Review ?)I > an author stated that a "distributed lock manager" was a theory that no  one J > had yet implemented !!  When a knowledgeable VMSer friend of mine sent a letterF > to the editor explaining the VMS distributed lock manager the author	 dismissedo/ > it because it wasn't implemented on Un*x !!!!c  E Yeah, given that Unix is a half-dozen years older than VMS, one couldiL reasonably observe that it's been a dramatically slow learner in many areas.B Even in more recent conference papers, as often as not one sees noH references to IBM and DEC products (and perhaps others that I just don'tK happen to be familiar with) that have been on the market for years offeringiH much in the way of the novel approaches being discussed.  I suspect thatF most academics are as little-steeped in real-world engineering as mostL engineers are in academic work, and Unix advances have seemed mostly to come from academia.   >eI > Sigh, and most Un*x and MS admins still have never heard of a ISAM filec	 structurecK > and are happily paying millions to Oracle, MS, etc., etc. because no Un*x  vendorK > would **DARE** to draw the ire of one of their major vendors by including. ISAM > support "native".K  = I still hope to get my act together and create a light-weight4G record-structured standard file system that would open their eyes.  But K ISAM/VSAM from IBM and RMS from DEC did a pretty good job of scaring peopleeL away (and I say this as one of the original RMS developers):  the interfacesK were unbelievably convoluted, and though some of the higher-level languages K softened them a bit they still tended to make most of the options availabledK if you really wanted them and hence weren't really all that much simpler ine the end.  G For most of the last decade processor cycles and memory have been cheapeK enough to avoid the necessity for a great many of the options that made RMS-I so complex to approach at the FAB/RAB/NAM/XAB level:  it could be so much J simpler now.  There aren't too many people any more interested in making aI career out of understanding the intricacies of some vendor's file system,hF and it would be nice to be able to offer the rest of the world a thirdH option to either using Oracle (and the human DB administrator that comes; along with such use) or kludging up something on their own.e   - bill  ?   Other programmers are happy with their jobs security of "rolll > your own" db's !!!!  >4 >t > Jack Patteeuwm >. > Burroughs CANDEn# > GCOS (not the field name, the OS)C	 > Multics.	 > TOPS-10s	 > TOPS-20  > VMSe > OSF/1  > Digital Unix > Tru64A	 > Solarisy >n" > What's next ?  Windows XP or ...   ------------------------------  $ Date: Wed, 4 Jul 2001 23:01:37 -0400' From: "Bill Todd" <billtodd@foo.mv.com>I" Subject: Re: The Alpha/IA64 Hybrid( Message-ID: <9i0l3f$it7$1@pyrite.mv.net>  : "Jack Patteeuw" <jjpatteeuw@peoplepc.com> wrote in message& news:3B43BF3C.705CD606@peoplepc.com...   ...m  K > I can not imagine why any owner of a "group" of NFS mounted systems would K > want to allow **SILENT** multiple write accesses to the same file !!!  ItrJ > sends shudders down my spine knowing that what "Joe" just wrote, "Suzie" canaL > overwrite in a blink, and with out file version numbering, there is no way/ > that the original data can be save/recovered.e  H I must be missing something here:  VMS certainly allows concurrent writeJ (and over-write) access to files as well.  Versioning only comes into playI when an application deliberately creates a new version of a file and then F (typically, as with an editor) copies into that new version a modifiedL version of the old version - and while it's likely that the old version willH stick around for a while, it's by no means guaranteed (since old-version0 clean-up is a per-directory-settable parameter).  J I liked versioning, but it never caught on with the rest of the world, and8 the rest of the world does manage to survive without it.   >nJ > So why does CDFS (maybe with RRE) have file version numbering (on a fileK > system that is typically written once) and no other Un*x file system doesa ?s  C Probably because Andy Goldstein was DEC's representative to the ISOnL committee that designed the CDFS standard.  But it could also have somethingJ to do with the write-once nature of the medium (CD-R technology was on theL horizon back then, but not necessarily CD-RW):  versioning is a way to allowJ *effective* over-writes of existing files just by creating a new version -L and since the old version is going to stick around anyway, having the syntax to access it is nice.a   >oG > There are pluses and minuses in most modern operating systems (excepte thoseeJ > from MS, which are just made in the image and likeness of their god), so whydH > aren't these being taught in most colleges and why hasn't the world of "open - > source" borrowed a few other "good ideas" ?   L Versioning just wasn't a good *enough* idea to overcome the fact that no oneJ else used it.  RMS wasn't a simple enough good idea (or alternatively someH would say that its complexity moved it completely out of the 'good idea' column).  J The Unix approach to handling files as byte-streams, while a bit primitiveJ for some use, was, and remains, a good idea.  The first implementation wasI unbelievably inefficient, and even today a lot aren't that great, but the B *idea* was good (though I'd still like to see it supplemented by a+ cleanly-designed record-handling facility).    - bill      Could it be that penguin might# > get a smile like a Cheshire Cat ?o >r >  > Jack Patteeuwa   ------------------------------    Date: 05 Jul 2001 04:20:21 +0100$ From: Rich Walker <rw@shadow.org.uk>" Subject: Re: The Alpha/IA64 Hybrid0 Message-ID: <m366d8vz7u.fsf@lin-pc.shadow.local>  ) "Bill Todd" <billtodd@foo.mv.com> writes:e  3 > "Rich Walker" <rw@shadow.org.uk> wrote in messagee, > news:m38zi4w2pc.fsf@lin-pc.shadow.local...- > > "Bill Todd" <billtodd@foo.mv.com> writes:e >  > ...w > G > > > Do you consider a VMS cluster that uses only node-private storagea > (exported H > > > for cluster-wide use) not to be a cluster for that reason?  Such a	 > clusterdM > > > uses node-to-node communication to access data managed by another node,/K > > > but - especially when combined with distributed volume-shadowing - itM > sure? > > > works just about the same way as a cluster based on HSxs.o > >nF > > Excuse me for butting in from a position of ignorance, but are youL > > describing a system where each "box" in the system has local CPU + RAM +K > > mass storage, and exports this to the other "box"es in the system, suchdH > > that as far as any user is concerned there is one (or more) pools ofA > > storage of size bigger than that attached to any given "box"?o > > > > > Where such storage has mirroring properties, such that the4 > > catastrophic failure of any "box" is irrelevant? > >tG > > And the available storage bandwidth (read/write to mass storage) is 7 > > greater than the amount possessed by any one "box"?e >  > Yup. >  > >yI > > If so, is there a *single* document available anywhere describing how  > > such a system is > # > Not one that I know of, off-hand.w  H Drat. This is one of those problems that I was hoping someone, somewhere@ had solved in an interesting way, and then written up relativelyG concisely. Sadly, I lack the time to read VMS: I'm sure I *should*, bute< my should-read-queue is far too long for things like this...   > J > > If the system is "interesting" then there cannot be a single node thatC > > serves the filesystem to all other nodes, even if that node haswK > > replication properties to make it fault-tolerant. To be interesting the K > > filesystem must be organised from more than one node, with no notion ofs > > a "master" node serving it.r > N > While what you consider 'interesting' is not of obvious relevance, and while  $ Interesting in the following sense: ? 	single filesystem served from single fileserver machine with a-A 	logical volume manager mounting network block devices from other  	machines with lots of disks  4 is I believe do-able today with stock linux and NFS.  = 	single filesystem served from multiple machines with lots ofn" 	disks without fileserver+LVM+NBD   9 would be well worth reading the papers on how it worked. l  G It might be that something approaching the latter could be created fromsE a collection of machines serving disks to a collection of fileserversaE that collaborated to ensure their clients *saw* a coherent filesystem10 without having to create a coherent filesystem.   : So, rather than storage boxes ---> fileserver ---> clients 				   with replicationl   we have + 	storage boxes ---> fileserver ---> clientsw 				^e 				|f 				V	+ 	storage boxes ---> fileserver ---> clients  				^o 				|i 				Vf+ 	storage boxes ---> fileserver ---> clientsb  G with redundancy built in such that, after a write is committed, loss ofs+ any one piece of hardware is not a problem.   > I guess this requires the fileserver<->fileserver bandwidth to@ approximate the aggregate write-bandwidth of the clients, whilstE permitting the aggregate read-bandwith of the clients to saturate alls available links.  = But the case I'm interested in does not differentiate between!F fileservers and clients. All boxes in the group provide storage to the common storage pool.    N > a single *large* SMP node (that can fail over to a partner) with significantI > available I/O bandwidth can in fact do a pretty good job of large-scaleiN > file-serving, and while failover itself is sufficiently complex (at least ifM > you include the file-system design required to make it reasonably fast) for F > some people to find it 'interesting', what you describe in the aboveG > paragraph is not what I described in the post to which you responded.   
 Fair enough.    
 cheers, Rich.r   -- wH rich walker | technical person | Shadow Robot Company | rw@shadow.org.uk7 front-of-tshirt space to let     251 Liverpool Road   | H                                  London  N1 1LX       | +UK 20 7700 2487   ------------------------------  % Date: Wed, 04 Jul 2001 22:47:16 -0500t1 From: "David J. Dachtera" <djesys.nospam@fsi.net>2" Subject: Re: The Alpha/IA64 Hybrid' Message-ID: <3B43E344.DAB4CC99@fsi.net>p   Jack Patteeuw wrote: >  > Bill Todd wrote: > .a > .o > . I > > b) include robust access and distributed (byte-range) lock managementeP > > facilities for that data that survive the loss of any node without requiringN > > client applications to do anything but wait for storage fail-over and lockK > > state re-build to occur (though it's reportedly not as fast as in a VMStM > > environment - something on the order of a minute, unless they've improvedF > > the speed lately). > O > A few years ago (5 -10) in one of the Un*x "rags" of the time (Un*x Review ?)uM > an author stated that a "distributed lock manager" was a theory that no oneoQ > had yet implemented !!  When a knowledgeable VMSer friend of mine sent a letterlP > to the editor explaining the VMS distributed lock manager the author dismissed/ > it because it wasn't implemented on Un*x !!!!n  " Typical head-in-the-sand attitude.  S > Sigh, and most Un*x and MS admins still have never heard of a ISAM file structuresR > and are happily paying millions to Oracle, MS, etc., etc. because no Un*x vendorP > would **DARE** to draw the ire of one of their major vendors by including ISAMR > support "native".  Other programmers are happy with their jobs security of "roll > your own" db's !!!!c  E Once upon many moons ago, one or more UN*X distro "came with" C-ISAM.t. Been too long - don't remember which or why...   Once, but long ago...t   -- c David J. Dachterao dba DJE Systemsn http://www.djesys.com/  : Unofficial Affordable OpenVMS Home Page and Message Board: http://www.djesys.com/vms/soho/   F This *IS* an OpenVMS-related newsgroup. So, a certain bias in postings is to be expected.  @ Feel free to exercise your rights of free speech and expression.  F However, attacks against individual posters, or groups of posters, are strongly discouraged.e   ------------------------------  $ Date: Thu, 5 Jul 2001 00:05:50 -0400' From: "Bill Todd" <billtodd@foo.mv.com> " Subject: Re: The Alpha/IA64 Hybrid( Message-ID: <9i0oru$lf1$1@pyrite.mv.net>  1 "Rich Walker" <rw@shadow.org.uk> wrote in messaget* news:m366d8vz7u.fsf@lin-pc.shadow.local...   ...   % > Interesting in the following sense:i@ > single filesystem served from single fileserver machine with aB > logical volume manager mounting network block devices from other > machines with lots of diskse >u6 > is I believe do-able today with stock linux and NFS.  F Not when you throw in the reasonably-fast fail-over requirement:  thatH requires a 'careful-update' file system such as VMS's or a log-protectedG file system such as Reiserfs or ext3fs (neither of which AFAIK is quiteiJ ready for prime-time) or a log-structured file system (which AFAIK doesn't9 yet exist at all on Linux) to avoid fsck restart latency.   L I think IBM just released JFS in something that may no longer be a beta-testC version, so that might qualify.  But there still may not yet be anyaL underlying/surrounding fail-over support available on Linux (though multipleG groups are working in this area), including a standard mechanism for IP-J address fail-over (unless NFS now has some internal fail-over support that avoids the need for this).   >u> > single filesystem served from multiple machines with lots of" > disks without fileserver+LVM+NBD >o: > would be well worth reading the papers on how it worked. >aI > It might be that something approaching the latter could be created fromnG > a collection of machines serving disks to a collection of fileservers G > that collaborated to ensure their clients *saw* a coherent filesysteme1 > without having to create a coherent filesystem.a >m< > So, rather than storage boxes ---> fileserver ---> clients >    with replications >o	 > we have , > storage boxes ---> fileserver ---> clients > ^r > |d > VS, > storage boxes ---> fileserver ---> clients > ^n > |g > Vc, > storage boxes ---> fileserver ---> clients >"I > with redundancy built in such that, after a write is committed, loss oft- > any one piece of hardware is not a problem.t  K The above diagram - with all the 'fileserver' entries removed - is what wassI being recently discussed in comp.arch/comp.arch.storage in the 'commodityeL storage servers' thread (the storage boxes cooperate to serve files or space( on logical volumes directly to clients).   > @ > I guess this requires the fileserver<->fileserver bandwidth toB > approximate the aggregate write-bandwidth of the clients, whilstG > permitting the aggregate read-bandwith of the clients to saturate all  > available links.  
 Precisely.   > ? > But the case I'm interested in does not differentiate betweendH > fileservers and clients. All boxes in the group provide storage to the > common storage pool.  E There's nothing inherent in your diagram above (with the 'fileserver'eI entries removed) that would prevent the client/storage box interface fromeI being an internal one in a single box supporting both - which also halvesaG the total number of links required.  But the clients have to trust eachrI other a lot more than when they're separated from the storage servers andbK can rely upon storage-server protection mechanisms.  Clients that cooperatepL to export a single, stable application to a much wider external world (e.g.,E Web servers) tend to fall into this category, but clients that may bebL running virtually anything (which can compromise their robustness) or are inL the service of individuals (e.g., workstations) who may elect to reboot themA at any time may better be kept separate from the storage servers.h   - bill   ------------------------------  # Date: Thu, 05 Jul 2001 05:40:45 GMTn4 From: LESLIE@209-16-45-102.insync.net (Jerry Leslie)" Subject: Re: The Alpha/IA64 Hybrid) Message-ID: <x5T07.5003$%L5.64358@insync>l  & Bill Todd (billtodd@foo.mv.com) wrote: : I : I liked versioning, but it never caught on with the rest of the world, t> : and the rest of the world does manage to survive without it. :   @ IBM mainframes folks call versioned files "generation datasets".   From:e  0    http://members.aol.com/rexxauthor/mvsfile.htm     "Generation datasets.   H    These are exactly like ordinary sequential except that the operating ?    system manipulates their names in such a way that there are V.    historically-related versions of the file.   H    Although the name given in the JCL will not change each time you run I    the job, the operating system will change the actual name assigned so  @    as to create multiple versions or generations of the file..."  # Univac Exec 8 called them F-cycles.n  % --Jerry Leslie   leslie@clio.rice.edu /                  leslie@209-16-45-97.insync.neta;                  leslie@209-16-45-102.insync.net is invalida   ------------------------------    Date: 05 Jul 2001 06:35:44 +0100$ From: Rich Walker <rw@shadow.org.uk>" Subject: Re: The Alpha/IA64 Hybrid0 Message-ID: <m33d8cvsy7.fsf@lin-pc.shadow.local>  ) "Bill Todd" <billtodd@foo.mv.com> writes:r  3 > "Rich Walker" <rw@shadow.org.uk> wrote in message , > news:m366d8vz7u.fsf@lin-pc.shadow.local... >  > ...  > ' > > Interesting in the following sense:cB > > single filesystem served from single fileserver machine with aD > > logical volume manager mounting network block devices from other > > machines with lots of disksg > > 8 > > is I believe do-able today with stock linux and NFS. > H > Not when you throw in the reasonably-fast fail-over requirement:  thatJ > requires a 'careful-update' file system such as VMS's or a log-protectedI > file system such as Reiserfs or ext3fs (neither of which AFAIK is quite L > ready for prime-time) or a log-structured file system (which AFAIK doesn't; > yet exist at all on Linux) to avoid fsck restart latency.   F Missed that. Thanks. So, if "fileserver down for a while" is very bad,F then we need a filesystem that is always consistent, so we can replaceA the fileserver. Or, we need a filesystem that can be generated byh@ multiple fileservers concurrently, which is the interesting one.   > N > I think IBM just released JFS in something that may no longer be a beta-testE > version, so that might qualify.  But there still may not yet be anyrN > underlying/surrounding fail-over support available on Linux (though multipleI > groups are working in this area), including a standard mechanism for IPtL > address fail-over (unless NFS now has some internal fail-over support that > avoids the need for this).   > >h@ > > single filesystem served from multiple machines with lots of$ > > disks without fileserver+LVM+NBD > >C< > > would be well worth reading the papers on how it worked. > > K > > It might be that something approaching the latter could be created from I > > a collection of machines serving disks to a collection of fileservers0I > > that collaborated to ensure their clients *saw* a coherent filesysteme3 > > without having to create a coherent filesystem.m > >>> > > So, rather than storage boxes ---> fileserver ---> clients > >    with replicationo > >. > > we havem. > > storage boxes ---> fileserver ---> clients > > ^F > > |e > > Vm. > > storage boxes ---> fileserver ---> clients > > ^e > > |  > > Ve. > > storage boxes ---> fileserver ---> clients  F B'r't. The above diagram was tab-formatted in an attack of idiocy. TheF vertical arrows are supposed to be be between fileservers, rather than between storage boxes. r   > > K > > with redundancy built in such that, after a write is committed, loss ofd/ > > any one piece of hardware is not a problem.  > M > The above diagram - with all the 'fileserver' entries removed - is what wasiK > being recently discussed in comp.arch/comp.arch.storage in the 'commodityoN > storage servers' thread (the storage boxes cooperate to serve files or space* > on logical volumes directly to clients).  D AFAICT that discussion was more about what would go *in* the storageD boxes. The issue of how to coherently generate a filesystem across aD collection of collaborating boxes seemed to get glossed over, though could have missed it.   E I can see how you might generate a load-balancing logical volume withtD an intra-storage-box broadcast protocol for attach and detach, whichH raises an interesting question of what amount of redundancy is necessaryC in such a system given reasonable rates of removal of components...e   >  > > B > > I guess this requires the fileserver<->fileserver bandwidth toD > > approximate the aggregate write-bandwidth of the clients, whilstI > > permitting the aggregate read-bandwith of the clients to saturate alla > > available links. >  > Precisely. >  > >gA > > But the case I'm interested in does not differentiate between J > > fileservers and clients. All boxes in the group provide storage to the > > common storage pool. > G > There's nothing inherent in your diagram above (with the 'fileserver'tK > entries removed) that would prevent the client/storage box interface fromrK > being an internal one in a single box supporting both - which also halveso& > the total number of links required.   2 This is precisely the case I'm interested in, yes.  $ > But the clients have to trust eachK > other a lot more than when they're separated from the storage servers andnM > can rely upon storage-server protection mechanisms.  Clients that cooperateuN > to export a single, stable application to a much wider external world (e.g.,G > Web servers) tend to fall into this category, but clients that may begN > running virtually anything (which can compromise their robustness) or are inN > the service of individuals (e.g., workstations) who may elect to reboot themC > at any time may better be kept separate from the storage servers.-  D That's the redundancy problem. I assume that someone, somewhere, hasE done the equivalent of merging the spare disk space on a network intodG one big storage pool, and has dealt with the "random reboot" problem bys a RAID-type technique.  F I've been considering it from the low-end, where each machine has diskG space of a few tens of GB, and reboot probabilities are fairly low, but A machines might be very slow from time to time. However, without agC implementation of a filesystem that supports multiple machines each C generating the same logical volume as each other from the availableo  data, I can't see how to do it.   J So, my next question is: is there a filesystem that allows each machine inF the network to generate the same image as all the others, and maintainF synchronisation between them, and cope with permanent loss of parts of
 the storage? u   thanks for the commentsh  
 cheers, Rich.    -- tH rich walker | technical person | Shadow Robot Company | rw@shadow.org.uk7 front-of-tshirt space to let     251 Liverpool Road   | H                                  London  N1 1LX       | +UK 20 7700 2487   ------------------------------  # Date: Thu, 05 Jul 2001 05:53:55 GMTm From: newsuser@news.com (dondo)e" Subject: Re: The Alpha/IA64 Hybrid8 Message-ID: <Xns90D513130E541newsusernewscom@24.24.0.10>  0 "Peter Mayne" <Peter.Mayne@au1.ibm.com> wrote in! <9i02qf$rik$1@news.btv.ibm.com>: oF >Last time I worked with Tru64 (V5.1 I think), the Cluster File SystemA >(CFS) worked by having one of the nodes in the cluster serve theoH >underlying AdvFS file system to the other nodes over the cluster memoryH >channel. Therefore, only that node could directly access the underlyingH >file system, and it only looked like the other nodes had direct access,B >with CFS server failover happening as necessary. This is somewhatF >different to VMS, where nodes in a cluster can access the file system! >concurrently and independently.   >v  @ Nope - that's not how it works, at least in a connected cluster.  ; Though a single node serves the metadata to other nodes vianB the cluster interconnect, each node that has a direct path to the @ given disk uses that direct path (or multiple paths if they are + available) to access any actual file data. u  ? They do do this concurrently - multiple nodes initiate io's at nB the same time.  Of course, with single disks, it wouldn't *really*0 be concurrent, but large raid sets surely would.  D If a cluster is not fully connected to all the storage, or if a pathD failure occurs from one node, then the cluster interconnect is still6 used for data access (i.e. it would be "served" data.)  E This architecture is actually quite scalable, and failure resiliant, tB especially since all nodes can be serving file system metadata forD different filesystems.  The meta data traffic is still substantiallyB less than the lock traffic that would be required to do fine-grain? lock access, and it also permits more rapid recovery if a node r fails.   ------------------------------  $ Date: Thu, 5 Jul 2001 13:39:35 +0800 From: "mark" <mark@olc.com.au> Subject: Re: vax 4000/904 Message-ID: <W0T07.65$iB1.9749@nsw.nnrp.telstra.net>  = Thats not nice, were a real masterpiece in their time...lolol-        1 "Scott Vieth" <svieth@wi.rr.com> wrote in message # news:3B413979.614FB00A@wi.rr.com... L > You're chasing a VAXstation 4000 model 90?  It shouldn't take you too long to( > catch it.  They can't run very fast... >t > -Scott ;^) > 
 > mark wrote:  >l
 > > Hi there,t > >gI > > I am chasing a VAX 4000/90 with tape drive preferably, I thought thatu this > > was a good place to ask. > >fI > > If you think that you can help and you are in Australia please let me  know.  > >e
 > > Thanks >e   ------------------------------  % Date: Wed, 04 Jul 2001 11:02:50 -0700i! From: Tom Linden <tom@kednos.com>d( Subject: RE: VAX-11/780 boot disk needed9 Message-ID: <CIEJLCMNHNNDLLOOGNJIGELDCOAA.tom@kednos.com>s  L Almost any SMD drive will work, if you can find one.  I don't recall if that< interface ever made it to anything smaller than 8", however.   > -----Original Message-----> > From: yyyc186@mindspring.com [mailto:yyyc186@mindspring.com]) > Sent: Wednesday, July 04, 2001 11:24 AMh > To: Info-VAX@Mvb.Saic.Coml* > Subject: Re: VAX-11/780 boot disk needed >  >s2 > In <3B3B943B.3CAED199@Compaq.com>, on 06/28/2001B >    at 10:31 PM, Didier Morandi <Didier.Morandi@Compaq.com> said: >  > I > Are you talking about an RA Series disk drive?  There are lots of those1H > laying around and they will cost you more to ship than they are worth. >:H > Unless I'm confusing things with the 11/750 you should be able to boot > from the console tape. >l > Roland >  > >"Richard W. Schauer" wrote: > >> > >> Hi- > >>A > >> I have a VAX-11/780 in need of the boot disk for the consolea > processor.  I B > >> would like to run the machine, as it's in excellent condition > except forL > >> this missing disk.  If anyone has one they no longer need or can spare,@ > >> please let me know.  Also if this is the sort of thing that > still might beB > >> available from Compaq, let me know where to find it because I
 > haven't had 
 > >> any luckn >" > >You mean the 5 1/4 floppy?? > >Well... try DECUS.t& > >Or maybe Portobello Rd (London, UK) >r > >D.n > --= > -----------------------------------------------------------f > yyyc186@mindspring.com= > -----------------------------------------------------------h >d   ------------------------------  % Date: Wed, 04 Jul 2001 21:04:21 +0100i+ From: "antonio.carlini" <arcarlini@iee.org> ( Subject: Re: VAX-11/780 boot disk needed' Message-ID: <3B4376C5.27147453@iee.org>a  # I think he means the 8" floppy disk  that is used to kick the LSI-11* console processor into life so that it can, in turn, prod the VAX-11/780.p   These are somewhat lighter toa" ship than an RA60 (unless he needs the actual RX01 drive ...).c   Sorry, I don't have one.   Antonior     Tom Linden wrote:h > N > Almost any SMD drive will work, if you can find one.  I don't recall if that> > interface ever made it to anything smaller than 8", however. >  > > -----Original Message-----@ > > From: yyyc186@mindspring.com [mailto:yyyc186@mindspring.com]+ > > Sent: Wednesday, July 04, 2001 11:24 AMo > > To: Info-VAX@Mvb.Saic.Coms, > > Subject: Re: VAX-11/780 boot disk needed > >t > >f4 > > In <3B3B943B.3CAED199@Compaq.com>, on 06/28/2001D > >    at 10:31 PM, Didier Morandi <Didier.Morandi@Compaq.com> said: > >- > >-K > > Are you talking about an RA Series disk drive?  There are lots of thoseoJ > > laying around and they will cost you more to ship than they are worth. > >rJ > > Unless I'm confusing things with the 11/750 you should be able to boot > > from the console tape. > >o
 > > Roland > >   > > >"Richard W. Schauer" wrote: > > >>
 > > >> Hi- > > >>C > > >> I have a VAX-11/780 in need of the boot disk for the consoleg > > processor.  I D > > >> would like to run the machine, as it's in excellent condition > > except forN > > >> this missing disk.  If anyone has one they no longer need or can spare,B > > >> please let me know.  Also if this is the sort of thing that > > still might beD > > >> available from Compaq, let me know where to find it because I > > haven't hade > > >> any lucke > >a > > >You mean the 5 1/4 floppy?h > > >Well... try DECUS. ( > > >Or maybe Portobello Rd (London, UK) > >  > > >D.n > > --? > > -----------------------------------------------------------e > > yyyc186@mindspring.com? > > -----------------------------------------------------------i > >r   -- t   --------------- - Antonio Carlini             arcarlini@iee.orge   ------------------------------   Date: 4 Jul 2001 17:34:35 -0400 5 From: pechter@i4got.pechter.dyndns.org (Bill Pechter)s( Subject: Re: VAX-11/780 boot disk needed3 Message-ID: <9i025b$ep9$1@i4got.pechter.dyndns.org>i  ; In article <3b435f6a$3$lllp186$mr2ice@nntp.mindspring.com>,n   <yyyc186@mindspring.com> wrote:2 >In <3B3B943B.3CAED199@Compaq.com>, on 06/28/2001 A >   at 10:31 PM, Didier Morandi <Didier.Morandi@Compaq.com> said:t >  >lH >Are you talking about an RA Series disk drive?  There are lots of thoseG >laying around and they will cost you more to ship than they are worth.a >mG >Unless I'm confusing things with the 11/750 you should be able to boot  >from the console tape.v  C That's an 11/750.  There's no console tape on the 11/780 or 11/785.t >g >Roland- >f >>You mean the 5 1/4 floppy?< >----------------------------------------------------------- >yyyc186@mindspring.comn< >-----------------------------------------------------------  F BTW that's an 8 inch RX01 single density floppy on an 11/780 or 11/785E and an RT11 RL02 on an 8600/8650.  The only consoles with 5 1/4 frontoG end floppies on Vaxes I know are Pro350/380 based consoles on the latere 8xxxx stuff.   Bill --   --- >   Bill Gates is a Persian cat and a monocle away from being a >   villain in a James Bond movie              -- Dennis Miller 8   bpechter@shell.monmouth.com|pechter@pechter.dyndns.org   ------------------------------  # Date: Thu, 05 Jul 2001 03:50:10 GMTt+ From: "Nikita V. Belenki" <public@kits.net> D Subject: Re: Wailing and moaning.... (was: Compilers go to Intel...)A Message-ID: <StR07.154305$%i7.102630651@news1.rdc1.sfba.home.com>   F "Larry Kilgallen" <Kilgallen@eisner.decus.org.nospam> wrote in message- news:3muriWXVncvJ@eisner.encompasserve.org...)  J > > I can see where Intel would need the GEM backend, and the Q (and other OSE > > mfg's) would need specific frontends.  However, the question theno becomeseJ > > does Q pay Inhel royalties for the backend?  If so, does that kill anyG > > ability to include the compilers in the free kits (hobbyist, etc.)?rE > Doesn't anybody _read_ what has been posted here?  Michael CapellastG > is quoted as saying the transfer to Intel was _non_exclusive_rights_, . > meaning there can be no royalty requirement.  L No exclusive rights on code written by Intel? Is Compaq going to be a second
 Microsoft?   Kit.   ------------------------------  % Date: Wed, 04 Jul 2001 14:32:38 -0300a) From: fabio_compaq@ep-bc.petrobras.com.brs Subject: Yahoo and OpenVMSL Message-ID: <OF11C7F058.28C8E738-ON03256A7F.00604883@ep-bc.petrobras.com.br>  G The IT White Paper at Yahoo discovered almost 500 products for OpenVMS.d  k http://yahoo.knowledgestorm.com/customers/search.php?qt_type=&qt=openvms&col=ks1&st=1&rf=0&nh=50&type=P_A_Sg   This sounds good....   Regardse   FC   ------------------------------  % Date: Wed, 04 Jul 2001 13:43:49 -0500.1 From: "David J. Dachtera" <djesys.nospam@fsi.net>g Subject: Re: Yahoo and OpenVMS' Message-ID: <3B4363E5.6908CC96@fsi.net>s  * fabio_compaq@ep-bc.petrobras.com.br wrote: > I > The IT White Paper at Yahoo discovered almost 500 products for OpenVMS.a > m > http://yahoo.knowledgestorm.com/customers/search.php?qt_type=&qt=openvms&col=ks1&st=1&rf=0&nh=50&type=P_A_S   F Now - look through the list, eliminate the duplicates, locate the onesC that have a GUI interface then locate within that subset any app.'s C which are useful as competition to similar offerings for BillyWare.E   -- a David J. Dachteran dba DJE Systemse http://www.djesys.com/  : Unofficial Affordable OpenVMS Home Page and Message Board: http://www.djesys.com/vms/soho/c  F This *IS* an OpenVMS-related newsgroup. So, a certain bias in postings is to be expected.  @ Feel free to exercise your rights of free speech and expression.  F However, attacks against individual posters, or groups of posters, are strongly discouraged.    ------------------------------   End of INFO-VAX 2001.369 ************************local... >  > ...  > ' > > Interesting in the following sense:cB > > sing