INFO-VAX	Sat, 17 Dec 2005	Volume 2005 : Issue 700

Contents:
  Re: Another Backup suggestion
  Re: Another Backup suggestion
  Re: calloc() fails, no more memory
  Re: Clustering
  Re: Clustering
  Re: Clustering
  Re: Clustering
  Re: Clustering
  Re: Clustering
  Re: Clustering: switches reliability/redundancy
  Re: Database access from COBOL
  Re: Database access from COBOL
  Re: DESTA Memory hog
  Re: Location of initial page and swap files
  Re: monitoring VMS log files
  Re: monitoring VMS log files
  Re: monitoring VMS log files
  Re: monitoring VMS log files
  Re: MOUNT/BIND and BACKUP question
  Re: MOUNT/BIND and BACKUP question
  Re: PHONE error - Invalid specification of node or person. Try again.
  Re: PHONE error - Invalid specification of node or person. Try again.
  Re: PHONE error - Invalid specification of node or person. Try again.
  Re: R400X: converting DSSI shelves to SCSI
  Re: R400X: converting DSSI shelves to SCSI
  Samba v3 on VMS and HP VMS Roadmap
  Re: Samba v3 on VMS and HP VMS Roadmap
  Re: shadowing questions
  Re: shadowing questions
  Re: Updated VMS information

----------------------------------------------------------------------

Date: Fri, 16 Dec 2005 20:28:24 +0000 (UTC)
From: helbig@astro.multiCLOTHESvax.de (Phillip Helbig---remove CLOTHES to reply)
Subject: Re: Another Backup suggestion
Message-ID: <dnv818$paj$5@online.de>

In article <11nund.9b.ln@news.hus-soft.de>, Albrecht Schlosser
<ajs856@tiscali.de> writes:

> > Last week I added a new qualifier - /PROGRESS_REPORT=n.
> > The new qualifier instructs BACKUP to write a progress report to
> > SYS$OUTPUT (which may be a log file) every n seconds.  We write
> > the new CTRL-T message to the log file (data saved/restored, current
> > file, rate and estimated completion time) prefixed by
> > %BACKUP-I-PROGRESS_REPORT,
>
> Fine, but I'm missing some sort of time stamp, at least with the line
> "... starting verification pass" (or similar).  Sometimes it's
> interesting to know how long it took to write the saveset and when
> backup started verification (and backup date recording).

$ SET PREFIX

------------------------------

Date: Fri, 16 Dec 2005 22:30:10 -0600
From: David J Dachtera <djesys.nospam@comcast.net>
Subject: Re: Another Backup suggestion
Message-ID: <43A39451.ABD50FE0@comcast.net>

Guy Peleg wrote:
>
> "JF Mezei" <jfmezei.spamnot@teksavvy.com> wrote in message
> news:43A21D55.6E4354C4@teksavvy.com...
> > While freezing my toes off in this very cold weather, I got to think
> > about Guy Peleg's recent Backup improvements...
> >
> > I would suggest the following:
> >
> > BACKUP /LOG=INTERVAL=(DIR:3)
> > Writes a log line every time it has completed the backing up of a 3rd
> > level directory, as well as higher levels.
> > eg:
> >
> > 30-FEB-2006 07:43:25 dka200:[sys0.syshlp.examples]  9273 blocks   230202 blocks
> > 30-FEB-2006 08:20:21 dka200:[sys0.syshlp]           15000 blocks 3002383 blocks
> > 30-FEB-2006 09:10:34 dka200:[sys0]                  29392 blocks 3938282 blocks
> >
> > (eg: give the number of blocks in that directory, and a cumulative
> > number of blocks for the backup operation.)
> >
> > BACKUP /LOG=INTERVAL=100
> >
> > This one is simpler; it issues a standard line (filename) every 100
> > files it has processed.
> >
> > The first one gives very informative messages about which directories
> > contain many blocks, and where the cutoff really happens between tapes.
> >
> > The second one just gives a rough idea of where the backup has gotten
> > without filling a log file with a gazillion lines, or without
> > monopolising a terminal with constant output of filenames.
>
> Interesting ideas....
>
> Last week I added a new qualifier - /PROGRESS_REPORT=n.
> The new qualifier instructs BACKUP to write a progress report to
> SYS$OUTPUT (which may be a log file) every n seconds.  We write
> the new CTRL-T message to the log file (data saved/restored, current
> file, rate and estimated completion time) prefixed by
> %BACKUP-I-PROGRESS_REPORT,
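A sketch of how the qualifier described above might look in use. Since /PROGRESS_REPORT was not yet released at the time, the exact command spelling and the message layout here are assumptions based only on the description:

  $ ! Hypothetical usage, per the description above:
  $ ! write a progress line to SYS$OUTPUT every 60 seconds
  $ BACKUP/IMAGE/PROGRESS_REPORT=60 DKA200: MKA500:FULL.BCK/SAVE_SET
  $ ! each interval would emit a CTRL-T-style line, something like:
  $ !   %BACKUP-I-PROGRESS_REPORT, saved 230202 of 3938282 blocks, ...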
How 'bout:

  BACKUP
      /STATISTICS[=keyword[=value]]

          Keywords

          PROGRESS[=time_expression]
              Report progress ala CTRL+T every time_expression interval.
              time_expression is in the format hh:mm:ss.cc.  Useful
              mostly in batch.

          FINAL (default)
              Report statistics at the end of saveset creation, and
              at the end of the verification pass (if /VERIFY specified),
              including elapsed clock time, CPU time, I/O counts,
              virtual memory usage, etc.

--
David J Dachtera
dba DJE Systems
http://www.djesys.com/

Unofficial OpenVMS Hobbyist Support Page:
http://www.djesys.com/vms/support/

Unofficial Affordable OpenVMS Home Page:
http://www.djesys.com/vms/soho/

Unofficial OpenVMS-IA32 Home Page:
http://www.djesys.com/vms/ia32/

Coming soon:
Unofficial OpenVMS Marketing Home Page

------------------------------

Date: Fri, 16 Dec 2005 16:18:20 -0800
From: Ken Fairfield <my.full.name@intel.com>
Subject: Re: calloc() fails, no more memory
Message-ID: <dnvlgc$sgl$1@news01.intel.com>

Karsten Nyblad wrote:
> Hans Blom wrote:
>
>> Hello all,
>> I'm running OpenVMS 7.3-2 on an AlphaServer. A programmer has developed
>> an application that, in order to work with speed, wants to keep as much
>> as possible of the data in memory. At some point he does a calloc() to
>> get a memory area to keep about 400 000 pointers. He gets a null back
>> from the call, basically OpenVMS saying - sorry sir, no more memory!
>>
>> We have tried raising pgflquota, wsextent (in order to decrease need for
>> paging) and every other conceivable quota both in sysuaf and sysgen. We
>> can see that as long as the program works in memory everything is fine,
>> but as soon as wsextent is hit and it has to start paging, it fails.
>>
>> I'm stuck! Anybody got any ideas on what to do?
>>
>> Regards
>>
>> Hans Blom
>
> There is one system parameter that I do not understand why nobody has
> mentioned.  Is it just me, who has not had my hands on an OpenVMS
> machine for too long?

    I don't know why no one mentioned it either, but I haven't
    been paying particular attention to this thread...

    On VAXes, there is the SYSGEN parameter VIRTUALPAGECNT which
    limits the maximum virtual memory any process can have.  This
    parameter should be set less than the total of your physical
    memory plus your total pagefile space.  On Alphas, this parameter
    is obsolete (and was removed at some point rather early on).

    [...]

        Regards, Ken

--
I don't speak for Intel, Intel doesn't speak for me...
Ken Fairfield
D1C Automation VMS System Support
who:   kenneth dot h dot fairfield
where: intel dot com

------------------------------

Date: 16 Dec 2005 19:04:39 GMT
From: bill@cs.uofs.edu (Bill Gunshannon)
Subject: Re: Clustering
Message-ID: <40ghe7F19e38tU2@individual.net>

In article <CyAIKn3cvC3d@eisner.encompasserve.org>,
	koehler@eisner.nospam.encompasserve.org (Bob Koehler) writes:
> In article <11q5lqcgtoor5b0@corp.supernews.com>, Dave Froble <davef@tsoft-inc.com> writes:
>>
>> It's been a while, and I've never been real intimate with clusters, but
>> I think the biggest issue with running diskless workstations is paging.
>
>    That's why our "diskless" VAXstations all had local page and swap
>    disks.  Paging over 10BT is not efficient.
>

I could probably live with that.  I still have a bunch of 102MB and 340MB
SCSI disks left around here from the old Sparcstations.  Now all I need
is for someone from HP to call me up and say they took a whole bunch in
as trades and they are shipping them to me.  :-)  Even 4000's, although
I have never even seen one, but I assume they are better/faster/stronger
than the 3100's I am so familiar with. :-)

bill

--
Bill Gunshannon          |  de-moc-ra-cy (di mok' ra see) n.  Three wolves
bill@cs.scranton.edu     |  and a sheep voting on what's for dinner.
University of Scranton   |
Scranton, Pennsylvania   |         #include <std.disclaimer.h>

------------------------------

Date: 16 Dec 2005 19:07:24 GMT
From: bill@cs.uofs.edu (Bill Gunshannon)
Subject: Re: Clustering
Message-ID: <40ghjcF19e38tU3@individual.net>

In article <11q5lqcgtoor5b0@corp.supernews.com>,
	Dave Froble <davef@tsoft-inc.com> writes:
> Bill Gunshannon wrote:
>> In article <FD827B33AB0D9C4E92EACEEFEE2BA2FB773691@tayexc19.americas.cpqcorp.net>,
>> 	"Main, Kerry" <Kerry.Main@hp.com> writes:
>>
>>> As I recall there was a university that ran
>>> something like 120 WS's/servers in a cluster.
>>
>> This brings up an interesting question (at least for me!)
>>
>> You mentioned WS's above and I assume that means WorkStations.  What,
>> if anything, would be the advantage of building a cluster of, let's say,
>> 2 multi-processor Vaxen like I currently have in the department and
>> a dozen or so VS3100's?  Could all the VS3100's run diskless, getting
>> all their support from the HSJ served disks on the big boxes?  Assuming
>> the cluster traffic was all on a private ethernet and access to the
>> world was only through the two big boxes, would performance be good
>> enough?  Is there something important I missed?  Because I really have
>> no idea how a VMS Cluster works, never having built one but considering
>> it now.  (Especially if it can make the whole system more visible locally!)
>>
>> Then, of course, would come the biggest question.  Does HP have a bunch
>> of fully loaded VS3100's with big monitors that they want to truck up here
>> as a donation so I can build this dream VAX lab.  :-)
>>
>> bill
>
> It's been a while, and I've never been real intimate with clusters, but
> I think the biggest issue with running diskless workstations is paging.
H >   Back when VAXstation 3100 systems with a max of 32 MB of memory was I > the norm, paging was an issue.  Cluster members with sufficient memory  B > can do remote boots and all disk activity over an ethernet link. > G > If you were looking for some old VAXstations, I'd suggest VAXstation  K > 4000 models.  The model 60 can be found on ebay at times.  I'm sure that  F > somewhere they can be had if you'd just haul them away.  Don't know  > where 'somewhere' is.    I Me neither.  But, I would take a truckload of 4000's.  I just said 3100's H because that is the Vax Workstation I am most familiar with and even ownH (and still use) a couple of them myself.  As usual, Ebay isn't an optionB as this isn't a low-budget operation it's a no-budget operation.     bill   --  J Bill Gunshannon          |  de-moc-ra-cy (di mok' ra see) n.  Three wolvesD bill@cs.scranton.edu     |  and a sheep voting on what's for dinner. University of Scranton   |A Scranton, Pennsylvania   |         #include <std.disclaimer.h>       ------------------------------    Date: 16 Dec 2005 12:56:45 -0600; From: koehler@eisner.nospam.encompasserve.org (Bob Koehler)  Subject: Re: Clustering 3 Message-ID: <CyAIKn3cvC3d@eisner.encompasserve.org>   Z In article <11q5lqcgtoor5b0@corp.supernews.com>, Dave Froble <davef@tsoft-inc.com> writes: > J > It's been a while, and I've never been real intimate with clusters, but J > I think the biggest issue with running diskless workstations is paging.   D    That's why our "diskless" VAXstations all had local page and swap-    disks.  
Paging over 10BT is not efficient.    ------------------------------  % Date: Fri, 16 Dec 2005 17:35:35 -0500 ' From: Dave Froble <davef@tsoft-inc.com>  Subject: Re: Clustering 0 Message-ID: <11q6g6ketbtfa1b@corp.supernews.com>   Bob Koehler wrote:\ > In article <11q5lqcgtoor5b0@corp.supernews.com>, Dave Froble <davef@tsoft-inc.com> writes: > J >>It's been a while, and I've never been real intimate with clusters, but J >>I think the biggest issue with running diskless workstations is paging.  >  > F >    That's why our "diskless" VAXstations all had local page and swap/ >    disks.  Paging over 10BT is not efficient.  >   D Yes, I remember that.  Did some work for some people who had such a G set-up long ago.  I'm thinking that a system with more memory wouldn't  * page much, and could be entirely diskless.   --  4 David Froble                       Tel: 724-529-04504 Dave Froble Enterprises, Inc.      Fax: 724-529-0596> DFE Ultralights, Inc.              E-Mail: davef@tsoft-inc.com 170 Grimplin Road  Vanderbilt, PA  15486    ------------------------------  % Date: Fri, 16 Dec 2005 17:39:11 -0500 ' From: Dave Froble <davef@tsoft-inc.com>  Subject: Re: Clustering 0 Message-ID: <11q6gddk5fq272d@corp.supernews.com>   Bill Gunshannon wrote:2 > In article <11q5lqcgtoor5b0@corp.supernews.com>,, > 	Dave Froble <davef@tsoft-inc.com> writes: >  >>Bill Gunshannon wrote: >>U >>>In article <FD827B33AB0D9C4E92EACEEFEE2BA2FB773691@tayexc19.americas.cpqcorp.net>, - >>>	"Main, Kerry" <Kerry.Main@hp.com> writes:  >>>  >>> F >>>>                       As I recall there was a university that ran1 >>>>something like 120 WS's/servers in a cluster.  >>>> >>>  >>> < >>>This brings up an interesting question (at least for me!) >>> H >>>You mentioned WS's above and I assume that means WorkStations.  
What,K >>>if anything, would be the advantage of building a cluster of, let's say, F >>>2 multi-processor Vaxen like I currently have in the department andG >>>a dozen or so VS3100's?  Could all the VS3100's run diskless getting J >>>all their support from the HSJ served disks on the big boxes?  AssumingF >>>the cluster traffic was all on a private ethernet and access to theE >>>world was only through the two big boxes would performance be good G >>>enough?  Is there something important I missed because I really have J >>>no idea how a VMS Cluster works, never having built one but consideringM >>>it now. (Especially if it can make the whole system more visible locally!)  >>> J >>>Then, of course, would come the biggest question.  Does HP have a bunchM >>>of fully loaded VS3100's with big monitors that they want to truck up here : >>>as a donation so I can build a this dream VAX lab.  :-) >>>  >>>bill  >>>  >>J >>It's been a while, and I've never been real intimate with clusters, but J >>I think the biggest issue with running diskless workstations is paging. H >>  Back when VAXstation 3100 systems with a max of 32 MB of memory was I >>the norm, paging was an issue.  Cluster members with sufficient memory  B >>can do remote boots and all disk activity over an ethernet link. >>G >>If you were looking for some old VAXstations, I'd suggest VAXstation  K >>4000 models.  The model 60 can be found on ebay at times.  I'm sure that  F >>somewhere they can be had if you'd just haul them away.  Don't know  >>where 'somewhere' is.  >  >   K > Me neither.  But, I would take a truckload of 4000's.  I just said 3100's J > because that is the Vax Workstation I am most familiar with and even ownJ > (and still use) a couple of them myself.  As usual, Ebay isn't an optionD > as this isn't a low-budget operation it's a no-budget operation.   >  > bill >   D Put the word out that you're looking.  David Turner shipped me some I pieces a while back.  
I think they were in the way and he was glad to be
rid of them.  Contact the used equipment dealers.  Some will probably
blow you off, but some may be interested in seeing VMS in education.
You could also contact salvage people, if you know of any.

--
David Froble                       Tel: 724-529-0450
Dave Froble Enterprises, Inc.      Fax: 724-529-0596
DFE Ultralights, Inc.              E-Mail: davef@tsoft-inc.com
170 Grimplin Road
Vanderbilt, PA  15486

------------------------------

Date: Fri, 16 Dec 2005 17:22:41 -0600
From: Chris Scheers <chris@applied-synergy.com>
Subject: Re: Clustering
Message-ID: <43A34C41.40003@applied-synergy.com>

Bill Gunshannon wrote:

> In article <FD827B33AB0D9C4E92EACEEFEE2BA2FB773691@tayexc19.americas.cpqcorp.net>,
> 	"Main, Kerry" <Kerry.Main@hp.com> writes:
>
>> As I recall there was a university that ran
>> something like 120 WS's/servers in a cluster.
>
> This brings up an interesting question (at least for me!)
>
> You mentioned WS's above and I assume that means WorkStations.  What,
> if anything, would be the advantage of building a cluster of, let's say,
> 2 multi-processor Vaxen like I currently have in the department and
> a dozen or so VS3100's?  Could all the VS3100's run diskless, getting
> all their support from the HSJ served disks on the big boxes?  Assuming
> the cluster traffic was all on a private ethernet and access to the
> world was only through the two big boxes, would performance be good
> enough?  Is there something important I missed?  Because I really have
> no idea how a VMS Cluster works, never having built one but considering
> it now.  (Especially if it can make the whole system more visible locally!)
>
> Then, of course, would come the biggest question.  Does HP have a bunch
> of fully loaded VS3100's with big monitors that they want to truck up here
> as a donation so I can build this dream VAX lab.  :-)

I set up something like this once.  One of the questions you need to
ask yourself is: How important is reboot time for the cluster?

I once saw a cluster of 80-90 some VS3100s all being served by a single
8800 as the boot/disk server.  From power up to first login on all
workstations took over four hours.  (A 10MB ethernet can only do so much.)

The cluster I set up had close to 40 VS3100s.  The plan I used was:

1) Every machine has a page/swap disk.  (In this case, they were 52MB
RZ22s.)

2) Every fifth machine is a boot server.  These machines had enough disk
space for all software (VMS and layered apps), but no user data.  I
think we used 200MB RZ24s.

3) Each boot server serves four other machines.  All machines in this
"sub-cluster" are on the same ethernet segment.

4) The boot servers were kept synchronized with RSM.  (Is that product
still supported?)  Except for SCS IDs and such, all boot servers were
identical.

5) Boot servers were used as user workstations, just like the "diskless"
machines.  In fact, the users were not aware of which machines were boot
servers and which were not.

6) All user data was stored on a "disk server" which was not used as a
user workstation.  This would be your HSJ machines.  All shared cluster
files (SYSUAF, queue files, etc.) were stored on the disk server.

7) The disk server was the only machine with votes.  (If I redid this, I
would consider spreading some votes around to help spread lock
mastering, but this didn't seem to cause any performance problems for
us.)  The disk server did nothing but file serving, print queue
execution, and lock mastering.

This configuration worked very well.  Time from all machines off to
login at all machines was about 15 minutes.  This was with VS3100-30s.
All machines except the disk server were used as user workstations.
Users could use any workstation interchangeably.

User machines had no local data, so they did not need to be backed up.
Boot machines were identical to each other, so they could be restored
from another boot machine's disk, so they did not need to be backed up.
The only files that needed to be backed up were the user files on the
disk server.

--
-----------------------------------------------------------------------
Chris Scheers, Applied Synergy, Inc.

Voice: 817-237-3360            Internet: chris@applied-synergy.com
  Fax: 817-237-3074

------------------------------

Date: 16 Dec 2005 11:45:22 -0800
From: dave.baxter@bannerhealth.com
Subject: Re: Clustering: switches reliability/redundancy
Message-ID: <1134762322.182131.230380@g14g2000cwa.googlegroups.com>

Similar solution we use:

2 x 8-port, 1Gb (MMF) Cisco switches ("something" 3508), old and cheap.
Separate power, set up as two independent private segments (i.e.
no connection to main network.)
(Note: both switches have been in place and used in this fashion for
over 2 years, and I have had no problem with either.)

Ports 1-4 designated for Production cluster (3 ports used): 2 x GS1280's
and 1 x ES40.
Ports 5-7 designated for Non-Prod cluster (3 x ES40's).

Each node (Prod and Non-Prod) has 2 x 1Gb (MMF) Ethernet NICs with one
connection to each switch.

When the systems boot, cluster interconnect for each cluster is
independently established through both switches, and, as a third option,
via the main network (10/100 NIC).

I normally use SCACP to turn off the main network SCS traffic; however,
in the event of a switch failure reducing my redundancy on the cluster
interconnect, the main network can always be turned back on again.

Dave.


Main, Kerry wrote:
> > -----Original Message-----
> > From: Keith A. Lewis [mailto:klewis@OMEGA.MITRE.ORG]
> > Sent: December 15, 2005 6:01 PM
> > To: Info-VAX@Mvb.Saic.Com
> > Subject: RE: Clustering: switches reliability/redundancy
> >
> > "Main, Kerry" <Kerry.Main@hp.com> writes in article
> > <FD827B33AB0D9C4E92EACEEFEE2BA2FB7735F5@tayexc19.americas.cpqcorp.net>
> > dated Wed, 14 Dec 2005 17:35:34 -0500:
> >
> > >> From: JF Mezei [mailto:jfmezei.spamnot@teksavvy.com]
> > >>
> > >> Are there considerations if a cluster's nodes are all connected via the
> > >> same switch/hub?  If the latter fails, the whole cluster hangs and
> > >> becomes inaccessible.
> >
> > Yes.  It should un-hang when power is restored.  Of course, if the same hub
> > connects you to your network of clients, the cluster would still be
> > inaccessible regardless of whether it hung or not.
> >
> > >> Are hubs/switches considered "fault tolerant"?  If not, what possible
> > >> steps would a good site planner take to ensure a cluster isn't
> > >> jeopardized by some $50 switch/hub?
> > >>
> > >> Are hubs considered more "fault tolerant" than switches?
> > >>
> > >> Is it just a simple case of reserving spare ports on a backup
> > >> switch so that cluster ethernet connections can be moved one by one
> > >> before the main switch/hub is powered off for maintenance etc?
> >
> > >Simple solution is to establish VLAN between 2 trunked switch/routers
> > >and use separate NIC connections to each from each server.  This causes
> > >the switch to appear as a logical unit or virtual cluster interconnect
> > >box.  Entire switch/rtr fails and OpenVMS cluster keeps running, i.e. not
> > >even any application failover issues.
> > >
> > >With the right IP failover configs in place, you would not even lose a
> > >telnet connection.
> > >
> > >OpenVMS will load balance SCS across all configured and available
> > >connections.  By configured I mean not disabled with the SCACP utility.
> >
> > What we do is run 2 independent switches.  Can't v-lan them together because
> > the MAC addresses are the same across all cards due to DECNET phase IV.
> > With DECNET performance no longer important, Kerry's solution is better; I'd
> > change to that if I had the time.
> >
> > The configuration I describe is tolerant of any single fault, but you have
> > to stay on top of them because a double fault can hang the cluster.
> >
> > Example:  Nodes A-C each have 2 NICs and are connected to switches 1 and 2.
> > If NIC A-1 fails, everything still works.  But if NIC B-2 fails before A-1
> > is fixed, nodes A and B can no longer communicate.  C can still see both of
> > them, so it wants to keep the cluster together, and each of {A,B} wants to
> > kick the other out.  The result is a cluster-wide hang.
> >
> > --Keith Lewis              klewis {at} mitre.org
> > The above may not (yet) represent the opinions of my employer.
>
> Keith,
>
> With one customer's high-availability 3-node ES45 cluster we did the
> following:
>
> - dual Cisco switch/rtrs trunked together
> - 3 separate VLANs to isolate traffic protocols and increase security:
>   VLAN1 - normal TCPIP traffic (each connection was actually 2 NICs, one
>   to each switch)
>   VLAN2 - SCS traffic (each connection was actually 2 NICs, one to each
>   switch)
>   VLAN3 - DECnet traffic (each connection was only one NIC to one switch)
>
> Actually, we had a 4th as well:
>   VLAN4 - mgmt monitoring, security and console manager (had network-level
>   security controls on who could access devices on this VLAN)
>
> Hence with the above, if a switch failed, then DECnet to 1 or 2 servers
> might be cut, but all normal cluster traffic would continue.
>
> And a misc note - while there are always some trade-offs, the customer
> wanted to ensure that when they took 1 server down for planned reasons, if
> one of the remaining servers crashed, then the last node should still
> continue running.
> Hence, we simply made each system = 1 vote and made a
> quorum disk = 2 votes.
>
> Regards
>
> Kerry Main
> Senior Consultant
> HP Services Canada
> Voice: 613-592-4660
> Fax: 613-591-4477
> kerryDOTmainAThpDOTcom
> (remove the DOT's and AT)
>
> OpenVMS - the secure, multi-site OS that just works.

------------------------------

Date: 16 Dec 2005 12:52:43 -0600
From: koehler@eisner.nospam.encompasserve.org (Bob Koehler)
Subject: Re: Database access from COBOL
Message-ID: <ckNtst1kCl6i@eisner.encompasserve.org>

In article <1134746549.851800.108040@g49g2000cwa.googlegroups.com>, "rcyoung" <rcyoung@aliconsultants.com> writes:
> Here is an option as well.  Get the SIMH Vax simulator (a freebie)
> which runs on Linux, Mac/X, Alpha VMS, and Windows.  We have Rdb 7
> running on Mac/X 10.3 with the Compaq "C" compiler for doing some
> development/testing while traveling.  I see no reason why the Cobol
> compiler would not function equally well.  That way, each student can
> have their own "Vax" if they wish.  All you need is the hobbyist license
> for VMS and $30 for one set of media.  Load a new hobbyist license onto
> the disk image each year before the start of the semester.

   Use of the Hobbyist license for educational institutions is not
   permitted, unless there have been changes since I last read it.

   Educational institutions are offered discount programs.

------------------------------

Date: 16 Dec 2005 13:28:43 -0800
From: "rcyoung" <rcyoung@aliconsultants.com>
Subject: Re: Database access from COBOL
Message-ID: <1134768523.186871.137420@g49g2000cwa.googlegroups.com>

To be honest, I had not thought of the Hobbyist "fine print", so you
may be correct.  However, if individuals wanted to get it and use it for
school work "at home", then I think that would qualify.  I guess I was
thinking of it as the sort of thing that someone who wanted their "own"
system to tinker, do programming homework, etc. could get at near $0.  I
could see it as a very convenient way to do homework from home or off
campus without having to have a separate ISP connection back to school.

------------------------------

Date: Fri, 16 Dec 2005 22:14:29 -0600
From: David J Dachtera <djesys.nospam@comcast.net>
Subject: Re: DESTA Memory hog
Message-ID: <43A390A5.B141319B@comcast.net>

comp.os.vms@hotmail.com wrote:
>
> >From a DS25 4GB machine.
>
> $ pipe show sys | sear sys$input desta
>   0000045D DESTA Director  HIB      6  1729374   0 00:11:02.78    189857   34869 M
>
> This seems a tad excessive.

We just had an interesting issue at work:

3-node cluster.  DESTA Director on the third node was, for some reason,
holding an exclusive lock on a file on the CDE$DEFAULTS path, but DESTA
Director itself on that node was hung in a MUTEX state due to resource
exhaustion.  As a result, my system disk backup on the second node
hung.

My advice would be: don't bother running WEBES/DESTA until you need it.

--
David J Dachtera
dba DJE Systems
http://www.djesys.com/

Unofficial OpenVMS Hobbyist Support Page:
http://www.djesys.com/vms/support/

Unofficial Affordable OpenVMS Home Page:
http://www.djesys.com/vms/soho/

Unofficial OpenVMS-IA32 Home Page:
http://www.djesys.com/vms/ia32/

Coming soon:
Unofficial OpenVMS Marketing Home Page

------------------------------

Date: Fri, 16 Dec 2005 20:19:05 +0000 (UTC)
From: helbig@astro.multiCLOTHESvax.de (Phillip Helbig---remove CLOTHES to reply)
Subject: Re: Location of initial page and swap files
Message-ID: <dnv7fo$paj$3@online.de>

In article <43A1D89E.3EAFAEE8@teksavvy.com>, JF Mezei
<jfmezei.spamnot@teksavvy.com> writes:

> > Very carefully... BTW, I use the file name AAA<SCSID>.PAGE|SWAP and put it
> > in [000000].  This automajikally sticks it right in the centre of the
> > disk next to the index file on an /image restore.
>
> Does putting a file in [000000] result in physical file placement
> difference versus any other directory ?

An image restore will put INDEXF.SYS in the middle of the disk.  An
image restore will also defragment the disk, essentially restoring files
in directory order.  I don't know how, in detail, the disk is filled up,
but it makes sense that AAA<SCSID>.PAGE|SWAP would be near INDEXF.SYS.

Note that "centre of the disk" means "between the middle and the edge",
not the actual "centre of the disk".  :-)

------------------------------

Date: 16 Dec 2005 11:48:37 -0800
From: dave.baxter@bannerhealth.com
Subject: Re: monitoring VMS log files
Message-ID: <1134762517.603389.241410@g14g2000cwa.googlegroups.com>

Any pointers to the doc relating to this capability, and how it might
be used?  Can it be scripted in DCL or does it have to be called from
a higher-level language?

Dave

------------------------------

Date: 16 Dec 2005 16:16:41 -0600
From: Kilgallen@SpamCop.net (Larry Kilgallen)
Subject: Re: monitoring VMS log files
Message-ID: <jQNoOfeHp8ej@eisner.encompasserve.org>

In article <1134762517.603389.241410@g14g2000cwa.googlegroups.com>, dave.baxter@bannerhealth.com writes:
> Any pointers to the doc relating to this capability, and how it might
> be used?
can it be scripted in DCL or does it have to be called from  > a higher level language??   L From your message it is not at all clear what you mean by "this capability".J It is not good to assume that everybody's newsreader is the same as yours,L or that all posts arrive everywhere in the same order or even arrive at all.   ------------------------------  % Date: Fri, 16 Dec 2005 19:05:29 -0800 ( From: Jeff Cameron <roktsci@comcast.net>% Subject: Re: monitoring VMS log files 0 Message-ID: <BFC8C079.191EF%roktsci@comcast.net>    On 12/16/05 11:48 AM, in article6 1134762517.603389.241410@g14g2000cwa.googlegroups.com,D "dave.baxter@bannerhealth.com" <dave.baxter@bannerhealth.com> wrote:  G > Any pointers to the doc relating to this capability, and how it might I > be used!    can it be scripted in DCL or does it have to be called from  > a higher level language??  >  > Dave > D See the Guide to System security under setting up an "Audit ListenerG Mailbox". You also need the $FORMAT_AUDIT system service to convert the  binary audit message to ASCII.   Jeff   ------------------------------  % Date: Fri, 16 Dec 2005 22:19:53 -0600 2 From: David J Dachtera <djesys.nospam@comcast.net>% Subject: Re: monitoring VMS log files + Message-ID: <43A391E9.E15667DE@comcast.net>    Jeff Cameron wrote:  > ! > On 12/15/05 8:25 AM, in article F > 1134663926.819306.74400@g49g2000cwa.googlegroups.com, "Ken Robinson" > <kenrbnsn@gmail.com> wrote:  >  > >  > > tumblindice wrote: > >> Hi, > >>G > >>    We would like to monitor our VMS logfiles, operator.log and the J > >> security files to a machine that could send out email to notify us ofK > >> discrepancies. There are a few tools out there for UNIX but I have not A > >> seen anything for VMS. Any help or pointers would be greatly  > >> appreciated. Thank you. > > K > > There are a number of products like that. 
> > Two that come to mind are the
> > Unicenter Console Management for OpenVMS from CA and ConsoleWorks from
> > TECsys Developement <http://www.tditx.com/>.
> >
> > Ken
>
> Both the OPCOM and The Security Auditor have a calling interface allowing
> your process to intercept these messages.

A generic piece that would be useful to many ISVs would be a
simple-to-implement OPCOM listener/API.  ConsoleWorks / ConsoleManager
is just too much for many applications.

--
David J Dachtera
dba DJE Systems
http://www.djesys.com/

Unofficial OpenVMS Hobbyist Support Page:
http://www.djesys.com/vms/support/

Unofficial Affordable OpenVMS Home Page:
http://www.djesys.com/vms/soho/

Unofficial OpenVMS-IA32 Home Page:
http://www.djesys.com/vms/ia32/

Coming soon:
Unofficial OpenVMS Marketing Home Page

------------------------------

Date: 16 Dec 2005 13:14:37 -0600
From: briggs@encompasserve.org
Subject: Re: MOUNT/BIND and BACKUP question
Message-ID: <bi$X$48dZYt+@eisner.encompasserve.org>

In article <43A30037.9AF4333C@teksavvy.com>, JF Mezei <jfmezei.spamnot@teksavvy.com> writes:
> Not long ago, someone asked about BACKUP/IMAGE of a bound volume to a
> new single drive (impossible).
>
> I currently have a 10 gig drive known to all nodes in the cluster as
> $DISK2.  During migration, I have at my disposal 4 * 2gig DSSI drives
> that I could move the data to (it will fit) while the 10 gig SCSI drive
> is reinstalled from the almighty MicroVAX II to a new machine.
>
> Despite all the warnings about bound volumes, I think it is still the
> best/easiest solution since applications could still see a single $DISK2.
> Otherwise, I would have to go through all software configs to move
> individual software to their own drives.  And this bound volume would be
> temporary.
>
> I currently have 4 un-initialised drives.
>
> Does it matter how each drive is initialised before they are grouped
> into a bound volume ?  (/ACCESSED, /CLUSTER_SIZE, /DIRECTORIES etc.)

/ACCESSED -- the documentation says that's for ODS1.  One assumes that
you're going to be using ODS2.

/CLUSTER_SIZE -- choose something reasonable based on the standard
tradeoffs.  Bigger cluster size = more wasted space at the end of
small files.  Smaller cluster size = somewhat increased fragmentation
and increased size of the cluster bitmap.  Use the same cluster factor
that you would for an ordinary 8 gig volume.

/DIRECTORIES -- let it default.  The MFD on everything but the root
volume in the volume set should be empty except for the various
.SYS files.

/HEADERS -- total expected files on the volume set divided by 4.  Fudge it
a bit higher since the files may not be divided evenly across volumes.

/MAXIMUM_FILES -- let it default.  The disk space you save by reducing the
size of the index file bitmap isn't worth the pain associated with
running out of room for file headers.

> How about the very first drive.  Do I need to make its INDEXF.SYS bigger
> than normal so it can hold all the files of the volume set ?

NO!

Files on the volume set will be spread across each of the four INDEXF.SYS
files.  Files that live on a particular volume will have file headers in
that volume's INDEXF.SYS.  In the case of fragmented files that span from
volume to volume, each extent will have an extension header that lives
on the same volume as that particular extent.

The only thing special about the first volume in a volume set is that
the [000000] directory there serves as the root directory for the volume
set.  The file headers pointed to by directory entries in that [000000]
directory can be on any one of the other volumes.  This includes
directory files.
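[Editor's sketch: the qualifier advice above might look something like the
following hypothetical command sequence.  The device names, volume labels,
cluster size and header counts are invented for illustration, and the
commands have not been tested on a live system.]

```
$! Expecting roughly 40000 files across the set: 40000/4 = 10000
$! headers per volume, fudged up a bit per the advice above.
$ INITIALIZE /CLUSTER_SIZE=9 /HEADERS=12000 DIA1: DATA1
$ INITIALIZE /CLUSTER_SIZE=9 /HEADERS=12000 DIA2: DATA2
$ INITIALIZE /CLUSTER_SIZE=9 /HEADERS=12000 DIA3: DATA3
$ INITIALIZE /CLUSTER_SIZE=9 /HEADERS=12000 DIA4: DATA4
$! Bind the four members into one volume set seen as a single disk.
$ MOUNT /BIND=DISK2 DIA1:,DIA2:,DIA3:,DIA4: DATA1,DATA2,DATA3,DATA4
```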
So [000000]FOO.DIR could be on volume 1, [000000]BAR.DIR
on volume 2 and [000000]MEZEI.DIR on volume 3.

The [000000] directory that users use is always the one on volume 1.

One fun "stupid pet trick" is to use a directory tree starting at
the [000000] directory on one of the other volumes.  You end up with
an entirely independent directory tree sharing the same bound volume
set.  (No worries about doubly allocated disk blocks; the allocation
of free space is coherent regardless of what directories the files
are or are not catalogued in.)

> Second question:
>
> With BACKUP/IMAGE, I know that all file attributes, protection etc. are
> preserved.
>
> Since I can't use BACKUP/IMAGE from a single drive to a bound volume
> set, I need to be careful about providing the right backup qualifiers.
> I can think only of:
> 	/BY_OWNER=ORIGINAL in output disk spec

Yup.  /BY_OWNER=ORIGINAL is all you need.

> Which would preserve file ownership.  Are there other qualifiers I need
> to provide on a DISK to DISK backup to ensure that the files are a
> mirror image in terms of protection, owner, ACLs, create dates and any
> other attributes ?

As long as you stay away from /INTERCHANGE, your protection and ACLs
will be preserved.  Your other attributes are safe regardless.

	John Briggs

------------------------------

Date: Fri, 16 Dec 2005 21:11:24 +0100
From: Karsten Nyblad <nospam@nospam.nospam>
Subject: Re: MOUNT/BIND and BACKUP question
Message-ID: <43a31f68$0$78287$157c6196@dreader1.cybercity.dk>

JF Mezei wrote:
> Not long ago, someone asked about BACKUP/IMAGE of a bound volume to a
> new single drive (impossible).
>
> I currently have a 10 gig drive known to all nodes in the cluster as
> $DISK2.
> During migration, I have at my disposal 4 * 2gig DSSI drives
> that I could move the data to (it will fit) while the 10 gig SCSI drive
> is reinstalled from the almighty MicroVAX II to a new machine.
>
> Despite all the warnings about bound volumes, I think it is still the
> best/easiest solution since applications could still see a single $DISK2.
> Otherwise, I would have to go through all software configs to move
> individual software to their own drives.  And this bound volume would be
> temporary.
>
> I currently have 4 un-initialised drives.
>
> Does it matter how each drive is initialised before they are grouped
> into a bound volume ?  (/ACCESSED, /CLUSTER_SIZE, /DIRECTORIES etc.)
>
> How about the very first drive.  Do I need to make its INDEXF.SYS bigger
> than normal so it can hold all the files of the volume set ?

As far as I know there is no requirement that the disks be of the same
size, geometry, etc.  Volume sets are very different from disk striping
in that a volume set is a number of different disks, each with its own
directory structure, etc.  Files are placed on the disk with the most
free space, and thus volume sets are not as well suited as disk striping
and RAID for load balancing.  A file will only span two disks if there
was not enough space on the disk where the file was first allocated.
You will see that the first file written is written to the first disk,
the second to the second disk, the third file to the third disk, the
fourth file to the fourth disk, and the fifth file will be written to
the disk where the smallest of the first four files was placed.

> Second question:
>
> With BACKUP/IMAGE, I know that all file attributes, protection etc. are
> preserved.
>
> Since I can't use BACKUP/IMAGE from a single drive to a bound volume
> set, I need to be careful about providing the right backup qualifiers.
> I can think only of:
> 	/BY_OWNER=ORIGINAL in output disk spec
>
> Which would preserve file ownership.  Are there other qualifiers I need
> to provide on a DISK to DISK backup to ensure that the files are a
> mirror image in terms of protection, owner, ACLs, create dates and any
> other attributes ?
>
> (and yes, I am aware of alias directory issues).

Getting qualifiers right so that you copy the ACLs of the [000000] and [*]
directories is always a problem.  You need a backup command such that
these directories are explicitly copied.  Perhaps you can copy these
directories before and separately from the rest of the files.  That is
what I would do, and then I would check that all ACLs and protection
settings are right before copying the rest of the files.

------------------------------

Date: Fri, 16 Dec 2005 20:30:54 +0000 (UTC)
From: helbig@astro.multiCLOTHESvax.de (Phillip Helbig---remove CLOTHES to reply)
Subject: Re: PHONE error - Invalid specification of node or person. Try again.
Message-ID: <dnv85t$paj$6@online.de>

In article <1134706222.495175.229830@g14g2000cwa.googlegroups.com>, "Bob
Armstrong" <bob@jfcl.com> writes:

>   Ah, that's it -
>
> $sho log sys$node
>    "SYS$NODE" = "::" (LNM$SYSTEM_TABLE)
>
>   Next question - where is SYS$NODE supposed to be defined?

I think it is defined during DECnet startup.
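[Editor's sketch: one hedged way a startup file might define SYS$NODE by
hand without DECnet, using the DCL lexical F$GETSYI to fetch the SCSNODE
system parameter.  The exact form of the definition is an assumption, not
taken from this thread, and the fragment is untested.]

```
$! Hypothetical startup fragment: derive the node name from SCSNODE
$! and define SYS$NODE system-wide in the DECnet-style "NODE::" form.
$ node = F$EDIT(F$GETSYI("SCSNODE"), "TRIM")
$ DEFINE /SYSTEM /EXECUTIVE_MODE SYS$NODE "''node'::"
```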
When running DECwindows
without DECnet, I defined it by hand (in the startup) since DECwindows
starts with "Welcome to <SYS$NODE>".  Of course, it could and should get
the node name via other means; I guess this is a throwback to the days
when EVERY VMS system ALWAYS had DECnet running, so one could just
assume the logical was defined.

------------------------------

Date: 16 Dec 2005 16:19:40 -0600
From: Kilgallen@SpamCop.net (Larry Kilgallen)
Subject: Re: PHONE error - Invalid specification of node or person. Try again.
Message-ID: <SPLf0sMxgpH3@eisner.encompasserve.org>

In article <dnv85t$paj$6@online.de>, helbig@astro.multiCLOTHESvax.de (Phillip Helbig---remove CLOTHES to reply) writes:
> In article <1134706222.495175.229830@g14g2000cwa.googlegroups.com>, "Bob
> Armstrong" <bob@jfcl.com> writes:
>
>>   Ah, that's it -
>>
>> $sho log sys$node
>>    "SYS$NODE" = "::" (LNM$SYSTEM_TABLE)
>>
>>   Next question - where is SYS$NODE supposed to be defined?
>
> I think it is defined during DECnet startup.  When running DECwindows
> without DECnet, I defined it by hand (in the startup) since DECwindows
> starts with "Welcome to <SYS$NODE>".  Of course, it could and should get
> the node name via other means; I guess this is a throwback to the days
> when EVERY VMS system ALWAYS had DECnet running, so one could just
> assume the logical was defined.

Without DECnet running, the string MYNODE:: has no significance,
so the logical name should not be defined.

Someone wondering about the name of the local node should call SYS$GETSYI
to retrieve the SCSNODE parameter (whose contents will not include
trailing colons).

------------------------------

Date: Fri, 16 Dec 2005 18:02:35 -0500
From: BRAD <bradhamilton@comcast.net>
Subject: Re: PHONE error - Invalid specification of node or person. Try again.
Message-ID: <43A3478B.5080706@comcast.net>

Larry Kilgallen wrote:
<snip>
> Without DECnet running, the string MYNODE:: has no significance,
> so the logical name should not be defined.

Indeed, I ran into this very problem in my previous job.  The
application relied on SYS$NODE being defined in order for certain things
to work.  The local user's network "support" folks decided to disable
DECnet (who needs it???) and would not enable it, despite pleas from
the customer.  We had to customize the application as described below.

With the advent of similar network "support" schemes, it would probably
behoove application developers to review and (possibly) update code to
deal with these kinds of "excellent adventures".  :-)

> Someone wondering about the name of the local node should call SYS$GETSYI
> to retrieve the SCSNODE parameter (whose contents will not include
> trailing colons).

------------------------------

Date: 16 Dec 2005 13:00:59 -0600
From: koehler@eisner.nospam.encompasserve.org (Bob Koehler)
Subject: Re: R400X: converting DSSI shelves to SCSI
Message-ID: <Cy$wwHAF4OaR@eisner.encompasserve.org>

In article <43A2F515.7555FE0F@teksavvy.com>, JF Mezei <jfmezei.spamnot@teksavvy.com> writes:
> "Christian J. Bauer" wrote:
>> No need for SCSI controllers.  A HSD05 (or similar) would "route" the
>> DSSI to SCSI drives.  As far as I can remember there was also a version
>> for the RX400.
>
> Thanks for the pointer.  Something I don't quite understand is why much
> of the literature about the HSD05 and HSD10 mentions "Storageworks".
> Isn't DSSI totally dead ?

   The HSD05 and HSD10 are how we added Storageworks SCSI shelves to our
   DSSI-based disk farms when we had a hard time locating additional
   RF-series drives.

------------------------------

Date: Fri, 16 Dec 2005 21:40:44 +0800
From: prep@prep.synonet.com
Subject: Re: R400X: converting DSSI shelves to SCSI
Message-ID: <87wti5ma77.fsf@prep.synonet.com>

JF Mezei <jfmezei.spamnot@teksavvy.com> writes:

> This week, I adopted a stray R400X with 5 drives (4*2gig, 1*1gig).
>
> Those drives are noisy and generate heat and eat electricity.  (They are
> the big 5.25" full-height format.)

I thought Canucks would LIKE heat...

> The specs for the R400X mention that the backplane is actually 3
> layers: power, SCSI and DSSI.  They also mention that it is possible
> to insert SCSI drives next to DSSI drives, with the SCSI drives
> picking up the signals from the SCSI layer and the DSSI drives from
> the DSSI layer.  Neat design.

DSSI is SCSI done right.  You can use the bus for DSSI or SCSI.

> Tough question: if I were to cannibalise the 1 gig DSSI drive, could
> I reuse the mounting apparatus and get it to feed me SCSI signals ?
> (i.e.: does the connector into the backplane get all signals and then
> just feed the DSSI stuff to the drive ?)

It is just a 50 pin header...

> I know that the mounting bracket provides power to the drive in a
> standard power plug.

Well, no.  DSSI has a *5* pin plug so the disks know what day it is...
You need the 5-4 pin power widget from a TK70.

> Failing this, I could always string my own ribbon cable in the
> backplane and use that, but it wouldn't be as "neat".
>
> Also, are DSSI drives hot removable/replaceable from the cabinet, or
> must it be powered down ?

Yes.
For a random power flake value of yes.

> My goal to keep my cluster up and web server running while I move to
> my newly acquired machines is proving more difficult than I thought.
>
> Also, does anyone know if there are QBUS SCSI interfaces which allow
> clustering/dual access to drives ?

Yes, but it won't do SCS over the DSSI.

> Right now, the advantage of the DSSI drives is that I can get 2
> VAXes to access them directly.  Just need to make sure that the DSSI
> controllers have different BUS IDs.  (And I think some SCS traffic
> travels through the DSSI, right ?)

Yep.  You can also look out for HSDs and use SCSI disks.

> Any chance I could do this with 1992 vintage SCSI controllers on the
> QBUS ?  (2 VAXes accessing the same drives.)

Do you want it to work as well?  I think a CMD will do OK, or some of
them.

--
Paul Repacholi                               1 Crescent Rd.,
+61 (08) 9257-1001                           Kalamunda.
                                             West Australia 6076
comp.os.vms,- The Older, Grumpier Slashdot.
Raw, Cooked or Well-done, it's all half baked.
EPIC, The Architecture of the future, always has been, always will be.

------------------------------

Date: Fri, 16 Dec 2005 13:24:24 -0700
From: "Michael D. Ober" <obermd.@.alum.mit.edu.nospam>
Subject: Samba v3 on VMS and HP VMS Roadmap
Message-ID: <ZnFof.38$44.431@news.uswest.net>

To the VMS engineering team, is this still accurate?  If so, will password
synchronization and external authentication features be implemented?

Mike Ober.

>> From: Michael Ober mdo@wakeassoc.com
>>
>> Is anyone working on a port of Samba v3 or later to VMS?  Also, is anyone
>> working on supporting the VMS EXTAUTH feature, which is currently only
>> supported by Pathworks Advanced Server?

> Does HP count?  According to:
>
> http://h71000.www7.hp.com/openvms/roadmap/openvms_roadmaps_files/openvms_roadmaps.pdf
>
> it's on its way.  I quote (page 38):
>
> Samba V3.x Evaluation Release for
> Integrity Feb 2006 & Alpha March 2006
>
> Samba Production Release Alpha
> & Integrity H2 2006
>
> Knowing this might tend to discourage independent work.
> I know nothing about the features to be supported.
> --------------------------------------------------------------------
> Steven M. Schweda

------------------------------

Date: Fri, 16 Dec 2005 21:57:22 GMT
From: John Malmberg <malmberg@dskwld.zko.hp.compaq.dec>
Subject: Re: Samba v3 on VMS and HP VMS Roadmap
Message-ID: <6LGof.801$Q96.6@news.cpqcorp.net>

Michael D. Ober wrote:
> To the VMS engineering team, is this still accurate?

http://h71000.www7.hp.com/network/CIFS_for_Samba.html

> If so, will password synchronization and external
> authentication features be implemented?

As I understand it, it is in the plans for the production release.  I do
not think it will be ready for the evaluation release.

-John
malmberg@dskwld.zko.hp.compaq.dec
Personal Opinion Only

------------------------------

Date: 16 Dec 2005 15:19:10 -0500
From: brooks@cuebid.zko.hp.nospam (Rob Brooks)
Subject: Re: shadowing questions
Message-ID: <Dh01eBMRt2EE@cuebid.zko.hp.com>

helbig@astro.multiCLOTHESvax.de (Phillip Helbig---remove CLOTHES to reply) writes:
> In article <Bc1of.677$w55.298@news.cpqcorp.net>, Keith Parris
>
>> The Shadowing developer is also looking into the possibility that
>> Shadowing may be able to, at the point where a member has to be removed,
>> convert a mini-merge bitmap to a mini-copy bitmap for that removed
>> member, and thus track all the changes subsequent to its loss, and allow
>> a mini-copy operation to reintegrate it later.  This would be
>> particularly handy to allow mini-copies in disaster-tolerant clusters
>> after a failure which results in downtime of either one site or of the
>> inter-site link.
>
> Yes!

Absent something completely unexpected, this functionality will be present
in the field test version of V8.3.

--

Rob Brooks    VMS Engineering -- I/O Exec Group     brooks!cuebid.zko.hp.com

------------------------------

Date: Fri, 16 Dec 2005 21:42:24 -0600
From: David J Dachtera <djesys.nospam@comcast.net>
Subject: Re: shadowing questions
Message-ID: <43A38920.DDC808A1@comcast.net>

prep@prep.synonet.com wrote:
>
> David J Dachtera <djesys.nospam@comcast.net> writes:
>
> > I should think there ought to be a way to recover the bitmap data
> > from a crash dump and write it to a file.
>
> Do you want to eat week-old road kill as well?  There is a reason for the
> crash, and without a lot of careful checking, none of the data in a dump
> can be trusted.....

Depends on the cause of the crash.

--
David J Dachtera
dba DJE Systems
http://www.djesys.com/

Unofficial OpenVMS Hobbyist Support Page:
http://www.djesys.com/vms/support/

Unofficial Affordable OpenVMS Home Page:
http://www.djesys.com/vms/soho/

Unofficial OpenVMS-IA32 Home Page:
http://www.djesys.com/vms/ia32/

Coming soon:
Unofficial OpenVMS Marketing Home Page

------------------------------

Date: Fri, 16 Dec 2005 20:23:42 +0000 (UTC)
From: helbig@astro.multiCLOTHESvax.de (Phillip Helbig---remove CLOTHES to reply)
Subject: Re: Updated VMS information
Message-ID: <dnv7oe$paj$4@online.de>

In article <1134687447.342172.122080@g14g2000cwa.googlegroups.com>,
"Sue" <susan_skonetski@hotmail.com> writes:

> Host Based Volume Shadowing (HBVS) Basics and Beyond
>
> BRUDEN Corporation is proud to announce that it will be sponsoring
> deliveries of a new course developed by John Andruszkiewicz (aka John
> AtoZ).  John, a former member of OpenVMS Engineering, was responsible
> for developing and maintaining the HBVS product.

I have no idea why John Andruszkiewicz is a FORMER member of OpenVMS
Engineering.  Maybe he left voluntarily.  If not, what sense does it
make to let go of such knowledgeable folks?

> Please check the BRUDEN web site (www.BRUDEN.com) or contact Bruce
> Ellis (Bruce.Ellis@BRUDEN.com) for details.

Aaahhh, Bruce Ellis, author of THE HITCHHIKER'S GUIDE TO VMS.

------------------------------

End of INFO-VAX 2005.700
************************