INFO-VAX	Sun, 17 Aug 2003	Volume 2003 : Issue 454

Contents:
  Re: 306GB drives!
  Re: 306GB drives!
  Re: 306GB drives!
  Re: 306GB drives!
  RE: 306GB drives!
  RE: 306GB drives!
  Advanced Server 7.3A & Active Directory
  Bugchecking on Alpha evax_bugchk
  Re: Bugchecking on Alpha evax_bugchk
  DECdocument and pdfmark
  Re: DECnet problem
  Re: DECnet problem
  Re: DECnet problem
  Re: DECnet problem
  decw$mwm change process name
  Re: decw$mwm change process name
  Re: itrc - H.P. I.T. Resource Center.
  Re: itrc - H.P. I.T. Resource Center.
  Re: itrc - H.P. I.T. Resource Center.
  Priviliged Library Vector flags - PLV$M_WAIT_CALLERS_MODE
  Re: Priviliged Library Vector flags - PLV$M_WAIT_CALLERS_MODE
  Re: Querying UAF from MS Windows
  Re: Querying UAF from MS Windows
  Re: Security -- MicroSoft Style
  Re: Security -- MicroSoft Style
  Re: Website Based on ASP and VMS.

----------------------------------------------------------------------

Date: 16 Aug 2003 11:10:18 -0500
From: young_r@encompasserve.org (Rob Young)
Subject: Re: 306GB drives!
Message-ID: <Xrz7BErQmnjo@eisner.encompasserve.org>

In article <%lj%a.93485$o27.2125405@twister.rdc-kc.rr.com>, "Mike Naime" <mnaime@kc.rr.com> writes:
> OK Rob, I'll bite on this discussion.
>
> Rob Young <young_r@encompasserve.org> wrote in message
> news:Wea6H8AO1F0F@eisner.encompasserve.org...
>> In article <5uYZa.78482$7O4.1900368@twister.rdc-kc.rr.com>, "Mike Naime"
> <mnaime@kc.rr.com> writes:
>>
>> Why or how would you use them?
>
> BACKUPS!!!!
>

    Okay.  As long as it isn't mission critical - go ahead
    and use it.  You didn't mention usage in your initial post.

    But you certainly could do a cheaper solution for backups
    (if new) since you have existing infrastructure it makes
    a lot of sense for you to pop in bigger drives.

>
> Oh, and did I mention backups?
>

    Sure.

>>
>> Would you put anything mission critical on a 5 member
>> RAID-5 (for example) and serve that LUN up to VMS or NT?
>>
>
> W2K clusters -  Yes, We do it.  They are using about 3% of my SAN spindle
> count.  The Winders world doesn't like to play nice on the SAN.  They are in
> the process of moving off the SAN to MSA1000's
>

    But you wouldn't want to use that larger storage here, right?

> VMS Yes!  85% of SAN space.  We have one Production client system that has
> an oracle database on 4 different 6x18GB raidsets.

    But you wouldn't want to use that larger storage here, right?

>
> Everyone was pooh-poohing the raidset and no local drives when we set up our
> data center.  Our Oracle DBA's told us that we needed bazillions of mount
> points.  The other VMS folks told us that we had to have local drives for
> page and swap files.  We do not have any noticeable sustained disk queues.
> You see some during the backup windows, but not the rest of the day.
>

    I don't use any local drives in my cluster.

> When the Compaq/HP performance folks came out to show us how to use the T4
> data, they commented that it was a welcome change to see a bunch of VMS
> systems that did not have any real performance issues to speak of.  As a
> matter of fact, our spikes on CPU/disk were shown to be directly caused by
> all of the monitoring/performance collectors!  On some clusters, these were
> the processes that used the most overall CPU and disk IO.
>
>> The reason I ask is that an old favorite subject of mine
>> has come up:
>>
> http://groups.google.com/groups?selm=Lsvhk3%24YxMND%40eisner.encompasserve.o
> rg&oe=UTF-8&output=gplain
>>
>
> Personally, I think that shadowing is one of the biggest performance hits
> that you can take on a VMS system.

    Not really.  A large OEM of storage would try to convince you
    of such a farce.  But when pressed for any data at all to
    back up their FUD - they slink away.

    Think about it - if you have a storage back-end with
    write caching enabled - where does the "Volume shadowing overhead"
    come in?

    Careful before you answer - you may want to read this:

http://www.hpl.hp.com/hpjournal/dtj/vol3num3/vol3num3art1.pdf

    And at that - yes there are folks for whom it is a performance
    issue - a pathological case perhaps?  60% write traffic and
    tons of it?  What if like many you do a normal 15% (or so) write rate?
    A question that a FUDster can't really deal with and quickly changes
    the subject (not you - my actual experience).

> Take the redundancy off the VMS level
> and put it in the storage controller.

    It's about availability, see recent c.o.v discussions.  It is
    also about performance in many cases.  Other than EVA, you would have
    the other shadowset members for reading.

> This frees up a lot of the system
> resources.  Also, when you have a system crash.  No 6+ hour performance hit
> from all the shadow copy merges.  One of the BIG complaints heard from some
> of our remote customers with LARGE databases.
>

    Soon to be fixed, which makes Volume Shadowing that much more
    of a powerful solution.

> The EVA really is making it potentially possible to do away with shadowing.
> You do your backups and redundancy at the disk controller level.  Not at the
> OS level.

    Nah.  You have two or more EVAs and shadow across datacenters.  That
    way when your datacenter loses power you keep going.

> BACKUPS!  I would say that at least 1/2 of my SAN space is now dedicated to
> backups.

    Okay.  Not mission critical - though some would argue it is.

>
> How many copies of your mission critical data do YOU have on disk Rob???

    One on disk, many copies on tape.  Using TSM.

>  Or
> is it all going to tape?
> Tape is TOO SLOW for restores!
>

    LTOs are pretty fast - but yes disk can be faster and a great
    argument for.  One of the greatest arguments for (we are learning)
    is mail restores.  The problem is files are across many tapes.  You
    can run TSM consolidations to help.  This is a case where full
    images all the time are a big win, but often not realistic.

> Guess what.  Fairly soon you will not be able to buy those small disk drives
> anymore.
> Have you tried to purchase any 18GB disks recently????

    Funny - I overhear folks here begging for 9 GBytes.  Hard to
    stick in an 18 GByte as a replacement in a hardware raid.

>> The point is with a 5 member RAID5 and HSG (using those 300 GByte
>> drives) - you have a 1.2 TByte LUN.  When/if the RAID5 array blew
>> out (didn't rebuild properly) - you have a 1.2 TByte LUN to
>> restore.
>>
>> So you wouldn't use RAID5.  Okay, a 360 GByte mirror set blows
>> out.  You get the picture.

> It all depends on what your management is willing to risk/pay for.
> If I could convince my management to double my HSG count, I would not need
> to have raidsets.  :-)
>

    I would make them fully understand how long you will be down
    when you blow out a RAID5 and have to restore.  In fact, a test
    restore might convince them you need to go in a different direction.

> In 3 years, we have had less than 1% failure rate in our disks.
> Auto-sparing has made a drive failure a non-event.

    Of course.  You are on this side of the 5 year service life of
    those disks.  Come back and tell us how things are going when those
    disks are 5 years and 6+ months old - :-) .

>>
>> EVA and others get around this by partitioning - whether true
>> virtualization at the hardware level like EVA does - or other
>> clunkier methods of competitors.
>
> Our EVA (Populated with 146GB drives) is used to serve up LARGE backup
> drives so that we can SNAP and send to tape.
>
>>
>> I wouldn't stick 360 GByte drives in an HSG80,
>
> Actually, it is a 309GB drive. An odd number.  I did not remember correctly
> when I posted earlier.
>
>> unless of course you could take 5, 10, 20, 30 hour downtimes
>> for restoration - worst case of course (pick a worst case for
>> you - does it ever get less than 5 hours blowing out and restoring
>> a modest 360 GByte mirrorset - 20 MByte/sec restore speed?).
>>
>
> Worst case is that I restore the raidset from the backup drive.   If it is a
> backup drive that goes bad, what do you do?  Restore it? or overwrite it?
>
> We have seen 35GB/hour average transfer rates on disk-disk in the HSG's.

    So when/if you blow out a 309 GByte mirrorset you are down
    for 10 hours - assuming you ever use it in production.

> 75GB/hour going from HSG to EVA.   It all comes down to the question of
> How many drives you need to restore?
> How fast you can push it from one SAN drive to another SAN drive.
> How many systems you can have transferring it in parallel?  One per drive if
> you are doing an image backup.  More if you can split the data up.
>

    All good things to consider.  My point still stands (or at least
    I think it does).  Without sophisticated/good virtualization these big
    drives are meant for D2D backups - you are risking a big business
    hit if you are not shadowing and blow out a raidset.

    Now of course in lieu of shadowing the recovery solution is to
    take snaps every 4 hours.  So the plan is if you lose the production
    storage OR have corruption you bring your snap online and replay
    logs.

				Rob

------------------------------

Date: Sat, 16 Aug 2003 18:33:30 GMT
From: "Mike Naime" <mnaime@kc.rr.com>
Subject: Re: 306GB drives!
Message-ID: <_Nu%a.93731$o27.2153066@twister.rdc-kc.rr.com>

Rob Young <young_r@encompasserve.org> wrote in message
news:Xrz7BErQmnjo@eisner.encompasserve.org...
> In article <%lj%a.93485$o27.2125405@twister.rdc-kc.rr.com>, "Mike Naime"
<mnaime@kc.rr.com> writes:

> But you certainly could do a cheaper solution for backups
> (if new) since you have existing infrastructure it makes
> a lot of sense for you to pop in bigger drives.
>

Cheaper, yes.  No argument there.  But cheaper usually is inversely
proportional to faster!  After a test restore from tape.  (The cheaper
solution)   Upper management let my director buy more storage for online
backups.  Restoring from tape is a 3rd or 4th option.

Also look at the cost per spindle for your storage.   If I need to have a
72GB LUN, I have 3 options right now.
1.) I use 5x18GB.  I pay for 5 slots in the controller, and I also pay for 5
disk drives.
2.) Use 3x36GB.  3 slots, 3 spindles.
3.) Use 2x72GB.  Use 2 slots, and 2 spindles.

Option 3 is actually the cheapest option when you plug in real numbers!
Believe it or not.  Larger = cheaper!

If I can have twice the storage space in the same spindle count, I can
potentially have more backup copies online.  The problem being in convincing
my managers to spend today's money on the larger drives.  Plus, what do you
do with all of those smaller spindles that you removed?

>
> But you wouldn't want to use that larger storage here, right?
>

Personally, no I would not WANT to use it here.
But when your manager
dictates what layout you are using for each client....you do what your
manager wants.

We tend to use the larger drives for non-prod systems where performance and
uptime are not guaranteed, and not an issue.  We have started to use 72GB
drives for DB drives without seeing any performance problems.

Also, what are you going to do in 10 years when you cannot buy anything new
that is smaller than a Terabyte?  Historically we have been increasing hard
drive technology by a factor of 10 about every 2-3 years.  I started about
20 years ago on a 10 meg PC hard drive.  Now we see 250 GIG drives available
in the store; that is a 25,000 times increase in drive size.  This translates
to about 16 generations in 20 years if we use the doubling of the drive
size to represent a generation in a geometric progression.  Also note that a
40 GIG drive is the smallest drive that they are selling today.

Let's project this into the future and be conservative for argument's sake.
If a drive doubles in size every 2 years, and we now have 300GB disks (nice
round number), in 10 years we will have 9.6TB drives.  And the
manufacturers/vendors will probably not be making anything less than a 1.2TB
drive from 3 generations back.

>
> I don't use any local drives in my cluster.

Great!  Ever done a hardware move?

Example.  Alphaserver X freezes solid for no apparent reason (turns out to
be memory dimms), or is bugchecking repeatedly during system boot.  You
zone in a spare Alphaserver.  Enable the storage.  Patch the network
cabling, and boot up on the new server.   About 1 hour's work without prep
time.  If I have time to prep the move, it is a Shutdown, Patch network, and
Boot the new system.   This way I can troubleshoot hardware problems on my
time instead of the client's time.

Except for physically having to patch the network cables, I can do all of
the rest remotely.

> >
> > Personally, I think that shadowing is one of the biggest performance hits
> > that you can take on a VMS system.
>
> Not really.  A large OEM of storage would try to convince you
> of such a farce.  But when pressed for any data at all to
> back up their FUD - they slink away.
>
> Think about it - if you have a storage back-end with
> write caching enabled - where does the "Volume shadowing overhead"
> come in?
>

When a cluster member crashes and the resulting shadow copies make the
response time unacceptable to the end users for several hours.
I did not set up that system, and cannot comment on how it is set up.  I just
know that I do not have that particular problem on my systems.

> Careful before you answer - you may want to read this:
>
> http://www.hpl.hp.com/hpjournal/dtj/vol3num3/vol3num3art1.pdf
>

Interesting article.   Thanks for the link!

> And at that - yes there are folks for whom it is a performance
> issue - a pathological case perhaps?  60% write traffic and
> tons of it?  What if like many you do a normal 15% (or so) write rate?
> A question that a FUDster can't really deal with and quickly changes
> the subject (not you - my actual experience).
>

I'm not really taking issue with the normal routine performance of the
system.   I'm more looking at the break-fix, and downtimes associated with
this.  This is what gets rolled into OUTAGE or DOWNTIME that we can be
penalized for.

> > The EVA really is making it potentially possible to do away with
> shadowing.
> > You do your backups and redundancy at the disk controller level.  Not at the
> > OS level.
>
> Nah.  You have two or more EVAs and shadow across datacenters.  That
> way when your datacenter loses power you keep going.

Yes, but my point is that it would not be possible without the EVA.
EVA's in 2 data centers do not necessarily need to be made redundant by
host based shadowing.
The solution is preferred if I can take the OS out of the picture.
Otherwise, I will have 150+ systems trying to write to the far end.

If I could reduce the number of systems needing to do the remote write, that
would also be preferred.

>
> > Tape is TOO SLOW for restores!
>
> LTOs are pretty fast - but yes disk can be faster and a great
> argument for.  One of the greatest arguments for (we are learning)
> is mail restores.  The problem is files are across many tapes.  You
> can run TSM consolidations to help.  This is a case where full
> images all the time are a big win, but often not realistic.
>

Why not???  Follow your own argument here.  If downtime potentially costs
management millions of dollars, and they are choking on the $$$ for the
solution.  That is their decision to make, but the downtime would probably
cost them more than the solution.  The problem for them is that it comes out
of today's budgetary dollars rather than tomorrow's lost revenue.

>
> I would make them fully understand how long you will be down
> when you blow out a RAID5 and have to restore.  In fact, a test
> restore might convince them you need to go in a different direction.
>

Actually a test restore of an entire cluster convinced them that tape was
too slow.  My director called us in one day and told us that Client X's
hardware is toasted.  Nothing is usable.  (I.E.  Logging onto the running
system to check something is not allowed here.)   Here is a set of backup
tapes.  The clock is ticking.  Go restore this.  This pulls in the question
of exactly how good your documentation on everything in your setup is.
How many copies of it you have, and where it is located.

Catch-22: you have to have a running system in order to restore a system.

Gee, this is getting interesting!

Mike

------------------------------

Date: 16 Aug 2003 17:27:18 -0500
From: young_r@encompasserve.org (Rob Young)
Subject: Re: 306GB drives!
Message-ID: <zYVV3vv$GJP9@eisner.encompasserve.org>

In article <_Nu%a.93731$o27.2153066@twister.rdc-kc.rr.com>, "Mike Naime" <mnaime@kc.rr.com> writes:
>
> Rob Young <young_r@encompasserve.org> wrote in message
> news:Xrz7BErQmnjo@eisner.encompasserve.org...
>> In article <%lj%a.93485$o27.2125405@twister.rdc-kc.rr.com>, "Mike Naime"
> <mnaime@kc.rr.com> writes:
>
>> But you certainly could do a cheaper solution for backups
>> (if new) since you have existing infrastructure it makes
>> a lot of sense for you to pop in bigger drives.
>>
>
> Cheaper, yes.  No argument there.  But cheaper usually is inversely
> proportional to faster!  After a test restore from tape.  (The cheaper
> solution)   Upper management let my director buy more storage for online
> backups.  Restoring from tape is a 3rd or 4th option.
>

    I wasn't talking about tape.  A cheaper disk based way would
    be to use ATA.  For instance, Nexsan ATA is about 1.7 cents/MByte.
    You can get adventurous and get cheaper than that.  But at 1.7
    cents that is $2 or so a GByte, you won't be buying HSG drives
    that cheap any time soon.

http://www.fusiondm.net/pages/ataboy.htm

> If I can have twice the storage space in the same spindle count, I can
> potentially have more backup copies online.  The problem being in convincing
> my managers to spend today's money on the larger drives.
Plus, what do you
> do with all of those smaller spindles that you removed?

    Put them on the aftermarket - they will be bought from you for
    pennies on the dollar.

>
>>
>> But you wouldn't want to use that larger storage here, right?
>>
>
> Personally, no I would not WANT to use it here.   But when your manager
> dictates what layout you are using for each client....you do what your
> manager wants.
>
> We tend to use the larger drives for non-prod systems where performance and
> uptime are not guaranteed, and not an issue.  We have started to use 72GB
> drives for DB drives without seeing any performance problems.
>

    And EVA will mask the larger drives and performance issues.

> Also, what are you going to do in 10 years when you cannot buy anything new
> that is smaller than a Terabyte?  Historically we have been increasing hard
> drive technology by a factor of 10 about every 2-3 years.  I started about
> 20 years ago on a 10 meg PC hard drive.  Now we see 250 GIG drives available
> in the store; that is a 25,000 times increase in drive size.  This translates
> to about 16 generations in 20 years if we use the doubling of the drive
> size to represent a generation in a geometric progression.  Also note that a
> 40 GIG drive is the smallest drive that they are selling today.
>
> Let's project this into the future and be conservative for argument's sake.
> If a drive doubles in size every 2 years, and we now have 300GB disks (nice
> round number), in 10 years we will have 9.6TB drives.  And the
> manufacturers/vendors will probably not be making anything less than a 1.2TB
> drive from 3 generations back.
>

    I'm pretty sure this is something that won't extend out.  We will
    see a flattening in drive sizes in a bit.  After that, surely
    we cutover to some of this Oak Ridge technology:

http://www.acq.osd.mil/bmdo/bmdolink/pdf/roo.pdf

Oak Ridge National Laboratory (Oak Ridge, TN)

A new technique developed at Oak Ridge National Laboratory offers an answer to
the demand for increased storage density. Surface-enhanced Raman optical data
storage is based on a technology that detects the optical signature of
laser-excited molecules. This data storage method involves the alteration of
molecules that are embedded in a polymeric or silver-colored disk. A laser
is used to "write" information on a disk, on which the optically altered
molecules and unaltered molecules serve as "bits." The normally weak Raman
signature of each molecule is amplified or enhanced by the substrate of the
storage disk, such that the signature can be read by a signal detector.

    Or something similarly whizbangy.

    You see a flattening today - even if only in home PC markets.
    In the future it may make little or no sense to get super large
    sizes as even with partitioning you end up with catastrophic
    effects if you blow out a RAIDset and other issues that I'm sure
    folks have in mind.

>
>>
>> I don't use any local drives in my cluster.
>
> Great!  Ever done a hardware move?
>
> Example.  Alphaserver X freezes solid for no apparent reason (turns out to
> be memory dimms), or is bugchecking repeatedly during system boot.  You
> zone in a spare Alphaserver.  Enable the storage.  Patch the network
> cabling, and boot up on the new server.   About 1 hour's work without prep
> time.  If I have time to prep the move, it is a Shutdown, Patch network, and
> Boot the new system.   This way I can troubleshoot hardware problems on my
> time instead of the client's time.
>

    I think the PRIMARY purpose of a cluster is for availability.  You lose
    a member, you replace it.  You can replace it in an hour or
    so - great.  I'm content to replace a member within a day.
    Going forward I would like to add a member and go to next day on service.
    If they had a "next week" service option, I'd take that too ;-).

> Except for physically having to patch the network cables, I can do all of
> the rest remotely.
>
>> >
>> > Personally, I think that shadowing is one of the biggest performance
> hits
>> > that you can take on a VMS system.
>>
>> Not really.  A large OEM of storage would try to convince you
>> of such a farce.  But when pressed for any data at all to
>> back up their FUD - they slink away.
>>
>> Think about it - if you have a storage back-end with
>> write caching enabled - where does the "Volume shadowing overhead"
>> come in?
>>
>
> When a cluster member crashes and the resulting shadow copies make the
> response time unacceptable to the end users for several hours.
> I did not set up that system, and cannot comment on how it is set up.  I just
> know that I do not have that particular problem on my systems.
>

    Right.  Full merge during primetime can be a killer.  That is a
    non-HSJ weakness - soon to be remedied.  The last criticism of
    Shadowing becomes a non-issue with host-based mini-merge coming
    to a cluster near you.

>> And at that - yes there are folks for whom it is a performance
>> issue - a pathological case perhaps?  60% write traffic and
>> tons of it?  What if like many you do a normal 15% (or so) write rate?
>> A question that a FUDster can't really deal with and quickly changes
>> the subject (not you - my actual experience).
>>
>
> I'm not really taking issue with the normal routine performance of the
> system.   I'm more looking at the break-fix, and downtimes associated with
> this.  This is what gets rolled into OUTAGE or DOWNTIME that we can be
> penalized for.
>

    Hmmm - okay.

>> > The EVA really is making it potentially possible to do away with
> shadowing.
>> > You do your backups and redundancy at the disk controller level.  Not at
> the
>> > OS level.
>>
>> Nah.  You have two or more EVAs and shadow across datacenters.  That
>> way when your datacenter loses power you keep going.
>
> Yes, but my point is that it would not be possible without the EVA.
> EVA's in 2 data centers do not necessarily need to be made redundant by
> host based shadowing.

    With VMS - sure.  VMS doesn't support DRM 2 and/or DRM 2 isn't
    here yet.  To make it more generic - have 2 separate storage
    "boxes" in 2 separate data centers.  Now you use HBVS across
    datacenters.

> The solution is preferred if I can take the OS out of the picture.
> Otherwise, I will have 150+ systems trying to write to the far end.

    The far end doesn't have to be that far away.  I'm not speaking
    of DR but fault tolerance.

>>
>> > Tape is TOO SLOW for restores!
>>
>> LTOs are pretty fast - but yes disk can be faster and a great
>> argument for.  One of the greatest arguments for (we are learning)
>> is mail restores.  The problem is files are across many tapes.  You
>> can run TSM consolidations to help.  This is a case where full
>> images all the time are a big win, but often not realistic.
>>
>
> Why not???  Follow your own argument here.  If downtime potentially costs
> management millions of dollars, and they are choking on the $$$ for the
> solution.  That is their decision to make, but the downtime would probably
> cost them more than the solution.  The problem for them is that it comes out
> of today's budgetary dollars rather than tomorrow's lost revenue.
>

    Mail is the key.  Mail proliferates like rabbits.  Unfortunately,
    it is mostly flat files on disk.  Files that are old and make
    little sense to back up over and over again.  Unfortunately, scattered
    on many tapes so restores are long.  However, mail is very distributed
    and a small subset is impacted if a server is down.  This isn't
    everybody in the world.  Sure, consolidate your tapes OR go to a
    large ATA storage pool for D2D.

>>
>> I would make them fully understand how long you will be down
>> when you blow out a RAID5 and have to restore.  In fact, a test
>> restore might convince them you need to go in a different direction.
>>
>
> Actually a test restore of an entire cluster convinced them that tape was
> too slow.  My director called us in one day and told us that Client X's
> hardware is toasted.  Nothing is usable.  (I.E.
Logging onto the running
> system to check something is not allowed here.)   Here is a set of backup
> tapes.  The clock is ticking.  Go restore this.  This pulls in the question
> of exactly how good your documentation on everything in your setup is.  How
> many copies of it you have, and where it is located.
>

    You mean [older] DLT is too slow?  Surely you didn't have 2 or 3
    LTO tape drives actively doing the restore?  We typically get
    20 MByte/sec to/from LTO.  With 40 MByte/sec on restore - that's
    140 GByte/hour.  That isn't VMS - but VMS plays well with
    3rd party Enterprise Backup solutions and will natively work
    with LTO 2 (faster yet), or so recent discussions have mentioned.

    What is your restore speed criterion?  Surely met by LTO (assuming
    you would have 2 streaming your restores).

				Rob

------------------------------

Date: Sat, 16 Aug 2003 23:49:22 -0500
From: "Mike Naime" <mnaime@kc.rr.com>
Subject: Re: 306GB drives!
Message-ID: <gMD%a.108944$6a3.3561801@twister.rdc-kc.rr.com>

Rob Young <young_r@encompasserve.org> wrote in message
news:zYVV3vv$GJP9@eisner.encompasserve.org...
> In article <_Nu%a.93731$o27.2153066@twister.rdc-kc.rr.com>, "Mike Naime"
<mnaime@kc.rr.com> writes:
> >
> > Rob Young <young_r@encompasserve.org> wrote in message
> > news:Xrz7BErQmnjo@eisner.encompasserve.org...
> >> In article <%lj%a.93485$o27.2125405@twister.rdc-kc.rr.com>, "Mike Naime"
> > <mnaime@kc.rr.com> writes:
> >
> >> But you certainly could do a cheaper solution for backups
> >> (if new) since you have existing infrastructure it makes
> >> a lot of sense for you to pop in bigger drives.
> >>
> >
> > Cheaper, yes.  No argument there.  But cheaper usually is inversely
> > proportional to faster!  After a test restore from tape.  (The cheaper
> > solution)   Upper management let my director buy more storage for online
> > backups.  Restoring from tape is a 3rd or 4th option.
> >
>
> I wasn't talking about tape.  A cheaper disk based way would
> be to use ATA.  For instance, Nexsan ATA is about 1.7 cents/MByte.
> You can get adventurous and get cheaper than that.  But at 1.7
> cents that is $2 or so a GByte, you won't be buying HSG drives
> that cheap any time soon.
>
> http://www.fusiondm.net/pages/ataboy.htm
>

How does it play with VMS???  Do you have one to comment on?

>
> I'm pretty sure this is something that won't extend out.  We will
> see a flattening in drive sizes in a bit.  After that, surely
> we cutover to some of this Oak Ridge technology:
>
> http://www.acq.osd.mil/bmdo/bmdolink/pdf/roo.pdf
>
> Oak Ridge National Laboratory (Oak Ridge, TN)
>
> A new technique developed at Oak Ridge National Laboratory offers an answer to
> the demand for increased storage density. Surface-enhanced Raman optical data
> storage is based on a technology that detects the optical signature of
> laser-excited molecules. This data storage method involves the alteration of
> molecules that are embedded in a polymeric or silver-colored disk. A laser
> is used to "write" information on a disk, on which the optically altered
> molecules and unaltered molecules serve as "bits." The normally weak Raman
> signature of each molecule is amplified or enhanced by the substrate of the
> storage disk, such that the signature can be read by a signal detector.
>
> Or something similarly whizbangy.
>
> You see a flattening today - even if only in home PC markets.
> In the future it may make little or no sense to get super large
> sizes as even with partitioning you end up with catastrophic
> effects if you blow out a RAIDset and other issues that I'm sure
> folks have in mind.

I remember when I got my first 40MEG drive, one of the guys at work told me
"You will never fill that up!"  :-)

I'm not real sure about a leveling off.  As the drives get bigger, and tech
As the drives get bigger, and tech G gets faster, we put more and more on disk.  Save that high quality home H video directly on disk?  Or other online storage.  I'm sure that we willI start seeing the raid technology more prevalent in the home market fairly D soon.  I have a couple of friends that already have it in their home3 systems, but they are not your average "Home" user.   J I see this as opening the market for online movie ordering...Etc.   As theB broadband pipes get bigger...and storage gets larger, faster,  andL cheaper...  people use it up.  Look at the new "VCR's" that use a hard driveJ to record that TV show that you want to watch later.... Look at the amountL of people that now take broadband for granted.   Who want's dial up anymore?H Tapes get faster, storage gets faster.....  Who's to say that you cannotJ restore a 1TB "disk" in an hour 5-10 years from now.  I see the conceptualE idea behind the EVA expanding further.  You have some sort of modular K storage media that plugs into a storage controller that manages it for you. 0 Individual storage media failures are minimized.   > >  > >> > I > I think the PRIMARY purpose of a cluster is for availability.  You lose > > a member , you replace it.  You can replace it in an hour orC > so - great.  I'm content to replace a member within a day.  Going E > forward I would like to add a member and go to next day on service. B > If they had a "next week" service option, I'd take that too ;-). >   J It sounds like you are not being penalized for downtime, or that it reallyJ doesn't cost you anything when your systems are down.  It also sounds likeJ the APP and mailserver that you support is radically different from what IE am doing.  If you have the luxury of a day/week to replace something.  Great!L We use the cluster to distribute processing, and also for shared media.   
An AIX 2-node system needs twice the number of storage spindles that the
VMS system does, because the AIX nodes CANNOT share the same storage space
at the same time.  Ditto for the W2K "cluster".

Do you have any uptime SLAs that you have to meet?

> >> Nah.  You have two or more EVAs and shadow across datacenters.  That
> >> way when your datacenter loses power you keep going.
> >

One thing that I missed here before.  Why SHOULD my data center lose
power?  We have PDUs, battery backup, and a generator that will last for a
couple of days.  There is no reason to take this as a given, IF everything
works as advertised (and so far it has).  Our power system reports/records
power problems/outages, but does not go down.  I lost power at home for 3
days, but our data center never lost power.

> > Yes, but my point is that it would not be possible without the EVA.
> > EVA's in 2 data centers does not necessarily need to be made redundant by
> > host based shadowing.
>
> With VMS - sure.  VMS doesn't support DRM 2 and/or DRM 2 isn't
> here yet.  To make it more generic - have 2 separate storage
> "boxes" in 2 separate data centers.  Now you use HBVS across
> datacenters.
>

You lost me on that one.  HBVS?  Do you mean Host Based Volume Shadowing?

Just out of curiosity, how much data are you talking about transferring
between data centers?  You probably could get away with just having a T-3
pipe between data centers.

> > The solution is preferred if I can take the OS out of the picture.
> > Otherwise, I will have 150+ systems trying to write to the far end.
>
> The far end doesn't have to be that far away.  I'm not speaking
> of DR but fault tolerant.
>

I have to consider both at the same time.

> >>
> Mail is the key.  Mail proliferates like rabbits.  Unfortunately,
> it is mostly flat files on disk.  Files that are old and make
> little sense to back up over and over again.  Unfortunately, scattered
> on many tapes, so restores are long.  However, mail is very distributed
> and a small subset are impacted if a server is down.  This isn't
> everybody in the world.  Sure, consolidate your tapes OR go to a
> large ATA storage pool for D2D.
>

Fortunately I do not mess with mail.  That would actually be a smaller
problem.  :-)

> >>
>
> You mean [older] DLT is too slow?  Surely you didn't have 2 or 3
> LTO tape drives actively doing the restore?  We typically get
> 20 MByte/sec to/from LTO.  With 40 MByte/sec on restore - that's
> 140 GByte/hour.  That isn't VMS - but VMS plays well with
> 3rd party Enterprise Backup solutions and will natively work
> with LTO 2 (faster yet) or so recent discussions have mentioned.

Yes, (S)DLT is too slow.  Also, you possibly need to re-configure storage
(drive normalization) and SAN zoning prior to starting the restore.

>
> What is your restore speed criteria?  Surely met by LTO (assumption
> you would have 2 streaming your restores).

That, I am not really sure about.  Our director and higher get into those
discussions.  I'm too busy with my duties to really get into the nuts and
bolts details of backups.  We actually have a full-time person who just
works on backups.  They are in the midst of re-defining the standard for
the tape media that we use.

Mike

------------------------------

Date: Sun, 17 Aug 2003 06:51:42 -0700
From: "Tom Linden" <tom@kednos.com>
Subject: RE: 306GB drives!
Message-ID: <CIEJLCMNHNNDLLOOGNJIKEIJHMAA.tom@kednos.com>

>-----Original Message-----
>From: Mike Naime [mailto:mnaime@kc.rr.com]
>Sent: Saturday, August 16, 2003 9:49 PM
>To: Info-VAX@Mvb.Saic.Com
>Subject: Re: 306GB drives!
>
>
>Rob Young <young_r@encompasserve.org> wrote in message
>news:zYVV3vv$GJP9@eisner.encompasserve.org...
>> In article <_Nu%a.93731$o27.2153066@twister.rdc-kc.rr.com>, "Mike Naime"
>> <mnaime@kc.rr.com> writes:
>> >
>> > Rob Young <young_r@encompasserve.org> wrote in message
>> > news:Xrz7BErQmnjo@eisner.encompasserve.org...
>> >> In article <%lj%a.93485$o27.2125405@twister.rdc-kc.rr.com>,
>> >> "Mike Naime" <mnaime@kc.rr.com> writes:
>> >
>> >> But you certainly could do a cheaper solution for backups
>> >> (if new) since you have existing
>> >> infrastructure it makes
>> >> a lot of sense for you to pop in bigger drives.
>> >>
>> >
>> > Cheaper, yes.  No argument there.  But cheaper usually is inversely
>> > proportional to faster!  After a test restore from tape (the cheaper
>> > solution), upper management let my director buy more storage for
>> > online backups.  Restoring from tape is a 3rd or 4th option.
>> >
>>
>> I wasn't talking about tape.  A cheaper disk based way would
>> be to use ATA.  For instance, Nexsan ATA is about 1.7 cents/MByte.
>> You can get adventurous and get cheaper than that.  But at 1.7
>> cents that is $2 or so a GByte, you won't be buying HSG drives
>> that cheap any time soon.
>>
>> http://www.fusiondm.net/pages/ataboy.htm

72GB SCSI SCA drives are available for about $235, which works out to
about 0.3 cents/MByte, and to this you must add the cost of the cabinetry
and cabling, so say a total of 0.4 cents.

The doubling time for capacity by my reckoning is less than two years.
I bought the first 8" 80MByte drives from Fujitsu in 1982, which means
12 generations in 21 years.  A full-length movie is about 1.5 to 2.5
GBytes, so I think you are right about the future usage.

------------------------------

Date: Sun, 17 Aug 2003 10:06:49 -0400
From: "Main, Kerry" <kerry.main@hp.com>
Subject: RE: 306GB drives!
Message-ID: <FD827B33AB0D9C4E92EACEEFEE2BA2FB0C7ABE@tayexc19.americas.cpqcorp.net>

Mike, Rob -

Interesting discussion - here are a few additional thoughts for
consideration:

Re: monitoring processes impacting system with occasional spikes.

Recommend setting these processes up with the class scheduler to ensure
they do not take more than x% of the cpu resources.  See SYSMAN> help
class.

Re: slots on controller becoming a focus.  Yep, certainly a big issue to
keep in mind.
Since the base controllers are the biggest cost, in larger installations
you can run out very quickly - depending on the number of disk group and
mirrorset choices.  You are then faced with another base controller cost.

Re: local drives.  While there may be some justification for a temp,
non-prod type reason for these, I also recommend that when a SAN is
available, put all important drives (including page/swap) on the SAN.
The issue with putting any prod drives locally is that you then typically
need to put in a HW RAID controller, which takes up PCI slots and also
becomes a single point of failure.  You could use HBVS for local drives,
but then if a drive fails, the CPU is involved in re-building, and hence
cycles are taken away from the system.

Re: disaster tolerant configs.  Something to consider is what actually
happened recently in Toronto (it was in all the press).  An individual
returns to work on the 9th day after being told to stay home for 10 days
because of SARS concern.  The person felt fine.  Health authorities find
out and immediately close the entire facility (including the datacenter)
for 10 days.  Period.  No "can I take a quick backup?" or "can I just go
get ...".  Everyone leaves and goes home - now.  Not to another site -
home.

The US had similar issues with Anthrax scares a little over a year ago.

So, Mike - if this unfortunate event were to happen tomorrow in this
large datacenter that you have, what would the impact be?  Could backups
and all systems be managed remotely (even if a system crashed to the >>>
prompt)?

The point of this is that there are (unfortunately) many more reasons to
consider DT configs these days, and many customers are now looking at
100km as the starting discussion for these multi-site configs.  We are
talking to one customer who wants to do a multi-site Oracle 9i RAC
cluster on OpenVMS 250km apart (creative clustering and TCPIP load
balancing).

For large configs, I typically recommend a mix of controller based RAID
combined with HBVS.  Controller based RAID is great for failed drive
issues, as host controllers, PCI slots, net links (multi-site) and
systems are not involved in any rebuilding of individual drives.  On the
other hand, HBVS is also good for shadowing across controller sets for
mission critical applications.  Have you ever had a firmware upgrade or
some other weird error on one controller take out or hang both
controllers?

HBVS allows you to look at a disk controller set as a single controller.
Yes, there is some additional write impact, but write-back cache on the
controllers helps to minimize this.  There are HBVS parameters which can
be tuned to minimize any HBVS impact when, in the very rare occurrence,
a system crashes.  In addition, as Rob mentioned, the new HBVS changes
coming will also be a big help here as well.

Now, should you HBVS all drives and applications?  Likely not.  Certainly
there are always some apps more important than others, and some may only
require HW DRM to somewhere else.

My recommendation is to classify applications with a priority that is
supported by business justification, e.g. in a sliding manner based on
the criticality of the data and how long the business can do without it.
As an example, apps with rating "A" get HW RAID + HBVS + active-active
clustering.  Apps with rating "B" get HW RAID + HW DRM + active-passive
clustering.

Unfortunately, what is happening in many businesses today is that the
data between apps in a customer environment is becoming much more
integrated than in the past, so it is becoming more difficult to
differentiate between them.

At a bare minimum, imho, in view of the recent SARS (and Anthrax before
that) it is absolutely essential that a company be able to manage their
systems remotely.  This means a focus on consoles and
backup/restore/archiving.

There are a number of multi-platform console management solutions
available to assist with the console component.

Regards
Kerry Main
Senior Consultant
Hewlett-Packard (Canada) Co.
Consulting & Integration Services
Voice: 613-592-4660
Fax:   613-591-4477
Email: kerryDOTmain@hpDOTcom
    (remove the DOT's and replace with "."'s)
OpenVMS DCL - the original .COM

------------------------------

Date: 16 Aug 2003 10:17:06 -0700
From: lee_morgan@newton.co.uk (Lee Morgan)
Subject: Advanced Server 7.3A & Active Directory
Message-ID: <64d497a6.0308160917.3f1973d3@posting.google.com>

Hi, I'm looking for some help with regards to AS 7.3A and Active
Directory.

Config as follows: Alpha ES40 running OpenVMS 7.3 & AS 7.3A ECO1.

I recently upgraded Pathworks Advanced Server 6.0D to AS 7.3A, as
recommended by HP.

I performed this on our dev machine first, which is identical to the
server listed above, without any issues.

After upgrading the live machine, I ran the PWRK$CONFIG procedure
without any errors also.  I then started the server, which came up OK,
and then recreated my 3 two-way trusts.

2 of the trusts are with NT4 servers, which work fine.  The problem is
with the 3rd trust, which, when I try to set it up on the Active
Directory side, says:

   "RPC server is unavailable"

The strange thing is that when I upgraded the dev machine, the Active
Directory side created the trust without any issues.

Both the dev and live machines are PDCs.

I am not sure the problem is with AS 7.3A, as it runs OK on the Alpha
and we can connect fine through one of the NT4 shares.

Also, I can see the domain and server OK if I look through 'my network
places' on the Active Directory server.

Any help would be gratefully received.

------------------------------

Date: Sun, 17 Aug 2003 13:15:17 +0000 (UTC)
From: "Richard Maher" <maher_rj@hotspamnotmail.com>
Subject: Bugchecking on Alpha evax_bugchk
Message-ID: <bhnv55$ars$1@titan.btinternet.com>

Hi,

Does everyone else know that EVAX_BUGCHK on Alpha from EXEC mode does *not*
kill the process?

How do I write to the error log and kill the process in a *guaranteed*
fashion?

        evax_bugchk     bug$_ssrvexcept
        $delprc_s
        halt                    ; Just in case?

What use is evax_bugchk at all?  (As opposed to $snderr?)

Where can I find the documentation that discusses what happens when either
evax_bugchk or $delprc is called from a KERNEL mode rundown handler?
(Believe me, it's not pretty!!!)  I don't want to be spoon fed, just a
pointer would be great!

Regards Richard Maher

------------------------------

Date: 17 Aug 2003 08:25:45 -0500
From: Kilgallen@SpamCop.net (Larry Kilgallen)
Subject: Re: Bugchecking on Alpha evax_bugchk
Message-ID: <plYGjo+473Lk@eisner.encompasserve.org>

In article <bhnv55$ars$1@titan.btinternet.com>, "Richard Maher"
<maher_rj@hotspamnotmail.com> writes:

> Does everyone else know that EVAX_BUGCHK on Alpha from EXEC mode does *not*
> kill the process?

I seem to recall some sort of permission being required for a process
to BUGCHK.

> How do I write to the error log and kill the process in a *guaranteed*
> fashion?

You cannot be guaranteed that a process is not so messed up as to prevent
writing to the error log.

> Where can I find the documentation that discusses what happens when either
> evax_bugchk or $delprc is called from a KERNEL mode rundown handler?

1. The Internals and Data Structures Manual.

2. The source listings.

> (Believe me, it's not pretty!!!)  I don't want to be spoon fed, just a
> pointer would be great!

1. Digital Press

2. Hewlett Packard

------------------------------

Date: Sat, 16 Aug 2003 19:41:14 +0200
From: Jan-Erik Söderholm <aaa@aaa.com>
Subject: DECdocument and pdfmark
Message-ID: <3F3E6CBA.DB586DDE@aaa.com>

Hi.
Has anyone done any integration of DECdocument and the
"pdfmark" feature of Acrobat Distiller?

pdfmark lets you "mark" such things as "links" directly
in the PS source, so that you'll get the left-hand "index"
when opening the PDF file in the Acrobat Reader.

The same links can be created manually in Acrobat Exchange, but
I'm investigating if it would be possible to automate
this procedure.

The problem is to get the pdfmark statements into the PS file.

Jan-Erik.

------------------------------

Date: Sat, 16 Aug 2003 21:07:19 +0000 (UTC)
From: Dennis Grevenstein <dennis@pcde.inka.de>
Subject: Re: DECnet problem
Message-ID: <bhm6e7$i89$1@aton.pcde.inka.de>

David McKenzie <david.mckenzie@paradigm-shift.biz> wrote:
>
> when you say cluster, do you mean "cluster"

Yes.  They form a VMS cluster as long as there's no DECnet
on one of them.  I can boot one and stop/net DECnet and
then boot the other into that two node cluster.

> are the phase IV addresses different.  This is derived from the decnet
> address by area*1024 + address

Phase IV addresses and node names are different.

> more information please?

Sorry, there's not much more I could think of.

> Ex VMS and now a lawyer

We've got to earn a living, don't we.

mfg
Dennis

--
"I remarked to Dennis that easily half the code I was writing in Multics was
error recovery code. He said, "We left all that stuff out. If there's an error,
we have this routine called panic, and when it is called, the machine crashes,
and you holler down the hall, 'Hey, reboot it.'"
       Tom van Vleck and Dennis Ritchie about Multics <-> UNIX relationship

------------------------------

Date: Sun, 17 Aug 2003 09:33:02 +0200
From: Michael Unger <unger@decus.de>
Subject: Re: DECnet problem
Message-ID: <bhngb5$173nn$2@ID-152801.news.uni-berlin.de>

On 16-Aug-2003 23:07, Dennis Grevenstein wrote:

> David McKenzie <david.mckenzie@paradigm-shift.biz> wrote:
>>
>> when you say cluster, do you mean "cluster"
>
> Yes.  They form a VMS cluster as long as there's no DECnet
> on one of them.  I can boot one and stop/net DECnet and
> then boot the other into that two node cluster.
>
> [...]

Do you start DECnet as the very first network protocol?  (It is changing
the "MAC address".)

Michael

--
Real names enhance the probability of getting real answers.
Please do *not* send "Security Patch Notifications" or "Security
Updates"; this system isn't running a Micro$oft operating system.
And don't annoy me <mailto:postmaster@[127.0.0.1]> please ;-)

------------------------------

Date: Sun, 17 Aug 2003 16:20:02 +0000 (UTC)
From: Dennis Grevenstein <dennis@pcde.inka.de>
Subject: Re: DECnet problem
Message-ID: <bho9vi$5um$1@aton.pcde.inka.de>

Hi,

I have reinstalled VMS on the Alpha.  It was a fresh installation
anyway, so I haven't lost anything but some time.
Now the configuration is exactly the same, except that the
DECnet (Phase IV) address is 1.19 now.  It was 1.33 before.
I just don't see why this could be a problem.  The VAX has
a different address.  However, it works now.

mfg
Dennis

------------------------------

Date: Sun, 17 Aug 2003 23:22:37 +1000
From: "David McKenzie" <david.mckenzie@paradigm-shift.biz>
Subject: Re: DECnet problem
Message-ID: <3f3f8193$0$95050$c30e37c6@lon-reader.news.telstra.net>

Just on the offchance, have you looked in the error log?  Sometimes, for
instance, if the cluster authorization database is screwed you will see
messages in there saying so.

Is it a single system disk cluster?
I don't have your original post at hand.
If it is not, are the cluster databases the same?

What version(s) of OpenVMS?

"Dennis Grevenstein" <dennis@pcde.inka.de> wrote in message
news:bhm6e7$i89$1@aton.pcde.inka.de...
> [...]

------------------------------

Date: 17 Aug 2003 01:23:14 -0700
From: meidanze@hotmail.com (meidan zemer)
Subject: decw$mwm change process name
Message-ID: <3bbfbaa2.0308170023.9a55d2b@posting.google.com>

Hi,
We have a DS10 Alpha machine which supplies a Motif application for
PCs.
For each PC I run decw$mwm as a detached process.
My problem is that when I run decw$mwm it changes the process name to
decw$mwm_xxxx.  I need to find a way to prevent the program decw$mwm
from changing the process name (like a switch or a logical name).
Does anybody know a way?
Thanks,
Meidan

------------------------------

Date: Sun, 17 Aug 2003 09:57:51 -0500
From: "David J. Dachtera" <djesys.nospam@fsi.net>
Subject: Re: decw$mwm change process name
Message-ID: <3F3F97EF.D1093F10@fsi.net>

meidan zemer wrote:
>
> Hi,
> We have a DS10 Alpha machine which supplies a Motif application for
> PCs.
> [...]
> Does anybody know a way?

If you can explain why that's a problem, perhaps someone could suggest
an alternate strategy.

--
David J. Dachtera
dba DJE Systems
http://www.djesys.com/

Unofficial Affordable OpenVMS Home Page:
http://www.djesys.com/vms/soho/

------------------------------

Date: Sat, 16 Aug 2003 19:24:08 -0500
From: "David J. Dachtera" <djesys.nospam@fsi.net>
Subject: Re: itrc - H.P. I.T. Resource Center.
Message-ID: <3F3ECB27.19F34067@fsi.net>

warren sander wrote:
>
> if you want feedback to get to anyone in HP who you don't already know
> (ie Hoff etc) then posting to OpenVMS.org isn't the way.
>
> Sending feedback via the web site's 'feedback' is one way.  At least if
> it isn't already, it is becoming a closed loop process, with metrics for
> folks needing to answer the customers.  And 'thank you and go ...
> yourself' isn't a response that mgmt looks kindly upon.
>
> Not everything can be done.  HP does understand 'history' much better
> than Compaq ever did.  But even inside HP there is history and HISTORY.
> The HP customer base doesn't use as old-crufty software at the same time
> they use new-shiny software, so there is some education.  Backwards
> compatibility is becoming more important, but come on, 5.5-2 is old.

V5.5-2 is where some folks are "stuck" because that's where their ISV
abandoned them.
They have no choice.   > And even for y2k you had% > to have a contract for the patches.e  F Someone posted here that such was not the case, and I don't personally recall.    -- t David J. Dachterar dba DJE Systemse http://www.djesys.com/  ( Unofficial Affordable OpenVMS Home Page: http://www.djesys.com/vms/soho/p   ------------------------------  % Date: Sun, 17 Aug 2003 09:28:26 +0200o$ From: Michael Unger <unger@decus.de>. Subject: Re: itrc - H.P. I.T. Resource Center.9 Message-ID: <bhngb4$173nn$1@ID-152801.news.uni-berlin.de>   . On 17-Aug-2003 02:24, David J. Dachtera wrote:   > warren sander wrote: >> s >> [...]F >> so there is some education. backwards compatiblity is becoming more& >> important but come on 5.5-2 is old. > G > V5.5-2 is where some folks are "stuck" because that's where their ISVr& > abandoned them. They have no choice.  C IIRC that was the time when ISVs had to decide if they were able toeG support VAX _and_ Alpha hardware platforms. A lot of ISVs chose to dropeB support for VMS at all and moved to that famous "enterprise class"F operating system called "Winwoes" and perhaps a few Unices but neither Tru64 nor HP-UX.   >  >> And even for y2k you hadf& >> to have a contract for the patches. > H > Someone posted here that such was not the case, and I don't personally	 > recall.g   Michaelc   -- s; Real names enhance the probability of getting real answers.y@ Please do *not* send "Security Patch Notifications" or "SecurityA Updates"; this system isn't running a Micro$oft operating system.t= And don't annoy me <mailto:postmaster@[127.0.0.1]> please ;-)a   ------------------------------    Date: 17 Aug 2003 07:13:21 -0500- From: Kilgallen@SpamCop.net (Larry Kilgallen)t. Subject: Re: itrc - H.P. I.T. Resource Center.3 Message-ID: <eaxa257jIyrD@eisner.encompasserve.org>t  ` In article <bhngb4$173nn$1@ID-152801.news.uni-berlin.de>, Michael Unger <unger@decus.de> writes:0 > On 17-Aug-2003 02:24, David J. 
Dachtera wrote: >  >> warren sander wrote:n >>> 	 >>> [...] G >>> so there is some education. backwards compatiblity is becoming mores' >>> important but come on 5.5-2 is old.t >> tH >> V5.5-2 is where some folks are "stuck" because that's where their ISV' >> abandoned them. They have no choice.  > E > IIRC that was the time when ISVs had to decide if they were able tol- > support VAX _and_ Alpha hardware platforms.e  G Another factor was ISV's who were unwilling to exert the effort to makepG their application as secure as required by VAX/VMS V6.0.  Many had beene; "coasting" since V5.0 with no significant changes required.n   ------------------------------  + Date: Sun, 17 Aug 2003 13:02:06 +0000 (UTC)e3 From: "Richard Maher" <maher_rj@hotspamnotmail.com> B Subject: Priviliged Library Vector flags - PLV$M_WAIT_CALLERS_MODE/ Message-ID: <bhnucd$9k8$1@titan.btinternet.com>    Hi,   ' Can anyone tell me what that flag does?   L If I return ss$_waitcallersmode (sp?) from my UWSS then when do I get called< back? How do I tell the dispatcher that we're off again now?  H This looks good! The manual suggests that all these flags only come into2 effect when 64-bit args are enabled. Is this true?   Regards Richard Maherd   ------------------------------    Date: 17 Aug 2003 08:27:23 -0500- From: Kilgallen@SpamCop.net (Larry Kilgallen)tF Subject: Re: Priviliged Library Vector flags - PLV$M_WAIT_CALLERS_MODE3 Message-ID: <Z7szmBxubBWG@eisner.encompasserve.org>   e In article <bhnucd$9k8$1@titan.btinternet.com>, "Richard Maher" <maher_rj@hotspamnotmail.com> writes:p  ) > Can anyone tell me what that flag does?e  H On it's face, it would seem to allow user mode code in an AST or anotherE thread to take advantage of any privileges that have been temporarily ' set by the User Written System Service.s   ------------------------------  % Date: Sat, 16 Aug 2003 21:11:35 -0500r1 From: "David J. 
Dachtera" <djesys.nospam@fsi.net>
Subject: Re: Querying UAF from MS Windows
Message-ID: <3F3EE457.5909D2F8@fsi.net>

Larry Kilgallen wrote:
>
> In article <ZQPaGlF27BVM@elias.decus.ch>, p_sture@elias.decus.ch (Paul Sture) writes:
> > In article <+TCKTTG7cuf5@eisner.encompasserve.org>, Kilgallen@SpamCop.net (Larry Kilgallen) writes:
> >> In article <3F3C4F15.58DF8750@fsi.net>, "David J. Dachtera" <djesys.nospam@fsi.net> writes:
> >>> Brian Conklin wrote:
> >>>>
> >>>> Hello,
> >>>>    I am a novice when it comes to OpenVMS.
> >>>>    I have been searching for days for a Howto on querying the OpenVMS
> >>>> Alpha 7.3 UAF from a Windows host that will be able to display user
> >>>> names, password expiration date, and whether the account is disabled
> >>>> or not on an ASP page.
> >>>>    I have a fairly large network that includes WinNT/2K servers, Unix
> >>>> servers, Linux servers, and this one OpenVMS server. We currently can
> >>>> display this user information from all but the OpenVMS server.
> >>>>    Thank you for any help you may be able to give.
> >>>
> >>> Other posters have mentioned Management Station, but that's a WhineBloze
> >>> app., not what you seem to be asking for.
> >>>
> >>> The trick might be to get the data you want from a DCL proc. on the VMS
> >>> machine via REXEC or RSHELL, massage the output into some suitable HTML
> >>> and return it to the browser via the web server on a UN*X, W/NT or W2K
> >>> box.
> >>
> >> Doing such a thing on your own is fraught with Security issues (of
> >> course VMS Management Station raises some of those concerns as well,
> >> but one hopes DEC designed protections in there).
> >
> > From my memories of VMS Management Station there was no way to extract
> > data to another file, or print the results except by taking a screen
> > snapshot (i.e. an image).
>
> CMKRNL beats all, but my comment was triggered more by the question of
> what protocols are on the wire carrying what sensitive information.

As I recently learned, setting up a "trusted host" relationship prevents
having to send a password in clear text when using R-services.

--
David J. Dachtera
dba DJE Systems
http://www.djesys.com/

Unofficial Affordable OpenVMS Home Page:
http://www.djesys.com/vms/soho/

------------------------------

Date: Sat, 16 Aug 2003 21:12:29 -0500
From: "David J. Dachtera" <djesys.nospam@fsi.net>
Subject: Re: Querying UAF from MS Windows
Message-ID: <3F3EE48D.E01C777B@fsi.net>

Brian Tillman wrote:
>
> >   I have been searching for days for a Howto on querying the OpenVMS
> > Alpha 7.3 UAF from a Windows host that will be able to display user
> > names, password expiration date, and whether the account is disabled
> > or not on an ASP page.
>
> You'll never find a tool to display the password, since it's encrypted.

I thought he only mentioned seeing the password expiration date.

--
David J.
Dachterae dba DJE Systemsv http://www.djesys.com/  ( Unofficial Affordable OpenVMS Home Page: http://www.djesys.com/vms/soho/.   ------------------------------    Date: 16 Aug 2003 13:09:31 -0500- From: Kilgallen@SpamCop.net (Larry Kilgallen)m( Subject: Re: Security -- MicroSoft Style3 Message-ID: <6gjLXcJO6LDz@eisner.encompasserve.org>E  ^ In article <6Ue%a.992$aw5.98064490@news.netcarrier.net>, "rob kas" <rob@paychoice.com> writes:N >    Tough to have faith in Microsoft Security , when their own solution is to > turn off the Web Site. >  > 8 > http://news.com.com/2100-1002_3-5064433.html?tag=cd_mh  = "We have no plans to ever restore that to be an active site."   1 The criminals should have aimed at Microsoft.com.n   ------------------------------  # Date: Sat, 16 Aug 2003 19:00:21 GMT " From:   VAXman-  @SendSpamHere.ORG( Subject: Re: Security -- MicroSoft Style0 Message-ID: <00A2478F.9C8A7051@SendSpamHere.ORG>  c In article <6gjLXcJO6LDz@eisner.encompasserve.org>, Kilgallen@SpamCop.net (Larry Kilgallen) writes: _ >In article <6Ue%a.992$aw5.98064490@news.netcarrier.net>, "rob kas" <rob@paychoice.com> writes:MO >>    Tough to have faith in Microsoft Security , when their own solution is ton >> turn off the Web Site., >> w >> b9 >> http://news.com.com/2100-1002_3-5064433.html?tag=cd_mhh >:> >"We have no plans to ever restore that to be an active site." >a2 >The criminals should have aimed at Microsoft.com.  % The criminals *are* Microsoft (.com).c --O VAXman- OpenVMS APE certification number: AAA-0001     VAXman(at)TMESIS(dot)COMo             5   "Well my son, life is like a beanstalk, isn't it?" t   ------------------------------  # Date: Sat, 16 Aug 2003 20:49:02 GMT1% From: "Mike Naime" <mnaime@kc.rr.com>.* Subject: Re: Website Based on ASP and VMS.; Message-ID: <2Nw%a.93745$o27.2160296@twister.rdc-kc.rr.com>?   Possibly a VMS based web page?  K Why transfer it to another platform?  
and comfortable with?

If your App is VMS based, it will be easier to pull data and present it on
the VMS side.
I was really surprised how simple it was to publish a stats page using
Apache Webserver on VMS.
There are other web server engines that you can use on VMS.

Mike

Omri <omribi@zahav.net.il> wrote in message
news:9fe63810.0308161159.365d3ef@posting.google.com...
> Hi,
> I have VMS 6.2 with some application.
> I want to create an intranet based on Windows2000/IIS5 Server and
> ASP/VBSCRIPT.
> I need this intranet to read/update information from the VMS Application.
> The VMS Application uses DBF files as its database.
> I think that Ericom has a solution for that (PowerTerm Host Publisher).
> Does someone know of other options?
>
> Thanks,
> Omri

------------------------------

End of INFO-VAX 2003.454
************************