INFO-VAX	Mon, 21 Jul 2003	Volume 2003 : Issue 399

Contents:
  Re: dismounting disks during shutdown
  Re: Does anyone remember IAS?
  Re: duplicating system disks
  Re: duplicating system disks
  Re: duplicating system disks
  Re: duplicating system disks
  Re: duplicating system disks
  Re: duplicating system disks
  Re: duplicating system disks
  Re: getting rx2600 pricing
  Re: HP FUDBusting
  OpenVms Backup
  Re: Opteron motherboard maker sold
  Re: PDP-11 OS Release Dates
  Re: PDP-11 OS Release Dates
  Re: PDP-11 OS Release Dates
  Re: Using MIME and SMTP mail results in %TCPIP-I-SMTP_LINEWRAP,

----------------------------------------------------------------------

Date: Mon, 21 Jul 2003 00:12:43 GMT
From: Rob.Buxton@wcc.govt.nz
Subject: Re: dismounting disks during shutdown
Message-ID: <3f1b2f00.259598562@news>

On Sun, 20 Jul 2003 15:38:28 +0000 (UTC), helbig@astro.multiCLOTHESvax.de
(Phillip Helbig---remove CLOTHES to reply) wrote:

>What is the recommended procedure for dismounting (members of) shadow
>sets which are mounted by more than one node and which have members
>connected physically to more than one node?  Goal is to avoid (mini)
>merges as much as possible.  Obviously, if one node goes down, taking a
>member with it, but the other member is still mounted, then a merge will
>be necessary when the member rejoins.  But this won't be the case if all
>members dismount the shadow set first (which might not always be
>desirable, of course).
>
>It seems to me that in the site-specific shutdown procedure, a node
>should DISMOUNT/SYSTEM all shadow sets and DISMOUNT/CLUSTER non-shadowed
>disks which might be mounted by other nodes.  That will leave only those
>physically connected disks which are parts of shadow sets mounted from
>another node.  Obviously, these will be dismounted in some sense when
>the node shuts down, and a merge will be necessary.
>
>What will happen---what will get dismounted in what order---if there is
>no site-specific dismounting taking place?
>
>A related question: is it possible from one specific node to dismount a
>disk on other nodes but not on the specific node in question (without
>going through SYSMAN etc)?

I just let the OS remove the shadowsets when it's ready. What you need
to do to avoid merges etc.
is to ensure all applications with files open are shut down.

As part of my shutdown I do a show dev /files for each of the disks at
the end of my shutdown procedure. If any of these are showing files open
then I add additional shutdown procedures to close them.

------------------------------

Date: 20 Jul 2003 13:03:34 -0700
From: wmr282@hotmail.com (w m r)
Subject: Re: Does anyone remember IAS?
Message-ID: <398c9ca7.0307201203.7ada43dd@posting.google.com>

Paul Repacholi <prep@prep.synonet.com> wrote in message news:<878yqulebn.fsf@prep.synonet.com>...
> wmr282@hotmail.com (w m r) writes:
>
> > Oh yeah, then we got another 11/70 and we cross-connected them using
> > the dual ports of the RP07's.  I used DR-11C's to lock between the
> > two and chopped up the F11ACP to share access on the drives.
>
> Was it you who did the stuff that was on the RSX SIG tape to do this?

I never sent what I did anywhere except to the customer.  It was a
contract for the Navy.  Maybe someone else from work did after I left.

Do you know when it was written?  The stuff I did would have been
around '77 or '78.

Mike

------------------------------

Date: Sun, 20 Jul 2003 18:53:24 GMT
From: Beach Runner@nospam.com
Subject: Re: duplicating system disks
Message-ID: <3F1AE523.340F22D9@cfl.rr.com>

Yes, there are plenty of people that set up a master disk, then
make changes.  Some people have set up rather automated
systems; they have a master system disk that checks for something,
then configures itself based on volume label, CPU, something.

Phillip Helbig---remove CLOTHES to reply wrote:

> I always come back to this subject every few months, as my knowledge of
> VMS increases.
>
> There must be many folks who have multiple system disks for redundancy,
> which should be identical except for NODE-SPECIFIC stuff.
> Rather than
> upgrading, installing layered products on etc ALL disks, it would make
> more sense to do it just on one "master disk" then make copies of this
> for other system disks (quite comfortably if all system disks are
> shadowed).
>
> Does anyone actually do this?
>
> Apart from the stuff one needs to do when changing the node name etc,
> there is the stuff in the system-specific roots one needs to worry
> about.  Actually, even the changing-the-nodename-stuff can be avoided if
> one has many roots on all system disks, while on any given system disk
> only a few would be used, the consoles being set to boot from the
> appropriate root.
>
> Naively, it seems I should backup the roots which are used on any given
> system disk and copy them back after the disk is upgraded by being
> copied from an upgraded master disk.
>
> A relatively fresh system in my hobbyist cluster looks like this:
>
>    $ dir/gra/siz sys$sysdevice:[...];/exc=([.syscommon...],*.sys,*.dmp)
>
>    Grand total of 11 directories, 65 files, 808 blocks.
>
> That's not too bad.
>
> Is there anything wrong with this approach?  In particular, can a VMS
> upgrade affect stuff in the specific roots so that this approach would
> overwrite any changes?  (I'm not talking about stuff related to system
> parameters and AUTOGEN.)

------------------------------

Date: Sun, 20 Jul 2003 21:24:32 +0200
From: Bart Zorn <B.Zorn@xs4all.nospam.nl>
Subject: Re: duplicating system disks
Message-ID: <3f1aec70$0$49099$e4fe514c@news.xs4all.nl>

If you want to do this, you should make sure that all systems boot from
a different system root. In that way, if you do an upgrade on one system
disk, all system roots will be upgraded as well.
Afterwards, you can
replicate the upgraded system disk to wherever you want.

Regards,

Bart Zorn

Phillip Helbig---remove CLOTHES to reply wrote:
> I always come back to this subject every few months, as my knowledge of
> VMS increases.
>
> There must be many folks who have multiple system disks for redundancy,
> which should be identical except for NODE-SPECIFIC stuff.  Rather than
> upgrading, installing layered products on etc ALL disks, it would make
> more sense to do it just on one "master disk" then make copies of this
> for other system disks (quite comfortably if all system disks are
> shadowed).
>
> Does anyone actually do this?
>
> Apart from the stuff one needs to do when changing the node name etc,
> there is the stuff in the system-specific roots one needs to worry
> about.  Actually, even the changing-the-nodename-stuff can be avoided if
> one has many roots on all system disks, while on any given system disk
> only a few would be used, the consoles being set to boot from the
> appropriate root.
>
> Naively, it seems I should backup the roots which are used on any given
> system disk and copy them back after the disk is upgraded by being
> copied from an upgraded master disk.
>
> A relatively fresh system in my hobbyist cluster looks like this:
>
>    $ dir/gra/siz sys$sysdevice:[...];/exc=([.syscommon...],*.sys,*.dmp)
>
>    Grand total of 11 directories, 65 files, 808 blocks.
>
> That's not too bad.
>
> Is there anything wrong with this approach?  In particular, can a VMS
> upgrade affect stuff in the specific roots so that this approach would
> overwrite any changes?
> (I'm not talking about stuff related to system
> parameters and AUTOGEN.)
>

------------------------------

Date: Sun, 20 Jul 2003 20:26:09 +0000 (UTC)
From: moroney@world.std.spaamtrap.com (Michael Moroney)
Subject: Re: duplicating system disks
Message-ID: <bfett1$20a$1@pcls4.std.com>

Your best bet is to invest in either a shadowing license or controller-based
mirroring, and boot all nodes from the same virtual unit.

To upgrade using shadowing, you break up the shadowset, upgrade one of the
drives, boot a node with the drive as a single-member shadowset and add
the other drive(s) later.

--
-Mike

------------------------------

Date: Sun, 20 Jul 2003 19:38:32 +0000 (UTC)
From: helbig@astro.multiCLOTHESvax.de (Phillip Helbig---remove CLOTHES to reply)
Subject: Re: duplicating system disks
Message-ID: <bfer3o$6gc$1@online.de>

In article <3f1aec70$0$49099$e4fe514c@news.xs4all.nl>, Bart Zorn
<B.Zorn@xs4all.nospam.nl> writes:

> If you want to do this, you should make sure that all systems boot from
> a different system root. In that way, if you do an upgrade on one system
> disk, all system roots will be upgraded as well. Afterwards, you can
> replicate the upgraded system disk to wherever you want.

Right.  However, I'm still worried.  Let's say I have 2 nodes, each with
its own system disk, and two roots.  Since each boots from its own root,
then I don't have to worry about changing the node name etc.  However,
on the master disk, the root "used" by the other node is just a dummy.
While the other node is running, stuff gets written there etc.  After I
upgrade the master disk, I've upgraded the dummy root for the other
node, not the real one.  What I want to do is preserve stuff in the
specific root across the upgrade.  I can't just use the new root from
the upgraded master disk, since it doesn't have the "live" stuff.
If I
save the specific root and copy it back later, I overwrite the upgrade.

What I want to avoid is a file-by-file comparison.

------------------------------

Date: Sun, 20 Jul 2003 20:58:16 GMT
From: Beach Runner@nospam.com
Subject: Re: duplicating system disks
Message-ID: <3F1B0268.B6F0455B@cfl.rr.com>

OK, I'll take the case of a classified weapon maker.  They have 1,000s of
systems. Based on the volume label they go out, they look in a master file
somewhere, and then configure the network and site specifics appropriately.
They have one master disk which they can pop into any system. You can do it,
but you have to be very clever, and it won't be the official, by-the-book
supported method.

Bart Zorn wrote:
> If you want to do this, you should make sure that all systems boot from
> a different system root. In that way, if you do an upgrade on one system
> disk, all system roots will be upgraded as well. Afterwards, you can
> replicate the upgraded system disk to wherever you want.
>
> Regards,
>
> Bart Zorn
>
> Phillip Helbig---remove CLOTHES to reply wrote:
> > I always come back to this subject every few months, as my knowledge of
> > VMS increases.
> >
> > There must be many folks who have multiple system disks for redundancy,
> > which should be identical except for NODE-SPECIFIC stuff.  Rather than
> > upgrading, installing layered products on etc ALL disks, it would make
> > more sense to do it just on one "master disk" then make copies of this
> > for other system disks (quite comfortably if all system disks are
> > shadowed).
> >
> > Does anyone actually do this?
> >
> > Apart from the stuff one needs to do when changing the node name etc,
> > there is the stuff in the system-specific roots one needs to worry
> > about.  Actually, even the changing-the-nodename-stuff can be avoided if
> > one has many roots on all system disks, while on any given system disk
> > only a few would be used, the consoles being set to boot from the
> > appropriate root.
> >
> > Naively, it seems I should backup the roots which are used on any given
> > system disk and copy them back after the disk is upgraded by being
> > copied from an upgraded master disk.
> >
> > A relatively fresh system in my hobbyist cluster looks like this:
> >
> >    $ dir/gra/siz sys$sysdevice:[...];/exc=([.syscommon...],*.sys,*.dmp)
> >
> >    Grand total of 11 directories, 65 files, 808 blocks.
> >
> > That's not too bad.
> >
> > Is there anything wrong with this approach?  In particular, can a VMS
> > upgrade affect stuff in the specific roots so that this approach would
> > overwrite any changes?
> > (I'm not talking about stuff related to system
> > parameters and AUTOGEN.)
> >

------------------------------

Date: Sun, 20 Jul 2003 22:44:36 GMT
From: Michael Austin <maustin@firstdbasource.com>
Subject: Re: duplicating system disks
Message-ID: <3F1B1B2D.F267B574@firstdbasource.com>

Phillip Helbig---remove CLOTHES to reply wrote:
>
> I always come back to this subject every few months, as my knowledge of
> VMS increases.
>
> There must be many folks who have multiple system disks for redundancy,
> which should be identical except for NODE-SPECIFIC stuff.  Rather than
> upgrading, installing layered products on etc ALL disks, it would make
> more sense to do it just on one "master disk" then make copies of this
> for other system disks (quite comfortably if all system disks are
> shadowed).
>
> Does anyone actually do this?
>
> Apart from the stuff one needs to do when changing the node name etc,
> there is the stuff in the system-specific roots one needs to worry
> about.  Actually, even the changing-the-nodename-stuff can be avoided if
> one has many roots on all system disks, while on any given system disk
> only a few would be used, the consoles being set to boot from the
> appropriate root.
>
> Naively, it seems I should backup the roots which are used on any given
> system disk and copy them back after the disk is upgraded by being
> copied from an upgraded master disk.
>
> A relatively fresh system in my hobbyist cluster looks like this:
>
>    $ dir/gra/siz sys$sysdevice:[...];/exc=([.syscommon...],*.sys,*.dmp)
>
>    Grand total of 11 directories, 65 files, 808 blocks.
>
> That's not too bad.
>
> Is there anything wrong with this approach?  In particular, can a VMS
> upgrade affect stuff in the specific roots so that this approach would
> overwrite any changes?
> (I'm not talking about stuff related to system
> parameters and AUTOGEN.)

I do this all the time...

We have a "golden brick" with the latest version plus the latest
patches.  This golden brick is only used for "new systems" -- we will be
adding an additional ~24 systems over the next quarter.  Everything is
treated as a 2-node cluster. We minimum-boot off of the zero root,
update the SCS info, update MODPARAMS.DAT, autogen, reboot, update
DECNET and TCPIP, and we then turn it over to a build team who then
builds the application environment.  Repeat for node 2.  We can build
the initial system disk for a new 2-node cluster in about an hour.

Any upgrades are handled as downtime upgrades - generally speaking 1-1.5
hrs depending on whether or not we are also updating firmware.

DS10s and 20s, ES40s and 45s; adding DS15s, 25s and ES47s and a couple of
GS1280s.  That should hold us through the middle of next quarter.  It
doesn't make a lot of sense to use the "golden brick" to do the upgrade,
as it takes approximately the same amount of time to make all of the
necessary changes as it does to just copy the latest patch set from the
"master disk", stick in the CD, do the upgrade, apply the patches and
reboot...

All of our systems are on a single SAN (currently 128 TB raw disk). Using
SAN zoning and HSG/V selective presentation, if we have a severe
hardware issue, we will perform a "hardware move": identify a spare box,
move the network cable, rezone, re-present the storage, and the system
is up and running within an hour.  Meanwhile we log a call on the "down"
box and have HP come in and repair it.  At a later point we may move it
back to its original hardware (when we can schedule the downtime).

If the software vendor had chosen to use Rdb instead of the piece of
garbage "parallel server" from Oracle, we would never need to do the
hardware move....
maybe 9iRAC will fix that now that they have real
engineers working on fixing Oracle by using the technology from Rdb.

--

Regards,

Michael Austin            OpenVMS User since June 1984
First DBA Source, Inc.    Registered Linux User #261163
Sr. Consultant            http://www.firstdbasource.com

------------------------------

Date: 21 Jul 03 05:37:29 +0200
From: p_sture@elias.decus.ch (Paul Sture)
Subject: Re: duplicating system disks
Message-ID: <7y2O5M8yTF6L@elias.decus.ch>

In article <bfee37$phj$1@online.de>, helbig@astro.multiCLOTHESvax.de
(Phillip Helbig---remove CLOTHES to reply) writes:
> I always come back to this subject every few months, as my knowledge of
> VMS increases.
>
> There must be many folks who have multiple system disks for redundancy,
> which should be identical except for NODE-SPECIFIC stuff.  Rather than
> upgrading, installing layered products on etc ALL disks, it would make
> more sense to do it just on one "master disk" then make copies of this
> for other system disks (quite comfortably if all system disks are
> shadowed).
>
> Does anyone actually do this?

Sort of. We don't have multiple system disks for redundancy - it's
shadowing all round here. But, in the run-up to Y2K we were not only
installing test systems, but performing VMS and layered product upgrades.

We would upgrade one system (several hours' work), then add a third
shadow member to the system disk to get a copy. Move that to the target
machine, then change nodename etc in MODPARAMS.DAT, Autogen, TCPIP$CONFIG
and reboot. (Our startup procedures automatically create the appropriate
nodename_BATCH queues; the DCPS ones need a manual tweak.)

>
> Apart from the stuff one needs to do when changing the node name etc,
> there is the stuff in the system-specific roots one needs to worry
> about.
> Actually, even the changing-the-nodename-stuff can be avoided if
> one has many roots on all system disks, while on any given system disk
> only a few would be used, the consoles being set to boot from the
> appropriate root.
>
> Naively, it seems I should backup the roots which are used on any given
> system disk and copy them back after the disk is upgraded by being
> copied from an upgraded master disk.
>
> A relatively fresh system in my hobbyist cluster looks like this:
>
>    $ dir/gra/siz sys$sysdevice:[...];/exc=([.syscommon...],*.sys,*.dmp)
>
>    Grand total of 11 directories, 65 files, 808 blocks.
>
> That's not too bad.
>
> Is there anything wrong with this approach?  In particular, can a VMS
> upgrade affect stuff in the specific roots so that this approach would
> overwrite any changes?  (I'm not talking about stuff related to system
> parameters and AUTOGEN.)

The danger there is that you will overwrite the result of the upgrade.
IIRC one of the TCPIP facilities (NTP? SMTP?) changed the location of some
of its files from SYS$SPECIFIC to SYS$COMMON (or vice versa) in a recent
release.

The most obvious candidate to watch out for here is MODPARAMS.DAT, as
a VMS upgrade modifies that.

------------------------------

Date: Mon, 21 Jul 2003 00:16:05 +0800
From: Paul Repacholi <prep@prep.synonet.com>
Subject: Re: getting rx2600 pricing
Message-ID: <87fzl1jhru.fsf@prep.synonet.com>

p_sture@elias.decus.ch (Paul Sture) writes:

> Oh, and while I'm on the subject of G5s, the pricing represents
> stiff competition with the prices I have seen here this week for the
> Itanium.
>
> No choice of 3 OSes, but Mac OS X _is_ in with the purchase price,
> and a whole $525 extra to get 2 x 256 GB disks instead of the base
> 160 GB offering.

And it is about to get more interesting; IBM are about to start
shipping G5s in p-Series machines. About $3500 for a quad...

--
Paul Repacholi                               1 Crescent Rd.,
+61 (08) 9257-1001                           Kalamunda.
                                             West Australia 6076
comp.os.vms,- The Older, Grumpier Slashdot.
Raw, Cooked or Well-done, it's all half baked.
EPIC, The Architecture of the future, always has been, always will be.

------------------------------

Date: 20 Jul 2003 16:58:33 -0700
From: bob@instantwhip.com (Bob Ceculski)
Subject: Re: HP FUDBusting
Message-ID: <d7791aa1.0307201558.7613bc22@posting.google.com>

Stefaan A Eeckels <hoendech@ecc.lu> wrote in message news:<20030719230503.29ba5ecc.hoendech@ecc.lu>...
> On 19 Jul 2003 09:25:46 -0700
> bob@instantwhip.com (Bob Ceculski) wrote:
>
> > > Isn't VMS a case of "security through obscurity"?
> > >
> > > Just to make sure people get it: :-)
> >
> > no, vms is security through security ... and we all get it,
> > we get that you don't know what you are talking about!
>
> Oh my, are we grumpy today.
>
> I'm pleased for you you feel Compaq/HP's management
> of VMS entirely satisfactory.
 >  > --  	 > Stefaan   7 we are not talking about management, you stated vms was / security thru obscurity which is 100% false ... # don't try to change the subject ...    ------------------------------  % Date: Mon, 21 Jul 2003 06:17:17 +0200 & From: "maurix" <mizioduck@hotmail.com> Subject: OpenVms Backup - Message-ID: <bffpej$lcq$1@e3k.asi.ansaldo.it>    Hi! 8 I need a help with OpenVms 6.2 on a Digital DEC Station.  B I've done a system disk backup on tape with the following command:  E $$$ backup/image/ignore=nobackup/log dka300: mka500:name.bck/init/rew   L Then I restored this tape on another DEC and when I reboot appears a message like:    No PAGEFILE found    Do you know why?  : Thank you for your help and forgive me for my english.....   bye    -- md'a :-)  mizioduck@hotmail.com  (uck=a)    ------------------------------    Date: 20 Jul 2003 23:14:03 -0500+ From: young_r@encompasserve.org (Rob Young) + Subject: Re: Opteron motherboard maker sold 3 Message-ID: <$yh0guK6bFI6@eisner.encompasserve.org>   _ In article <HpmdndUNq6P73ISiXTWJig@metrocast.net>, "Bill Todd" <billtodd@metrocast.net> writes:  > : > "Rob Young" <young_r@encompasserve.org> wrote in message/ > news:fACBuR+2UNqB@eisner.encompasserve.org...    >> > Spin on, Rob. >> >>' >> > Newisys didn't 'go down the tubes'  >> >> Well, sure: >>, >> http://www.theinquirer.org/?article=10525 >>: >> Newisys blames lack of demand for AMD Opterons for sale >>7 >> Will staff now be forced to make Intel motherboards?  >>K >> The problem is not many people were buying Opteron boxes - and as far as  > we canL >> gather, still aren't. Moles at Newisys say that Clay Cipione likes to put > itJ >> this way. "We've done all we can to make AMD successful. Now it's up to > them". >>E >> It might have helped if IBM had bought their machines after making 
>> encouraging noises, but it appears like Big Blue just decided to let
>> Newisys flounder.
>
> So we have the usual rumor-level Inquirer reporting, and then we have the
> considerably more substantive statements noted below.  Why am I not
> surprised which you prefer to highlight in this case?
>
>> >
>> > http://www.statesman.com/business/content/auto/epaper/editions/friday/business_f371d9d1c525513f0010.html
>> >
>> > The only people let go were redundant sales and marketing types:  the
>> > rest of the crew, including the top management, is staying on to
>> > continue the work without the distractions inherent in keeping a
>> > start-up afloat.
>> >
>>
>> Maybe if we read the Statesman's article a little closer
>> we are able to deduce those 100 engineers Sanmina acquired
>> will indeed be doing Intel designs:
>>
>> "Newisys hoped to create sophisticated server designs that it would
>> license to major computer makers, such as IBM, Dell Computer Corp.,
>> Hewlett-Packard Co. and Sun Microsystems Inc. So far, the company has
>> struck licensing deals with smaller computer makers, including San
>> Diego-based RackSaver Inc."
>>
>> Now that they are freed from the "Opteron only shackles", they
>> can design servers that Dell and HP might look at.  Dell doesn't
>> do Opteron.  "Friends don't let friends do Opteron."
>
> Nice try, turkey, but in this day and age I'm afraid that companies don't
> shell out the better part of $100 million just to obtain the services of
> 100 engineers/managers and transfer them to some job other than that which
> they're currently doing.
>

	Well isn't this special:

http://www.crn.com/sections/BreakingNews/dailyarchives.asp?ArticleID=43370

Gibson said the Newisys relationship with AMD will not affect Sanmina-SCI's
relationship with Intel. "We're bringing (Newisys) in for OEM customers," he
said. "We have no plan to brand our systems.
We are also looking at using the
Newisys team to develop Intel-based systems in the future."

	Apparently they do shell out money to acquire talent.  After all,
	as the Inquirer pointed out, looks like the Newisys folks will get
	to design Intel kit after all.  But doesn't that make sense?
	After all, maybe Opteron is 2% of the server market 2 years from
	now; it doesn't make sense to ignore the other 90%+ segment.

				Rob

------------------------------

Date: Sun, 20 Jul 2003 16:56:19 -0700
From: Lars Poulsen <lars@beagle-ears.com>
Subject: Re: PDP-11 OS Release Dates
Message-ID: <3F1B2C23.6020809@beagle-ears.com>

Rob Warnock wrote:
>> And OS-8 could even run with *only* DECtape as the "system disk", too!!

Douglas A. Gwyn wrote:
> I once ran RT-11 off DECtape on a PDP-11/70 in DEC's
> Marlboro facility.  It was amusing.

Three phrases: VAX-11/730. TU-58. Microcode load.

The TU-58 was a cassette tape (actually, a miniature QIC, IIRC)
interfaced through a DL-11 UART, used as a block device, usually
with an RT-11 file structure. It made a DECtape look good.

I heard, to my amazement, that the Soviet VAX-cloning group cloned
the 730, TU-58 and all. That was so stupid. They should at least
have replaced the TU-58 with an 8080 controlling a floppy disk.

--
/ Lars Poulsen        +1-805-569-5277   http://www.beagle-ears.com/lars/
   125 South Ontare Rd, Santa Barbara, CA 93105 USA  lars@beagle-ears.com

------------------------------

Date: Sun, 20 Jul 2003 23:26:24 -0400
From: "Douglas A. Gwyn" <DAGwyn@null.net>
Subject: Re: PDP-11 OS Release Dates
Message-ID: <dWmdnbT2M7TAwIaiXTWJjQ@comcast.com>

Bob Kaplow wrote:
> REAL DECtape, or that nightmare that was the TU58?
Real DECtape.

------------------------------

Date: Mon, 21 Jul 2003 04:01:53 GMT
From: ian@hammo.com (paramucho)
Subject: Re: PDP-11 OS Release Dates
Message-ID: <3f2564d2.14695839@news.supernews.com>

On Sun, 20 Jul 2003 13:31:28 -0400, Glenn Everhart
<Everhart-nospam@gce.com> wrote:

>OS/8 booted from real DECtape. So did RT11. I had a DOS-11 version
>that booted and ran from DECtape also, though DOS had a built-in
>limit to the number of files it allowed to be opened on a DECtape
>at any one time that limited its usefulness there. It could have
>been lifted but DOS swapped so much that I ran out of patience
>trying to use it.

That's why RT-11 went through all the pain of a position-independent
KMON -- so that they could keep it in memory as long as possible to
reduce trips back to reload it from DECtape. It wasn't necessary,
performance-wise, with a disk-based platform.

>  Many other machines also could boot from
>DECtape; I believe even some of the old pdp10s did this. I used
>to boot KM9/15 off DECtape every day too.
>
>By modern standards 576 blocks of 512 bytes isn't much space, but
>somehow we got a lot done with it back then.

It seemed like a vast expanse to me at the time.

--
Ian
Impressive If Haughty - Q Magazine

------------------------------

Date: Sun, 20 Jul 2003 22:54:05 GMT
From: Michael Austin <maustin@firstdbasource.com>
Subject: Re: Using MIME and SMTP mail results in %TCPIP-I-SMTP_LINEWRAP,
Message-ID: <3F1B1D69.A179C26B@firstdbasource.com>

Jan-Erik Söderholm wrote:
>
> Others have answered on the "where", I'd just like to show "how".
>
> I use MPACK/NBL on a 7*24*365 batch system that sends files from
> a few 100 bytes to several 100 K bytes in size.
>
> One example of the DCL code used (I've stripped the
> error-checking code for clarity):
>
> $! trc_file is (in this case) a textfile with 150
> $! char records and from a few to several 10 of
> $! 1000 of records. The other symbols are more or
> $! less self explainationary (or whatever it's called)
> $! mailadr is an address in the form "user@a.b.c"
> $!
> $ zip -jl 'zip_file' 'trcfile'
> $ mpack -s "''subject'" -o 'mime_file' 'zip_file'
> $ mail 'mime_file'   "nbl%""''mailadr'"""
> $!
> $! That's all folks !
> $!
>
> The system sends a couple of 100's of mails per day and there are
> no problems at "the other side", mostly PC users around the world.
>
> Jan-Erik.
>
> John Brandon wrote:
> >
> > > That tool is a mess.
> > > I use MPACK/MUNPACK and NBL...
> > > Jan-Erik.
> >
> > I am not familiar with this.  Curious, where?
Jan-Erik,

try...

$ zip -jl 'zip_file' 'trcfile'
$ pipe mpack -s "''subject'" -o sys$output 'zip_file' | mail sys$pipe "nbl%""''mailadr'""" /subj="whatever"

This is what I use...

$ pipe uuencode <input file> sys$output | mail sys$pipe "''user'" /subj="''subject'"

works on all kinds of files: zip, text, executables...

I like this method because now I don't need to worry about cleaning up
orphaned temp files.  I have not been able to figure out how to get zip
to play this game yet...

--
Regards,

Michael Austin            OpenVMS User since June 1984
First DBA Source, Inc.    Registered Linux User #261163
Sr. Consultant            http://www.firstdbasource.com

------------------------------

End of INFO-VAX 2003.399
************************