				  "newcm_d"
			    Version 1.1 - 12/23/93
	     An "rpc.cmsd" companion program allowing the use of
		  centralized calendar manager data servers.
				       
==============================================================================
Author: Alan Humpherys			(Address will be changing within
	Amdahl Corporation		 the next several months.)
	1250 E. Arques Ave. M/S 253
	Sunnyvale, CA 94088-3470

	email: agh20@amdahl.com
	       alangh@aol.com

Special Thanks to Patrick Horgan for helping with design issues, testing,
and review of this product.

This product is "mail-ware": just send me an e-mail to let me know that you
are using it.  I will add your name to a mailing list for discussion of
newcm_d.

			      Table of Contents:

I.   Purpose of "newcm_d"
II.  How "newcm_d" works
III. Configuring and Compiling "newcm_d"
IV.  Installing "newcm_d"
V.   Troubleshooting Problems
VI.  System Requirements and Assumptions

			   I. Purpose of "newcm_d"
			   =======================

	The calendar manager program (called "cm") from Sun's DeskSet of tools
for OpenWindows stores the calendar data for a given user on the system where
they are running the "cm" process.  It also provides a means for accessing the
calendar data of other users on different systems. This supports a network of
workstations running the cm process.  cm accomplishes this task by requiring
the user to enter a "<user>@<host>" string which the calendar manager process
uses to locate the host that contains the calendar for the specified user.

	In our environment of 900 Sun Workstations, there were two motivations
for wanting to modify this scheme so that "cm" could be better used here in
Compatible Systems at Amdahl.  The first motivation for making a change was
that our support organization could not reliably back up the calendar data
spread out over 900 individual workstations.  I wanted to centralize the
calendar information on a smaller number of machines to make the backup
process more manageable.

	The second motivation for change comes from having to know a given
user's workstation hostname in order to access their calendar data. Having to
know the hostname is an undue burden for the typical "cm" user, and presents a
barrier to the effective use of "cm" for enterprise wide calendaring.  I set
out to make it so that the program did the work of finding out where a given
user's calendar resides, to take that burden off of the end-user.

	These two motivations led me to the creation of "newcm_d", which acts
as a replacement for the original "rpc.cmsd" from OpenWindows.  This daemon
meets both needs in a transparent manner, so that users can use "cm" to its
full potential.

	There is a FAQ (Frequently Asked Questions) list for calendar manager
maintenance and it lists several possible solutions to this problem.  One of
the primary ones in use today is to centralize calendars on a single host, and
retrain the user community to always access calendars on that host.  I prefer
a less obtrusive solution which lets the workstations do the work instead of
the users.  Another solution presented there consists of a daemon very similar
to newcm_d which forwards the requests for calendars on to a central machine.
Unfortunately, this scheme does not allow users to use the "for my eyes
only" feature in cm, and only allows for one cm server.  Newcm_d does not
have these shortcomings because it cooperates with cm to allow all features of
cm to be used without having the users change how they use the tool, while
still centralizing calendar data.  For details about this operation, please
refer to the file README.conv, which describes the data fields which had to be
manipulated in order to allow for this transparent operation.

	newcm_d is not a replacement for Sun's calendar manager product; it is
instead a companion program which transparently enhances the abilities of the
original product by making it appear that all calendars reside on the local
machine, when they are in fact still distributed across the network.

			 II. How "newcm_d" Works
			 =======================

	The basic operation of "newcm_d" can probably best be described by a
set of diagrams.  In normal operation, the "cm" process interacts with
"rpc.cmsd" as follows:

      +---------- host1 -----------+        +----- host2 -----+
      |               +----------+ |        |                 |
      |    ---------->|          | |        |                 |
      |  (user1@host1)| rpc.cmsd | |        |                 |
      |    <----------|          | |        |                 |
      | cm            +----------+ |        |    +----------+ |
      |    ------------------------------------->|          | |
      |  (user2@host2)             |        |    | rpc.cmsd | |
      |    <------------------------------------ |          | |
      |                            |        |    +----------+ |
      +----------------------------+        +-----------------+

                        Figure 1: Normal cm operation.


	The "cm" process talks to the "rpc.cmsd" daemon on the host which is
specified by the "<user>@<host>" specification in the Browse menu.  As stated
above, this requires the "cm" user to know which host contains the calendar
for a given user so that the calendar can be accessed.  Figure 1 shows two
hosts, but the scheme extends to any number of hosts, each containing its own
calendar data.

	A diagram of the basic operation of "newcm_d" is as follows:

      +---------- host1 ----------+        +---- cmsrvr1 ----+
      |               +---------+ |        |    +----------+ |
      |    ---------->|         |-------------->| rpc.cmsd | |
      |  (user1@host1)|         |(user1@cmsrvr1)|    &     | |
      |    <----------|         |<--------------| querycm_d| |
      | cm            | newcm_d | |        |    +----------+ |
      |    ---------->|         |-------+  +-----------------+
      |  (user2@host1)|         | |     |
      |    <----------|         |<---+  |  +---- cmsrvr2 ----+
      |               +---------+ |  |  |  |    +----------+ |
      +---------------------------+  |  +------>| rpc.cmsd | |
                                 (user2@cmsrvr2)|    &     | |
                                     +----------| querycm_d| |
                                           |    +----------+ |
                                           +-----------------+

			Figure 2. cm operation with newcm_d.


	The basic premise of "newcm_d" is that several "calendar servers" will
be designated, and all calendar data will reside on them.  Those "calendar
servers" will each run the standard "rpc.cmsd" daemon that is shipped with
OpenWindows 3.

	All other machines will run "newcm_d", which will take care of finding
out which "calendar server" contains the calendar for the specified user and
forward all commands for that user on to the correct central "calendar
server".  The effect of this is that all users simply browse calendars as if
they resided on their local host, and the daemon takes care of finding and
working with the actual data, which resides elsewhere on the network.

	In summary, the "newcm_d" makes it so that the users do not have to
know what host a given user uses to store their calendar information; they
simply access all calendars as if they were on their local host.  It also
minimizes the number of machines containing calendar data, so that they can be
effectively backed up in a large scale environment.
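The heart of this scheme is a simple address rewrite: when cm asks for
"user@localhost", the daemon determines which cmserver actually holds that
user's calendar and forwards the request there.  A minimal sketch of that
rewrite follows; the function names are hypothetical, and the calendar_on()
stub stands in for the real RPC query that newcm_d makes to querycm_d on each
cmserver:

```c
#include <stdio.h>
#include <string.h>
#include <assert.h>

/* Illustrative stand-in for the RPC query to querycm_d: reports
 * whether `server` holds a calendar for `user`.  Stubbed here with
 * a static one-entry table. */
static int calendar_on(const char *user, const char *server)
{
    return strcmp(user, "user1") == 0 && strcmp(server, "cmsrvr1") == 0;
}

/* Rewrite "user@<anyhost>" to "user@<cmserver holding the calendar>".
 * Returns 0 on success and fills `out`, or -1 if no cmserver in the
 * list has a calendar for the user. */
int rewrite_target(const char *user, const char **servers, int nservers,
                   char *out, size_t outlen)
{
    int i;
    for (i = 0; i < nservers; i++) {
        if (calendar_on(user, servers[i])) {
            snprintf(out, outlen, "%s@%s", user, servers[i]);
            return 0;
        }
    }
    return -1;
}
```

With the stub above, a request for "user1" on any client would be forwarded
as "user1@cmsrvr1", which is exactly the flow shown in Figure 2.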

	       III. Configuring and Compiling "newcm_d"
	       =========================================

	"newcm_d" was developed on a system running SunOS 4.1.1B.  It has
been tested on systems running 4.1.1A, 4.1.1B, 4.1.2, and 4.1.3.  The port
to Solaris 2.x has not yet taken place, but it is in progress at this time,
and should be completed within the next several weeks.

	This daemon was designed to work only with the OpenWindows 3 version
of "cm" (any release).  If you are using OpenWindows 2.x, or plan on
moving to OpenWindows 4 when it is released, this daemon will not work for
you.  The reason behind this is that the daemon is built around the version of
the protocol used by the OpenWindows 3 "cm".  Adding support for the different
"cm" versions is straightforward, and will probably be done in a future
release.

	All local configuration options are contained in the file
"config.h" which is a part of this distribution.  The items in the file
which should be customized prior to building the product are:

* NOTE: All of these have defaults except for CMADMIN which MUST be
        configured prior to building the application.

 1. CMSERVERSMAP

	This defines the string name for the NIS map which contains a list
	of all of your calendar manager servers.  Be sure to put double
	quotes (") around the name.  As shipped, this is defined to be
	"cmservers".

 2. CMSERVERSFILE

	This defines a file which is used as a backup in the event that the
	cmservers NIS map is unavailable, or the NO_NIS option is chosen.
	If the NIS map does exist, this file is not consulted.  It behaves
	much like the hosts NIS map does. This defaults to "/etc/cmservers".

 3. CMADMIN

	This defines the mail address/alias where notifications of problems
	with the calendar system should be sent.  Currently "newcm_d" sends
	mail to this address when a user's calendar exists on more than one
	of the calendar servers.  It is important that this mail be
	monitored so that duplicates can be consolidated into a single
	calendar.  Since there is no sure way of knowing which
	calendar to use, the system will send out mail notifying the
	cmadmin of the duplication, and it will silently use the larger of
	the two calendars.

 4. NO_NIS

	If you wish to configure the daemon to use only the CMSERVERSFILE,
	simply uncomment the "#define NO_NIS 1" line.  The effect will be
	that the NIS map CMSERVERSMAP will not be consulted, even if it
	exists.

 5. HOST_CACHE_SIZE

	Because finding a user's calendar on a long list of cmservers can
	take several seconds, "newcm_d" caches the last "n" entries
	found, so that subsequent accesses of that user's calendar data
	will not require querying all of the cmservers to find the calendar
	again.  This cache of entries is maintained in a FIFO queue where
	the newest entry will replace the oldest in the list once the cache
	becomes full.  The cache is searched sequentially, and so it is
	assumed that its size will be rather small (<50 entries).  As
	shipped, this is configured to contain 20 entries.

 6. INTERVAL

	This value controls the duration of time between checks for
	consistency of the cached data in the daemon.  Once this number of
	seconds has expired, the source of the cmservers list (either the
	NIS map or CMSERVERSFILE) is checked to see if it has been
	modified.  If the timestamp on the file or map has changed, the
	cached list of cmservers and the HOST_CACHE mentioned above are
	both flushed and the cmservers list is rebuilt.  If the cmservers
	list is unchanged, only entries in the HOST_CACHE which are
	suspect are flushed.  Suspect entries are those which were
	found when one or more cmservers was not responding, so the
	calendar may not have been correctly located.  As shipped, this
	is set to be 10 minutes (600 seconds).
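The HOST_CACHE described in items 5 and 6 is a small FIFO cache, and can be
sketched as a fixed array with a rotating insertion index; lookups scan
linearly, which is fine at the small sizes (<50 entries) assumed above.  The
names and field sizes below are illustrative, not the actual newcm_d data
structures:

```c
#include <string.h>
#include <assert.h>

#define HOST_CACHE_SIZE 20

struct cache_entry {
    char user[32];
    char server[64];
    int  valid;
};

static struct cache_entry cache[HOST_CACHE_SIZE];
static int next_slot = 0;   /* oldest slot; overwritten when the cache is full */

/* Record that `user`'s calendar was found on `server`. */
void cache_add(const char *user, const char *server)
{
    struct cache_entry *e = &cache[next_slot];
    strncpy(e->user, user, sizeof e->user - 1);
    e->user[sizeof e->user - 1] = '\0';
    strncpy(e->server, server, sizeof e->server - 1);
    e->server[sizeof e->server - 1] = '\0';
    e->valid = 1;
    next_slot = (next_slot + 1) % HOST_CACHE_SIZE;  /* FIFO replacement */
}

/* Return the cached server for `user`, or NULL on a miss. */
const char *cache_find(const char *user)
{
    int i;
    for (i = 0; i < HOST_CACHE_SIZE; i++)
        if (cache[i].valid && strcmp(cache[i].user, user) == 0)
            return cache[i].server;
    return NULL;
}
```

A hit in this cache saves the multi-second search across every cmserver; a
miss simply falls back to the full search, after which the result is added.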

	Once all of these configuration options have been set, you can type
"make", which will generate the "newcm_d" and "querycm_d" daemons.  The make
process will also generate a program called "testfind" which can be used to
test the portion of newcm_d which locates an individual's calendar.  This
program is most useful when compiled with the debug flags described within the
config.h file.  These may be uncommented inside the config.h file, or placed
in the Makefile as values for the CDEBUGFLAGS variable.
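Taken together, a customized config.h might look something like the following.
This is only an illustration: the CMADMIN address is a made-up placeholder,
and the debug-flag macros mentioned above are omitted.

```c
/* config.h -- illustrative local configuration for newcm_d */

#define CMSERVERSMAP    "cmservers"         /* NIS map listing cm servers   */
#define CMSERVERSFILE   "/etc/cmservers"    /* fallback when NIS is absent  */
#define CMADMIN         "cmadmin@your.domain" /* REQUIRED: duplicate-calendar
                                                 notifications are mailed here */
/* #define NO_NIS 1 */                      /* uncomment to skip the NIS map */
#define HOST_CACHE_SIZE 20                  /* cached user->server entries  */
#define INTERVAL        600                 /* seconds between cache checks */
```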

	If you have any problems with the make, see the Troubleshooting
section below for help.

			 IV.  Installing "newcm_d"
			 =========================

	Once the applications are configured and built, the job of
installation can begin. This process consists of the following tasks:

	1. Identify which machines to use as cmservers.
	2. Create & distribute the cmservers map or file.
	3. Install the querycm_d daemon on the cmservers.
	4. Install newcm_d on the client workstations.

STEP 1. Identify which machines to use as cmservers.

	The first step in installing newcm_d is to determine which machines to
use as cmservers.  Because these machines will run the standard rpc.cmsd
daemon from Sun, anyone who uses them will not have the benefit of having
newcm_d automatically find a calendar for a user.  In our implementation, we
chose to designate several machines to act as cm servers which were not being
used as desktop machines for users.

	The number of servers needed for your implementation depends upon how
many users will be using the scheme.  The amount of memory required to run
rpc.cmsd is discussed in the System Requirements and Assumptions section
below.  A good rule of thumb would be 200 users on an IPX class machine.  Due
to the need to search each cmserver for a given user's calendar, it would be
wise to limit the number of cmservers to less than 30 or so.

STEP 2. Create & distribute the cmservers map or file.

	First choose whether to use an NIS map or a file to store the list of
cmservers.  If an NIS map is chosen, it must be added on the NIS master for
your domain(s) as described in Chapter 16 of the Sun System and Network
Administration Guide, which describes the Network Information Service.  Once
it has been added to the master, and the map built, it must be made available
on each of the NIS Slave systems by running ypxfr(8) on each slave to transfer
the map from the master to the slave.  Once this process has been completed,
all further updates to the cmservers file on the NIS master machine will be
available to all machines in the domain.  The name of this map defaults to
"cmservers", but can be changed by modifying the CMSERVERSMAP macro in
config.h.

	If you choose to use a file to contain the list of cmservers, you must
decide where the file is to be located.  The default name of this file is
/etc/cmservers, but that can be changed to be any file in the filesystem.  The
only restriction is that it must be readable by root.  The format of the file
is one hostname per line, with no comments or blank lines.  If the file is
left in /etc, it must be updated on each machine whenever it is changed.  In
order to minimize the number of locations which must be updated, you may wish
to locate this file in a filesystem which is NFS mounted to all machines.  The
filename which is used by newcm_d is defined by the macro CMSERVERSFILE in
config.h.
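A reader for that file format (one hostname per line, no comments or blank
lines) is only a few lines of C.  The following is a hedged sketch of such a
reader, not the parsing code actually inside newcm_d; it skips blank lines
defensively even though the format forbids them:

```c
#include <stdio.h>
#include <string.h>
#include <assert.h>

#define MAX_SERVERS 30
#define MAX_HOSTLEN 64

/* Read a cmservers-style file: one hostname per line.
 * Returns the number of hostnames read, or -1 if the file
 * cannot be opened.  Trailing newlines are stripped. */
int read_cmservers(const char *path,
                   char servers[][MAX_HOSTLEN], int max)
{
    FILE *fp = fopen(path, "r");
    char line[MAX_HOSTLEN];
    int n = 0;

    if (fp == NULL)
        return -1;
    while (n < max && fgets(line, sizeof line, fp) != NULL) {
        line[strcspn(line, "\n")] = '\0';   /* strip the newline */
        if (line[0] == '\0')                /* tolerate blank lines */
            continue;
        strcpy(servers[n++], line);
    }
    fclose(fp);
    return n;
}
```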

	In either case, the newcm_d daemon will re-read the map or file when
its contents have been changed.  By default it will do this every 10 minutes,
but the daemon can be forced to re-read the contents by sending it a SIGHUP
signal.

STEP 3. Install the querycm_d daemon on the cmservers.

	Once the list of cmservers is made available via file or NIS map, you
are now ready to begin installation of the software on the machines.  The
first software which must be installed is a daemon on each of the cmservers.
This daemon works in concert with newcm_d to locate a given calendar for a
particular user.  This daemon is designed to be started by inetd when needed,
so the installation process consists of making the program available on each
cmserver and adding the following lines to /etc/inetd.conf:

300319/1      dgram   rpc/udp wait root /usr/etc/querycm_d     querycm_d
300319/1      stream  rpc/tcp wait root /usr/etc/querycm_d     querycm_d

	Note that the above lines assume that querycm_d will be installed in
/usr/etc on the system.  Modify them appropriately to indicate where querycm_d
will be located on your system.  Remember to send a SIGHUP (kill -HUP) to the
inetd process after editing /etc/inetd.conf, so that it will re-read the file.

	For your convenience, a script called "install_querycm" has been
provided to install the software and make the changes to the /etc/inetd.conf
file.  If you choose to use this script, you must edit it and change the SRCDIR
and TGTDIR variables so that they point to the source and target directories
respectively.  This script assumes that the querycm_d program is to be copied
locally to each machine, so if it is to be installed in a common NFS mounted
directory, the script will copy querycm_d into that directory each
time it is run.

STEP 4. Install newcm_d on the client workstations.

	The final step in the installation process consists of changing each
client workstation so that it uses newcm_d instead of rpc.cmsd to service
calendar manager requests.  As in STEP 3 above, a script has been provided to
assist with this installation.  This script is called "install_newcm".  It
also contains the SRCDIR and TGTDIR variables, which must be edited to contain
the correct values for your system.

	The installation can also be done by hand, instead of using the
script.  The steps are very similar to those required to install the querycm_d
on the cmservers.  They consist of making the newcm_d software available on
each machine, and editing /etc/inetd.conf to have inetd start up newcm_d
instead of rpc.cmsd.  The line in /etc/inetd.conf should be changed from:

100068/2-3      dgram   rpc/udp wait root /usr/openwin/bin/rpc.cmsd rpc.cmsd

to:

100068/3      dgram  rpc/udp wait root /usr/etc/newcm_d     newcm_d


	Note that the above entry in /etc/inetd.conf assumes that newcm_d will
be installed in /usr/etc on the system.  This location can be changed by
simply changing the 6th field in the entry to the correct location for the
daemon.  Also remember that inetd must be reinitialized by sending it a SIGHUP
signal (kill -HUP) so that it rereads the file.

	Once inetd has been reinitialized, the rpc.cmsd process should be
killed.  The next request from a cm process will then automatically start up
newcm_d.

	IMPORTANT NOTE!!! All cmservers need to run the standard rpc.cmsd from
Sun.  Do not install newcm_d on the cmservers themselves.

	As we have installed newcm_d on machines here at Amdahl, we have
noticed that cm processes occasionally hang when rpc.cmsd is killed.  If
this happens on your systems, simply have the users kill their cm process and
restart another one.

			 V. Troubleshooting Problems
			 ===========================

	This application has gone through an in-depth QA process, including
alpha & beta testing and code review.  All lint errors have been corrected,
and the program has been validated with "purify" to detect and correct any
memory management problems with the code.  The make process has been designed
to be as simple as possible to minimize the number of potential problems.

	This section deals with potential problems which may be encountered
during the course of operation and installation of the newcm_d daemon.  If
you encounter a problem not addressed here, please e-mail a description of
the problem to me at the address listed above.  I will make an effort to
correct any problems, and include appropriate questions and their answers
in future revisions of this document.  Admittedly with this first release,
this portion of the documentation does not address all the areas which
ultimately need to be addressed.  Please help me add to this section.

Q. My machines are set up dataless, what changes need to be made on each
   client, and what needs to be done on each /usr server?

A. On each client workstation, /etc/inetd.conf needs to be modified to run
   newcm_d instead of rpc.cmsd.  This can easily be accomplished by
   running the provided "install_newcm" script.

   If the target location for the newcm_d daemon is /usr/etc, then it
   must be made available in /export/exec/${arch}.${os}/etc on each of
   the /usr servers.  If you chose to make it available in another
   NFS mounted location, merely change the "TGTDIR" variable in the
   install_newcm script to the appropriate location.

Q. Our policy is to not make changes to the contents of directories
   like /etc or /usr/etc.  Can the newcm_d information be located
   elsewhere?

A. Yes, the newcm_d product is relocatable.  The CMSERVERSFILE defaults
   to /etc/cmservers, but that location can be changed by modifying
   the CMSERVERSFILE #define in "config.h".  The daemon itself can
   be installed anywhere.  Simply change the "TGTDIR" variable
   in install_newcm to be the directory where newcm_d should reside.

Q. A user does not appear to have insert access into their own
   calendar now that the change has been made.  What is wrong?

A. The access list for each calendar lists who has browse, insert,
   and delete access for the calendar.  newcm_d makes it so that
   all accesses to a calendar appear to be happening from the host
   where the calendar actually resides.  In order to make the
   calendars transportable, have the user change the Access List
   property to simply allow Browse, Insert & Delete permission for
   the calendar "<userid>" rather than the traditional
   "<userid>@<host>".  This change will allow them to update their
   calendar from any host participating in the newcm_d scheme.

Q. What should I do about the "duplicate calendar" e-mails sent out
   by the newcm_d process?

A. It is important not to ignore these messages.  They indicate that a
   user has more than one calendar out there.  Currently, newcm_d
   simply picks the largest of these calendars to use, but this may
   not always be the correct calendar.  Every effort is made by the
   newcm_d daemon to eliminate the creation of duplicate calendars,
   by removing duplicates if they are really empty.  When one of
   these messages is received, contact the user and work with them
   to determine which of the calendars is the one that they would
   like to use and then remove the other one.  Remember that after
   making any changes to the /var/spool/callog directory, it is
   necessary to kill the rpc.cmsd daemon on that machine in order
   for the changes to be recognized.

		   VI. System Requirements and Assumptions
		   =======================================

	In the setup and installation described above, a cm server was
designated in each building.  It is assumed that the cm server (which runs
the standard rpc.cmsd) will be a machine which is not used by a normal user
(or at least is not used by a user who wishes to participate in the
centralized cm scheme).  The number of calendars on a machine is limited by
the amount of processing and memory available to the rpc.cmsd process.  A
first order approximation of the amount of memory required for
rpc.cmsd is given by the formula:

	(Sum of the disk size of all calendars * 4.2) + 110k

	It has been reported by several people that they were able to
run hundreds of calendars off of a single server.
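The formula above is easy to apply directly.  As a worked example with
made-up sizes, 10,000k of calendar data (say, 200 calendars of 50k each)
comes to about 42,110k, or roughly 41 MB, for the rpc.cmsd process:

```c
#include <assert.h>

/* First-order approximation of rpc.cmsd memory use, per the formula
 * above: (sum of the disk size of all calendars * 4.2) + 110k.
 * All sizes are in kilobytes. */
double cmsd_memory_kb(double total_calendar_kb)
{
    return total_calendar_kb * 4.2 + 110.0;
}
```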

	The program is intended to work with the following:

		OpenWindows 3.x
		SunOS 4.1.x

	The port to Solaris 2.3 is underway and should be available soon.
