 





        RELEASE NOTES for OpenView Operations Agents and SPI for OpenVMS

                                   Version 2.1


              This document contains the following main sections:

              1. New features

              2. Restrictions

              3. Documentation notes

        1 New Features

              This section contains a brief list of some new features as
              well as subsections describing additional new features in
              more detail.

              New features in this release include the following:

              o  Support for OpenVMS 8.3 has been added.

              o  New features added to VMSSPI include the following
                 abilities:

                 -  Specifying a new EXTERNAL MONITOR template

                 -  Specifying 'Rules' file using INCLUDE

                 -  Specifying Print Queues with '*' wildcard

              o  Optionally, you can redirect the VMSSPI log files to an
                 alternate disk.

              o  The VMSSPI now writes a time stamp into the log files
                 only when a significant event occurs. This reduces the
                 size of the log files over time.

        1.1 Modifications to the System SPI

              Monitoring of disk free space now includes volumes that use
              Dynamic Volume Expansion (DVE) as well as volume sets.

        1.2 Modifications and Enhancements to the Performance SPI

              Beginning in OpenView Operations SPI for OpenVMS Version
              2.0, the Performance SPI uses the Extended File Cache (XFC)
              to monitor hot files.


        1.3 Addition of the Security SPI

              A new SPI has been added to the OpenView Operations SPI for
              OpenVMS Version 2.0 kit: the Security SPI, which detects
              security-relevant activity as it occurs on the system; that
              is, any activity related to user access to the system or to
              a protected object within the system.

        1.4 Use of the OVOU Management Server GUI to Install the OVO
            Agent

              Beginning with OpenView Operations Agent Version 2.0,
              you can use the Management Server GUI to distribute and
              install the OVO Agent software on OpenVMS managed nodes. In
              previous versions, this procedure was performed manually.

        2 Restrictions

              The following subsections describe restrictions in the
              Version 2.1 release.

        2.1 OPCTRANM Transfer Manager Not Supported

              The OPCTRANM transfer manager was introduced in OpenView as
              a common method for distributing and installing subagents
              to all the various agent platforms. Its two basic functions
              are to transfer files and to execute commands remotely.
              OPCTRANM is not supported in this version of the OpenView
              Agent for OpenVMS.

        2.2 OVOW Servers Do Not Recognize Some Versions of OpenVMS

              When you add a node on the OVOW server, if the OpenVMS node
              is running a version of OpenVMS that the server does not
              recognize, the node is displayed as "unknown." To work
              around this, set the node's version on the OVOW server to a
              version the server does understand (that is, one of the
              versions shown in the list the server displays): Version
              7.3-1, Version 7.3-2, or Version 8.2.

              This setting does not change the management of the OpenVMS
              node; it does, however, allow the OVOW database to
              recognize the node as an OpenVMS system.

        2.3 Messages Not Showing Up in the Message Browser on the OVOU
            Server

              If the OpenVMS agent and the VMSSPI have started and no
              messages appear in the OVOU server's browser, make sure
              opcmsgm is running on the server. Use the opcsv command
              on the OVOU server to determine if the opcmsgm process is
              running.

        2.4 PCSI Installation Requirement

              If your system disk uses ODS-5, the file names that result
              from an FTP "put" of the PCSI kits are in lowercase. Due to
              a restriction in the POLYCENTER Software Installation
              procedure, you must rename the files to all uppercase
              before using the PRODUCT INSTALL command.
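
              For example, on typical systems you can let DCL's
              traditional parsing upcase the file specification for you.
              This is a sketch only; the kit file name shown is
              hypothetical, so substitute your actual kit name:

                  $ SET PROCESS /PARSE_STYLE=TRADITIONAL
                  $ RENAME hp-axpvms-ovo-v0201--1.pcsi$compressed -
                        HP-AXPVMS-OVO-V0201--1.PCSI$COMPRESSED

              With TRADITIONAL parsing, DCL converts unquoted file
              specifications to uppercase before RENAME sees them, so the
              stored file name becomes uppercase.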

        2.5 opcinfo Replaced During Upgrade

              When a new version of the OpenView Agent software is
              installed on a managed node, the opcinfo file is replaced.
              If your opcinfo file has been customized, save a copy of
              it before installing the new agent software. After the new
              software is installed, you can update the new opcinfo with
              your customizations.

              The opcinfo file is at the following location:

                  OVO$ROOT:[OPT.OV.BIN.OPC.INSTALL]OPCINFO.
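
              For example, to preserve your customizations across an
              upgrade, you might save a copy first (a sketch; the .SAV
              file type is an arbitrary choice):

                  $ COPY OVO$ROOT:[OPT.OV.BIN.OPC.INSTALL]OPCINFO. -
                        OVO$ROOT:[OPT.OV.BIN.OPC.INSTALL]OPCINFO.SAV

              After installing the new agent software, compare the saved
              copy with the new file and reapply your changes.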

        3 Documentation Notes

              The following subsections provide additional instructions
              about using this version.

        3.1 User Name and Password Authentication

              When you start to work in the Application Window,
              the Action Agent on the target OpenVMS node or nodes
              authenticates both the user name and password when it
              receives your request. By default, OpenVMS requires you
              to enter a user name and password when you start the
              application. (Scheduled and automatic actions have no such
              requirements.) The Action Agent uses the local SYSUAF file
              to validate the user name and password; it then reports
              failures to the console.

              You can change the default behavior by using a system
              logical name on the target node or nodes. If the logical
              name OVO$AA_AUTHENTICATE is defined in the OpenVMS system
              logical name table, the user name and password combination
              is authenticated for every action except scheduled and
              automatic actions on that OpenVMS node. If the logical name
              is not defined, no authentication is performed.

              To change how authentication works on your system, perform
              either of the following actions:

              o  To disable authentication, enter the following OpenVMS
                 command on the target system or systems:

                     $ DEASSIGN /SYSTEM OVO$AA_AUTHENTICATE

              o  To enable authentication (the default), enter the
                 following OpenVMS command on the target system or
                 systems:

                     $ DEFINE /SYSTEM OVO$AA_AUTHENTICATE 1

        3.2 RPC Communication Error Displayed on the Management Server

              If an RPC Communication Error is displayed when you are
              trying to deploy a policy to an OpenVMS managed node, the
              problem might be that the OpenVMS agent cannot resolve the
              IP address of the management server. In this case, add an
              OPC_RESOLVE_IP entry with the IP address of the management
              server to the opcinfo file, and then restart the agents:

                   OPC_RESOLVE_IP xx.xx.xxx.xxx
                   OPC_MGMT_SERVER mgmt_srv.abc.xyz

              (xx.xx.xxx.xxx is the IP address of mgmt_srv.abc.xyz)
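
              For example, assuming the standard agent control command
              opcagt is available as a foreign command on the managed
              node (a sketch; verify the command on your system), you can
              restart the agents as follows:

                  $ opcagt -stop
                  $ opcagt -start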

        3.3 Distributing Scripts and Programs

              For OpenVMS managed nodes, the platform selectors and
              architecture identifiers are the following:

                  /hp/alpha/vms
                  /hp/ia64/vms

              Location of User Scripts and Programs

              The following table shows the location of scripts and
              programs on the management server:

                      ("/<arch>/" is either /alpha/ or /ia64/)

               ___________________________________________________________
               Scripts/Programs and Location
               ___________________________________________________________

               Automatic actions, operator-initiated actions, and
               scheduled actions:
                 /var/opt/OV/share/databases/Opc/mgd_node/customer/hp/<arch>/vms/actions/*

               Monitor scripts and programs used by the Monitor Agent
               and the Logfile Encapsulator:
                 /var/opt/OV/share/databases/Opc/mgd_node/customer/hp/<arch>/vms/monitor/*

               Scripts and programs called through broadcasts or from
               the Application Desktop:
                 /var/opt/OV/share/databases/Opc/mgd_node/customer/hp/<arch>/vms/cmds/*
               ___________________________________________________________

              Temporary Directories

              The following temporary directories are used on the managed
              node for distributed scripts and programs:

                   OVO$ROOT:[VAR.OPT.OV.TMP.OPC.BIN.ACTIONS]
                   OVO$ROOT:[VAR.OPT.OV.TMP.OPC.BIN.CMDS]
                   OVO$ROOT:[VAR.OPT.OV.TMP.OPC.BIN.MONITOR]

              Target Directories

              The following target directories on the managed node are
              used as the final destination for distributed scripts and
              programs:

                   OVO$ROOT:[VAR.OPT.OV.BIN.OPC.ACTIONS]
                   OVO$ROOT:[VAR.OPT.OV.BIN.OPC.CMDS]
                   OVO$ROOT:[VAR.OPT.OV.BIN.OPC.MONITOR]


        3.4 Execution of OpenVMS Command Strings on Managed Nodes

              OpenVMS command strings in templates, operator-initiated
              actions, and applications can invoke either DCL command
              procedures or images. When a command string beginning with
              "@" is executed on an OpenVMS managed node, it is assumed
              to be a DCL command procedure invocation. The command
              string following the "@" must provide a path specification
              sufficient to identify the runtime location of the command
              procedure (for example, "@SYS$MANAGER:procedure.com").

              If the command string does not begin with "@", it is
              treated as an OpenVMS foreign command. In this case, when
              the script or program is executed, the OpenVMS logical name
              DCL$PATH is automatically defined to one of the following
              search lists, depending on the kind of script or program
              being executed:

                  OVO$ACTION_PATH
                  OVO$MONITOR_PATH
                  OVO$LOGFILE_PATH

              OVO$ACTION_PATH is defined on the managed node to be:

          OVO$ACTIONS,OVO$CMDS,OVO$MONITOR,OVO$INSTALL,OVO$INSTRUMENTATION

              OVO$MONITOR_PATH is defined on the managed node to be:

          OVO$MONITOR,OVO$SYSTEM,OVO$ACTIONS,OVO$CMDS,OVO$INSTRUMENTATION

              OVO$LOGFILE_PATH is defined on the managed node to be:

          OVO$MONITOR,OVO$ACTIONS,OVO$CMDS,OVO$INSTRUMENTATION

              Foreign command strings do not need an explicit path to
              identify the command procedure or image. If the command
              procedure or image exists anywhere in the appropriate
              search list above, it is executed automatically.

              The path search list is searched in the order defined;
              therefore, if a DCL command procedure or image with the
              same name exists in multiple directories in the path, the
              first encountered is executed. (If a DCL command procedure
              and an executable image in the same directory have the same
              name, the DCL command procedure is executed.)

              Some examples are the following:

              ___________________________________________________________
              Command String
              Specified             What Is Executed on Managed Node
              ___________________________________________________________

              "@sys$manager:proc"   @SYS$MANAGER:PROC.COM

              "action p1"           @OVO$ACTION_PATH:action.com p1
                                    [if action.com exists]

                                    Or:
                                    action.exe p1
                                    [if action.exe exists]
              ___________________________________________________________

              In general, scripts need to handle their own errors and
              exit with the OpenVMS final exit status of "1". Exiting
              with any other final OpenVMS exit status might cause the
              entire script to appear to have failed at the management
              server console.
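
              For example, a monitor command procedure might handle its
              own errors like this (a sketch; the work the procedure
              performs here is hypothetical):

                  $ ON WARNING THEN GOTO HANDLE_ERROR
                  $ SHOW SYSTEM /NOPROCESS       ! the actual work
                  $ EXIT 1                       ! success status for OVO
                  $ HANDLE_ERROR:
                  $ WRITE SYS$OUTPUT "Check failed; error handled locally"
                  $ EXIT 1                       ! still 1, so OVO sees success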

        3.5 Switching Managed Nodes Between OVOU and OVOW Servers

              OVOU and OVOW servers use different directories on the
              managed node to store template/policy information. If you
              need to switch a node already being managed by OVOU to
              OVOW, or vice versa, be sure to delete any old template
              or policy files before making the switch to the new
              server. (To find these files, set your default directory
              to OVO$HOME and do a directory [...] for the following
              files: MONITOR., MSGI., LE.).
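
              For example, you might locate and remove the old files like
              this (a sketch; review what the DIRECTORY command reports
              before deleting anything):

                  $ SET DEFAULT OVO$HOME
                  $ DIRECTORY [...]MONITOR.,[...]MSGI.,[...]LE.
                  $ DELETE [...]MONITOR.;*,[...]MSGI.;*,[...]LE.;*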

        3.6 Change for User-Written Programs Linked Against
            OVO$LIBOPC_R.EXE

              In Version 1, user-written programs that linked against the
              shared image OVO$LIBOPC_R.EXE (typically user-written Smart
              Plug-Ins, or SPIs) inadvertently called code that enabled
              several C RTL feature switches to support UNIX portability
              features.

              These C RTL feature switches are the following:



                      DECC$ARGV_PARSE_STYLE                   TRUE
                      DECC$EFS_CASE_PRESERVE                  TRUE
                      DECC$EXEC_FILEATTR_INHERITANCE          TRUE
                      DECC$DISABLE_TO_VMS_LOGNAME_TRANSLATION TRUE
                      DECC$FILE_OWNER_UNIX                    TRUE
                      DECC$FILE_PERMISSION_UNIX               TRUE
                      DECC$UMASK                              027

              In Version 2.1, these C RTL feature switches are no longer
              enabled by the code in OVO$LIBOPC_R. If an SPI depends on
              this functionality, you must enable those feature switches
              explicitly.
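
              For example, an SPI that depends on the old behavior can
              define the corresponding C RTL feature logical names, which
              the C RTL reads at image startup (a sketch showing two of
              the switches; define the others the same way as needed):

                  $ DEFINE DECC$ARGV_PARSE_STYLE ENABLE
                  $ DEFINE DECC$EFS_CASE_PRESERVE ENABLE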

        3.7 Changing the TCP/IP "Default" Network Node Name Behavior on
            OpenVMS

              As its default network node name, TCP/IP on OpenVMS uses
              the short form of the name in the message body. In other
              words, by default on OpenVMS you do NOT get a fully
              qualified TCP/IP node name such as "node.domain" in a
              message body; you get only "node."

              If you want the fully qualified TCP/IP network node name,
              you must change the default alias by issuing the following
              commands:

      $ TCPIP SET NOHOST "your_short_host_name"  ! the quotation marks are important
      $ TCPIP SET HOST "your_full_host_name" /ALIAS="your_short_host_name" -
              /ADDRESS=your_host_address

        3.8 OVOW 7.5 and "Self Healing"

              The OpenView Operations for Windows server assumes that all
              OVO agents support "Self Healing." The OpenVMS agents do
              NOT support "Self Healing." Users of OVOW 7.5 must remove
              or disable "Self Healing" for the OpenVMS agents. You can
              do this on the OVOW management server by following these
              steps:

              1. On the Management Server Console, select: 
                 Action->Configure->Nodes

              2. Open the OpenView Defined Groups->Unix

              3. Right-Click the OpenVMS or the OpenVMS(Itanium) group;
                 select Properties

              4. Select the Deployment Tab

              5. Select the "Self Healing" entry in the listbox and
                 Disable (or Remove) the Autodeployment of the selected
                 group.

              These steps turn off the automatic deployment of the Self
              Healing Policies.


                ________________________ Note ________________________

                If the policy is automatically deployed to the OpenVMS
                managed node before it is disabled on the management
                server, you must disable the policy on the OpenVMS
                node.

                ______________________________________________________

              Use the opctemplate command on OpenVMS to determine whether
              the policy is enabled or not:

                  $ opctemplate -l

              Look for this policy:

                  SCHEDULE  "Self-Healing Registration Scheduler" enabled

              If this policy is enabled, disable it:

                  $ opctemplate -d "Self-Healing Registration Scheduler"

              A message similar to the following should be displayed:

       opctemplate(790627360) : Template Self-Healing Registration Scheduler
       on node nnnnn.dom.sub.com has been disabled. (OpC30-3006)










