Cloudscale Readme Note

Documentation is freely available publicly here:
https://docs.hitachivantara.com/search/documents?filters=Product_custom~%2522Content+Platform+for+Cloud+Scale%2522&virtual-field=title_only&content-lang=en-US

Prerequisites:
- Each node needs at least 32 GB of RAM, an 8-core CPU, and 500 GB of available disk. Less RAM WILL cause the installer/configuration process to fail.
- You can use one NIC or two. Cloudscale has an internal and an external network: the internal network is the private cluster interconnect, and the external network is the data-ingest/management interface.

Install Option 1: Boot from the hcpcs_cluster_deployment_tool_v2.6.iso

When you boot from the hcpcs_cluster_deployment_tool_v2.6.iso and see the RHEL installer GUI, DO NOT CLICK ON ANYTHING except setting the ROOT password and clicking Install.

Note: If you DO click anywhere else, the installer disregards our custom kickstart file, your partitioning falls back to the RHEL default, and the application directory (/opt) is limited to 70 GB, which is a ticking TIMEBOMB.

After the RHEL install, log in with the root password, stop and disable firewalld, set SELinux to disabled, and reboot.

After booting to the login prompt, remount the ISO, then copy the installer and run it:

mkdir /opt/hcpcs
cp -rvuT /run/media/root/RHEL-8-6-0-BaseOS-x86_64/hcpcsInstaller/ /opt/hcpcs/
chmod 777 /opt/hcpcs/hcpcsInstaller/csinstaller/*.*
/opt/hcpcs/hcpcsInstaller/csinstaller/start.sh

After the start.sh script completes, the node reboots automatically.
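The hardware minimums above can be checked before installing. A minimal pre-flight sketch, with the thresholds taken from this note; the helper name and the way the live numbers are gathered are my own, not part of the product:

```shell
# Hypothetical pre-flight helper: succeeds only when a node meets the
# minimums stated above (32 GB RAM, 8 CPU cores, 500 GB free disk).
meets_minimums() {
  ram_gb=$1; cores=$2; disk_gb=$3
  [ "$ram_gb" -ge 32 ] && [ "$cores" -ge 8 ] && [ "$disk_gb" -ge 500 ]
}

# On a live node you might gather the real numbers like this:
#   ram_gb=$(( $(awk '/MemTotal/ {print $2}' /proc/meminfo) / 1024 / 1024 ))
#   cores=$(nproc)
#   disk_gb=$(df -BG --output=avail /opt | tail -1 | tr -dc '0-9')
meets_minimums 64 16 1000 && echo "node OK" || echo "node UNDERSIZED"
```

Remember that an undersized node does not merely run slowly: per the note above, too little RAM makes the installer/configuration process fail outright.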
If you did not disable SELinux, run:

setenforce Permissive

On each node, run:

/opt/hcpcs/hcpcsInstaller/csinstaller/jump_server_network_config.sh -i -I -p -P -d -g -n -N -b -B

Configure time synchronization:

systemctl stop chronyd
vi /etc/chrony.conf          (add: server xxx.xxx.xxx.xxx iburst)
systemctl enable chronyd
systemctl restart chronyd
chronyc sources
chronyc -a makestep

On each node, run:

/opt/hcpcs/hcpcsInstaller/csinstaller/jump_server_post_deploy.sh [-h] -i -I -m -M

===========================================================================================================================================================================================================

Install Option 2: Manual install on RHEL 8.10

Install a RHEL 8.10 host (RHEL 9 and 10 are NOT SUPPORTED). Install Docker CE 20.10.10, build b485636 (no other version is supported). As with Option 1, stop and disable firewalld and set SELinux to disabled after the RHEL install.

Unpack the hcpcs-2.6.0.8.tgz file to /opt/hcpcs/, then:

cd /opt/hcpcs/
./install

Run the setup script in /opt/hcpcs/bin/. The following example sets up a single-instance system that uses only one network type for all services:

/opt/hcpcs/bin/setup -i 192.0.2.4

To set up a multi-instance system that uses both internal and external networks, type the command in this format:

/opt/hcpcs/bin/setup -i external_instance_ip -I internal_instance_ip -m external_master_ips_list -M internal_master_ips_list

For example:

/opt/hcpcs/bin/setup -i 192.0.2.4 -I 10.236.1.0 -m 192.0.2.0,192.0.2.1,192.0.2.3 -M 10.236.1.1,10.236.1.2,10.236.1.3

The following table shows sample commands to create a four-instance system. Each command is entered on a different server or virtual machine that is to be a system instance. The resulting system contains three master instances and one worker instance and uses both internal and external networks.
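Each per-node command in that table uses the same two master lists; only the node's own -I/-i addresses differ. A sketch of that pattern, using the sample addresses from this note; the helper name is hypothetical:

```shell
# Same master lists on every node (sample values from the table).
M_INTERNAL="192.0.2.1,192.0.2.2,192.0.2.3"
M_EXTERNAL="10.236.1.1,10.236.1.2,10.236.1.3"

# Hypothetical helper: print the setup command for one node.
# $1 = this node's internal IP, $2 = this node's external IP
setup_cmd() {
  echo "install_path/hcpcs/bin/setup -I $1 -i $2 -M $M_INTERNAL -m $M_EXTERNAL"
}

setup_cmd 192.0.2.4 10.236.1.4   # the worker node's command
```

Keeping the master lists in one place like this makes it harder to fat-finger one node's copy of the lists, which must be identical cluster-wide.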
Internal IP    External IP    Role     Command
192.0.2.1      10.236.1.1     Master   install_path/hcpcs/bin/setup -I 192.0.2.1 -i 10.236.1.1 -M 192.0.2.1,192.0.2.2,192.0.2.3 -m 10.236.1.1,10.236.1.2,10.236.1.3
192.0.2.2      10.236.1.2     Master   install_path/hcpcs/bin/setup -I 192.0.2.2 -i 10.236.1.2 -M 192.0.2.1,192.0.2.2,192.0.2.3 -m 10.236.1.1,10.236.1.2,10.236.1.3
192.0.2.3      10.236.1.3     Master   install_path/hcpcs/bin/setup -I 192.0.2.3 -i 10.236.1.3 -M 192.0.2.1,192.0.2.2,192.0.2.3 -m 10.236.1.1,10.236.1.2,10.236.1.3
192.0.2.4      10.236.1.4     Worker   install_path/hcpcs/bin/setup -I 192.0.2.4 -i 10.236.1.4 -M 192.0.2.1,192.0.2.2,192.0.2.3 -m 10.236.1.1,10.236.1.2,10.236.1.3

Start the application on each server or virtual machine:

On each server or virtual machine that is to be a system instance, start the run script using whatever method you usually use to run scripts.

Important: Ensure that the method you use can keep the run script running and can automatically restart it after a server restart or other availability event.

You can run the script in the foreground:

install_path/product/bin/run

When you run the run script this way, it does not complete; it remains running in the foreground.

You can instead run the script as a service using systemd. Copy the hcpcs.service file to the appropriate location for your OS, then enable and start it:

cp install_path/product/bin/hcpcs.service /etc/systemd/system
systemctl enable hcpcs.service
systemctl start hcpcs.service

After creating all of your instances and starting HCP for cloud scale, use the service deployment wizard. The wizard runs the first time you log in to the system.

Open a web browser and go to https://instance_ip_address:8000

Set and confirm the password for the main admin account.
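If the wizard page does not come up, it can help to confirm that port 8000 answers before troubleshooting the browser. A minimal sketch; the helper name is mine, and the -k flag (skip certificate verification, since a fresh install typically presents a self-signed certificate) is an assumption, not from the product docs:

```shell
# Hypothetical helper: build the deployment wizard URL for an instance.
build_wizard_url() {
  echo "https://$1:8000"
}

# On a live system, probe the port and print only the HTTP status code:
#   curl -ks -o /dev/null -w '%{http_code}\n' "$(build_wizard_url 192.0.2.4)"
build_wizard_url 192.0.2.4   # prints https://192.0.2.4:8000
```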
The main admin account is the ONLY local account allowed; all other users come in through AD integration, by adding AD groups to HCPCS.

On the next page of the deployment wizard, type the cluster host name (as a fully qualified domain name in lowercase ASCII letters) in the Cluster Hostname/IP Address field, then click Continue. Omitting this can cause links in the System Management application to function incorrectly.

On the next page, confirm the cluster topology. Verify that all the instances you expect to see are listed and that their type (Master or Worker) is as you expect. If some instances are not displayed, click Refresh Instances in the Instance Discovery section until they appear. When you have confirmed the cluster topology, click Continue.

On the next page, confirm the advanced configuration settings of services. Important: If you decide to reconfigure networking or volume usage for services, you must do so now, before deploying the system.

On the last page, click Deploy Cluster to deploy the cluster. If your network configuration results in a port collision, deployment stops and the wizard reports which port is at issue; edit the port numbers and try again.

After a brief delay, the wizard displays the message "Starting deployment" and instances of services are started. When the wizard is finished, it displays the message "Setup Complete." Click Finish. The HCP for cloud scale Applications page opens.
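The wizard wants the cluster host name as a fully qualified domain name in lowercase ASCII. A quick sanity check before typing it in; the function name and the exact label rules in the regex are my own reading of "lowercase ASCII FQDN", not the product's validation logic:

```shell
# Hypothetical check: lowercase ASCII FQDN (two or more dot-separated
# labels of letters, digits, and interior hyphens).
is_valid_cluster_hostname() {
  echo "$1" | grep -Eq '^[a-z0-9]([a-z0-9-]*[a-z0-9])?(\.[a-z0-9]([a-z0-9-]*[a-z0-9])?)+$'
}

is_valid_cluster_hostname hcpcs.example.com && echo "looks valid"
```

Note that this rejects uppercase names and bare short names (no dots), matching the wizard's requirement above that the value be a fully qualified, lowercase name.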