Configure Local Disk for DHCP Service

Contents

1   Introduction
1.1   Prerequisites
1.2   Related Information
2   Configure DHCP Local Disk for New IPWorks on KVM
3   Configure DHCP Local Disk for Upgraded IPWorks on KVM
Reference List

1   Introduction

This document describes two parts: how to configure the DHCP local disk for a newly deployed IPWorks on KVM, and how to configure the DHCP local disk for an upgraded IPWorks on KVM.

1.1   Prerequisites

This section describes the prerequisites which must be fulfilled before configuring the local disk.

1.1.1   Conditions

Before starting the configuration, IPWorks must have been successfully installed or upgraded to the latest version.

1.2   Related Information

Trademark information, typographic conventions, and definitions and explanations of abbreviations and terminology can be found in the documents listed in the Reference List.

2   Configure DHCP Local Disk for New IPWorks on KVM

When IPWorks is successfully deployed on KVM, and before the DHCPv4 server is started, users can do the following to add and configure the local disk for the DHCP service:

  1. Add the local disk for PL-3.
    1. Log on to the host machine of PL-3.
    2. Create a QCOW2 image file on the host machine.

      It is recommended to create the QCOW2 file in the folder run. Users can create the file according to their own requirements, but a file size of at least 10 GB is recommended.

      For example:

      cluster1-b-1:/ # cd /root/auto_deployment/images/IPW2/run/

      cluster1-b-1:~/auto_deployment/images/IPW2/run# qemu-img create -f qcow2 newdisk.qcow2 20G

      Formatting 'newdisk.qcow2', fmt=qcow2 size=21474836480 encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16

    3. Get the domain name of PL-3.

      For example:

      cluster1-b-1:/ # virsh list

      Id    Name                           State
      ----------------------------------------------------
       1     IPWSC-1                        running
       2     IPWPL-3                        running
      

    4. Shut down the domain PL-3.

      For example:

      cluster1-b-1:/ # virsh shutdown IPWPL-3

      cluster1-b-1:/ # virsh list --all

      Id    Name                           State
      ----------------------------------------------------
       1     IPWSC-1                        running
       -     IPWPL-3                        shut off
      

    5. Add new disk configuration into PL-3.

      Set the driver name to qemu, the type to qcow2, the source file to the path of the newly created QCOW2 file, the target dev to vdb, and the bus to virtio.

      For example:

      cluster1-b-1:/ # virsh edit IPWPL-3

      Add the following text into the config file of IPWPL-3.

      ......
          <disk type='file' device='disk'>
            <driver name='qemu' type='qcow2'/>
            <source file='/root/auto_deployment/images/IPW2/run/newdisk.qcow2'/>
            <target dev='vdb' bus='virtio'/>
          </disk>
      ......
      

    6. Start the domain PL-3.

      For example:

      cluster1-b-1:/ # virsh start IPWPL-3

      Domain IPWPL-3 started
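Before the disk definition is pasted into virsh edit, the snippet can be sanity-checked locally. A minimal sketch (the file path /tmp/newdisk.xml is illustrative) that verifies the snippet uses the qemu driver and the vdb target, which are the two fields most easily mistyped:

```shell
# Write the <disk> snippet to a scratch file (path is illustrative),
# then verify the driver name and target device before using virsh edit.
cat > /tmp/newdisk.xml <<'EOF'
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/root/auto_deployment/images/IPW2/run/newdisk.qcow2'/>
  <target dev='vdb' bus='virtio'/>
</disk>
EOF

# Both checks must pass; note the driver name is qemu, not gemu.
grep -q "name='qemu'" /tmp/newdisk.xml \
  && grep -q "dev='vdb'" /tmp/newdisk.xml \
  && echo "snippet OK"
```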

  2. Configure the local disk on PL-3.
    1. Log on to PL-3, and check whether the new disk is added into PL-3 with command fdisk -l.

      For example:

      PL-3:~ # fdisk -l

      Disk /dev/vda: 1 MiB, 1048576 bytes, 2048 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disklabel type: dos
      Disk identifier: 0x3490c1e4
      
      Device     Boot Start   End Sectors Size Id Type
      /dev/vda1  *        0  2047    2048   1M 17 Hidden HPFS/NTFS
      
      
      Disk /dev/vdb: 20 GiB, 21474836480 bytes, 41943040 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      

    2. Format the new disk on PL-3 with command mkfs.ext4 /dev/vdb.

      For example:

      PL-3:~ # mkfs.ext4 /dev/vdb

      mke2fs 1.42.11 (09-Jul-2014)
      Creating filesystem with 5242880 4k blocks and 1310720 inodes
      Filesystem UUID: e31f13da-020f-4ef4-b1db-d5d893e908e0
      Superblock backups stored on blocks:
          32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
          4096000
      
      Allocating group tables: done                           
      Writing inode tables: done                           
      Creating journal (32768 blocks): done
      Writing superblocks and filesystem accounting information: done
      

    3. Mount the new disk to directory localdisk on PL-3.

      For example:

      PL-3:/ # mount -t ext4 /dev/vdb /localdisk

    4. Check whether the disk is mounted on the folder /localdisk.

      For example:

      PL-3:/localdisk # df -h

      Filesystem                 Size  Used Avail Use% Mounted on
      root                       3.0G  2.6G  510M  84% /
      devtmpfs                    16G  8.0K   16G   1% /dev
      tmpfs                       16G  732K   16G   1% /dev/shm
      tmpfs                       16G   11M   16G   1% /run
      tmpfs                       16G     0   16G   0% /sys/fs/cgroup
      tmpfs                      3.2G     0  3.2G   0% /run/user/0
      169.254.100.101:/.cluster   30G  2.2G   26G   8% /cluster
      /dev/vdb                    20G   44M   19G   1% /localdisk
      

    5. Create the file dhcpd.leases on PL-3 with command touch.

      For example:

      PL-3:/ # touch /localdisk/dhcpd.leases

      PL-3:/ # ll /localdisk/dhcpd.leases

      -rw-r--r-- 1 root root 0 Nov 14 14:10 dhcpd.leases

  3. Repeat Step 1 and Step 2 to add and configure the local disk for PL-4.
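Note that a mount made with the mount command does not survive a reboot of PL-3. If the lease file must remain available after a restart, one option is to add an /etc/fstab entry for the new disk. A sketch, assuming the ext4 filesystem created on /dev/vdb as described above:

```
/dev/vdb    /localdisk    ext4    defaults    0    2
```

Whether a persistent mount is appropriate depends on the deployment; verify against the applicable IPWorks operating instructions before changing /etc/fstab.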

3   Configure DHCP Local Disk for Upgraded IPWorks on KVM

After IPWorks deployed on KVM is successfully upgraded, users can do the following steps to add and configure the local disk for the DHCP service:

  1. Configure the DHCPv4 service MO.
    1. Log on to the SC node.

      # ssh <Username>@<MIP_OAM_IP>

      password:<Password>

      Note:  
      To get the OAM IP address, check oam in /etc/hosts.

    2. Start an ECLI session.

      # /opt/com/bin/cliss

      For details on how to use ECLI, refer to Ericsson Command-Line Interface User Guide.

    3. Configure the DHCPv4 service MO.

      > dn ManagedElement=<Node Name>,IpworksFunction=1,IPWorksDHCPRoot=1,DHCPv4Service=1

      (DHCPv4Service=1)>configure

      (config-DHCPv4Service=1)>arguments="-lf /localdisk/dhcpd.leases ipw_sig_sp"

      (config-DHCPv4Service=1)>commit

      (DHCPv4Service=1)>show -v

      DHCPv4Service=1
      arguments="-lf /localdisk/dhcpd.leases ipw_sig_sp"
      authenticationLevel=NONE <default>
      dhcpServiceId="1"
      EnableAutoReconfig=false <default>
      lowTPSThreshold=0
      reconfigThreshold=0 <default>
      

  2. Log on to the SC node and stop DHCP service on PL-3.

    # ipw-ctr stop dhcp pl-3

  3. Log on to IPWorks CLI on the storage server.

    # ipwcli

    IPWorks> Login: <Username>

    IPWorks> Password: <Password>

  4. Place the server in the Partner-Down mode.

    If one of the failover partners must be taken offline for an extended period for system or other maintenance, shut down that server, and then place the other failover server into the partner-down mode so that it can access the entire lease pool (after the maximum client lead time expires).

    The IPWCLI partnerdown command is used for this operation. For example:

    IPWorks> select dhcpserver dhcp2

    Selected 1 object(s).

    IPWorks> partnerdown

    The DHCP server 'dhcp2' was set to the partnerdown state.

    IPWorks>

    When the offline server is ready to come back online, ensure that it is returned to service properly so that it can synchronize with the running server. Start the offline server while the server previously set to the partner-down mode is running, and ensure that no network problem prevents communication between the two. In this way, the two servers can synchronize properly and address assignments are done safely.

    If the offline server comes back online but cannot communicate with the server that was in the partner-down mode, it can resume leasing activity, but both servers could then assign the same address to different clients.

  5. Add the local disk for PL-3.
    1. Log on to the host machine of PL-3.
    2. Create a QCOW2 image file on the host machine.

      It is recommended to create the QCOW2 file in the folder run. Users can create the file according to their own requirements, but a file size of at least 10 GB is recommended.

      For example:

      cluster1-b-1:/ # cd /root/auto_deployment/images/IPW2/run/

      cluster1-b-1:~/auto_deployment/images/IPW2/run# qemu-img create -f qcow2 newdisk.qcow2 20G

      Formatting 'newdisk.qcow2', fmt=qcow2 size=21474836480 encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16

    3. Get the domain name of PL-3.

      For example:

      cluster1-b-1:/ # virsh list

      Id    Name                           State
      ----------------------------------------------------
       1     IPWSC-1                        running
       2     IPWPL-3                        running
      

    4. Shut down the domain PL-3.

      For example:

      cluster1-b-1:/ # virsh shutdown IPWPL-3

      cluster1-b-1:/ # virsh list --all

      Id    Name                           State
      ----------------------------------------------------
       1     IPWSC-1                        running
       -     IPWPL-3                        shut off
      

    5. Add new disk configuration into PL-3.

      Set the driver name to qemu, the type to qcow2, the source file to the path of the newly created QCOW2 file, the target dev to vdb, and the bus to virtio.

      For example:

      cluster1-b-1:/ # virsh edit IPWPL-3

      Add the following text into the config file of IPWPL-3.

      ......
          <disk type='file' device='disk'>
            <driver name='qemu' type='qcow2'/>
            <source file='/root/auto_deployment/images/IPW2/run/newdisk.qcow2'/>
            <target dev='vdb' bus='virtio'/>
          </disk>
      ......
      

    6. Start the domain PL-3.

      For example:

      cluster1-b-1:/ # virsh start IPWPL-3

      Domain IPWPL-3 started

  6. Configure the local disk for PL-3.
    1. Log on to PL-3, and check whether the new disk is added into PL-3 with command fdisk -l.

      For example:

      PL-3:~ # fdisk -l

      Disk /dev/vda: 1 MiB, 1048576 bytes, 2048 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disklabel type: dos
      Disk identifier: 0x3490c1e4
      
      Device     Boot Start   End Sectors Size Id Type
      /dev/vda1  *        0  2047    2048   1M 17 Hidden HPFS/NTFS
      
      
      Disk /dev/vdb: 20 GiB, 21474836480 bytes, 41943040 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      

    2. Format the new disk on PL-3 with command mkfs.ext4 /dev/vdb.

      For example:

      PL-3:~ # mkfs.ext4 /dev/vdb

      mke2fs 1.42.11 (09-Jul-2014)
      Creating filesystem with 5242880 4k blocks and 1310720 inodes
      Filesystem UUID: e31f13da-020f-4ef4-b1db-d5d893e908e0
      Superblock backups stored on blocks:
          32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
          4096000
      
      Allocating group tables: done                           
      Writing inode tables: done                           
      Creating journal (32768 blocks): done
      Writing superblocks and filesystem accounting information: done
      

    3. Mount the new disk to directory localdisk on PL-3.

      For example:

      PL-3:/ # mount -t ext4 /dev/vdb /localdisk

    4. Check whether the disk is mounted on the folder /localdisk.

      For example:

      PL-3:/localdisk # df -h

      Filesystem                 Size  Used Avail Use% Mounted on
      root                       3.0G  2.6G  510M  84% /
      devtmpfs                    16G  8.0K   16G   1% /dev
      tmpfs                       16G  732K   16G   1% /dev/shm
      tmpfs                       16G   11M   16G   1% /run
      tmpfs                       16G     0   16G   0% /sys/fs/cgroup
      tmpfs                      3.2G     0  3.2G   0% /run/user/0
      169.254.100.101:/.cluster   30G  2.2G   26G   8% /cluster
      /dev/vdb                    20G   44M   19G   1% /localdisk
      

    5. Log on to PL-3, and move files dhcpd.leases and dhcpd.leases~.gz to the folder /localdisk.

      PL-3:/localdisk # mv /etc/ipworks/PL-3/dhcp/dhcpd.leases /localdisk/

      PL-3:/localdisk # mv /etc/ipworks/PL-3/dhcp/dhcpd.leases~.gz /localdisk/

  7. Log on to the SC node and start DHCP service.

    # ipw-ctr start dhcp pl-3

  8. Log on to the IPWorks CLI on the storage server.

    # ipwcli

    IPWorks> Login: <Username>

    IPWorks> Password: <Password>

  9. Use the IPWorks CLI to check whether the DHCPv4 server is running normally.

    IPWorks>show status dhcpv4server

    [DhcpV4Server dhcp1] (169.254.100.3) On 11/30/18 at 11:20:54 server is 'running normal'
    [DhcpV4Server dhcp2] (169.254.100.4) On 11/30/18 at 11:20:54 server is 'running normal'
    

    IPWorks>exit

    Continue with Step 10 after 'running normal' is displayed.

  10. Repeat Step 3 to Step 9 to add and configure the local disk for PL-4.
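When the same add-disk steps must be repeated for PL-4, the host-side commands can be collected into a small dry-run helper. A sketch under stated assumptions: the function name attach_disk_cmds, the domain name IPWPL-4, and the image name newdisk2.qcow2 are all illustrative. The helper only prints the commands, so they can be reviewed before being run on the real host:

```shell
#!/bin/sh
# Print, without executing, the host-side commands from the steps above
# for a given domain and image path. All names here are illustrative.
attach_disk_cmds() {
    domain="$1"
    image="$2"
    size="${3:-20G}"
    echo "qemu-img create -f qcow2 $image $size"
    echo "virsh shutdown $domain"
    echo "virsh edit $domain    # add the <disk> element for $image"
    echo "virsh start $domain"
}

# Example: review the commands for PL-4 (domain name IPWPL-4 assumed).
attach_disk_cmds IPWPL-4 /root/auto_deployment/images/IPW2/run/newdisk2.qcow2
```

Piping the output to sh would execute the commands, but virsh edit is interactive, so in practice each printed command is best run by hand as in Steps 5 and 6.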

Reference List

Ericsson Documents
[1] Ericsson Command-Line Interface User Guide.
[2] Trademark Information.
[3] Typographic Conventions.
[4] Glossary of Terms and Acronyms.