Dell EMC VxFlex Ready Node Operating System Installation Manual


Dell EMC VxFlex Ready Node
Operating System Installation and Configuration Guide
for Linux
Rev 01
October 2019


Summary of Contents for Dell EMC VxFlex Ready Node

  • Page 1 Dell EMC VxFlex Ready Node Operating System Installation and Configuration Guide for Linux Rev 01 October 2019...
  • Page 2 MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. USE, COPYING, AND DISTRIBUTION OF ANY DELL SOFTWARE DESCRIBED IN THIS PUBLICATION REQUIRES AN APPLICABLE SOFTWARE LICENSE. Dell Technologies, Dell, EMC, Dell EMC and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be the property of their respective owners. Published in the USA.
  • Page 3: Table Of Contents

    Contents Figures ............................5 Tables ............................7 Chapter 1 VxFlex Ready Node deployment overview..............9 Typographical conventions............................. 10 Hardware and operating systems........................... 10 Supported operating systems and requirements....................... 10 VxFlex OS packages..............................12 VxFlex OS component requirements..........................13 VxFlex OS cluster components..........................13 VxFlex OS Gateway server requirements—VxFlex Ready Node................
  • Page 4 Next steps.......................73 Next steps..................................74 Chapter 10 Reference material....................75 DTK - Hardware Update Bootable ISO........................... 76 Dell EMC OpenManage DRAC Tools (RACADM)...................... 76 Recommended BIOS and firmware settings......................78 Troubleshooting the Hardware ISO...........................80 Additional resources..............................82 Chapter 11 Getting help......................85 Contacting Dell EMC..............................
  • Page 5: Figures

    Figures R640 and R740xd PCI slots, integrated NICs and BMC port locations..............37 Correlating physical slot and Ethernet network names.....................41 R840 PCI slots, integrated NICs, and iDRAC port locations..................45 Correlating physical slot and Ethernet network names.................... 48 VxFlex Ready Node Operating System Installation Guide for Linux...
  • Page 6 Figures VxFlex Ready Node Operating System Installation Guide for Linux...
  • Page 7: Tables

    Tables Linux..................................10 Windows, except Core systems (see note) ......................11 Linux..................................11 Windows.................................. 12 VxFlex OS management IP network.........................30 VxFlex OS data IP network for Subnet #1........................30 VxFlex OS data IP network for Subnet #2........................31 VxFlex OS IP network for 2-layer configuration (Data 3 + Data 4)................31 VxFlex OS management IP network.........................32 VxFlex OS data IP network for Subnet #1........................32 VxFlex OS data IP network for Subnet #2.......................
  • Page 8 Tables VxFlex Ready Node Operating System Installation Guide for Linux...
  • Page 9: Vxflex Ready Node Deployment Overview

    CHAPTER 1 VxFlex Ready Node deployment overview This guide explains how to install the operating system and perform system configurations, such as network connectivity and ports, on VxFlex Ready Node servers. Use this guide after installing a new or replacement VxFlex Ready Node server at the customer site, before deploying the system.
  • Page 10: Typographical Conventions

    The following is a list of supported operating systems and additional requirements for this version of VxFlex OS on VxFlex Ready Node servers. For the most up-to-date information, see the Dell EMC VxFlex Ready Node Firmware and Driver Matrix at https://support.emc.com/products/42216_ScaleIO-Ready-Node--PowerEdge-14G.
  • Page 11: Windows, Except Core Systems (See Note)

    VxFlex Ready Node deployment overview Table 1 Linux (continued), Component / Requirement: Additional requirements for specific operating systems: SLES 12.3, 12.4: hwinfo, net-tools, pciutils, ethtool; Hypervisor support: Red Hat KVM. Additional packages required for MDM components: bash-completion (for SCLI completion), latest version of Python 2.X. Secure authentication mode: Ensure that OpenSSL 64-bit v1.0.1 or later is installed on all servers in the system:...
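    A quick way to verify these prerequisites on a node is a package and version check. This is a minimal sketch assuming an RPM-based system (RHEL or SLES); the package names come from the table above:

        # Verify the required support packages are installed (RPM-based systems)
        rpm -q hwinfo net-tools pciutils ethtool bash-completion

        # Verify Python 2.x and the OpenSSL version (must be 64-bit v1.0.1 or later)
        python --version
        openssl version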
  • Page 12: Vxflex Os Packages

    VxFlex Ready Node deployment overview Table 3 Linux (continued), Component / Requirement: Additional requirements for specific operating systems: SLES 12.4: hwinfo, net-tools, pciutils, ethtool; Hypervisor support: Red Hat KVM. Additional packages required for MDM components: bash-completion (for SCLI completion), latest version of Python 2.X. Secure authentication mode: Ensure that OpenSSL 64-bit v1.0.1 or later is installed on all servers in the system:...
  • Page 13: Vxflex Os Component Requirements

    VxFlex Ready Node deployment overview VxFlex OS component requirements Components and servers in the VxFlex OS system must meet the following requirements: VxFlex OS cluster components The following is the list of required VxFlex OS servers: VxFlex OS component servers 3-node cluster One Master MDM One Slave MDM...
  • Page 14: General Prerequisites

    The following TCP ports are not used by any other application, and are open in the local firewall of the server: 80 and 443 (or 8080 and 8443). You can change the default ports. For more information, see "Communication Security Settings" in the Dell EMC VxFlex OS Security Configuration Guide .
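    For illustration only, a node running firewalld could open the default VxFlex OS Gateway ports as follows; the port numbers come from the text above, and this sketch assumes firewalld is the active firewall (adjust if the defaults were changed):

        # Open the default VxFlex OS Gateway ports in firewalld
        firewall-cmd --permanent --add-port=80/tcp --add-port=443/tcp
        firewall-cmd --reload
        firewall-cmd --list-ports   # confirm the ports are open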
  • Page 15: Disk Prerequisites

    VxFlex Ready Node deployment overview Console operations (KVM access) Ensure that you have either a VGA tool kit to allow console connection from a laptop computer to a server, or a computer screen and keyboard connection to the rack. Disk prerequisites For R640 and R740xd systems only, if an H730p/H740p controller card is installed on the server, install the PERCCLI disk utility.
  • Page 16 VxFlex Ready Node deployment overview In cases of only two 25 GbE ports, the ports are also used for other network traffic. The switches must have sufficiently available network ports to accommodate the following: Data network 10/25 GbE switches: Two 10/25 GbE ports per node, per switch Management network switches: One 1/10 GbE and one iDRAC port per node.
  • Page 17: Configuring The Hardware

    CHAPTER 2 Configuring the hardware This section describes how to configure the hardware, set iDRAC IP addresses, and map the ISO for servers in a VxFlex Ready Node environment. Set up the iDRAC IP address and BIOS........................18 Verify the status of the system hardware and drives.....................20 Log in to the KVM console ............................
  • Page 18: Set Up The Idrac Ip Address And Bios

    - Scaleio123 Note: Dell EMC recommends that you change the iDRAC password as soon as possible, because leaving the default password may create a security risk. During the iDRAC IP and BIOS setup, use the following keyboard operations: Use the arrow keys to navigate in the BIOS screens.
  • Page 19 Configuring the hardware Static IP Address = Static IP address (customer-provided). The static IP address must be accessible by the remote computer that will be used for system setup. Static Gateway = Gateway IP address Static Subnet Mask = Subnet mask IP address From the IPv6 Settings pane, configure the IPv6 parameter values for the iDRAC port.
  • Page 20: Verify The Status Of The System Hardware And Drives

    A table displays information regarding the physical drives, and an option to Blink/Unblink the selected drive. Verify that no drive is in a failed state. If any of the drives has failed, refer to the drive FRU procedure in the relevant Dell EMC VxFlex Ready Node Field Replaceable Unit Guide.
  • Page 21: Log In To The Kvm Console

    From your Internet browser, go to https://<iDRAC_IP_address>. In the DELL Console Login window, type the user name and password, and click Login. From the dashboard, click Launch Virtual Console to start a console session. If a security warning appears, select Accept, and then click Run.
  • Page 22: Updating The Bios, Firmware And Settings

    To perform any updates needed to meet VxFlex Ready Node requirements, use the VxFlex Ready Node Hardware Update Bootable ISO ("Hardware ISO"). The Hardware ISO is based on the Dell OpenManage Deployment Toolkit (DTK). The DTK provides a framework of tools necessary for the configuration of VxFlex Ready Node servers. For VxFlex OS, a custom script has been injected, along with specific qualified BIOS/firmware update packages.
  • Page 23 For each VxFlex Ready Node server, after the updates are finalized, clear the iDRAC job queue using the iDRAC GUI: <iDRAC_IP_address> . From your Internet browser, go to https:// In the DELL Console Login window, type these credentials: username: root <password> password: Click Login.
  • Page 24 Configuring the hardware VxFlex Ready Node Operating System Installation Guide for Linux...
  • Page 25: Installing The Linux Operating System

    CHAPTER 3 Installing the Linux operating system This section describes the procedures for installing a Linux operating system on a VxFlex Ready Node server. Linux system requirements............................ 26 Rebuild the RAID 1 boot device using the replaced M.2 module................26 Map the Linux ISO file on a VxFlex Ready Node server..................27 Optimize CPU performance on RHEL...
  • Page 26: Linux System Requirements

    During operating system installation, use Scaleio123 as the password for the user name root or administrator. (Alternatively, provide a password that meets the local security criteria.) In a 2-Layer installation, Dell EMC supplies a VxFlex Ready Node image ISO. Follow the wizard installation steps. The default password is Scaleio123. If required, you can change the password by using the passwd command.
  • Page 27: Map The Linux Iso File On A Vxflex Ready Node Server

    A Red Hat license is required for storage-only configurations with Red Hat OS image downloaded from the VxFlex OS support page. Customers can choose a Red Hat licensing option listed on this page, or bring their own Red Hat license. Dell EMC is not responsible for enforcing OS licensing.
  • Page 28: Optimize Cpu Performance On Suse Systems

    Installing the Linux operating system Find the GRUB_CMDLINE_LINUX configuration option and append the following kernel parameters to the line: intel_idle.max_cstate=0 processor.max_cstate=1 intel_pstate=disable Example: GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb intel_idle.max_cstate=0 processor.max_cstate=1 intel_pstate=disable quiet" Regenerate the GRUB configuration: grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg Stop and then disable the tuned service: systemctl stop tuned systemctl disable tuned...
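    The sequence above can be scripted. The following sketch assumes a UEFI RHEL system (the grub.cfg path matches the example above) and that /etc/default/grub has been backed up first:

        # Append the C-state/P-state parameters to GRUB_CMDLINE_LINUX
        sed -i 's/^GRUB_CMDLINE_LINUX="\(.*\)"/GRUB_CMDLINE_LINUX="\1 intel_idle.max_cstate=0 processor.max_cstate=1 intel_pstate=disable"/' /etc/default/grub

        # Regenerate GRUB and disable the tuned service
        grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
        systemctl stop tuned
        systemctl disable tuned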
  • Page 29: Network Architecture And Physical Connectivity For Linux Servers

    CHAPTER 4 Network architecture and physical connectivity for Linux servers This section provides networking requirements and connectivity information for VxFlex Ready Node Linux servers. Networking connectivity architecture and cabling best practice for Linux deployments........30 IP addresses for R640 and R740xd servers......................
  • Page 30: Networking Connectivity Architecture And Cabling Best Practice For Linux Deployments

    Network architecture and physical connectivity for Linux servers Networking connectivity architecture and cabling best practice for Linux deployments The following information describes connectivity architecture, cabling best practice information, and cable connection examples from typical VxFlex OS configurations to help you plan your network. Note: If you are not familiar with VxFlex Ready Node system architecture, refer to the "Architecture"...
  • Page 31: Vxflex Os Data Ip Network For Subnet #2

    Network architecture and physical connectivity for Linux servers Table 6 VxFlex OS data IP network for Subnet #1 (continued), Item / Description / Comments: IP address pool for Subnet #1: The pools of IP addresses used for static allocation for the following groups. Comment: For clarity, the first subnet is referred to as "Data1"...
  • Page 32: Ip Addresses For R840 Servers

    Network architecture and physical connectivity for Linux servers IP addresses for R840 servers Prepare IP addresses in your network for the VxFlex Ready Node servers in Linux-based environments based on the following calculations: Note: In addition to the networking requirements below, ensure that you have prepared the items described in Additional equipment and network resource requirements on page 15 earlier in this guide.
  • Page 33: Vxflex Os Ip Network For 2-Layer Configuration (Data 3 + Data 4)

    Network architecture and physical connectivity for Linux servers Table 12 VxFlex OS IP network for 2-layer configuration (Data 3 + Data 4), Item / Description / Comments: Number of nodes. IP address pool for Data 3: The pool of IP addresses used for static allocation for the following group. Comment: For clarity, the second subnet is referred to as...
  • Page 34 Network architecture and physical connectivity for Linux servers VxFlex Ready Node Operating System Installation Guide for Linux...
  • Page 35: Configuring Network Ports On Linux Servers

    CHAPTER 5 Configuring network ports on Linux servers This section describes how to configure the ports on a VxFlex Ready Node Linux server. Linux ports overview............................. 36 Configure network ports on R640 or R740xd servers.................... 36 Configure network ports on R840 servers......................
  • Page 36: Linux Ports Overview

    Procedure From your Internet browser, go to http://<iDRAC_IP_address>. In the DELL Console Login window, type the user name and password, and click Login. In the navigation pane, select System Inventory Hardware inventory, and then select the CPU node.
  • Page 37: R640 And R740Xd Pci Slots, Integrated Nics And Bmc Port Locations

    Configuring network ports on Linux servers Figure 1 R640 and R740xd PCI slots, integrated NICs and BMC port locations. Data cables are connected to two switches for high availability, via the nodes' 10 GbE, 25 GbE, or 100 GbE ports. Management interfaces are connected to a switch on a separate management network using onboard and iDRAC ports.
  • Page 38: Vxflex Ready Node R640 Port Designations - Linux

    Configuring network ports on Linux servers The second port from the left and the right port are used for application/client traffic. VxFlex Ready Node R640 port designations - Linux For single-node VxFlex Ready Node servers running RHEL or SLES, connect the cables, and configure the ports as shown in the configuration tables according to the server type.
  • Page 39: Correlate The Pci Slot Locations And Interface Names On R640/R740Xd Servers

    Configuring network ports on Linux servers Note: You can also use a second NIC in Slot 2 for a 4*25 GbE option. Table 14 R740xd single-node server configuration for 10/25GbE, 2CPU, SFP+/SFP28 Description iDRAC VxFlex OS VxFlex OS VxFlex OS VxFlex OS VxFlex OS Not in...
  • Page 40 Configuring network ports on Linux servers In this case, the 10GB PCI NIC is presented on slot 6, and the logical bus address of its ports is 5:0 and 5:1. (See the image in substep c, in which the correlation between PCIe slot and the logical bus address is highlighted in yellow.) Designation: PCIe Slot 1 Current Usage: Available...
  • Page 41: Correlating Physical Slot And Ethernet Network Names

    Configuring network ports on Linux servers Figure 2 Correlating physical slot and Ethernet network names Create a table for your own use, similar to the example table shown below: Base the table on the RHEL port definitions described for the relevant server model (R640 or R740xd) and type.
  • Page 42 Configuring network ports on Linux servers Create three files that correspond to the names you are assigning the NICS (ifcfg-sio_mgmt, ifcfg- sio_d_1, and ifcfg-sio_d_2): cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg- sio_mgmt cp /etc/sysconfig/network-scripts/ifcfg-sio_mgmt /etc/sysconfig/network-scripts/ifcfg- sio_d_1 cp /etc/sysconfig/network-scripts/ifcfg-sio_mgmt /etc/sysconfig/network-scripts/ifcfg- sio_d_2 Configure each of the files you created with the correct management or data IP addresses. For the ifcfg-sio_mgmt file, run: echo DEVICE= sio_mgmt >...
  • Page 43 Configuring network ports on Linux servers Change the permissions of the file you just created to allow the root user access: chmod a+x /etc/udev/rules.d/70-persistent-ipoib.rules Find the MAC address for each NIC: ethtool –p <NIC_name> Output similar to the following should appear: ethtool -P ens192 Permanent address: 00:50:56:a7:b0:19 For each NIC, edit the 70-persistent-ipoib.rules file according to the information in the output in the...
  • Page 44 Configuring network ports on Linux servers After updating the 70-persistent-net.rules files, reboot the node using the reboot command. After the node comes up, log in again to the console. Create the ifcfg-emX files for all interfaces. Note: A good way to create a file is using the cp and echo commands: Example: cp /etc/sysconfig/network/ifcfg-em3 /etc/sysconfig/network/ifcfg-em4;...
  • Page 45: Configure Network Ports On R840 Servers

    Configuring network ports on Linux servers Note: Only one default gateway is required. Most likely this will be the management network interface. Create this file by using the following command: touch /etc/sysconfig/network/ifroute-emX Reboot the host. After the host is up, log in to the management IP address using SSH, and ping all data IP addresses of another node to make sure that you have the correct connectivity on each node.
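    The ifroute-emX file follows the SLES route file format of destination, gateway, netmask, and interface. A sketch with a placeholder gateway address and interface name:

        # /etc/sysconfig/network/ifroute-em3 (gateway address is a placeholder)
        default 192.168.100.1 - em3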
  • Page 46: Vxflex Ready Node R840 Linux Server Port Designations

    Configuring network ports on Linux servers The default configuration used in simple mode deployment follows these rules of thumb: If there is a 1G NIC onboard, the left one of the pair is always used for the management network. If there is only one NIC, the left 10G onboard port is used for the data network.
  • Page 47 Configuring network ports on Linux servers Correlate the PCIe slot locations and interface names - R840 servers Gather and correlate PCIe slot locations and interface names in order to build a table with the relevant interface names for the Linux nodes. About this task RHEL 7.x does not use the 70-persistent-net.rules file for Ethernet persistency.
  • Page 48: Correlating Physical Slot And Ethernet Network Names

    Configuring network ports on Linux servers Bus Address: 0000:01:00.1 Reference Designation: Integrated NIC 3 Bus Address: 0000:09:00.0 Reference Designation: Integrated NIC 4 Bus Address: 0000:09:00.1 Correlate the list of OS/Ethernet interface names to a logical bus address: ip a | grep ": " | awk '{print $2}' | tr -d ":" | grep -v ^"lo"$ | xargs -I '{}' sh -c 'echo {};...
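    An alternative way to produce the same interface-to-bus-address mapping is ethtool's driver information. A sketch assuming standard tooling:

        # Print each interface name with its PCI bus address
        for i in $(ls /sys/class/net | grep -v '^lo$'); do
            echo -n "$i -> "
            ethtool -i "$i" | awk '/bus-info/{print $2}'
        done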
  • Page 49 Configuring network ports on Linux servers Configure the ports on RHEL 7.x nodes - R840 servers Configure ports on Linux-based (RHEL 7.x type) nodes according to the following procedure. About this task As part of this procedure, you will assign the traffic management and two data NICs the following names: Traffic management NIC: ifcfg-sio_mgmt VxFlex OS Data NIC 1: ifcfg-sio_d_1 VxFlex OS Data NIC 2: ifcfg-sio_d_2...
  • Page 50 Configuring network ports on Linux servers For the ifcfg-sio_d_2 file, run: echo DEVICE= sio_d_2 > /etc/sysconfig/network-scripts/ifcfg-sio_d_2 echo STARTMODE=onboot >> /etc/sysconfig/network-scripts/ifcfg-sio_d_2 echo USERCONTROL=no >> /etc/sysconfig/network-scripts/ifcfg-sio_d_2 echo BOOTPROTO=static >> /etc/sysconfig/network-scripts/ifcfg-sio_d_2 echo NETMASK=X.X.X.X >> /etc/sysconfig/network-scripts/ifcfg-sio_d_2 echo IPADDR=X.X.X.X >> /etc/sysconfig/network-scripts/ifcfg-sio_d_2 echo 1 >/sys/bus/pci/rescan Configure the gateway: echo “NETWORKING=yes”...
  • Page 51 Configuring network ports on Linux servers Configure the ports on SLES nodes - R840 servers Configure the ports on Linux-based (SLES type) nodes. Procedure Configure the /etc/udev/rules.d/70-persistent-net.rules file to match the following: The 1 GB port is em3 and is used for management. The 10 GB data ports are em1 (Data 1) and p1p1 (Data 2) .
  • Page 52 Configuring network ports on Linux servers Example: cp /etc/sysconfig/network/ifcfg-em3 /etc/sysconfig/network/ifcfg-em4 echo DEVICE=em3 > /etc/sysconfig/network/ifcfg-emX echo STARTMODE=onboot >> /etc/sysconfig/network/ifcfg-emX echo USERCONTROL=no >> /etc/sysconfig/network/ifcfg-emX echo BOOTPROTO=static >> /etc/sysconfig/network/ifcfg-emX echo NETMASK=X.X.X.X >> /etc/sysconfig/network/ifcfg-emX echo IPADDR=X.X.X.X >> /etc/sysconfig/network/ifcfg-emX echo 1 >/sys/bus/pci/rescan sleep 2 ifup emX Configure the default gateway: echo "default <XXX.XXX.XXX.XXX>...
  • Page 53: Installing The Drivers

    CHAPTER 6 Installing the Drivers The following topics contain information regarding VxFlex OS drivers. Install the VxFlex OS drivers on a Linux server...................... 54 VxFlex Ready Node Operating System Installation Guide for Linux...
  • Page 54: Install The Vxflex Os Drivers On A Linux Server

    Installing the Drivers Install the VxFlex OS drivers on a Linux server Dell EMC recommends that you install all applicable VxFlex OS drivers on each Linux server in preparation for installing the VxFlex OS software. Procedure Download the VxFlex OS driver Zip file from https://support.emc.com/products/42216...
  • Page 55: Preparing Disks

    CHAPTER 7 Preparing Disks The following topics describe how to prepare disks so that they can be added to VxFlex OS SDS devices. Verify the disk controller type..........................56 Enable PERCCLI ..............................56 Create virtual drives with PERCCLI........................57 Retrieve device paths on the server........................
  • Page 56: Verify The Disk Controller Type

    Preparing Disks Verify the disk controller type Verify the controller type installed on a VxFlex Ready Node R640 or R740xd server using the integrated Dell Remote Access Controller (iDRAC) web utility. Before you begin Ensure that you have access to:...
  • Page 57: Create Virtual Drives With Perccli

    Preparing Disks For ESXi-based systems using DirectPath, connect to the SVM where the SDS is installed. For Linux-based systems, connect to the SDS. Install the PERCCLI package: rpm -Uvh /tmp/perccli_linux_NF8G9_A07_7.529.00.tar.gz Results PERCCLI is ready for use. Create virtual drives with PERCCLI Create virtual drives (VDs) on drives using the PERCCLI utility.
  • Page 58 Preparing Disks Removing all the existing VDs from the node Setting up the controller card boot parameter Procedure Log in to the node. Display the disk information on the node: /opt/MegaRAID/perccli/perccli64 /c0/eall/sall show Output similar to the following is displayed: The output shows the following: Enclosure ID (EID): Used in a later step when creating VDs Slot ID (SLT) of each drive: Used in a later step when creating VDs...
  • Page 59: Create Virtual Drives For An Hdd Using Perccli

    Preparing Disks The boot parameter of the controller card is defined. Results You have verified that the disks are in the UGood (unconfigured, but good) state and that the controller card boot parameter is set to on. You can now create virtual drives. Create virtual drives for an HDD using PERCCLI Use PERCCLI to create virtual drives (VDs) for HDDs on a VxFlex Ready Node server.
  • Page 60: Ensure Virtual Drive Creation With Perccli

    Preparing Disks Create the VD: /opt/MegaRAID/perccli/perccli64 /c0 add vd type=raid0 drives=<EID>:<Slt> direct wt nora Example: /opt/MegaRAID/perccli/perccli64 /c0 add vd type=raid0 drives=32:0 direct wt nora, where EID and Slt are the Enclosure ID and Slot ID values, which in this example are 32:0. Results The VD was created successfully.
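    When several HDDs need VDs, the same perccli command can be repeated per slot. A sketch assuming enclosure ID 32 and slot numbers taken from the earlier show output (both are placeholders):

        # Create a RAID 0 VD for each HDD slot (enclosure/slot values are placeholders)
        for slot in 2 3 4 5; do
            /opt/MegaRAID/perccli/perccli64 /c0 add vd type=raid0 drives=32:$slot direct wt nora
        done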
  • Page 61: Retrieve Device Paths On The Server

    Preparing Disks Output similar to the following is displayed: Verify that all the VDs are configured correctly. The following values should appear in the display: Cache = NRWTD (for SSDs) Cache = RWBD (for HDDs) After you finish Continue by retrieving the device path on the server. Retrieve device paths on the server The manner of retrieving device paths in a Linux-based VxFlex Ready Node server differs, depending on the type of controller card in the node.
  • Page 62: Retrieving Device Paths In A Linux Server With An H730P/H740P Controller

    Preparing Disks Output similar to the following appears: In the output, search for the lines starting with: pci-0000:0#:00.0-sas-, where # is a number (for example, 2 or 3). In the lines you just located, search for sdX at the end of the lines. The device paths are /dev/sdX.
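    For convenience, the by-path listing can be filtered to show only the SAS entries and the /dev/sdX device each one resolves to. A sketch assuming GNU coreutils:

        # List SAS by-path entries with their /dev/sdX targets
        ls -l /dev/disk/by-path/ | grep -- '-sas-' | awk '{print $9, "->", $11}'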
  • Page 63: Retrieving Device Paths In A Linux Server With Nvme Devices Configured

    Preparing Disks The output displays the device groups (DG) and their associated VD. Match a VD to a device path: Run the following command: ls -l /dev/disk/by-path/ Output similar to the following appears: In the command output, search for the line: pci-0000:02:00.0-scsi-0:2:X:0, where X is the number assigned to the VD.
  • Page 64 Preparing Disks Display the NVMe devices: ls –l /dev/nvm*n1 Output similar to the following appears: Use these paths when adding devices to a VxFlex OS SDS. VxFlex Ready Node Operating System Installation Guide for Linux...
  • Page 65: Additional Configurations

    CHAPTER 8 Additional configurations This section describes additional configurations required for the VxFlex Ready Node server installation process. Install OpenManage Enterprise..........................66 Disable Smartmontool error messages........................66 iDRAC Service Module............................67 Prepare the DAX devices............................67 iDRAC Service Module............................71 VxFlex Ready Node Operating System Installation Guide for Linux...
  • Page 66: Install Openmanage Enterprise

    Additional configurations Install OpenManage Enterprise Dell EMC OpenManage Enterprise is a web-based console that simplifies hardware monitoring and firmware maintenance on VxFlex Ready Node servers. It is recommended that you install OpenManage Enterprise. About this task Note: An additional SupportAssist option is available for hardware home call capabilities.
  • Page 67: Idrac Service Module

    Additional configurations On servers with an H730p/H740p controller card, create a startup script containing the following command so that it runs each time the node reboots: smartctl -s off <device_path>, where <device_path> is the location of the device that Smartmontools does not recognize. For example: smartctl -s off /dev/sdo
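    One way to make the command persistent is to append it to rc.local. This is a sketch only, assuming the rc-local service is enabled on the node; /dev/sdo is a placeholder device path:

        # Run smartctl -s off at every boot via rc.local (device path is a placeholder)
        cat >> /etc/rc.d/rc.local <<'EOF'
        smartctl -s off /dev/sdo
        EOF
        chmod +x /etc/rc.d/rc.local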
  • Page 68: Prepare An Nvdimm As A Dax Device

    Additional configurations Table 16 NVDIMM information table Item Replacement NVDIMM Serial Number Device name (NMEM) Namespace DAX device name (chardev) Acceleration device path Prepare an NVDIMM as a DAX device Prepare a new or replacement NVDIMM as a DAX device before adding it to the SDS. This step is optional when replacing an NVDIMM battery.
  • Page 69 Additional configurations "id": "802c-0f-1722-17496594", "handle": 1, "phys_id": 4358, "health": { "health_state": "ok", "temperature_celsius": 255, "life_used_percentage": 29 In the output from the previous step, find the device (dev) with the id that partially correlates with the serial number you discovered previously for the failed device. For example: The NVDIMM output displays serial number 19BA9C2D for the NVDIMM device.
  • Page 70 Additional configurations "dev":"namespace1.0", "mode":"devdax", "map":"dev", "size":16909336576, "uuid":"7d905ce0-49ed-42ba-8ad3-3981eb434f4d", "numa_node":1 "dev":"namespace0.0", "mode":"devdax", "map":"dev", "size":16909336576, "uuid":"47165786-f91f-4d33-86bd-6d80aa3141f1", "numa_node":0 In the output displayed in the previous step, locate the namespace that correlates with the NMEM name and DIMM serial number, and record it in the NVDIMM information table. In the above example, nmem0's namespace is namespace0.0.
  • Page 71: Idrac Service Module

    Additional configurations "dev":"region0", "size":17179869184, "available_size":0, "max_available_extent":0, "type":"pmem", "numa_node":0, "persistence_domain":"unknown", "namespaces":[ "dev":"namespace0.0", "mode":"devdax", "map":"dev", "size":16909336576, "uuid":"47165786-f91f-4d33-86bd-6d80aa3141f1", "daxregion":{ "id":0, "size":16909336576, "align":2097152, "devices":[ "chardev":"dax0.0", "size":16909336576 "numa_node":0 The DAX device name appears in the output as the chardev value. In the example output above, the DAX device name is dax0.0. Record the DAX device name in the NVDIMM information table.
  • Page 72: Idrac Service Module

    Additional configurations iDRAC Service Module The iDRAC Service module (iSM) is a small OS-resident process that expands iDRAC management into supported host operating systems. Services that the iSM adds include OS information, automatic system recovery, and remote server power cycle. It also enables NVMe device removal without shutting down or rebooting the system.
  • Page 73: Chapter 9 Next Steps

    CHAPTER 9 Next steps Next steps and operating system-specific guidelines for deploying VxFlex OS on the servers that you have prepared. Next steps................................74 VxFlex Ready Node Operating System Installation Guide for Linux
  • Page 74: Next Steps

    The server is now ready for VxFlex OS deployment. Take note of the following required steps for deploying your system: The Dell EMC Deploy VxFlex OS Guide explains how to deploy VxFlex OS on VxFlex Ready Node servers. Follow the preparation guidelines and deployment procedures relevant to your environment.
  • Page 75: Chapter 10 Reference Material

    CHAPTER 10 Reference material This section contains additional information that may be required for the procedures described in this document. DTK - Hardware Update Bootable ISO........................76 VxFlex Ready Node Operating System Installation Guide for Linux...
  • Page 76: Dtk - Hardware Update Bootable Iso

    Update the hardware using remote RACADM You can install and execute the Dell EMC RACADM tool from any management system with access to the iDRAC network. The remote RACADM command set is useful in this situation to mount and run the Hardware ISO on a large number of VxFlex Ready Node servers.
  • Page 77 Reference material Mount the Hardware ISO to the iDRAC from the remote share, where the following command is all on one line: racadm -r <dracIP> -u root -p <password> remoteimage -c -u <myuser> -p <mypass> -l // <myip>/SIORN/VxFlex-Ready-Node-Hardware-Update-for-Dell_14G_2018_May_A00.iso Where: <dracIP> is the iDRAC IP address <password>...
  • Page 78: Recommended Bios And Firmware Settings

    ISO attempts to apply all firmware updates, but only those updates that are compatible will be installed. Applying settings using RACADM The individual firmware files are also available on the Dell EMC Online Support site, and can easily be installed using the following remote RACADM command: racadm -r <dracIP>...
  • Page 79: Hardware Iso Configuration Settings

    Reference material Note: The default password is Scaleio123. Configuration settings The Hardware ISO runs a script that automatically configures the BIOS and iDRAC settings listed in the table below. Some settings are dependent on the server model. Table 17 Hardware ISO configuration settings Description Setting Value...
  • Page 80: Troubleshooting The Hardware Iso

    Reference material <password> is the password for the server; <setting> is the BIOS/iDRAC setting name; <value> is the BIOS/iDRAC setting value. Note: When setting the BIOS configuration, include this command: racadm -r <dracIP> -u root -p <password> jobqueue create BIOS.Setup.1-1 <value> Troubleshooting the Hardware ISO This section describes troubleshooting procedures for problems you may encounter while using the Hardware ISO.
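    Putting the two commands together, a single setting change typically looks like the following sketch; the attribute name and value shown here are examples only:

        # Change a BIOS attribute, then queue a BIOS.Setup job so it is applied on the next power cycle
        racadm -r <dracIP> -u root -p <password> set BIOS.SysProfileSettings.SysProfile PerfOptimized
        racadm -r <dracIP> -u root -p <password> jobqueue create BIOS.Setup.1-1 -r pwrcycle -s TIME_NOW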
  • Page 81 Open the log to check the contents for errors: less /bundleapplicationlogs/apply_components.log You can also view the script for the Hardware ISO, which is useful in helping to identify and troubleshoot log entries: less /opt/dell/toolkit/systems/drm_files/apply_bundles.sh VxFlex Ready Node Operating System Installation Guide for Linux...
  • Page 82: Additional Resources

    Node system. For additional information regarding the VxFlex Ready Node product, documentation, advisories, downloads, and white papers, visit https://support.emc.com/products/42216. Dell Lifecycle Controller (LC) At the heart of the VxFlex Ready Node servers' embedded management is the iDRAC with Lifecycle Controller (LC) technology.
  • Page 83 Dell EMC OpenManage Deployment Toolkit (DTK) The Dell EMC OpenManage Deployment Toolkit (DTK) includes a set of utilities, sample scripts, and sample configuration files that you can use to deploy and configure Dell systems. You can use the DTK to build script-based and RPM-based installation for deploying large number of systems on a pre- operating system environment in a reliable way, without changing their current deployment processes.
  • Page 84 Reference material VxFlex Ready Node Operating System Installation Guide for Linux...
  • Page 85: Chapter 11 Getting Help

    CHAPTER 11 Getting help This section explains the different resources available for getting help for your system. Contacting Dell EMC............................. 86 Secure Remote Services............................86 Recycling or End-of-Life service information......................86 VxFlex Ready Node Operating System Installation Guide for Linux...
  • Page 86: Contacting Dell Emc

    Dell EMC provides several online and telephone-based support and service options. Availability varies by country and product, and some services may not be available in your area. To contact Dell EMC for sales, technical support, or customer service issues, use the steps in this task.
