image server A node specifically designated to hold images that will be distributed to one or more client
systems. In a standard HP XC installation, the head node acts as the image server and golden
client.
improved
availability
A service availability infrastructure that is built into the HP XC system software to enable an
availability tool to fail over a subset of eligible services to nodes that have been designated as
second servers of those services.
See also availability set, availability tool.
Integrated
Lights-Out
See iLO.
interconnect A hardware component that provides high-speed connectivity between the nodes in the HP
XC system. It supports message passing and remote memory access for parallel applications.
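For illustration, applications typically use the interconnect through a message-passing library
such as MPI. The following is a minimal sketch, assuming a standard MPI installation; it is
generic MPI, not HP XC specific:

    #include <mpi.h>
    #include <stdio.h>

    /* Minimal message-passing sketch: rank 0 sends one integer to
       rank 1; on an HP XC system the transfer travels over the
       interconnect when the two ranks run on different nodes. */
    int main(int argc, char *argv[])
    {
        int rank, value = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 42;
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received %d\n", value);
        }

        MPI_Finalize();
        return 0;
    }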
interconnect
module
A module in an HP BladeSystem server. The interconnect module provides the physical I/O
ports for the server blades and can be either a switch, with connections to each of the server
blades and some number of external ports, or a pass-through module, with individual
external ports for each of the server blades.
See also server blade.
interconnect
network
The private network within the HP XC system that is used primarily for user file access and
for communications within applications.
Internet address A unique 32-bit number that identifies a host's connection to an Internet network. An Internet
address is commonly represented as a network number and a host number and takes a form
similar to the following: 192.0.2.0.
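For illustration, the correspondence between the 32-bit value and its dotted-quad text form
can be demonstrated with the standard inet_pton() and inet_ntop() functions; this is a
generic POSIX sketch, not HP XC specific:

    #include <arpa/inet.h>
    #include <stdio.h>

    int main(void)
    {
        struct in_addr addr;
        char text[INET_ADDRSTRLEN];

        /* Parse the dotted-quad form into its 32-bit binary value. */
        inet_pton(AF_INET, "192.0.2.0", &addr);
        printf("32-bit value: 0x%08x\n",
               (unsigned int)ntohl(addr.s_addr));

        /* Convert the 32-bit value back to dotted-quad text. */
        inet_ntop(AF_INET, &addr, text, sizeof(text));
        printf("dotted quad: %s\n", text);
        return 0;
    }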
IPMI Intelligent Platform Management Interface. A self-contained hardware technology available
on HP ProLiant DL145 servers that enables remote management of any node within a system.
ITRC HP IT Resource Center. The HP corporate Web page where software patches are made available.
The Web address is http://www.itrc.hp.com. To download patches from this Web page, you
must register as an Americas/Asia Pacific or European customer.
L
Linux Virtual
Server
See LVS.
load file A file containing the names of multiple executables that are to be launched simultaneously by
a single command.
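The exact format of a load file depends on the launch command that reads it. As one hedged
illustration, SLURM's --multi-prog configuration file serves this purpose; each line maps
task ranks to an executable (the program names below are hypothetical):

    # rank(s)   executable
    0           ./master
    1-3         ./worker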
Load Sharing
Facility
See LSF-HPC with SLURM.
local storage Storage that is available or accessible from one node in the HP XC system.
LSF execution
host
The node on which LSF runs. A user's job is submitted to the LSF execution host. Jobs are
launched from the LSF execution host and are executed on one or more compute nodes.
LSF master host The overall LSF coordinator for the system. The master load information manager (LIM) and
master batch daemon (mbatchd) run on the LSF master host. Each system has one master host
to do all job scheduling and dispatch. If the master host goes down, another LSF server in the
system becomes the master host.
LSF-HPC with
SLURM
Load Sharing Facility for High Performance Computing integrated with SLURM. The batch
system resource manager on an HP XC system that is integrated with SLURM. LSF-HPC with
SLURM places a job in a queue and allows it to run when the necessary resources become
available. LSF-HPC with SLURM manages just one resource: the total number of processors
designated for batch processing.
LSF-HPC with SLURM can also run interactive batch jobs and interactive jobs. An LSF interactive
batch job allows you to interact with the application while still taking advantage of LSF-HPC
with SLURM scheduling policies and features. An LSF-HPC with SLURM interactive job is run
without using LSF-HPC with SLURM batch processing features but is dispatched immediately
by LSF-HPC with SLURM on the LSF execution host.
See also LSF execution host.
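For illustration, jobs might be submitted to LSF-HPC with SLURM as follows; the -n option
requests processors and -I requests an interactive batch job (the script and application
names are hypothetical):

    bsub -n 4 ./myscript.sh      # batch job on 4 processors
    bsub -n 4 -I srun ./my_app   # interactive batch job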