IBM Endpoint Manager for Software Use Analysis
Version 9.2

Scalability Guide
Version 1
This edition applies to version 9.2 of IBM Endpoint Manager for Software Use Analysis (product number 5724-F57)
and to all subsequent releases and modifications until otherwise indicated in new editions.
© Copyright IBM Corporation 2002, 2015.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract
with IBM Corp.
Contents

Scalability Guidelines
  Introduction
  Scanning and uploading scan data
  Extract, Transform, Load (ETL)
  Decision flow
  Planning and installing Software Use Analysis
    Hardware requirements
    Network connection and storage throughput
  Dividing the infrastructure into scan groups
  Good practices for running scans and imports
    Plan the scanning schedule
    Avoid scanning when it is not needed
    Limit the number of computer properties that are to be gathered during scans
    Limit the number of Software Use Analysis computer groups
    Ensure that scans and imports are scheduled to run at night
    Run the initial import
    Review import logs
    Maintain frequent imports
    Disable collection of usage data
  Make room for end-of-scan-cycle activities
  Configuring the application and its database for medium and large environments
    Increasing Java heap size
    Configuring the transaction logs size for DB2
    Configuring the transaction log location for DB2
    Configuring swappiness in Linux hosting DB2 database server
    Configuring the DB2_COMPATIBILITY_VECTOR variable for improved UI performance
    Configuring and maintaining MS SQL database
    Optimizing the tempdb database in Microsoft SQL Server
  Backing up and restoring the database
    Backing up the DB2 database
    Backing up the SQL Server database
    Restoring the DB2 database
    Restoring the SQL Server database
  Preventive actions
  Limiting the number of scanned signature extensions
  Recovering from accumulated scans
  IBM PVU considerations
  Web user interface considerations
  REST API considerations
  Using relays to increase the performance of IBM Endpoint Manager
    Reducing the Endpoint Manager server load
Appendix. Executive summary
Notices
  Trademarks
  Terms and conditions for product documentation
Scalability Guidelines
This guide is intended to help system administrators plan the infrastructure of IBM® Endpoint Manager
for Software Use Analysis and to provide recommendations for configuring the application server to
achieve optimal performance. It explains how to divide computers into scan groups, schedule software
scans, and run data imports. It also provides information about other actions that you can take to
avoid performance problems.
Introduction
IBM Endpoint Manager clients report data to the Endpoint Manager server that stores the data in its file
system or database. The Software Use Analysis server periodically connects to the Endpoint Manager
server and its database, downloads the stored data and processes it. The process of transferring data from
the Endpoint Manager server to the Software Use Analysis server is called Extract, Transform, Load (ETL).
By properly scheduling scans and distributing them over the computers in your infrastructure, you can
reduce the length of the ETL process and improve its performance.
Scanning and uploading scan data
To evaluate whether particular software is installed on an endpoint, you must run a scanner. It collects
information about files with particular extensions, package data, and software identification tags. It also
gathers information about the running processes to measure software usage. The software scan data must
be transferred to the Endpoint Manager server, from which it can later be imported to Software Use
Analysis.
To discover software that is installed on a particular endpoint and collect its usage, you must first install
a scanner by running the Install Scanner fixlet. After the scanner is successfully installed, the Initiate
Software Scan fixlet becomes relevant on the target endpoint. The following types of scans are available:
Catalog-based scan
In this type of scan, the Software Use Analysis server creates scanner catalogs that are sent to the
endpoints. The catalogs do not include signatures that can be found based on the list of file
extensions or entries that are irrelevant for a particular operating system. Based on these catalogs,
the scanner discovers exact matches and sends its findings to the Endpoint Manager server. This
data is then transferred to the Software Use Analysis server.
File system scan
In this type of scan, the scanner uses a list of file extensions to create a list of all files with those
extensions on an endpoint.
Package data scan
In this type of scan, the scanner searches the system registry (Windows) or package management
system (Linux, UNIX) to gather information about packages that are installed on the endpoints.
Then, it returns the findings to the Endpoint Manager server where the discovered packages are
compared with the software catalog. If a particular package matches an entry in the catalog, the
software is discovered.
Application usage statistics
In this type of scan, the scanner gathers information about processes that are running on the
target endpoints.
Software identification tags scan
In this type of scan, the scanner searches for software identification tags that are delivered with
software products.
Run the catalog-based, file system, package data, and software identification tags scans on a
regular basis because they are responsible for software discovery. The application usage statistics scan
gathers usage data and can be disabled if you are not interested in this information.
When the status of the Initiate Software Scan fixlet shows complete (100%), it indicates that the scan was
successfully initiated, not that the relevant data has already been gathered. After the scan
finishes, the Upload Software Scan Results fixlet becomes relevant on the targeted endpoint, which means
that the data was gathered on the endpoint. When you run this fixlet, the scan data is uploaded
to the Endpoint Manager server. It is then imported to Software Use Analysis during the Extract,
Transform, Load (ETL) process.
Extract, Transform, Load (ETL)
Extract, Transform, Load (ETL) is a process that combines three database functions to transfer data
from one database to another. The first stage, Extract, involves reading and
extracting data from various source systems. The second stage, Transform, converts the data from its
original format into the format that meets the requirements of the target database. The last stage, Load,
saves the new data into the target database, thus finishing the process of transferring the data.
In Software Use Analysis, the Extract stage involves extracting data from the Endpoint Manager server.
The data includes information about the infrastructure, installed agents, and detected software. ETL also
checks whether a new software catalog is available, gathers information about the software scan and files
that are present on the endpoints, and collects data from VM managers.
The extracted data is then transformed to a single format that can be loaded to the Software Use Analysis
database. This stage also involves matching scan data with the software catalog, calculating processor
value units (PVUs), processing the capacity scan, and converting information that is contained in the
XML files. After the data is extracted and transformed, it is loaded into the database and can be used by
Software Use Analysis.
[Figure: Extract, Transform, and Load. Endpoint Manager clients on Windows, Linux, and UNIX, including Linux on System z, deliver XML files with scan data (catalog-based, file system, capacity, package, and software identification tags), usage data, and VM manager data (Windows and Linux x86/x64 only) through relays to the Endpoint Manager server, which stores raw scan files in its file system and database. 1. Extract: infrastructure information, installed agents, scan data files, software use data files, capacity data files, package data files, and files with VM manager information are extracted over a high-speed network connection. 2. Transform: information from the XML files is processed, data is transformed to a single format, raw data is matched with the software catalog, PVU and RVU values are calculated, and the capacity scan is processed. 3. Load: data is loaded into the Software Use Analysis database tables, from which the core business logic serves the web user interface and the console on client computers.]
The heaviest load on the Software Use Analysis server occurs during ETL, when the following actions are
performed:
v A large number of small files is retrieved from the Endpoint Manager server (Extract).
v Many small and medium files that contain information about installed software packages and process
usage data are parsed (Transform).
v The database is populated with the parsed data (Load).
At the same time, Software Use Analysis prunes large volumes of old data that exceeds its data
retention period.
Performance of the ETL process depends on the number of scan files, usage analyses, and package
analyses that are processed during a single import. The main bottleneck is storage performance because
many small files must be read, processed, and written to the Software Use Analysis database in a short
time. By properly scheduling scans and distributing them over the computers in your infrastructure, you
can reduce the length of the ETL process and improve its performance.
Decision flow
To avoid running into performance issues, you should divide the computers in your infrastructure into
scan groups and properly set the scan schedule. You should start by creating a benchmark scan group on
which you can try different configurations to achieve optimal import time. After the import time is
satisfactory for the benchmark group, you can divide the rest of your infrastructure into analogous scan
groups.
Start by creating a single scan group that will be your benchmark. The size of the scan group might vary
depending on the size of your infrastructure. However, the recommendation is to avoid creating a group
larger than 20 000 endpoints.
Scan the computers in this scan group. When the scan finishes, upload its results to the Endpoint
Manager server and run an import. Check the import time and decide whether its duration is satisfactory.
For information about running imports, see "Good practices for running scans and imports."
If you are not satisfied with the import time, check the import log and try undertaking one of the
following actions:
v If the import of raw file system scan data or package data takes longer than one third of the ETL time
and the volume of the data is large (a few million entries), create a smaller scan group. For more
information, see "Dividing the infrastructure into scan groups."
v If the import of raw file system scan data or package data takes longer than one third of the ETL time
but the volume of the data is low, fine-tune the hardware. For information about processor and RAM
requirements as well as network latency and storage throughput, see "Planning and installing Software
Use Analysis."
v If processing of usage data takes an excessive amount of time and you are not interested in collecting
usage data, disable gathering of usage data. For more information, see "Disable collection of usage
data."
After you adjust the first scan group, run the software scan again, upload its results to the Endpoint
Manager server and run an import.
When you achieve an import time that is satisfactory, decide whether you want a shorter scan cycle.
For example, if your environment consists of 42 000 endpoints and you created seven scan groups of
6000 endpoints each, your scan cycle lasts seven days. To shorten the scan cycle, you can try increasing
the number of computers in a scan group, for example, to 7000, which shortens the scan cycle to six
days. After you increase the scan group size, observe the import time to ensure that performance
remains at an acceptable level.
When you are satisfied with the performance of the benchmark scan group, create the remaining groups.
Schedule scans so that they fit into your preferred scan cycle. Then, schedule imports of data from
Endpoint Manager and observe the import time. If it is not satisfactory, adjust the configuration as you
did for the benchmark scan group. When you achieve suitable performance, plan for end-of-cycle activities.
Use the following diagram to get an overview of the actions and decisions that are required to achieve
optimal performance of Software Use Analysis.
[Figure: Decision flow. Installation: plan and install Software Use Analysis. Configuration: create a scan group (up to 20 000 computers), initiate the scan and upload scan results, then run an import and check its time. If the import time is not satisfactory, fine-tune the hardware (if possible), create a smaller scan group, or disable gathering of usage data (if you do not need it), and run the import again. If the import time is satisfactory and you want a shorter scan cycle, increase the size of the scan group. Then create the remaining scan groups, schedule the scans to fit into the scan cycle, and schedule daily imports. If the import time is no longer satisfactory, repeat the same adjustments. When the import time is satisfactory, plan for end-of-cycle activities.]
Planning and installing Software Use Analysis
Your deployment architecture depends on the number of endpoints that you want to have in your audit
reports.
For information about the Endpoint Manager requirements, see Server requirements available in the
documentation.
Hardware requirements
If you already have the Endpoint Manager server in your environment, plan the infrastructure for the
Software Use Analysis server. The Software Use Analysis server stores its data in a dedicated database,
either DB2 or MS SQL Server.
The following tables are applicable for environments in which the scans are run weekly, imports are run
daily, and 60 applications are installed per endpoint (on average).
Table 1. Processor and RAM requirements for Software Use Analysis installed with Microsoft SQL Server

Small environment (up to 5 000 endpoints), 1 server:
  IBM Endpoint Manager, Software Use Analysis, and SQL Server: 2-3 GHz, 4 cores; 8 GB of memory

Medium environment (5 000 - 50 000 endpoints*), 2/3 servers**:
  IBM Endpoint Manager: 2-3 GHz, 4 cores; 16 GB of memory
  Software Use Analysis and SQL Server: 2-3 GHz, 4 cores; 12 - 24 GB of memory

Large environment (more than 50 000 endpoints*), 2/3 servers**:
  IBM Endpoint Manager: 2-3 GHz, 4 - 16 cores; 16 - 32 GB of memory
  Software Use Analysis and SQL Server: 2-3 GHz, 8 - 16 cores; 32 - 64 GB of memory
Table 2. Processor and RAM requirements for Software Use Analysis installed with DB2

Small environment (up to 5 000 endpoints), 1 server:
  IBM Endpoint Manager, Software Use Analysis, and DB2: 2-3 GHz, 4 cores; 8 GB of memory

Medium environment (5 000 - 50 000 endpoints*), 2/3 servers**:
  IBM Endpoint Manager: 2-3 GHz, 4 cores; 16 GB of memory
  Software Use Analysis and DB2: 2-3 GHz, 4 cores; 12 - 24 GB of memory

Large environment (more than 50 000 endpoints*), 2/3 servers**:
  IBM Endpoint Manager: 2-3 GHz, 4 - 16 cores; 16 - 32 GB of memory
  Software Use Analysis and DB2: 2-3 GHz, 8 - 16 cores; 32 - 64 GB of memory
* For environments with up to 35 000 endpoints, there is no requirement to create scan groups. If you have more than 35 000
endpoints in your infrastructure, you must create scan groups. For more information, see section “Dividing the infrastructure into
scan groups.”
** A distributed environment, where Software Use Analysis is separated from the database, is advisable.
Medium-size environments
You can use virtual environments for this deployment size, but it is advisable to have dedicated
resources for processor, memory, and virtual disk allocation. The virtual disk that is allocated for
the virtual machine should have dedicated RAID storage, with dedicated input-output bandwidth
for that virtual machine.
Large environments
For large deployments, use dedicated hardware. For optimum performance, use a database server
that is dedicated to Software Use Analysis and is not shared with Endpoint Manager or other
applications. Additionally, you might want to designate a separate disk that is attached to the
computer where the application database is installed to store the database transaction logs. You
might need to do some fine-tuning based on the provided recommendations.
Network connection and storage throughput
The Extract, Transform, Load (ETL) process extracts a large amount of scan data from the Endpoint
Manager server, processes it on the Software Use Analysis server, and saves it in the DB2® or MS SQL
database. The following two factors affect the time of the import to the Software Use Analysis server:
Gigabit network connection
Because of the nature of the ETL imports, you are advised to have at least a gigabit network
connection between the Endpoint Manager, Software Use Analysis, and database servers.
Disk storage throughput
For large deployments, you are advised to have dedicated storage, especially for the database
server. The minimum expected disk speed for writing data is approximately 400 MB/second.
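You can roughly verify the storage write throughput before you install the product. The following sketch shows one way to measure it on Linux with standard tools; the target path is an example only, and the test file should be written to the disk that will hold the database:

# Write 4 GB directly to disk, bypassing the page cache; dd reports the throughput when it finishes.
dd if=/dev/zero of=/db2data/throughput_test.tmp bs=1M count=4096 oflag=direct
rm /db2data/throughput_test.tmp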
Dividing the infrastructure into scan groups
It is critical for Software Use Analysis performance that you properly divide your environment into scan
groups and then accurately schedule scans in those scan groups. If the configuration is not well-balanced,
you might experience long import times.
For environments larger than 35 000 endpoints, divide your endpoints into separate scan groups. The
system administrator can then set a different scanning schedule for every scan group in your
environment.
Example
If you have 60 000 endpoints, you can create six scan groups (every group containing 10 000 endpoints).
The first scan group has the scanning schedule set to Monday, the second to Tuesday, and so on. Using
this configuration, every endpoint is scanned once a week. At the same time, the Endpoint Manager
server receives data only from 1/6 of your environment daily and for every daily import the Software
Use Analysis server needs to process data only from 10 000 endpoints (instead of 60 000 endpoints). This
environment configuration shortens the Software Use Analysis import time.
As an example, consider a scan schedule for an infrastructure that is divided into six scan groups. You
might achieve such a schedule after you implement the recommendations that are contained in this guide.
The assumption is that both software scans and imports of scan data to Software Use Analysis are
scheduled to take place at night, while uploads of scan data from the endpoints to the Endpoint Manager
server occur during the day.
If you have a powerful server computer and a longer import time is acceptable, you can create fewer scan
groups with a greater number of endpoints in the Endpoint Manager console. Remember to monitor the
import log to analyze the amount of data that is processed and the time it takes to process it.
For information about how to create scan groups, see the topic Computer groups in the Endpoint
Manager documentation.
Good practices for running scans and imports
After you enable the Software Use Analysis site in your Endpoint Manager console, carefully plan the
scanning activities and their schedule for your deployment.
Plan the scanning schedule
After you find the optimal size of the scan group, set the scanning schedule. It is the frequency of
software scan on an endpoint. The most common scanning schedule is weekly so that every endpoint is
scanned once a week. If your environment has more than 100 000 endpoints, consider performing scans
less frequently, for example monthly.
Avoid scanning when it is not needed
The frequency of scans depends both on how often software products change on the endpoints in your
environment and also on your reporting needs. If you have systems in your environment that have
dynamically-changing software, you can group such systems into a scan group (or groups) and set more
frequent scans, for example once a week. The remaining scan groups that contain computers with a more
stable set of software can be scanned less frequently, for example once a month.
Limit the number of computer properties that are to be gathered during scans
By default, the Software Use Analysis server includes four primary computer properties from the
Endpoint Manager server that is configured as the data source: computer name, DNS name, IP address,
and operating system. Imports can be substantially longer if you specify more properties to be extracted
from the Endpoint Manager database and copied into the Software Use Analysis database during each
import. As a good practice, limit the number of computer properties to 10 (or fewer).
Limit the number of Software Use Analysis computer groups
Create as few computer groups as possible. The data import phase (ETL) gets longer as the number of
computer groups grows.
Ensure that scans and imports are scheduled to run at night
Some actions in the Software Use Analysis user interface cannot be processed when an import is running.
Thus, try to schedule imports for times when the application administrator and the Software Asset
Manager are not using Software Use Analysis, or after they have finished their daily work.
Run the initial import
It is a good practice to run the first (initial) import before you schedule any software scans and activate
any analyses.
Examples of when imports can be run:
v The first import uploads the software catalog from the installation directory to the application and
extracts the basic data about the endpoints from the Endpoint Manager server.
v The second import can be run after the scan data from the first scan group is available in the Endpoint
Manager server.
v The third import should be started after the scans from the second scan group are finished, and so on.
Review import logs
Review the following INFO messages in the import log to check how much data was transferred during
an ETL.
Information about infrastructure:
  Computer items - The total number of computers in your environment. A computer is a system with
  an Endpoint Manager agent that provides data to Software Use Analysis.

Information about software and hardware:
  SAM::ScanFile items - The number of files that contain input data for the following items:
    v File system scan information (SAM::FileFact items)
    v Catalog-based scan information (SAM::CitFact items)
    v Software identification tag scan information (SAM::IsotagFact items)
  SAM::FileFact items - The total count of information pieces about files from all computers in
  your environment (contained in the processed scan files).
  SAM::CitFact items - The total count of information pieces from catalog-based scans (contained
  in the processed scan files).
  SAM::IsotagFact items - The total count of information pieces from software identification tag
  scans (contained in the processed scan files).

Information about installed packages:
  SAM::PackageFact items - The total count of information pieces about Windows packages that were
  gathered by the package data scan.
  SAM::UnixPackageFact items - The total count of information pieces about UNIX packages that were
  gathered by the package data scan.

Information about software usage:
  SAM::AppUsagePropertyValue items - The total number of processes that were captured during scans
  on the systems in your infrastructure.
Example:
INFO: Computer items: 15000
INFO: SAM::AppUsagePropertyValue items: 4250
INFO: SAM::ScanFile items: 30000
INFO: SAM::FileFact items: 15735838
INFO: SAM::IsotagFact items: 0
INFO: SAM::CitFact items: 149496
INFO: SAM::PackageFact items: 406687
INFO: SAM::UnixPackageFact items: 1922564
Maintain frequent imports
After the installation, imports are scheduled to run once a day. Do not change this configuration.
However, you might want to change the hour when the import starts. If your import takes longer than 24
hours, you can:
v Improve the scan groups configuration.
v Preserve the current daily import configuration because Software Use Analysis handles overlapping
imports gracefully. If an import is running, no other import is started.
Disable collection of usage data
Software usage data is gathered by the Application Usage Statistics analysis. If the analysis is activated,
usage data is gathered from all endpoints in your infrastructure. However, the data is uploaded to the
Endpoint Manager server only for the endpoints on which you run software scans. For the remaining
endpoints, the data is stored on the endpoint until you run the software scan.
About this task
If you do not need usage data or the deployment phase is not finished, do not activate the analysis. It
can be activated later on, if needed. If the analysis is already activated, but you decide that processing of
usage data takes too much time or you are not interested in usage statistics, disable the analysis.
Procedure
1. Log in to the Endpoint Manager console.
2. In the navigation tree, open IBM Endpoint Manager for Software Use Analysis v9 > Analyses.
3. In the upper-right pane, right-click Application Usage Statistics, and click Deactivate.
Make room for end-of-scan-cycle activities
Plan to have a data export to other integrated solutions (for example, SmartCloud Control Desk through
IBM Tivoli® Integration Composer) at the end of a 1- or 2-week cycle.
Configuring the application and its database for medium and large environments
To avoid performance issues in medium and large environments, configure the location of the transaction
log and adjust the log size. If you are using MS SQL Server as the Software Use Analysis database, you
might want to shrink the transaction log or update query optimization statistics.
Software Use Analysis server:
  v "Increasing Java heap size"

DB2:
  v "Configuring the transaction logs size for DB2"
  v "Configuring the transaction log location for DB2"
  v "Configuring swappiness in Linux hosting DB2 database server"
  v "Configuring the DB2_COMPATIBILITY_VECTOR variable for improved UI performance"

Microsoft SQL Server:
  v "Configuring and maintaining MS SQL database"
  v "Optimizing the tempdb database in Microsoft SQL Server"
  Note: In Microsoft SQL Server, the transaction log is increased automatically; no further action
  is required.
Increasing Java heap size
The default settings for the Java heap size might not be sufficient for medium and large environments. If
your environment consists of more than 5000 endpoints, increase the memory available to Java client
processes by increasing the Java heap size.
Procedure
1. Go to the <INSTALL_DIR>/wlp/usr/servers/server1/ directory and edit the jvm.options file.
2. Set the maximum Java heap size (Xmx) to one of the following values, depending on the size of your
environment:
v For medium environments (5000 - 50 000 endpoints), set the heap size to 6144m.
v For large environments (over 50 000 endpoints), set the heap size to 8192m.
3. Restart the Software Use Analysis server.
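For reference, a minimal jvm.options entry for a medium environment might look as follows (assuming the file contains no other -Xmx entry; if one exists, change it instead of adding a second one):

-Xmx6144m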
Configuring the transaction logs size for DB2
For medium and large environments, increase the transaction logs size to improve performance. In
Microsoft SQL Server, the transaction log is increased automatically - no further action is required.
About this task
The transaction logs size can be configured through the LOGFILSIZ DB2 parameter that defines the size of
a single log file. To calculate the value that can be used for this parameter, you must first calculate the
total disk space that is required for transaction logs in your specific environment and then divide it, thus
obtaining the size of one transaction log. The required amount of disk space depends on the number of
endpoints in your environment and the number of endpoints for which new scan results are available
and processed during the data import.
Important: Use the provided formula to calculate the size of transaction logs that are generated during
the import of data. More space might be required for transaction logs that are generated when you
remove the data source.
Procedure
1. Use the following formula to calculate the disk space for your transaction logs:
<the number of endpoints> x 1 MB + <the number of endpoints for which new scan results are imported> x 1 MB + 1 GB
2. Divide the result by 0.00054 to obtain the size of a single transaction log file.
3. Run the following command to update the transaction log size in your database. Substitute value with
the size of a single transaction log.
UPDATE DATABASE CONFIGURATION FOR SUADB USING LOGFILSIZ value
4. For the changes to take effect, restart the database. Run the following commands:
DEACTIVATE DB SUADB
DB2STOP
DB2START
ACTIVATE DB SUADB
5. Restart the Software Use Analysis server.
a. To stop the server, run the following command:
/etc/init.d/SUAserver stop
b. To start the server, run the following command:
/etc/init.d/SUAserver start
Example
Calculating the single transaction log size for 100 000 endpoints and 15 000 scan results:
100 000 x 1 MB + 15 000 x 1 MB + 1 GB = 114 GB
114 / 0.00054 = 211111
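The same calculation can be scripted. The following sketch computes an approximate LOGFILSIZ value from the two endpoint counts (the counts are illustrative, the result differs slightly from the worked example above because of integer rounding, and bc must be available on the system):

# Endpoints in the environment and endpoints with new scan results per import.
endpoints=100000
scanned=15000
# Total disk space for transaction logs, in GB: endpoints x 1 MB + scanned x 1 MB + 1 GB.
total_gb=$(echo "($endpoints + $scanned) / 1024 + 1" | bc)
# Size of a single transaction log file, as expected by the LOGFILSIZ parameter.
logfilsiz=$(echo "$total_gb / 0.00054" | bc)
db2 "UPDATE DATABASE CONFIGURATION FOR SUADB USING LOGFILSIZ $logfilsiz"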
Configuring the transaction log location for DB2
To increase database performance, move the DB2 transaction log to a file system that is separate from the
DB2 file system.
About this task
Medium environments:
Strongly advised
Large environments:
Required
Procedure
1. To move the DB2 transaction log to a file system that is separate from the DB2 file system, update the
DB2 NEWLOGPATH parameter for your Software Use Analysis database:
UPDATE DATABASE CONFIGURATION FOR SUADB USING NEWLOGPATH value
Where value is a directory on a separate disk (different from the disk where the DB2 database is
installed) where you want to keep the transaction logs. This configuration is strongly advised.
2. For the changes to take effect, restart the database. Run the following commands:
DEACTIVATE DB SUADB
DB2STOP
DB2START
ACTIVATE DB SUADB
3. Restart the Software Use Analysis server.
a. To stop the server, run the following command:
/etc/init.d/SUAserver stop
b. To start the server, run the following command:
/etc/init.d/SUAserver start
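For example, if /db2logs is a directory on a separate physical disk (the path is illustrative):

UPDATE DATABASE CONFIGURATION FOR SUADB USING NEWLOGPATH /db2logs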
Configuring swappiness in Linux hosting DB2 database server
Swappiness determines how quickly processes are moved from RAM to hard disk to free memory. It can
assume values from 0 to 100. A low value means that your Linux system swaps out processes rarely, while
a high value means that processes are written to disk immediately. Swapping out runtime processes should
be avoided on the DB2 server on Linux, so it is advisable to set the swappiness kernel parameter to a low
value or zero.
Procedure
1. Log in to the Linux system as root.
2. Set the swappiness parameter to a low value or 0.
v To change the value permanently:
a. Open the /etc/sysctl.conf file in a text editor and enter the vm.swappiness parameter of your
choice. Example:
vm.swappiness = 0
b. Restart the operating system to load the changes.
v To change the value while the operating system is running, run the following command:
sysctl -w vm.swappiness=0
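To verify the current value before and after the change, you can query the kernel parameter:

sysctl vm.swappiness
cat /proc/sys/vm/swappiness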
Configuring the DB2_COMPATIBILITY_VECTOR variable for improved UI performance
For environments with 5000 or more clients in your infrastructure, it is advisable to set the value of the
DB2_COMPATIBILITY_VECTOR registry variable to MYS. This change might result in significantly faster
UI response times on some Software Use Analysis installations.
Procedure
For information about how to modify this registry variable, see DB2_COMPATIBILITY_VECTOR registry
variable in IBM Knowledge Center.
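For example, the variable can be set with the standard db2set command as the instance owner; the instance must be restarted for the change to take effect:

db2set DB2_COMPATIBILITY_VECTOR=MYS
db2stop
db2start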
Configuring and maintaining MS SQL database
To avoid performance issues in large environments, properly configure and maintain the MS SQL
database. Review the following recommendations about shrinking the transaction log, configuring its
location, and updating query optimization statistics.
Shrinking the transaction log
Shrink the transaction log once a month. If the environment is large, it is advisable to shrink the log after
every data import. For more information, see How to: Shrink a File (SQL Server Management Studio).
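For example, the log file can be shrunk with the DBCC SHRINKFILE statement (a sketch; TEMADB_log is a placeholder for the logical name of your transaction log file, which you can look up with sp_helpfile, and 1024 is an example target size in MB):

USE TEMADB;
GO
DBCC SHRINKFILE (TEMADB_log, 1024);
GO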
Configuring the location of the database transaction log
For information how to configure the database transaction log, see Move the Database Transaction Log to
Another Drive.
Updating query optimization statistics
You must update query optimization statistics before every data import, in both small and large
environments. For more information, see sp_updatestats (Transact-SQL).
Example query:
USE TEMADB;
GO
EXEC sp_updatestats;
Optimizing the tempdb database in Microsoft SQL Server
tempdb is a system database in SQL Server whose main functions are to store temporary tables, cursors,
stored procedures, and other internal objects that are created by the database engine.
By default, the database size is set to 8 MB and it can grow by 10% automatically. In large environments,
its size can be as large as 15 GB. It is therefore important to optimize the tempdb database because the
location and size of this database can negatively affect the performance of the Software Use Analysis
server.
For information about how to set the database size and how to determine the optimal number of files,
see the TechNet article Optimizing tempdb Performance.
Backing up and restoring the database
Perform regular backups of the data that is stored in the database. It is advisable to back up the database
before updating the software catalog or upgrading the server to facilitate recovery in case of failure.
Backing up the DB2 database
You can save your database to a backup file.
Procedure
1. Stop the Software Use Analysis server.
2. Check which applications connect to the database and then close all active connections:
a. List all applications that connect to the database:
db2 list applications for database SUADB
b. Each connection has a handle number. Copy it and use it in the following command to close the
connection:
db2 force application "( <handle_number> )"
3. Optional: If you activated the database before, deactivate it:
db2 deactivate db SUADB
4. Back up the database to a specified directory:
db2 backup database SUADB to <PATH>
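For example, a complete backup sequence might look as follows (the handle number and the backup path are illustrative):

db2 list applications for database SUADB
db2 force application "( 1234 )"
db2 deactivate db SUADB
db2 backup database SUADB to /backup/sua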
Backing up the SQL Server database
You can make a copy of your database by saving it to a backup file. If you want, you can then move the
backup to another computer and restore it in a different Software Use Analysis instance.
Before you begin
v You can back up and restore the database only within the same version of Software Use Analysis.
v Software Use Analysis and Microsoft SQL Server Management Studio must be installed.
v Stop the SUAserver service. Open the command prompt and run net stop SUAserver.
Procedure
1. Log in to the computer that hosts the database that you want to back up.
2. Open Microsoft SQL Server Management Studio.
3. In the left navigation bar, expand Databases.
4. Right-click on the database that you want to back up and then click Tasks > Back Up.
5. Review the details of the backup and then click Add to specify the target location of the copy.
6. Click OK.
Results
If the database was backed up successfully, you can find the .bak file in the location that you specified in
step 5.
What to do next
If you want to move the database to a different Software Use Analysis instance, copy the backup file to
the target computer and then restore the database.
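If you prefer the command line over Management Studio, an equivalent backup can be taken with a T-SQL statement (a sketch; the database name and target path are examples):

BACKUP DATABASE TEMADB TO DISK = 'C:\backup\TEMADB.bak';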
Restoring the DB2 database
You can restore a damaged or corrupted database from a backup file.
Procedure
1. Stop the Software Use Analysis server.
2. Check which applications connect to the database and then close all active connections:
a. List all applications that connect to the database:
db2 list applications for database SUADB
b. Each connection has a handle number. Copy it and use it in the following command to close the
connection:
db2 force application "( <handle_number> )"
3. Optional: If you activated the database before, deactivate it:
db2 deactivate db SUADB
4. Restore the database from a backup file:
db2 restore db SUADB from <PATH> taken at <TIMESTAMP> REPLACE EXISTING
Example:
db2 restore db SUADB from /home/db2inst1/ taken at 20131105055846 REPLACE EXISTING
Restoring the SQL Server database
If you encounter any problems with your database or if you want to move it between different instances
of Software Use Analysis, you can use a backup file to restore the database.
Before you begin
v You can back up and restore the database only within one version of Software Use Analysis.
v Software Use Analysis and Microsoft SQL Server Management Studio must be installed.
v Stop the SUAserver service. Open the command prompt and run net stop SUAserver.
Procedure
1. Log in to the computer on which you want to restore the database.
2. Open Microsoft SQL Server Management Studio.
3. In the left navigation bar, right-click on Databases and then click Restore Database.
4. In the Source section, select Device and browse for your backup file.
5. In the Destination section, choose a new name for the database.
Important: When restoring the database, its name must differ from the name of the already existing
database. You can rename your database later.
6. Click OK.
7. After the database is restored, you must change the database name to the one that you used when
configuring Software Use Analysis. In the left navigation bar, right-click on your database, and then
click Rename.
8. Go to installation_directory\wlp\usr\servers\server1\config and edit the database.yml file.
Update the content of the file so that it corresponds with the restored database.
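The restore can also be performed with T-SQL (a sketch; the names and paths are examples, the logical file names can be listed with RESTORE FILELISTONLY, and WITH MOVE is needed because the files of the original database are still in place):

RESTORE DATABASE TEMADB_restored
FROM DISK = 'C:\backup\TEMADB.bak'
WITH MOVE 'TEMADB' TO 'C:\data\TEMADB_restored.mdf',
MOVE 'TEMADB_log' TO 'C:\data\TEMADB_restored_log.ldf';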
Preventive actions
Turn off scans if the Software Use Analysis server is to be unavailable for a few days due to routine
maintenance or scheduled backups.
If imports of data from Endpoint Manager to Software Use Analysis are not running, unprocessed
scan data accumulates on the Endpoint Manager server. After you turn on the Software Use Analysis
server, a large amount of data must be processed, leading to a long import time. To avoid prolonged
imports, turn off scans for the period when the Software Use Analysis server is not running.
Limiting the number of scanned signature extensions
The scanner scans the entire infrastructure for files with particular extensions. For some extensions, the
discovered files are matched against the software catalog before the scan results are uploaded to the
Endpoint Manager server. It ensures that only information about files that produce matches is uploaded.
For other extensions, the scan results are not matched against the software catalog on the side of the
endpoint and are all uploaded to the Endpoint Manager server. Thus, you avoid rescanning the entire
infrastructure when you import a new catalog or add a custom signature, because the new catalog is
matched against the information that is already available on the server. However, this behavior might
cause large amounts of information about files that do not produce matches to be uploaded to the server,
which might in turn lead to performance issues during the import.
To reduce the amount of information that is uploaded to the server, limit the list of file extensions that
are not matched against the software catalog on the side of the endpoint.
Procedure
1. Stop the Software Use Analysis server.
v Linux: Run the following command: /etc/init.d/SUAserver stop.
v Windows:
a. Click Start > Administrative Tools > Services.
b. Right-click the IBM Endpoint Manager for Software Use Analysis 9.2.0.0 service, and then click
Stop.
2. To limit the number of extensions that are not matched against the software catalog on the side of the
endpoint, remove the extensions that you want the scanner to omit from the following files:
v file_names_all.txt
v file_names_unix.txt
v file_names_windows.txt
The files are in the following directory:
v Linux: SUA_install_dir/wlp/usr/servers/server1/apps/tema.war/WEB-INF/domains/sam/config
v Windows: SUA_install_dir\wlp\usr\servers\server1\apps\tema.war\WEB-INF\domains\sam\config
Note: Do not remove file extensions that you used to create custom signatures. They are likely to
produce matches with the software catalog, so they can be uploaded to the Endpoint Manager server.
3. Start the Software Use Analysis server.
v Linux: Run the following command: /etc/init.d/SUAserver start.
v Windows:
a. Click Start > Administrative Tools > Services.
b. Right-click the IBM Endpoint Manager for Software Use Analysis 9.2.0.0 service, and then click
Start.
4. Run an import. During this import, performance might be lower because the software catalog is
imported.
Important: After the import, some software items might not be visible on the reports. This is
expected behavior. Complete the remaining steps for the software inventory to be properly reported.
5. Wait for the scheduled software scan. Alternatively, if your software scans are infrequent, stop the
current scan and start a new one so that the optimized list of file extensions takes effect sooner.
a. Log in to the Endpoint Manager console and in the left navigation tree, click Actions.
b. In the upper-right pane, click Initiate Software Scan and then click Stop.
c. Initiate a new software scan.
6. Wait for the scheduled import or run it manually. From now on, the optimized list of file extensions is
used.
Recovering from accumulated scans
To recover from a situation when you have a large amount of accumulated scan data, you must apply a
recovery procedure on the Software Use Analysis server.
About this task
It is important to take a recovery action in the following two cases:
Software Use Analysis has been reinstalled and scans were run in the past.
With each reinstallation of Software Use Analysis, a new instance of Software Use Analysis
database is created. A fresh installation triggers the import of all historically collected scan files.
Coexistence of Software Use Analysis 9.x with Software Use Analysis V2.2, V1.3, License Metric Tool
V7.x or Tivoli Asset Discovery for Distributed V7.x
Different versions of License Metric Tool 9.x and Software Use Analysis 9.x can coexist with each
other and the earlier versions, including Tivoli Asset Discovery for Distributed. For each
additionally installed or reinstalled instance of the License Metric Tool 9.x or Software Use
Analysis 9.x server, a new database is created, which triggers the import of all historically
collected scan files.
For more information about coexistence scenarios, see the Software Use Analysis wiki page.
Procedure
1. Stop the Software Use Analysis server.
v Linux: Run the following command: /etc/init.d/SUAserver stop.
v Windows:
a. Click Start > Administrative Tools > Services.
b. Right-click the IBM Endpoint Manager for Software Use Analysis 9.2.0.0 service, and then click
Stop.
2. Modify the contents of line 37 in raw_datasource_file.rb by adding the string "0=1 and" after the
where statement. Example (the change is in line 37):
line 36: u.BaseDirectory = ua.BaseDirectory
line 37: where 0=1 and
line 38: u.BaseDirectory = 1
The raw_datasource_file.rb file is in the following directory:
v Linux: SUA_install_dir/wlp/usr/servers/server1/apps/tema.war/WEB-INF/app/models
v Windows: SUA_install_dir\wlp\usr\servers\server1\apps\tema.war\WEB-INF\app\models
3. Start the Software Use Analysis server.
v Linux: Run the following command: /etc/init.d/SUAserver start.
v Windows:
a. Click Start > Administrative Tools > Services.
b. Right-click the IBM Endpoint Manager for Software Use Analysis 9.2.0.0 service, and then click
Start.
4. Run a data import (ETL) to Software Use Analysis.
5. Stop the Software Use Analysis server.
6. Undo the change in line 37 of raw_datasource_file.rb by removing the string "0=1 and" after the
where statement. Example:
line 36: u.BaseDirectory = ua.BaseDirectory
line 37: where
line 38: u.BaseDirectory = 1
7. Start the Software Use Analysis server.
8. Schedule incremental imports to Software Use Analysis.
9. Distribute scans across the days of the week.
What to do next
Complete the following steps:
1. Rescan computers.
2. Upload new scan files.
3. Run an import.
Repeat the procedure for every 10 000 computers.
Tip: Avoid importing more than 10 000 - 20 000 scans within one Software Use Analysis import. Such
an import takes 10 - 15 hours even on systems that meet the Software Use Analysis hardware
requirements.
IBM PVU considerations
If you need to generate PVU reports for IBM compliance purposes, the best practice is to generate the
report at least monthly.
For organizations that span continents, the License Metric Tool regions must be applied from the IBM
compliance perspective, which requires separate Software Use Analysis deployments. For more
information, see Virtualization Capacity License Counting Rules.
Web user interface considerations
Data import is a computation-intensive task, so be prepared to experience slower user interface response
times while an import is running. Thus, it is better to schedule imports to take place at times when you
are not likely to use the application web UI.
REST API considerations
You can use Software Use Analysis REST API to retrieve large amounts of data that is related to
computer systems, software instances, and license usage in your environment. Such information can then
be passed to other applications for further processing and analysis.
Although using single API requests to retrieve data only from a selected subset of computers does not
greatly impact the performance of Software Use Analysis, this is not true when retrieving data in bulk for
all your computer systems at the same time. Such an action requires the processing of large amounts of
data and it always influences the application performance.
In general, do not use API requests together with other performance-intensive tasks, like
software scans or data imports. Each user who is logged in to the application, as well as each action
that is performed in the web user interface during the REST API calls, also decreases performance.
Important: Each time you want to retrieve data through REST API, ensure that the use of Software Use
Analysis is at a moderate level, so that the extra workload resulting from REST API does not overload the
application and create performance problems.
When you retrieve data in bulk, you can also make several API requests and use the limit and offset
parameters to paginate your results instead of retrieving all the data at the same time:
v Use the limit parameter to specify the number of retrieved results:
https://hostname:port/api/sam/computer_systems?token=token&limit=100000
v If you limit the first request to 100 000 results, append the next request with the offset=100000
parameter to omit the records that you already retrieved:
https://hostname:port/api/sam/computer_systems?token=token&limit=100000&offset=100000
Note: The limit and offset parameters can be omitted if you are retrieving data from up to about 50
endpoints. For environments with approximately 200 000 endpoints, you are advised to retrieve data in
pages of 100 000 rows for computer systems, 200 000 rows for software instances, and 300 000 rows for
license usage.
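For example, bulk retrieval of computer systems can be split into paged requests with curl (a sketch; hostname, port, and the token value are placeholders, and -k skips certificate verification for self-signed certificates):

curl -k "https://hostname:port/api/sam/computer_systems?token=token&limit=100000&offset=0"
curl -k "https://hostname:port/api/sam/computer_systems?token=token&limit=100000&offset=100000"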
Using relays to increase the performance of IBM Endpoint Manager
To take advantage of the speed and scalability that is offered by IBM Endpoint Manager, it is often
necessary to tune the settings of the Endpoint Manager deployment.
A relay is a client that is enhanced with a relay service. It performs all client actions to protect the host
computer, and in addition, delivers content and software downloads to child clients and relays. Instead of
requiring every networked computer to directly access the server, relays can be used to offload much of
the burden. Hundreds of clients can point to a relay for downloads, which in turn makes only a single
request to the server. Relays can connect to other relays as well, further increasing efficiency.
Reducing the Endpoint Manager server load
For all but the smallest Endpoint Manager deployments (< 500 Endpoint Manager clients), a primary
Endpoint Manager relay should be set for each Endpoint Manager client even if they are not in a remote
location.
The reason for this is that the Endpoint Manager server performs many tasks including:
v Gathering new Fixlet content from the Endpoint Manager server
v Distributing new Fixlet content to the clients
v Accepting and processing reports from the Endpoint Manager clients
v Providing data for the Endpoint Manager consoles
v Sending downloaded files (which can be large) to the Endpoint Manager client, and much more.
By using Endpoint Manager relays, the burden of communicating directly with every client is effectively
moved to a different computer (the Endpoint Manager relay computer), which frees the Endpoint
Manager server to do other tasks. If the relays are not used, you might observe that performance
degrades significantly when an action with a download is sent to the Endpoint Manager server.
Setting up Endpoint Manager relays in appropriate places and correctly configuring clients to use them is
the change with the highest impact on performance. To configure a relay, you can:
v Allow the clients to auto-select their closest Endpoint Manager relay.
v Manually configure the Endpoint Manager clients to use a specific relay.
For more information, see Managing relays.
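For reference, relay selection is controlled through client settings. The following names are, to the best of our knowledge, the standard Endpoint Manager client settings; verify them against the Managing relays documentation, and treat the relay URL as an example:

Automatic relay selection:
__RelaySelect_Automatic = 1

Manual relay assignment:
__RelaySelect_Automatic = 0
__RelayServer1 = http://relay1.example.com:52311/bfmirror/downloads/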
Appendix. Executive summary
Table 3. Summary of the scalability best practices
Step 1. Environment planning
Review the summary information that matches your environment size:
Small
v Up to 5 000 endpoints
v Software Use Analysis and DB2 or MS SQL Server installed on the same server
v Scan groups (optional)
Medium
v 5 000 - 50 000 endpoints
v Scan groups (advisable)
v Software Use Analysis and DB2 or MS SQL Server installed on separate computers
v It is possible to use virtual environments for this deployment size; however, it is advisable to have dedicated
resources for processor, memory, and virtual disk allocation.
Large
v 50 000 - 250 000 endpoints
v Scan groups (required)
v Software Use Analysis and DB2 or MS SQL Server installed on separate computers, dedicated storage for the
database software
v Fine-tuning might be required
Step 2. Good practices for creating scan groups
v Plan the scan group size
v Create a benchmark scan group
v Check the import time and decide whether it is satisfactory
v When you achieve an import time that is satisfactory, decide whether you want to have a shorter scan cycle.
v When you are satisfied with the performance of the benchmark scan group, create the remaining groups.
Step 3. Good practices for running scans, imports, and uploads
v Run the initial import before scanning the environment
v Plan the scanning schedule
v Avoid scanning when it is not needed
v Limit the number of computer properties to the ones that are relevant for software inventory management
v Limit the number of computer groups
v Ensure that scans and imports are scheduled to run at night
v Disable gathering of usage data in the initial rollout phase
v Carefully plan for gathering of usage data in large environments. Testing is required.
v Configure regular imports. Daily imports are advisable.
v Review import logs
v Configure the upload schedule (daily)
Step 4. End-of-scan-cycle activities
v Regularly import a new software catalog, for example monthly
v Periodically export data to SmartCloud Control Desk through IBM Tivoli® Integration Composer, for example at the end
of the 1- or 2-week cycle
v Generate audit snapshot
Step 5. Maintenance and tuning
v Perform regular backups of the data that is stored in the database
v Configure the transaction logs size for DB2 (Linux)
v Configure the transaction log location for DB2 (Linux)
v Shrink the transaction log for MS SQL Server (Windows)
v Update query optimization statistics in MS SQL Server (Windows)
v Configure the location of the database transaction log for MS SQL Server (Windows)
v Optimize the tempdb database in MS SQL Server (Windows)
v Limit the list of file extensions that are not matched against the software catalog on the side of the endpoint
v Set up Endpoint Manager relays and configure the clients to increase the performance of the Endpoint Manager server
Notices
This information was developed for products and services that are offered in the USA.
IBM may not offer the products, services, or features discussed in this document in other countries.
Consult your local IBM representative for information on the products and services currently available in
your area. Any reference to an IBM product, program, or service is not intended to state or imply that
only that IBM product, program, or service may be used. Any functionally equivalent product, program,
or service that does not infringe any IBM intellectual property right may be used instead. However, it is
the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or
service.
IBM may have patents or pending patent applications covering subject matter described in this
document. The furnishing of this document does not grant you any license to these patents. You can send
license inquiries, in writing, to:
IBM Director of Licensing
IBM Corporation
North Castle Drive, MD-NC119
Armonk, NY 10504-1785
United States of America
For license inquiries regarding double-byte character set (DBCS) information, contact the IBM Intellectual
Property Department in your country or send inquiries, in writing, to:
Intellectual Property Licensing
Legal and Intellectual Property Law
IBM Japan Ltd.
19-21, Nihonbashi-Hakozakicho, Chuo-ku
Tokyo 103-8510, Japan
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some
states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this
statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically
made to the information herein; these changes will be incorporated in new editions of the publication.
IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this
publication at any time without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in
any manner serve as an endorsement of those websites. The materials at those websites are not part of
the materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.
Licensees of this program who wish to have information about it for the purpose of enabling: (i) the
exchange of information between independently created programs and other programs (including this
one) and (ii) the mutual use of the information which has been exchanged, should contact:
IBM Corporation
2Z4A/101
11400 Burnet Road
Austin, TX 78758 U.S.A.
Such information may be available, subject to appropriate terms and conditions, including in some cases,
payment of a fee.
The licensed program described in this document and all licensed material available for it are provided
by IBM under terms of the IBM Customer Agreement, IBM International Program License Agreement or
any equivalent agreement between us.
Any performance data contained herein was determined in a controlled environment. Therefore, the
results obtained in other operating environments may vary significantly. Some measurements may have
been made on development-level systems and there is no guarantee that these measurements will be the
same on generally available systems. Furthermore, some measurements may have been estimated through
extrapolation. Actual results may vary. Users of this document should verify the applicable data for their
specific environment.
Information concerning non-IBM products was obtained from the suppliers of those products, their
published announcements or other publicly available sources. IBM has not tested those products and
cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM
products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of
those products.
All statements regarding IBM's future direction or intent are subject to change or withdrawal without
notice, and represent goals and objectives only.
All IBM prices shown are IBM's suggested retail prices, are current and are subject to change without
notice. Dealer prices may vary.
This information is for planning purposes only. The information herein is subject to change before the
products described become available.
This information contains examples of data and reports used in daily business operations. To illustrate
them as completely as possible, the examples include the names of individuals, companies, brands, and
products. All of these names are fictitious and any similarity to the names and addresses used by an
actual business enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs
in any form without payment to IBM, for the purposes of developing, using, marketing or distributing
application programs conforming to the application programming interface for the operating platform for
which the sample programs are written. These examples have not been thoroughly tested under all
conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these
programs. The sample programs are provided "AS IS", without warranty of any kind. IBM shall not be
liable for any damages arising out of your use of the sample programs.
Each copy or any portion of these sample programs or any derivative work, must include a copyright
notice as follows:
Portions of this code are derived from IBM Corp. Sample Programs.
© Copyright IBM Corp. _enter the year or years_. All rights reserved.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business
Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be
trademarks of IBM or other companies. A current list of IBM trademarks is available on the web at
www.ibm.com/legal/copytrade.shtml.
Adobe, Acrobat, PostScript and all Adobe-based trademarks are either registered trademarks or
trademarks of Adobe Systems Incorporated in the United States, other countries, or both.
IT Infrastructure Library is a registered trademark of the Central Computer and Telecommunications
Agency which is now part of the Office of Government Commerce.
Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon,
Intel SpeedStep, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or its
subsidiaries in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the
United States, other countries, or both.
ITIL is a registered trademark, and a registered community trademark of The Minister for the Cabinet
Office, and is registered in the U.S. Patent and Trademark Office.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Java™ and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or
its affiliates.
Cell Broadband Engine is a trademark of Sony Computer Entertainment, Inc. in the United States, other
countries, or both and is used under license therefrom.
Linear Tape-Open, LTO, the LTO Logo, Ultrium, and the Ultrium logo are trademarks of HP, IBM Corp.
and Quantum in the U.S. and other countries.
Terms and conditions for product documentation
Permissions for the use of these publications are granted subject to the following terms and conditions.
Applicability
These terms and conditions are in addition to any terms of use for the IBM website.
Personal use
You may reproduce these publications for your personal, noncommercial use provided that all
proprietary notices are preserved. You may not distribute, display or make derivative work of these
publications, or any portion thereof, without the express consent of IBM.
Commercial use
You may reproduce, distribute and display these publications solely within your enterprise provided that
all proprietary notices are preserved. You may not make derivative works of these publications, or
reproduce, distribute or display these publications or any portion thereof outside your enterprise, without
the express consent of IBM.
Rights
Except as expressly granted in this permission, no other permissions, licenses or rights are granted, either
express or implied, to the publications or any information, data, software or other intellectual property
contained therein.
IBM reserves the right to withdraw the permissions granted herein whenever, in its discretion, the use of
the publications is detrimental to its interest or, as determined by IBM, the above instructions are not
being properly followed.
You may not download, export or re-export this information except in full compliance with all applicable
laws and regulations, including all United States export laws and regulations.
IBM MAKES NO GUARANTEE ABOUT THE CONTENT OF THESE PUBLICATIONS. THE
PUBLICATIONS ARE PROVIDED "AS-IS" AND WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO IMPLIED WARRANTIES OF
MERCHANTABILITY, NON-INFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE.