Date: 2005/04/19 
Revision: 1.92 

NAME

      HP-UX WLM overview


DESCRIPTION

      HP-UX Workload Manager (WLM) is an automatic resource management tool
      used for goal-based workload management. A workload is a partition or
      a group of processes that are treated as a single unit for the
      purposes of resource management. For example, a virtual partition
      could be a single workload; also, a database application that consists
      of multiple cooperating processes could be considered a workload.

      HP-UX WLM provides automatic resource allocation and application
      performance management through the use of prioritized service-level
      objectives (SLOs). Multiple prioritized workloads can be managed on a
      single server, both within and across partitions, based on their
      reported performance levels.

      WLM manages workloads as defined in a configuration file. You assign
      applications and users to workload groups. WLM then manages each
      workload group's CPU, real memory, and disk bandwidth resources
      according to its SLOs in the current configuration. WLM automatically
      allocates CPU resources in order to achieve the desired SLO. WLM can
      also manage real memory, although not in response to SLO performance.
      Disk bandwidth resources can be statically assigned in the
      configuration file.

      When a workload group has no active SLOs, WLM reduces its resource
      shares. (You control when SLOs are active through the WLM
      configuration file.) For more information on these reductions, see the
      discussion of the transient_groups keyword in the wlmconf(4) man page.

      WLM automates many of the features of PRM (Process Resource Manager),
      processor sets (PSETs), HP-UX Virtual Partitions, and nPartitions.

      WLM can manage the following resources for your workload groups:

      CPU
	   Arbitrates CPU requests to ensure high-priority SLOs meet their
	   objectives. SLOs make CPU requests for workload groups. CPU is
	   allocated in shares, where a CPU share is 1/100 of the total CPU
	   on the system or 1/100 of a single CPU, depending on WLM's mode of
	   operation. You can allocate CPU in:
	   + Time slices on several CPUs
	   + Whole CPUs used by PSET-based workload groups
	   + Whole CPUs used by virtual partition-based workload groups
	   + Whole CPUs used by nPartition-based workload groups (with each
	   nPartition using Instant Capacity, formerly known as iCOD)

	   A workload group may not receive its full CPU request if CPU is
	   oversubscribed and the group's SLOs are low priority.

      Disk bandwidth
	   Ensures that each workload group is allocated disk bandwidth
	   according to the current WLM configuration.

      Memory
	   Ensures that each workload group is granted at least its minimum,
	   but (optionally) no more than its capped amount of real memory.

      In addition, WLM has an application manager that ensures specified
      applications and their child processes run in the appropriate workload
      groups.

    WLM COMMANDS
      WLM supports the commands listed below. For more information about a
      command, see its man page.

      wlmd
      Starts WLM and activates a configuration. Can also be used to validate
      WLM configuration files and to log data for performance tuning.

      wlminfo
      Provides various WLM data.

      wlmcw
      This graphical configuration wizard greatly simplifies the process of
      creating a WLM configuration.

      wlmgui
      This graphical interface allows you to create, modify, and deploy WLM
      configurations both locally and remotely. In addition, it provides
      monitoring capabilities.

      wlmpard
      Starts the WLM global arbiter for cross-partition management or
      management of Temporary Instant Capacity or Pay Per Use resources.

      wlmsend
      Sends metric values to a named rendezvous point for wlmrcvdc to
      forward to WLM. This tool provides a mechanism for writing data
      collectors in scripting languages such as sh, csh, perl, and others.
      Also, it is convenient for sending metric data from the command line.

      wlmrcvdc
      Receives metric values from a named rendezvous point and forwards them
      to the WLM daemon.  This tool is started by the WLM daemon as
      specified in the tune structures in the configuration file.

      wlmrcvdc can forward data from all sorts of commands to WLM. HP
      provides the following commands for use with wlmrcvdc to collect the
      specified types of data:

	   glance_app
		Retrieves data for applications defined in the GlancePlus
		file /var/opt/perf/parm.

	   glance_gbl
		Retrieves a global (system) metric.

	   glance_prm
		Retrieves general PRM data and PRM data for specific
		workload groups (also known as PRM groups).

	   glance_prm_byvg
		Retrieves PRM data regarding logical volumes.

	   glance_tt, glance_tt+
		Retrieves data on ARM (Application Response Measurement)
		transactions for applications registered through the ARM API
		function arm_init().

	   sg_pkg_active
		Checks the status of a Serviceguard package.

	   time_url_fetch
		Measures the response time for fetching a URL. You can use
		this command with the WLM Apache Toolkit (ApacheTK) to
		manage your Apache-based workloads.

	   wlmdurdc
		Helps manage the duration of processes in a workload group.

	   wlmoradc
		Produces an SQL value or an execution time (walltime) that
		results from executing SQL statements against an Oracle(R)
		database instance.

	   wlmwlsdc
		Retrieves metrics on BEA WebLogic Server instances. You can
		use this command with the WLM BEA WebLogic Server Toolkit
		(WebLogicTK) to manage your WebLogic workloads.

      wlmckcfg
      Validates WLM configuration files for integration with Servicecontrol
      Manager and HP Systems Insight Manager.

      wlmemsmon
      The WLM EMS monitor provides information on how well WLM and the
      managed workload groups are performing.  wlmemsmon monitors the WLM
      daemon wlmd and provides EMS resources that an EMS client can monitor.

      wlmcomd
      Services requests from the WLM graphical user interface.

      wlmcert
      Manages WLM's security certificates.

      wlmprmconf
      Converts a PRM configuration file into a WLM configuration file.


HOW TO USE WLM

      The following steps show how to use WLM:

      1. Create a WLM configuration

	 The WLM configuration file is the main user interface for
	 controlling WLM. In a WLM configuration, you:

	   +  Define workloads

	   +  Place applications or users in workload groups (for workloads
	      based on PSETs or FSS groups)

	   +  Create one or more SLOs for each workload (For information on
	      SLOs, see the section "SLO TYPES" below.)

	 WLM provides a number of example configurations in
	 /opt/wlm/examples/wlmconf/ that you can modify to fit your
	 environment. For an overview of these examples, see the section
	 "EXAMPLE CONFIGURATIONS" below. The WLM Toolkits also offer a
	 number of example configurations. For pointers to those files, see
	 the "EXAMPLES" section in the wlmtk(5) man page.
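
	 For illustration, here is a minimal configuration sketch; the
	 group name, PRM group ID, and application path are hypothetical:

	     prm {
	         groups = sales : 2;
	         apps   = sales : /opt/sales/bin/sales_server;
	     }

	     slo sales_fixed {
	         pri = 1;
	         entity = PRM group sales;
	         cpushares = 30 total;
	     }

	 When activated, this configuration places the sales_server
	 processes in the sales group and grants that group a fixed 30
	 shares of the total CPU at priority 1.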

	 If you prefer not to work directly with a configuration file, you
	 can use the:

	   +  WLM Configuration Wizard
	      Invoke the wizard with the command /opt/wlm/bin/wlmcw.
	      (Because the wizard is an X Window System application, be sure to
	      set your DISPLAY environment variable before starting it.)

	      The wizard does not provide all the functionality available
	      through a configuration file, but it does greatly simplify the
	      process of creating a configuration. After creating a
	      configuration file using the wizard, you can view the file to
	      learn, and become more comfortable with, the syntax and
	      possibly create more complex configurations.

	   +  WLM GUI
	      Invoke the WLM GUI with the command /opt/wlm/bin/wlmgui. (Be
	      sure to set your DISPLAY environment variable before starting
	      the GUI.)

	      wlmgui does require familiarity with the WLM configuration
	      file syntax. However, it provides forms and tooltips (visible
	      when you hover the mouse pointer over a form field) to
	      simplify the configuration process. wlmgui requires the WLM
	      communications daemon, wlmcomd, as explained in the
	      wlmgui(1M) man page.

      2. (Optional) Set up secure WLM communications

	 Follow the procedure HOW TO SECURE COMMUNICATIONS in the
	 wlmcert(1M) man page--skipping the step about starting/restarting
	 the WLM daemons. You will do that later in this procedure.

      3. Use the provided data collectors or create your own

	 Data collectors supply metrics to the WLM daemon. The daemon then
	 uses these metrics to:

	   +  Determine new resource allocations to enable the workload
	      groups to achieve their SLOs

	   +  Set shares-per-metric allocations

	   +  Enable or disable SLOs

	 You have a number of options when it comes to data collectors:

	   +  The easiest data collector to set up is the one for usage
	      goals. This data collector is automatically used when you
	      specify a usage goal.

	   +  The next easiest data collector to set up is wlmrcvdc using
	      the sg_pkg_active command, wlmoradc command, one of the
	      glance_* commands, or one of the other commands shown above in
	      the wlmrcvdc discussion.

	   +  You can also set up wlmrcvdc to forward the stdout of a data-
	      collecting command to WLM.

	   +  Combining wlmsend with wlmrcvdc, you can send data to WLM from
	      the command line, a shell script, or a perl program (see the
	      example after this list)

	   +  If you are writing a data collector in C, your program can
	      interface directly with WLM through the libwlm(3) API.

	 For an overview of data collectors, see the section "HOW
	 APPLICATIONS CAN MAKE METRICS AVAILABLE TO WLM" below.
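
	 For example, assume your configuration contains a tune structure
	 that runs wlmrcvdc for a metric named job_count (a hypothetical
	 name). You could then report a value for that metric from the
	 command line:

	 wlmsend job_count 12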

	 NOTE: Data collectors invoked by WLM run as root and can pose a
	 security threat.  Hewlett-Packard makes no claims of any kind with
	 regard to the security of data collectors not provided by Hewlett-
	 Packard. Furthermore, Hewlett-Packard shall not be liable for any
	 security breaches resulting from the use of said data collectors.
	 For information on creating data collectors, see the white paper
	 "Writing a Better WLM Data Collector" available at
	 /opt/wlm/share/doc/howto/perfmon.html.

      4. Activate the configuration in passive mode if desired

	 WLM operates in "passive mode" when you include the -p option in
	 your command to activate a configuration. With passive mode, you
	 can see approximately how WLM will respond to a particular
	 configuration--without the configuration actually taking control of
	 your system. For more information on this mode, including its
	 limitations, see the PASSIVE MODE section below.

	 Activate the WLM configuration file configfile in passive mode as
	 follows:

	 wlmd -p -a configfile

	 To see how WLM responds to the configuration, use the WLM utility
	 wlminfo.

      5. Activate the configuration

	 Activate your configuration--putting WLM in control of your
	 system's resources--as follows:

	 wlmd -a configfile

	 To generate audit data (-t), secure communications (-s), and log
	 statistics (-l all), use the following command:

	 wlmd -t -s -a configfile -l all

	 Alternatively, you can set variables in /etc/rc.config.d/wlm to
	 automatically activate WLM, generate audit data, and log statistics
	 when the system boots. In this case, wlmd starts with a copy of the
	 last activated configfile.

      6. Monitor SLO compliance

	 Using wlminfo with its slo command, or its interactive mode (-i),
	 allows you to monitor your SLOs.

	 Also, the WLM EMS monitor makes various status data available to
	 EMS clients. You can check this data to verify SLO compliance.

      7. Monitor data collectors

	 Data collection is a critical link in the effective maintenance of
	 your configured service-level objectives. Consequently, you should
	 monitor your data collectors so you can be aware when one dies.

	 When using wlminfo slo, there are two columns that can indicate the
	 death of a data collector process: State and Concern. For more
	 information on these columns, see the wlminfo(1M) man page.

	 The WLM EMS monitor can also tell you when a data collector dies
	 unexpectedly; configure EMS monitoring requests that notify you on
	 the death of a data collector.

	 When a data collector dies, each SLO that uses the data from the
	 dead collector is affected.  As an indication of the problem, each
	 SLO's EMS resource:

	 /applications/wlm/slo_status/<SLONAME>

	 changes to:

	 WLM_SLO_COLLECTOR_DIED (5)

	 Use the EMS configuration interface (available in the SAM "Resource
	 Management" application group) to set up monitoring requests to
	 watch for this situation.

      8. Configure global arbitration across partitions

	 Besides controlling CPU allocations within a system or partitions,
	 WLM can migrate CPU resources between partitions. You can even
	 treat a partition as a workload unto itself by not using a prm
	 structure in the WLM configuration. (WLM can also provide CPU
	 control for a nested environment with FSS groups inside virtual
	 partitions inside nPartitions.)
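
	 For example, after creating a global arbiter configuration file
	 (here named config.wlmpar, a hypothetical name), activate it with
	 wlmpard as follows:

	 wlmpard -a config.wlmpar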

	 NOTE: By default, WLM's global arbitration uses non-secured
	 communications. If this is a concern, use the wlmpard -s option to
	 secure communications--or use global arbitration only on trusted
	 local area networks.


SLO TYPES

      WLM supports the following types of SLOs:

	   +  Goal-based SLOs

	   +  Shares-based SLOs

    Goal-based SLOs
      These SLOs cause WLM to grant more CPU or take away CPU based on
      reported metrics. These SLOs have either metric goals or usage goals.
      Metric-goal-based SLOs are suitable for applications that can generate
      metrics. For example, online transaction processing (OLTP)
      applications are good candidates for metric-goal-based SLOs.


      A metric goal has the following form:

      goal = metric met > goal_value ;

      or

      goal = metric met < goal_value ;

      where you want the metric named met to be greater than or less than
      goal_value.
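
      For example, here is a sketch of an slo structure built around a
      metric goal; the group name, metric name, and bounds are
      hypothetical, and the metric is assumed to be supplied by a data
      collector configured in a tune structure:

	   slo order_response {
	       pri = 1;
	       mincpu = 10;
	       maxcpu = 60;
	       entity = PRM group orders;
	       goal = metric order_resp_time < 2.0;
	   }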

      Usage-goal-based SLOs specify CPU utilization goals for a workload
      group, indicating how much of its allocation the group must be using
      before the allocation is changed. With a usage goal, a workload
      group's CPU allocation is reduced if its workload is consuming too
      little of the current allocation, allowing other workloads to consume
      more CPU if needed. Similarly, if the workload is using a high
      percentage of its group's allocation, it is granted more CPU. (WLM
      tracks the metrics for usage goals internally; no data collector is
      needed.)

      A usage goal has the form:

      goal = usage _CPU [ low_util_bound [ high_util_bound ]];
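
      For example, the following sketch (with a hypothetical group name)
      asks WLM to keep the batch group's CPU utilization between 50% and
      75% of its allocation:

	   slo batch_usage {
	       pri = 2;
	       mincpu = 5;
	       maxcpu = 50;
	       entity = PRM group batch;
	       goal = usage _CPU 50 75;
	   }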

      WLM automatically changes the CPU allocation for goal-based SLOs to
      better achieve their stated goals. The actual CPU allocation granted
      is based on the amount of CPU needed to meet the goal as determined by
      WLM, the request limits placed on the SLO, and the availability of CPU
      resources after the needs of all higher priority SLOs have been met.

    Shares-based SLOs
      This SLO type allows an administrator to specify a CPU allocation for
      a workload group without specifying a goal. The allocation can be
      fixed or shares-per-metric.

      To give a workload group a fixed allocation of x percent of the CPU,
      use the cpushares keyword as follows:

      cpushares = x total;

      You can use this same keyword to specify a shares-per-metric
      allocation. With this type of allocation, the associated workload
      group receives a given amount of the CPU per metric. For example, with
      the following statement, a workload group would receive 5 shares of
      the CPU for each process an application has running in the group:

      cpushares = 5 total per metric application_procs;

      The actual CPU allocation granted to the workload group is subject to
      the availability of CPU resources after the needs of higher priority
      SLOs have been met.
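
      As a sketch, such a statement sits inside an slo structure as
      follows (the group name and metric name are hypothetical):

	   slo per_process {
	       pri = 3;
	       entity = PRM group batch;
	       cpushares = 5 total per metric application_procs;
	   }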

      A workload group with a fixed-allocation SLO can coexist on a system
      with other workload groups that have goal-based SLOs. Moreover, this
      SLO type could be used to allocate resources to optional or
      discretionary work.

      WLM allows multiple SLOs--assuming they are fixed-allocation or goal-
      based SLOs--for workload groups that require more than one SLO to
      accommodate a "must meet" goal and optional, lower-priority stretch
      goals.

      For more information on slo structures, where you define your SLOs,
      see wlmconf(4).


HOW APPLICATIONS CAN MAKE METRICS AVAILABLE TO WLM

    Time metrics from instrumentable applications
      If the desired metric can be measured in units of time, and the
      application can be modified, we recommend using the ARM API provided
      by GlancePlus. WLM will then collect the ARM data from GlancePlus.

      Adding ARM calls to an application is as simple as registering your
      application with an arm_init() call, marking the start of the time
      period to be measured with an arm_start() call, and marking the end of
      the time period with an arm_stop() call.  For more information on ARM,
      see the arm(3) man page (if available on your system) or visit
      http://www.cmg.org/regions/cmgarmw.
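
      As a sketch, the ARM 2.0 calls fit together as shown below. The
      application and transaction names are hypothetical, and error
      checking is omitted; see arm(3) for the exact prototypes.

	   #include <arm.h>

	   int main(void)
	   {
	       int appl_id, tran_id, handle;

	       /* Register the application and a transaction class */
	       appl_id = arm_init("order_app", "*", 0, NULL, 0);
	       tran_id = arm_getid(appl_id, "order_query", NULL, 0, NULL, 0);

	       handle = arm_start(tran_id, 0, NULL, 0);  /* start timing */
	       /* ... the work to be measured ... */
	       arm_stop(handle, ARM_GOOD, 0, NULL, 0);   /* end timing */

	       arm_end(appl_id, 0, NULL, 0);             /* clean up */
	       return 0;
	   }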

    Other data collection techniques
      If your application cannot be modified to insert ARM calls, or if your
      metric does not have time units, then you should implement an external
      data collector. There are three types of external data collectors to
      consider:

	   +  Independent collectors

	   +  Stream collectors

	   +  Native collectors

      These collector types are explained below.

      Independent collectors
      Independent collectors use the wlmsend command to communicate a metric
      value to WLM. They are called "independent" because they are not
      started by the WLM daemon wlmd, and they are not required to run
      continuously.

      This type of collector is ideal if you want to convey event
      information to WLM, such as application startup or shutdown.

      One caveat of using this type of collector is that on start-up, HP-UX
      WLM has no value for the metric until the collector provides one. For
      this reason, the collector should be structured to report a value
      periodically, even if it has not changed.

      If your collector runs continuously, be careful if using pipes. The
      pipes may have internal buffering that must either be defeated or
      flushed to ensure the data is communicated in a timely manner.

      To configure an independent collector for a metric called metricIC,
      place the following tune structure in your configuration file:

	   tune metricIC {
	       coll_argv = wlmrcvdc ;
	   }
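
      With that structure in place, an independent collector is simply any
      process that invokes wlmsend. For example, this sh sketch (with a
      hypothetical metric value) reports the metric once a minute so WLM
      always has a recent value:

	   while true
	   do
	       wlmsend metricIC 1
	       sleep 60
	   done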

      Stream collectors
      Stream collectors convey their metric values to WLM by writing them to
      the stdout stream. WLM starts these data collectors when activating a
      configuration, and expects them to continue to run and provide metrics
      until notified of a WLM shutdown or restart.

      Use this type of collector if the metric is available in a file or
      through a command-line interface. In this case, the collector can
      simply be a script containing a loop that reads the file or executes
      the command, extracts the metric value, writes it on stdout, and
      sleeps for one WLM interval. (The current WLM interval length is
      available through the WLM_INTERVAL environment variable to data
      collectors started by WLM through a coll_argv statement in the WLM
      configuration.)
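
      Here is a minimal sketch of such a script; the file name and the
      assumption that the metric is the first field of the file are
      hypothetical:

	   #!/usr/bin/sh
	   # Stream collector: write one metric value per interval to stdout
	   while true
	   do
	       awk '{ print $1; exit }' /var/myapp/queue_depth
	       sleep ${WLM_INTERVAL:-60}
	   done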

      Again, as with independent collectors, be careful if using pipes in
      the data collector. These pipes may have internal buffering that must
      either be defeated or flushed to ensure the data is communicated in a
      timely manner.

      Because they are started by a daemon process (wlmd), stream collectors
      do not have a stderr on which to communicate errors. However, WLM
      provides the coll_stderr tunable that allows you to log each
      collector's stderr to syslog (/var/adm/syslog/syslog.log) or another
      file. In addition, a stream data collector can communicate using
      either syslog(3C) or logger(1) with the daemon facility.

      To configure a stream collector for a metric called metricSC, place
      the following tune structure in your configuration file:

	   tune metricSC {
	       coll_argv = wlmrcvdc collector_path collector_args ;
	   }


      The sg_pkg_active data collector is an example of a stream collector,
      as are several of the collectors that come with WLM Toolkits, such as
      time_url_fetch (ApacheTK) and wlmwlsdc (WebLogicTK).

      Native collectors
      Native collectors use the WLM API to communicate directly with the WLM
      daemon. Like stream collectors, these collectors are started by WLM
      when activating a configuration. WLM expects them to continue to run
      and provide metrics until notified of a WLM shutdown or restart. For
      tips on writing your own data collectors, see the white paper at
      /opt/wlm/share/doc/howto/perfmon.html.

      This type of collector is appropriate if the desired metric values are
      obtained through calls to a C or C++ language API that is provided by
      the source of the metric. One example of such an API is the pstat(2)
      family of system calls used to obtain process statistics.

      This type of collector establishes a direct connection with WLM using
      the WLM API function wlm_mon_attach(). Then, executed in a loop, the
      collector calls the API functions necessary to obtain the metric
      value, followed by a call to the WLM API function wlm_mon_write() to
      pass the value on.
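
      The outline below sketches that loop in C. The wlm_mon_attach() and
      wlm_mon_write() usages shown are assumptions for illustration only;
      see libwlm(3) for the actual interface:

	   #include <unistd.h>
	   #include <wlm.h>    /* assumed header; see libwlm(3) */

	   /* Hypothetical helper: compute the metric, e.g., via pstat(2) */
	   static double get_metric_value(void) { return 42.0; }

	   int main(void)
	   {
	       /* Assumed form: attach by metric name, obtaining a handle */
	       int handle = wlm_mon_attach("metricNC");
	       if (handle < 0)
	           return 1;

	       for (;;) {
	           wlm_mon_write(handle, get_metric_value()); /* assumed form */
	           sleep(60);   /* one default WLM interval */
	       }
	   }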

      Because they are started by a daemon process (wlmd), native
      collectors' output to stdout and stderr is discarded. However, WLM
      provides the coll_stderr tunable that allows you to log each
      collector's stderr to syslog (/var/adm/syslog/syslog.log) or another
      file. In addition, a native data collector can communicate using
      either syslog(3C) or logger(1) with the daemon facility.

      To configure a native collector for a metric called metricNC, place
      the following tune structure in your configuration file:

	   tune metricNC {
	       coll_argv = collector_path collector_args ;
	   }

      wlmrcvdc is an example of a native collector.


PASSIVE MODE

      WLM provides a passive mode that allows you to see approximately how
      WLM will respond to a given configuration--without putting WLM in
      charge of your system's resources. Using this mode, you can analyze
      your configuration's behavior--with minimal effect on the system.
      Besides being useful in understanding and experimenting with WLM,
      passive mode can be helpful in capacity-planning activities. A
      sampling of possible uses for passive mode is described below. These
      uses help you determine:

	   +  How does a condition statement work?

	      Activate your configuration in passive mode, then start the
	      wlminfo utility. Use wlmsend to update the metric that is
	      used in the condition statement. Alternatively, wait for the
	      condition to change based on the date and time. Monitor the
	      behavior of the SLO in question in the wlminfo output. Is it
	      on or off?

	   NOTE: Always wait at least 60 seconds (the default WLM interval)
	   for WLM's changes to resource allocations to appear in the
	   wlminfo output. (Alternatively, you can adjust the interval using
	   the wlm_interval tunable in your WLM configuration file.)

	   +  How does a cpushares statement work?

	      Activate your configuration in passive mode, then start the
	      wlminfo utility. Use wlmsend to manipulate the metric used in
	      the cpushares statement. What is the resulting allocation
	      shown in the wlminfo output?

	   +  How do goals work? Is my goal set up correctly?

	      Activate your configuration and monitor the WLM behavior in
	      the wlminfo output. What is the range of values for a given
	      metric? Does WLM have the goal set to the level expected? Is
	      WLM adjusting the workload group's CPU allocation?

	   +  How might a particular cntl_convergence_rate value or the
	      values of other tunables affect allocation changes?

	      Create several configurations, each with a different value for
	      the tunable in question. Activate one of the configurations
	      and monitor the WLM behavior in the wlminfo output. Observe
	      how WLM behaves differently under each of the configurations.

	   +  How does a usage goal work?

	      In passive mode, a usage goal's behavior might not match what
	      would be seen in regular mode, but what is its basic behavior
	      if the application load for a particular workload group is
	      increased?

	      Activate your configuration and monitor the wlminfo output to
	      see how WLM adjusts the workload group's CPU allocation in
	      response to the group's usage.

	   +  Is my global configuration file set up as I wanted? If I used
	      global arbitration on my production system, what might happen
	      to the CPU layouts?

	      NOTE: You can run wlmpard in passive mode with each
	      partition's wlmd daemon running in regular mode. Thus, you can
	      run wlmpard experiments on a production system without
	      consequence.

      In addition, passive mode allows you to validate workload group,
      application, and user configuration. For example, with passive mode,
      you can determine:

	   +  Is a user's default workload group set up as I expected?

	   +  Can a user access a particular workload group?

	   +  When an application is run, which workload group does it run
	      in?

	   +  Can I run an application in a particular workload group?

	   +  Are the alternate names for an application set up correctly?

      Furthermore, using metrics collected with glance_prm, passive mode can
      be useful for capacity planning and trend analysis. For more
      information, see glance_prm(1M).

    PASSIVE MODE VERSUS ACTUAL WLM MANAGEMENT
      This section covers the following topics:

	   + The WLM feedback loop

	   + Effect of mincpu and maxcpu values

	   + Using wlminfo in passive mode

	   + The effect of passive mode on usage goals and metric goals

      The WLM feedback loop
      WLM's operations are based on a feedback loop: System activity
      typically affects WLM's arbitration of service-level objectives. This
      arbitration results in changes to CPU allocations for the workload
      groups, which can in turn affect system activity--completing the
      feedback loop.

      The diagram below shows WLM's normal operation, including the feedback
      loop.

						  Usage/metrics
	Normal operation:    System activity ---------------------> WLM
				    ^				     v
				    |				     |
				    +--<--<--<--<--<--<--<--<--<--<--+
					    Allocation changes


      In passive mode, however, the feedback loop is broken, as shown below.

						  Usage/metrics
	Passive operation:     System activity -------------------> WLM


      Thus, in passive mode, WLM takes in data on the workloads. It even
      forms a CPU request for each workload based on the data received.
      However, it does not change the CPU allocations for the workloads on
      the system.

      Effect of mincpu and maxcpu values
      In passive mode, WLM does use the values of the following keywords to
      form shares requests:

	   + mincpu/maxcpu

	   + gmincpu/gmaxcpu

	   + hmincpu/hmaxcpu

      However, because WLM does not adjust allocations in passive mode, it
      may appear that these values are not used.

      Using wlminfo in passive mode
      Use the wlminfo utility to monitor WLM in passive mode. Its output
      reflects WLM behavior and operation. It shows how much CPU WLM is
      requesting for a workload--given the workload's current performance.
      However, because WLM does not actually adjust CPU allocations in
      passive mode, WLM does not affect the workload's performance--as
      reported in usage values and metric values. Once you activate WLM in
      normal mode, it adjusts allocations and affects these values.

      NOTE: For the purposes of passive mode, WLM creates a PRM
      configuration with each of your workload groups allocated one CPU
      share, and the rest going to the reserved group PRM_SYS.	(If your
      configuration has PSET-based workload groups, the PSETs are created
      but with 0 CPUs.) In this configuration, CPU capping is not
      enforced--unlike in normal WLM operation. Furthermore, this
      configuration will be the only one used for the duration of the
      passive mode. WLM does not create new PRM configurations, as it does
      in normal operation, to change resource allocations. Consequently, you
      should not rely on prmlist or prmmonitor to observe changes when using
      passive mode. These utilities will display the configuration WLM used
      to create the passive mode. However, you can use prmmonitor to gather
      CPU usage data.

      The effect of passive mode on usage goals and metric goals
      As noted above, in passive mode, WLM's feedback loop is not in place.
      The lack of a feedback loop is most dramatic with usage goals. With
      usage goals, WLM changes a workload group's CPU allocation so that the
      group's actual CPU usage is a certain percentage of the allocation. In
      passive mode, WLM does not actually change CPU allocations. Thus, an
      SLO with a usage goal might be failing; however, that same SLO might
      easily be met if the feedback loop were in place. Similarly, an SLO
      that is passing might fail if the feedback loop were present. However,
      if you can suppress all the applications on the system except for the
      one with a usage goal, wlminfo should give you a good idea of how the
      usage goal would work under normal WLM operation.

      Passive mode can have an effect on SLOs with metric goals as well.
      Because an application is not constrained by WLM in passive mode, the
      application might produce metric values that are not typical for a
      normal WLM session. For example, a database application might be using
      most of a system. As a result, it would complete a high number of
      transactions per second. The database performance could be at the
      expense of other applications on the system. However, your WLM
      configuration, if it were controlling resource allocation, might scale
      back the database's access to resources to allow the other
      applications more resources. Thus, the wlminfo output would show WLM's
      efforts to reduce the database's CPU allocation.	Because passive mode
      prevents a reduction in the allocation, the database's number of
      transactions per second (and system use) remains high. WLM, believing
      the previous allocation reduction did not produce the desired result,
      again lowers the database's allocation. Thus, with the removal of the
      feedback loop, WLM's actions in passive mode do not always indicate
      what it would do normally.

      Because of these discrepancies, always be careful when using passive
      mode as an indicator of normal WLM operation. Use passive mode to see
      trends in WLM behavior--with the knowledge that the trends may be
      exaggerated because the feedback loop is not present.



EXAMPLE CONFIGURATIONS

      WLM comes with several example configuration files. These examples are
      in the directory /opt/wlm/examples/wlmconf/. Here is an overview of
      the examples:

      distribute_excess.wlm
	   Example configuration file demonstrating the use of the weight
	   and distribute_excess keywords.  This functionality is used to
	   manage the distribution of resources among workload groups after
	   honoring performance goals specified in slo structures.

      enabling_event.wlm
	   A configuration file demonstrating the use of WLM to enable or
	   disable a service-level objective (SLO) when a certain event
	   occurs.

      entitlement_per_process.wlm
	   A configuration file that demonstrates the use of a shares-per-
	   metric goal.	 A workload group's allocation, or entitlement, is
	   based directly on the number of currently active processes
	   running in the group.

      fixed_entitlement.wlm
	   This simple example configuration illustrates the use of WLM in
	   granting a fixed allocation (entitlement) to a particular group
	   of users.

      manual_entitlement.wlm
	   A configuration file to help a new WLM user characterize the
	   behavior of a workload.  The goal is to determine how a workload
	   responds to a series of allocations (entitlements). For a similar
	   configuration that changes the number of CPUs in the PSET upon
	   which a workload group is based, see
	   /opt/wlm/toolkits/weblogic/config/manual_cpucount.wlm.

      metric_condition.wlm
	   Configuration file to illustrate that an SLO can be enabled based
	   upon the value provided by a metric (in this case, the metric is
	   provided by a glance data collector provided with the WLM
	   product).  Metrics can be used in both the goal statement and the
	   condition statement of a single SLO.

      npar_icod_manual_allocation.wlm, npar_icod_manual_allocation.wlmpar
	   These configuration files demonstrate WLM's ability to resize
	   nPartitions--using Instant Capacity software. (Instant Capacity
	   was formerly known as iCOD.) The resizing is accomplished by
	   deactivating CPUs on some nPartitions while activating CPUs on
	   other nPartitions on the system. The number of active CPUs
	   remains the same so no additional charge is incurred. Configure
	   WLM in each nPartition on the system using the .wlm file.
	   Configure the WLM global arbiter in one nPartition using the
	   .wlmpar file.

      performance_goal.template
	   This file has a different filename extension (.template vs.
	   .wlm).  That is simply because this file distinguishes between
	   configuration file special keywords and user-modifiable values by
	   placing the items that a user would need to customize within
	   square brackets ([]'s).  Because of the presence of the square
	   brackets, the sample file will not pass the syntax-checking mode
	   of wlmd (wlmd -c template).	All of the files with names ending
	   in .wlm will parse correctly.

      stretch_goal.wlm
	   Example configuration file to demonstrate how to use multiple
	   SLOs for the same workload (but at different priority levels) to
	   specify a stretch goal for a workload.  A stretch goal is one
	   that we'd like to have met if all other higher-priority SLOs are
	   being satisfied and there are additional CPU cycles available.

      time_activated.wlm
	   This configuration file demonstrates the use of WLM in granting a
	   fixed allocation (entitlement) to a particular group of users
	   only during a certain time period.

      transient_groups.wlm
	   This configuration file demonstrates how to minimize resource
	   consumption when workload groups have no active SLOs.

      twice_weekly_boost.wlm
	   A configuration file that demonstrates a conditional allocation
	   with a moderately complex condition.

      usage_goal.wlm
	   This configuration demonstrates the usage goal for service-level
	   objectives.	This type of goal is different from the typical
	   performance goal in that it does not require explicit metric
	   data.

      user_application_records.wlm
	   A configuration file that demonstrates the use of, and precedence
	   between, user and application records in placing processes in
	   workload groups.

      vpar_usage_goal.wlm, vpar_usage_goal.wlmpar
	   These configuration files demonstrate WLM's ability to resize
	   HP-UX Virtual Partitions, shifting CPUs between the virtual
	   partitions on a system. Configure WLM in each virtual partition
	   on the system using the .wlm file. Configure the WLM global
	   arbiter using the .wlmpar file.


INTEGRATION WITH OTHER PRODUCTS

      WLM integrates with various other products to provide greater
      functionality. Currently, these other products are:

	   +  Apache web server

	   +  nPartitions

	   +  OpenView Performance Agent for UNIX /
	      OpenView Performance Manager for UNIX

	   +  Oracle databases

	   +  Pay Per Use

	   +  Processor sets

	   +  SAS(R) Software

	   +  Security Containment

	   +  Serviceguard

	   +  HP-UX SNMP Agent

	   +  Systems Insight Manager / Servicecontrol Manager

	   +  Temporary Instant Capacity

	   +  Virtual partitions

	   +  BEA WebLogic Server

      The integration with these products is described below.

    Apache web server
      WLM can help you manage and prioritize Apache-based workloads through
      the use of the WLM Apache Toolkit (ApacheTK), which is part of the
      freely available product WLM Toolkits (WLMTK) available at
      /opt/wlm/toolkits/. WLM can be used with Apache processes, Tomcat, CGI
      scripts, and related tools using the HP-UX Apache-based Web Server.

      ApacheTK shows you how to:

	   +  Separate Apache from Oracle database instances

	   +  Separate Apache from batch work

	   +  Isolate a resource-intensive CGI workload

	   +  Isolate a resource-intensive servlet workload

	   +  Separate all Apache Tomcat workloads from other Apache
	      workloads

	   +  Separate two departments' applications using two Apache
	      instances

	   +  Separate module-based workloads with two Apache instances

	   +  Manage Apache CPU allocation by performance goal

      For more information, see
      /opt/wlm/toolkits/apache/doc/apache_wlm_howto.html.

    nPartitions
      You can run WLM within and across nPartitions. (WLM can even manage
      CPU resources for nPartitions containing virtual partitions containing
      FSS workload groups.) For systems with partitions using Instant
      Capacity software, WLM provides a global arbiter, wlmpard, that can
      take input from the WLM instances on the individual partitions. The
      global arbiter then "moves" CPUs between partitions, if needed, to
      better achieve the SLOs specified in the WLM configuration files that
      are active in the partitions. (This movement is achieved by
      deactivating a CPU in one nPartition, then activating a CPU in another
      nPartition. The total number of active CPUs remains constant--avoiding
      a charge for additional CPUs.) For more information, see the
      wlmpard(1M) and wlmparconf(4) man pages.

    OpenView Performance Agent (OVPA) for UNIX
    OpenView Performance Manager (OVPM) for UNIX
      You can treat your workload groups as applications and then track
      their application metrics in OpenView Performance Agent for UNIX as
      well as in OpenView Performance Manager for UNIX.

      NOTE: If you complete the procedure below, OVPA/OVPM will track
      application metrics only for your workload groups; applications
      defined in the parm file will no longer be tracked. GlancePlus,
      however, will still track metrics for both workload groups and
      applications defined in your parm file.

      To track application metrics for your workload groups:

      1. Edit /var/opt/perf/parm

	 Edit your /var/opt/perf/parm file so that the "log" line includes
	 "application=prm" (without the quotes). For example:

	 log global application=prm process dev=disk,lvm transaction

      2. Restart the agent

	 With WLM running, execute the following command:

	 % mwa restart scope

	 NOTE: The WLM workload groups must be enabled at the time the
	 scopeux collector is restarted by the mwa restart scope command. If
	 WLM is not running, or transient_groups is set to 1 in your WLM
	 configuration, data for some--or all--workload groups may be absent
	 from OpenView graphs and reports. This may also affect alarms
	 defined in /var/opt/perf/alarmdefs.

      Now all the application metrics will be in terms of workload (PRM)
      groups. That is, your workload groups will be "applications" for the
      purposes of tracking metrics.

    Oracle databases
      HP-UX WLM Oracle Database Toolkit simplifies getting metrics on Oracle
      database instances into WLM. This allows you to better manage Oracle
      instances. Benefits include the ability to:

	   +  Keep response times for your transactions below a given level
	      by setting response-time SLOs

	   +  Increase an instance's available CPU when a particular user
	      connects to the instance

	   +  Increase an instance's available CPU when more than n users
	      are connected

	   +  Increase an instance's available CPU when a particular job is
	      active

	   +  Give an instance n CPU shares for each process in the instance

	   +  Give an instance n CPU shares for each user connection to the
	      instance

      For more information, see wlmoradc(1M).

    Pay Per Use (PPU)
      WLM allows you to take advantage of Pay Per Use v4 reserves to meet
      your service-level objectives. For more information, see the section
      "HOW TO USE wlmpard TO OPTIMIZE TEMPORARY INSTANT CAPACITY AND PAY PER
      USE SYSTEMS" in the wlmpard(1M) man page.

    Processor sets (PSETs)
      Processor sets allow you to group processors together, dedicating
      those CPUs to certain applications. WLM can automatically adjust the
      number of CPUs in a PSET-based workload group in response to SLO
      performance. Combining PSETs and WLM, you can dedicate CPU resources
      to a group without fear of the group's needing additional CPUs when
      activity peaks or concern that the group, when less busy, has
      resources that other groups could be using. For more information, see
      wlmconf(4).

    SAS Software
      The WLM Toolkit for SAS Software (SASTK) can be combined with the WLM
      Duration Management Toolkit (DMTK) to fine-tune duration management of
      SAS jobs. For more information, see hp_wlmtk_goals_report(1M) and
      wlmdurdc(1M).

    Security Containment
      Combining WLM and Security Containment (available starting with HP-UX
      11i v2), you can create "Secure Resource Partitions" that are based on
      your WLM workload groups. Secure Resource Partitions provide a level
      of security by protecting the processes and files in a given Secure
      Resource Partition from other processes on the system. For more
      information, see the scomp keyword in the wlmconf(4) man page.

    Serviceguard
      WLM provides the command sg_pkg_active, which allows you to activate
      and deactivate a Serviceguard package's SLOs along with the package.
      For more information, see sg_pkg_active(1M).

    Systems Insight Manager (SIM) / Servicecontrol Manager (SCM)
      Systems Insight Manager and Servicecontrol Manager provide a single
      point of administration for multiple HP-UX systems. The WLM
      integration with these products allows system administrators at the
      SIM / SCM Central Management Server (CMS) to perform the following
      activities on nodes in the SCM cluster that have WLM installed:

	   +  Enable HP-UX WLM

	   +  Disable HP-UX WLM

	   +  Start HP-UX WLM

	   +  Stop HP-UX WLM

	   +  Reconfigure HP-UX WLM

	   +  Distribute HP-UX WLM configuration files to the selected nodes

	   +  Retrieve currently active HP-UX WLM configuration files from
	      the nodes

	   +  Check the syntax of HP-UX WLM configuration files, on either
	      the CMS or the selected nodes

	   +  View, rotate, and truncate HP-UX WLM log files

      For more information, see the HP-UX Workload Manager User's Guide
      (/opt/wlm/share/doc/WLMug.pdf).

    HP-UX SNMP Agent
      WLM's SNMP Toolkit (SNMPTK) provides a WLM data collector called
      snmpdc, which fetches values from an SNMP agent for use as metrics in
      your WLM configuration. For more information, see snmpdc(1M).

    Temporary Instant Capacity
      WLM allows you to take advantage of Temporary Instant Capacity
      reserves to meet your service-level objectives. For more information,
      see the section "HOW TO USE wlmpard TO OPTIMIZE TEMPORARY INSTANT
      CAPACITY AND PAY PER USE SYSTEMS" in the wlmpard(1M) man page.

    Virtual partitions
      You can run WLM within and across virtual partitions. (WLM can even
      manage CPU resources for nPartitions containing virtual partitions
      containing FSS workload groups.) WLM provides a global arbiter,
      wlmpard, that can take input from the WLM instances on the individual
      partitions. The global arbiter then moves CPUs between partitions, if
      needed, to better achieve the SLOs specified in the WLM configuration
      files that are active in the partitions. For more information, see the
      wlmpard(1M) and wlmparconf(4) man pages.

    BEA WebLogic Server
      Using WLM with WebLogic, you can move CPUs to or from WebLogic Server
      instances as needed to maintain acceptable performance. With WLM
      managing the instances' CPU resources, the instances tend to use fewer
      CPU resources over time, and you can use the freed CPU resources for
      other computing tasks.

      As indicated above, WLM and WebLogicTK control CPU allocation to
      individual WebLogic instances. The latest version of the paper
      "Using HP-UX Workload Manager with BEA WebLogic" extends these
      methods to the control of WebLogic Server clusters.

      For more information, see
      /opt/wlm/toolkits/weblogic/doc/weblogic_wlm_howto.html.


TRUNCATING YOUR LOG FILES

      WLM has three log files: /var/opt/wlm/msglog for messages, the
      optional /var/opt/wlm/wlmdstats for statistics, and the optional
      /var/opt/wlm/wlmpardstats for partition statistics.

      From time to time, you should truncate your log files to regain disk
      space. To truncate the message log while wlmd is running, use the
      command:

	   cp /dev/null /var/opt/wlm/msglog

      If you wish to archive the contents of the message log prior to
      truncation, use the following sequence of commands:

	   cp /var/opt/wlm/msglog archive_path_name
	   cp /dev/null /var/opt/wlm/msglog

      You can use these same commands to truncate the optional
      /var/opt/wlm/wlmdstats log file. This log file is created when you use
      the -l option with wlmd.	For more information on this option, see
      wlmd(1M). For information on how to enable automatic trimming of the
      wlmdstats file, see the wlmdstats_size_limit tunable in the wlmconf(4)
      man page.

      You can also use these commands to truncate the optional
      /var/opt/wlm/wlmpardstats log file, which is created when you use the
      -l option with wlmpard.  For information on this option, see
      wlmpard(1M). For information on automatic trimming of the wlmpardstats
      file, see the wlmpardstats_size_limit keyword in the wlmparconf(4) man
      page.


SUPPORT AND PATCH POLICIES

      Visit http://www.hp.com/go/wlm for information on WLM's support policy
      and patch policy. These policies indicate the time periods for which
      this version of WLM is supported and patched. (Use /opt/wlm/bin/wlmd
      -V to print the version of your WLM.)


AUTHOR

      HP-UX WLM was developed by HP.


FEEDBACK

      If you would like to comment on the current HP-UX WLM functionality or
      make suggestions for future releases, please send email to:

      wlmfeedback@rsn.hp.com


FILES

      /opt/wlm/bin/wlmd		      Workload Manager daemon

      /opt/wlm/bin/wlmpard	      Workload Manager global arbiter daemon

      /opt/wlm/bin/wlmcomd	      WLM communications daemon (needed by
				      wlmgui)

      /opt/wlm/bin/wlmcw	      WLM Configuration Wizard

      /opt/wlm/bin/wlmgui	      WLM GUI (for monitoring and
				      configuring)

      /opt/wlm/bin/wlminfo	      Utility for displaying various data

      /opt/wlm/bin/wlmcert	      Utility for managing WLM security
				      certificates

      /var/opt/wlm/msglog	      WLM message log

      /opt/wlm/lbin/wlmemsmon	      EMS monitor utility

      /opt/wlm/lbin/wlmsend	      rendezvous point send utility

      /opt/wlm/lbin/wlmrcvdc	      rendezvous point receive utility

      /etc/rc.config.d/wlm	      system initialization directives

      /var/opt/wlm/wlmdstats	      optional statistics log

      /var/opt/wlm/wlmpardstats	      optional global arbiter statistics log

      /opt/wlm/examples/	      Example WLM configurations and other
				      items

      /opt/wlm/wlm.quickref.txt	      A WLM quick reference


      /opt/wlm/share/doc/howto/perfmon.html
				      white paper on writing data collectors

      /opt/wlm/share/doc/howto/	      directory with white papers on WLM
				      tasks

      /opt/wlm/share/doc/	      directory with HP-UX WLM user's guide
				      and release notes


SEE ALSO

      wlmd(1M), wlmcw(1M), wlmgui(1M), wlmpard(1M), wlmcomd(1M),
      wlminfo(1M), wlmcert(1M), wlmckcfg(1M), wlmemsmon(1M), libwlm(3),
      wlmconf(4), wlmparconf(4), wlmprmconf(1M), wlmrcvdc(1M), wlmsend(1M),
      glance_app(1M), glance_gbl(1M), glance_prm(1M), glance_prm_byvg(1M),
      glance_tt(1M), sg_pkg_active(1M), wlmoradc(1M), wlmwlsdc(1M)

      HP-UX Workload Manager User's Guide (/opt/wlm/share/doc/WLMug.pdf)

      HP-UX Workload Manager Toolkits User's Guide
      (/opt/wlm/toolkits/doc/WLMTKug.pdf)

      Using HP-UX Workload Manager with Apache-based Applications
      (/opt/wlm/toolkits/apache/doc/apache_wlm_howto.html)

      Using HP-UX Workload Manager with BEA WebLogic Server
      (/opt/wlm/toolkits/weblogic/doc/weblogic_wlm_howto.html)

      HP-UX Workload Manager homepage (http://www.hp.com/go/wlm)

      Application Response Measurement (ARM) API
      (http://www.cmg.org/regions/cmgarmw)