diff --git a/_data/nav/openvox-server_8x.yml b/_data/nav/openvox-server_8x.yml index 8b696beb6..a85afafe9 100644 --- a/_data/nav/openvox-server_8x.yml +++ b/_data/nav/openvox-server_8x.yml @@ -102,11 +102,17 @@ link: status-api/v1/services.html - text: Simple endpoint link: status-api/v1/simple.html -- text: Metrics API endpoints +- text: Metrics items: - - text: v1 metrics + - text: Monitoring OpenVox Server metrics + link: puppet_server_metrics.html + - text: HTTP client metrics + link: http_client_metrics.html + - text: Applying metrics to improve performance + link: puppet_server_metrics_performance.html + - text: v1 metrics API link: metrics-api/v1/metrics_api.html - - text: v2 (Jolokia) metrics + - text: v2 (Jolokia) metrics API link: metrics-api/v2/metrics_api.html - text: Developer information items: diff --git a/docs/_openvox-server_8x/http_client_metrics.markdown b/docs/_openvox-server_8x/http_client_metrics.markdown index 44bc8cfb0..94369fb6a 100644 --- a/docs/_openvox-server_8x/http_client_metrics.markdown +++ b/docs/_openvox-server_8x/http_client_metrics.markdown @@ -1,12 +1,11 @@ --- layout: default -title: "Puppet Server: HTTP Client Metrics" -canonical: "/puppetserver/latest/http_client_metrics.html" +title: "OpenVox Server: HTTP Client Metrics" --- [status API]: ./status-api/v1/services.html -HTTP client metrics available in Puppet Server 5 allows users to measure how long it takes for Puppet Server to make requests to and receive responses from other services, such as PuppetDB. +HTTP client metrics allow users to measure how long it takes for OpenVox Server to make requests to and receive responses from other services, such as OpenVoxDB. ## Determining metrics IDs @@ -16,19 +15,19 @@ All of these metrics are of the form `puppetlabs.<server-id>.http-client.experim > are joined together with periods. For instance, the metric ID of `[puppetdb resource search]` is `puppetdb.resource.search`, so the full metric name would be > `puppetlabs.<server-id>.http-client.experimental.with-metric-id.puppetdb.resource.search.full-response`. -You can configure PuppetDB to be a backend for [configuration files](https://puppet.com/docs/puppetdb/latest/connect_puppet_master.html#step-2-edit-configuration-files) (through the `storeconfigs` setting), and -you can configure Puppet Server to send reports to an external report processing service. If you configure either of these, then during the course of handling a Puppet agent run, Puppet Server makes several +You can configure OpenVoxDB as a backend for storing configuration data (through the `storeconfigs` setting), and +you can configure OpenVox Server to send reports to an external report processing service. If you configure either of these, then during the course of handling an OpenVox agent run, OpenVox Server makes several calls to external services to retrieve or store information. -- During handling of a `/puppet/v3/node` request, Puppet Server issues: - - a `facts find` request to PuppetDB for facts about the node, if they aren't yet cached (typically the first time it requests facts for the node). **Metric ID:** `[puppetdb facts find]`. -- During handling of a `/puppet/v3/catalog` request, Puppet Server issues several requests: - - a PuppetDB `replace facts` request, to replace the facts for the agent in PuppetDB with the facts it received from the agent. **Metric ID:** `[puppetdb, command, replace_facts]`. - - a PuppetDB `resource search` request, to search for resources if exported resources are used. **Metric ID:** `[puppetdb, resource, search]`.
- - a PuppetDB `query` request, if the `puppetdb_query` function is used in Puppet code. **Metric ID:** `[puppetdb, query]`. - - a PuppetDB `replace catalog` request, to replace the catalog for the agent in PuppetDB with the newly compiled catalog. **Metric ID:** `[puppetdb, command, replace_catalog]`. -- During handling of a `/puppet/v3/report` request, Puppet Server issues: - - a PuppetDB `store report` request, to store the submitted report. **Metric ID:** `[puppetdb command store_report]`. +- During handling of a `/puppet/v3/node` request, OpenVox Server issues: + - a `facts find` request to OpenVoxDB for facts about the node, if they aren't yet cached (typically the first time it requests facts for the node). **Metric ID:** `[puppetdb facts find]`. +- During handling of a `/puppet/v3/catalog` request, OpenVox Server issues several requests: + - an OpenVoxDB `replace facts` request, to replace the facts for the agent in OpenVoxDB with the facts it received from the agent. **Metric ID:** `[puppetdb, command, replace_facts]`. + - an OpenVoxDB `resource search` request, to search for resources if exported resources are used. **Metric ID:** `[puppetdb, resource, search]`. + - an OpenVoxDB `query` request, if the `puppetdb_query` function is used in Puppet code. **Metric ID:** `[puppetdb, query]`. + - an OpenVoxDB `replace catalog` request, to replace the catalog for the agent in OpenVoxDB with the newly compiled catalog. **Metric ID:** `[puppetdb, command, replace_catalog]`. +- During handling of a `/puppet/v3/report` request, OpenVox Server issues: + - an OpenVoxDB `store report` request, to store the submitted report. **Metric ID:** `[puppetdb command store_report]`. - a request to the configured `reports_url` to store the report, if the HTTP report processor is enabled. **Metric ID:** `[puppet report http]`. ## Configuring @@ -38,8 +37,8 @@ HTTP client metrics are enabled by default, but can be disabled by setting `metr These metrics also depend on the `server-id` setting in the `metrics` section of `puppetserver.conf`. This defaults to `localhost`, and while `localhost` can collect metrics, change this setting to something unique to avoid metric naming collisions when exporting metrics to an external tool, such as Graphite. -This data is all available via the [status API][] endpoint, at `https://<hostname>:8140/status/v1/services/master?level=debug`. Puppet Server 5.0 adds a `http-client-metrics` keyword in the map. If -metrics are not enabled, or if Puppet Server has not issued any requests yet, then this array will be empty, like so: `"http-client-metrics": []`. +This data is all available via the [status API][] endpoint, in the `http-client-metrics` array at `https://<hostname>:8140/status/v1/services/master?level=debug`. If +metrics are not enabled, or if OpenVox Server has not issued any requests yet, this array will be empty, like so: `"http-client-metrics": []`. In the [sample Grafana dashboard](./sample-puppetserver-metrics-dashboard.json), the `External HTTP Communications` graph visualizes all of these metrics, and the tooltip describes each of them.
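+For a quick spot-check of these metrics without a dashboard, you can query the status API directly. The following sketch is illustrative only, not shipped tooling: it assumes Python 3 with the third-party `requests` library, a hypothetical hostname of `openvox.example.com`, and a standard agent certificate layout, and the exact JSON layout of the debug output can vary between releases, so verify the keys against your own `?level=debug` response.
+
+```python
+import requests
+
+SSLDIR = "/etc/puppetlabs/puppet/ssl"  # adjust to your installation
+HOST = "openvox.example.com"           # hypothetical server name
+
+resp = requests.get(
+    f"https://{HOST}:8140/status/v1/services/master",
+    params={"level": "debug"},
+    # Client-certificate auth; some installations permit unauthenticated
+    # status queries, in which case cert= can be omitted.
+    cert=(f"{SSLDIR}/certs/client.pem", f"{SSLDIR}/private_keys/client.pem"),
+    verify=f"{SSLDIR}/certs/ca.pem",
+)
+resp.raise_for_status()
+
+# On recent releases the array lives under status -> experimental.
+metrics = resp.json()["status"]["experimental"]["http-client-metrics"]
+for m in sorted(metrics, key=lambda m: m.get("mean", 0), reverse=True):
+    print(m.get("metric-id"), "count:", m.get("count"), "mean:", m.get("mean"))
+```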
diff --git a/docs/_openvox-server_8x/puppet_server_metrics.markdown b/docs/_openvox-server_8x/puppet_server_metrics.markdown index 9c90b5c83..c0b496edb 100644 --- a/docs/_openvox-server_8x/puppet_server_metrics.markdown +++ b/docs/_openvox-server_8x/puppet_server_metrics.markdown @@ -1,7 +1,6 @@ --- layout: default -title: "Monitoring Puppet Server metrics" -canonical: "/puppetserver/latest/puppet_server_metrics.html" +title: "Monitoring OpenVox Server metrics" --- [metrics API]: ./metrics-api/v1/metrics_api.html @@ -13,69 +12,67 @@ canonical: "/puppetserver/latest/puppet_server_metrics.html" [`grafanadash`]: https://forge.puppet.com/cprice404/grafanadash [`metrics.conf`]: ./config_file_metrics.html -Puppet Server tracks several advanced performance and health metrics, all of which take advantage of the [metrics API][]. You can track these metrics using: +OpenVox Server tracks several advanced performance and health metrics, all of which take advantage of the [metrics API][]. You can track these metrics using: - Customizable, networked [Graphite and Grafana instances](#getting-started-with-graphite) - [HTTP client metrics][] - [Metrics API][metrics API] endpoints -To visualize Puppet Server metrics, either: +To visualize OpenVox Server metrics, either: -- Export them to a Graphite installation. The [grafanadash](https://forge.puppet.com/puppetlabs/grafanadash) module helps you set up a Graphite instance, configure Puppet Server for exporting to it, and +- Use the [puppet-operational-dashboards](https://forge.puppet.com/puppetlabs/puppet_operational_dashboards) module. +- Export them to a Graphite installation. The [grafanadash](https://forge.puppet.com/puppetlabs/grafanadash) module helps you set up a Graphite instance, configure OpenVox Server for exporting to it, and visualize the output with Grafana. You can later integrate this with your Graphite installation. For more information, see Getting started with Graphite below. -- Use the [puppet-metrics-dashboard](https://forge.puppet.com/puppetlabs/puppet_metrics_dashboard) — this does not go through the Graphite exporting feature. The puppet-metrics-dashboard queries the metrics - HTTP API directly and saves the results to disk. It also includes a Docker image of Graphite and Grafana for easy visualization. For more information, see - [Puppet Metrics Collection](https://github.com/puppetlabs/best-practices/blob/master/puppet-enterprise-metrics-collection.md). -The puppet-metrics-dashboard is the recommended option for FOSS users, as it is an easier way to save and visualize Puppet Server metrics. The `grafanadash` is still useful for users exporting to their existing -Graphite installation. +The puppet-operational-dashboards module is the recommended option for FOSS users, as it is an easier way to save and visualize OpenVox Server metrics. The `grafanadash` module is still useful for users +exporting to their existing Graphite installation. -> **Note:** The `grafanadash` and `puppet-graphite` modules referenced in this document are _not_ Puppet-supported modules. They are provided as testing and demonstration purposes _only_. +> **Note:** The `grafanadash` and `puppet-graphite` modules referenced in this document are community modules, not OpenVox-supported. They are provided for testing and demonstration purposes _only_. ## Getting started with Graphite -[Graphite][] is a third-party monitoring application that stores real-time metrics and provides customizable ways to view them. 
Puppet Server can export many metrics to Graphite, and exports a set of metrics by -default that is designed to be immediately useful to Puppet administrators. +[Graphite][] is a third-party monitoring application that stores real-time metrics and provides customizable ways to view them. OpenVox Server can export many metrics to Graphite, and exports a set of metrics +by default that is designed to be immediately useful to administrators. -> **Note:** A Graphite setup is deeply customizable and can report many Puppet Server metrics on demand. However, it requires considerable configuration and additional server resources. To retrieve metrics +> **Note:** A Graphite setup is deeply customizable and can report many OpenVox Server metrics on demand. However, it requires considerable configuration and additional server resources. To retrieve metrics > through HTTP requests, see the metrics API. -To start using Graphite with Puppet Server, you must: +To start using Graphite with OpenVox Server, you must: - [Install and configure a Graphite server](https://graphite.readthedocs.io/en/latest/install.html). -- [Enable Puppet Server's Graphite support](#enabling-puppet-servers-graphite-support). +- [Enable OpenVox Server's Graphite support](#enabling-openvox-servers-graphite-support). [Grafana][] provides a web-based customizable dashboard that's compatible with Graphite, and the [`grafanadash`][] module installs and configures it by default. ### Using the `grafanadash` module to quickly set up a Graphite demo server -The [`grafanadash`][] Puppet module quickly installs and configures a basic test instance of [Graphite][] with the [Grafana][] extension. When installed on a dedicated Puppet agent, this module provides a quick -demonstration of how Graphite and Grafana can consume and display Puppet Server metrics. +The [`grafanadash`][] module quickly installs and configures a basic test instance of [Graphite][] with the [Grafana][] extension. When installed on a dedicated agent, this module provides a quick +demonstration of how Graphite and Grafana can consume and display OpenVox Server metrics. -> **WARNING:** The `grafanadash` module is _not_ a Puppet-supported module. It is designed for testing and demonstration purposes _only_, and tested against CentOS 6 only. +> **WARNING:** The `grafanadash` module is _not_ an OpenVox-supported module. It is designed for testing and demonstration purposes _only_, and tested against CentOS 6 only. > -> Also, install this module on a dedicated agent _only_. Do **not** install it on the node running Puppet Server, because the module makes security policy changes that are inappropriate for a Puppet master: +> Also, install this module on a dedicated agent _only_. Do **not** install it on the node running OpenVox Server, because the module makes security policy changes that are inappropriate for a server: > > - SELinux can cause issues with Graphite and Grafana, so the module temporarily disables SELinux. If you reboot the machine after using the module to install Graphite, you must disable SELinux again and > restart the Apache service to use Graphite and Grafana. > - The module disables the `iptables` firewall and enables cross-origin resource sharing on Apache, which are potential security risks. -#### Installing the `grafanadash` Puppet module +#### Installing the `grafanadash` module -Install the `grafanadash` Puppet module on a \*nix agent. 
The module's `grafanadash::dev` class installs and configures a Graphite server, the Grafana extension, and a default dashboard. +Install the `grafanadash` module on a \*nix agent. The module's `grafanadash::dev` class installs and configures a Graphite server, the Grafana extension, and a default dashboard. -1. [Install a \*nix Puppet agent](https://puppet.com/docs/puppet/latest/install_linux.html) to serve as the Graphite server. +1. Install a \*nix agent to serve as the Graphite server. -2. As root on the Puppet agent node, run `puppet module install puppetlabs-grafanadash`. +2. As root on the agent node, run `puppet module install puppetlabs-grafanadash`. -3. As root on the Puppet agent node, run `puppet apply -e 'include grafanadash::dev'`. +3. As root on the agent node, run `puppet apply -e 'include grafanadash::dev'`. #### Running Grafana -Grafana runs as a web dashboard, and the `grafanadash` module configures it to use port 10000 by default. To view Puppet metrics in Grafana, you must create a metrics dashboard, or edit and import a JSON-based -dashboard that includes Puppet metrics, such as the [sample Grafana dashboard][] that we provide. +Grafana runs as a web dashboard, and the `grafanadash` module configures it to use port 10000 by default. To view OpenVox Server metrics in Grafana, you must create a metrics dashboard, or edit and import a +JSON-based dashboard that includes OpenVox Server metrics, such as the [sample Grafana dashboard][] that we provide. -1. In a web browser on a computer that can reach the Puppet agent node running Grafana, navigate to `http://<agent hostname>:10000`. +1. In a web browser on a computer that can reach the agent node running Grafana, navigate to `http://<agent hostname>:10000`. There, you'll see a test screen that indicates whether Grafana can successfully connect to your Graphite server. @@ -86,7 +83,7 @@ dashboard that includes Puppet metrics, such as the [sample Grafana dashboard][] a. Open the `sample_metrics_dashboard.json` file in a text editor on the same computer you're using to access Grafana. - b. Throughout the file, replace our sample hostname of `master.example.com` with your Puppet Server's hostname. (**Note:** This value **must** be used as the `metrics_server_id` setting, as configured + b. Throughout the file, replace our sample hostname of `master.example.com` with your OpenVox Server's hostname. (**Note:** This value **must** match the `metrics_server_id` setting, as configured below.) c. Save the file. @@ -95,15 +92,15 @@ dashboard that includes Puppet metrics, such as the [sample Grafana dashboard][] 4. Navigate to and select the edited JSON file. -This loads a dashboard with nine graphs that display various metrics exported from the Puppet Server to the Graphite server. (For details, see -[Using the Grafana dashboard](#using-the-sample-grafana-dashboard).) However, these graphs will remain empty until you enable Puppet Server's Graphite metrics. +This loads a dashboard with nine graphs that display various metrics exported from OpenVox Server to the Graphite server. (For details, see +[Using the Grafana dashboard](#using-the-sample-grafana-dashboard).) However, these graphs will remain empty until you enable OpenVox Server's Graphite metrics. -> Note: If you want to integrate Puppet Server's Grafana exporting with your own infrastructure, use the `grafanadash` module. If you want visualization of metrics, use the `puppetlabs-puppet_metrics_dashboard` -> module.
See [Puppet Metrics Collection](https://github.com/puppetlabs/best-practices/blob/master/puppet-enterprise-metrics-collection.md) for more information. +> Note: If you want to integrate OpenVox Server's Grafana exporting with your own infrastructure, use the `grafanadash` module. If you just want to visualize metrics, use the +> `puppet_operational_dashboards` module. -### Enabling Puppet Server's Graphite support +### Enabling OpenVox Server's Graphite support -Configure Puppet Server's [`metrics.conf`](./config_file_metrics.html) file to enable and use the Graphite server. +Configure OpenVox Server's [`metrics.conf`](./config_file_metrics.html) file to enable and use the Graphite server. 1. Set the `enabled` parameter to true in `metrics.registries.puppetserver.reporters.graphite`: @@ -128,7 +125,7 @@ Configure Puppet Server's [`metrics.conf`](./config_file_metrics.html) file to e 2. Configure the Graphite host settings in `metrics.reporters.graphite`: - **host:** The Graphite host's IP address as a string. - **port:** The Graphite host's port number. - - **update-interval-seconds:** How frequently Puppet Server should send metrics to Graphite. + - **update-interval-seconds:** How frequently OpenVox Server should send metrics to Graphite. 3. Verify that `metrics.registries.puppetserver.reporters.jmx.enabled` is not set to false. Its default setting is true. @@ -138,49 +135,49 @@ Configure Puppet Server's [`metrics.conf`](./config_file_metrics.html) file to e The [sample Grafana dashboard][] provides what we think is an interesting starting point. You can click on the title of any graph, and then click **edit** to tweak the graphs as you see fit. -- **Active requests:** This graph serves as a "health check" for the Puppet Server. It shows a flat line that represents the number of CPUs you have in your system, a metric that indicates the total number of HTTP requests actively being processed by the server at any moment in time, and a rolling average of the number of active requests. If the number of requests being processed exceeds the number of CPUs for any significant length of time, your server might be receiving more requests than it can efficiently process. +- **Active requests:** This graph serves as a "health check" for OpenVox Server. It shows a flat line that represents the number of CPUs you have in your system, a metric that indicates the total number of + HTTP requests actively being processed by the server at any moment in time, and a rolling average of the number of active requests. If the number of requests being processed exceeds the number of CPUs for + any significant length of time, your server might be receiving more requests than it can efficiently process. -- **Request durations:** This graph breaks down the average response times for different types of requests made by Puppet agents. This indicates how expensive catalog and report requests are compared to the - other types of requests. It also provides a way to see changes in catalog compilation times when you modify your Puppet code. A sharp curve upward for all of the types of requests indicates an overloaded - server, and they should trend downward after reducing the load on the server. +- **Request durations:** This graph breaks down the average response times for different types of requests made by agents. This indicates how expensive catalog and report requests are compared to the other + types of requests.
It also provides a way to see changes in catalog compilation times when you modify your Puppet code. A sharp curve upward for all of the types of requests indicates an overloaded server, + and they should trend downward after reducing the load on the server. -- **Request ratios:** This graph shows how many requests of each type that Puppet Server has handled. Under normal circumstances, you should see about the same number of catalog, node, or report requests, +- **Request ratios:** This graph shows how many requests of each type OpenVox Server has handled. Under normal circumstances, you should see about the same number of catalog, node, or report requests, because these all happen one time per agent run. The number of file and file metadata requests correlates to how many remote file resources are in the agents' catalogs. -- **Communications with PuppetDB:** This graph tracks the amount of time it takes Puppet Server to send data and requests for common operations to, and receive responses from, PuppetDB. +- **Communications with OpenVoxDB:** This graph tracks the amount of time it takes OpenVox Server to send data and requests for common operations to, and receive responses from, OpenVoxDB. - **JRubies**: This graph tracks how many JRubies are in use, how many are free, the mean number of free JRubies, and the mean number of requested JRubies. - If the number of free JRubies is often less than one, or the mean number of free JRubies is less than one, Puppet Server is requesting and consuming more JRubies than are available. This overload reduces - Puppet Server's performance. While this might simply be a symptom of an under-resourced server, it can also be caused by poorly optimized Puppet code or bottlenecks in the server's communications with - PuppetDB if it is in use. + If the number of free JRubies is often less than one, or the mean number of free JRubies is less than one, OpenVox Server is requesting and consuming more JRubies than are available. This overload reduces + OpenVox Server's performance. While this might simply be a symptom of an under-resourced server, it can also be caused by poorly optimized Puppet code or bottlenecks in the server's communications with + OpenVoxDB if it is in use. - If catalog compilation times have increased but PuppetDB performance remains the same, examine your Puppet code for potentially unoptimized code. If PuppetDB communication times have increased, tune PuppetDB - for better performance or allocate more resources to it. + If catalog compilation times have increased but OpenVoxDB performance remains the same, examine your Puppet code for potentially unoptimized code. If OpenVoxDB communication times have increased, tune + OpenVoxDB for better performance or allocate more resources to it. - If neither catalog compilation nor PuppetDB communication times are degraded, the Puppet Server process might be under-resourced on your server. If you have available CPU time and memory, - [increase the number of JRuby instances](./tuning_guide.html) to allow it to allocate more JRubies. Otherwise, consider adding additional compile masters to distribute the catalog compilation load. + If neither catalog compilation nor OpenVoxDB communication times are degraded, the OpenVox Server process might be under-resourced on your server. If you have available CPU time and memory, + [increase the number of JRuby instances](./tuning_guide.html) to allow it to allocate more JRubies.
Otherwise, consider adding additional compilers to distribute the catalog compilation load. - **JRuby Timers**: This graph tracks several JRuby pool metrics. - - The borrow time represents the mean amount of time that Puppet Server uses ("borrows") each JRuby from the pool. + - The borrow time represents the mean amount of time that OpenVox Server uses ("borrows") each JRuby from the pool. - - The wait time represents the total amount of time that Puppet Server waits for a free JRuby instance. + - The wait time represents the total amount of time that OpenVox Server waits for a free JRuby instance. - - The lock held time represents the amount of time that Puppet Server holds a lock on the pool, during which JRubies cannot be borrowed. + - The lock held time represents the amount of time that OpenVox Server holds a lock on the pool, during which JRubies cannot be borrowed. - - The lock wait time represents the amount of time that Puppet Server waits to acquire a lock on the pool. + - The lock wait time represents the amount of time that OpenVox Server waits to acquire a lock on the pool. These metrics help identify sources of potential JRuby allocation bottlenecks. -- **Memory Usage**: This graph tracks how much heap and non-heap memory that Puppet Server uses. +- **Memory Usage**: This graph tracks how much heap and non-heap memory OpenVox Server uses. -- **Compilation:** This graph breaks catalog compilation down into various phases to show how expensive each phase is on the master. +- **Compilation:** This graph breaks catalog compilation down into various phases to show how expensive each phase is on the server. ### Example Grafana dashboard excerpt -The following example shows only the `targets` parameter of a dashboard to demonstrate the full names of Puppet's exported Graphite metrics (assuming the Puppet Server instance has a domain of +The following example shows only the `targets` parameter of a dashboard to demonstrate the full names of OpenVox Server's exported Graphite metrics (assuming the OpenVox Server instance has a domain of `master.example.com`) and a way to add targets directly to an exported Grafana dashboard's JSON content. ```json @@ -214,11 +211,11 @@ See the [sample Grafana dashboard][] for a detailed example of how a Grafana das ## Available Graphite metrics -The following HTTP and Puppet profiler metrics are available from the Puppet Server and can be added to your metrics reporting. Each metric is prefixed with `puppetlabs.`; for instance, the +The following HTTP and Puppet profiler metrics are available from OpenVox Server and can be added to your metrics reporting. Each metric is prefixed with `puppetlabs.<server-id>.`; for instance, the Grafana dashboard file refers to the `num-cpus` metric as `puppetlabs.<server-id>.num-cpus`. Additionally, metrics might be suffixed by fields, such as `count` or `mean`, that return more specific data points. For instance, the `puppetlabs.<server-id>.compiler.mean` metric returns only the mean -length of time it takes Puppet Server to compile a catalog. +length of time it takes OpenVox Server to compile a catalog. To aid with reference, metrics in the list below are segmented into three groups: @@ -243,12 +240,12 @@ To aid with reference, metrics in the list below are segmented into three groups - **Other:** Metrics that have unique sets of available fields. -> **Note:** Puppet Server can export many, many metrics -- so many that enabling all of them at large installations can overwhelm Grafana servers.
To avoid this, Puppet Server exports only a subset of its +> **Note:** OpenVox Server can export many, many metrics -- so many that enabling all of them at large installations can overwhelm Grafana servers. To avoid this, OpenVox Server exports only a subset of its > available metrics by default. This default set is designed to report the most relevant metrics for administrators monitoring performance and stability. > -> To add to the default list of exported metrics, see [Modifying Puppet Server's exported metrics](#modifying-puppet-servers-exported-metrics). +> To add to the default list of exported metrics, see [Modifying OpenVox Server's exported metrics](#modifying-openvox-servers-exported-metrics). -Puppet Server exports each metric in the lists below by default. +OpenVox Server exports each metric in the lists below by default. ### Statistical metrics @@ -257,7 +254,7 @@ Puppet Server exports each metric in the lists below by default. - `puppetlabs.<server-id>.compiler`: The time spent compiling catalogs. This metric represents the sum of the `compiler.compile`, `static_compile`, `find_facts`, and `find_node` fields. - `puppetlabs.<server-id>.compiler.compile`: The total time spent compiling dynamic (non-static) catalogs. - To measure specific nodes and environments, see [Modifying Puppet Server's exported metrics](#modifying-puppet-servers-exported-metrics). + To measure specific nodes and environments, see [Modifying OpenVox Server's exported metrics](#modifying-openvox-servers-exported-metrics). - `puppetlabs.<server-id>.compiler.find_facts`: The time spent parsing facts. @@ -280,15 +277,14 @@ Puppet Server exports each metric in the lists below by default. - `puppetlabs.<server-id>.http.active-histo`: A histogram of active HTTP requests over time. -- `puppetlabs.<server-id>.http.puppet-v3-catalog-/*/-requests`: The time Puppet Server has spent handling catalog requests, including time spent waiting for an available JRuby instance. +- `puppetlabs.<server-id>.http.puppet-v3-catalog-/*/-requests`: The time OpenVox Server has spent handling catalog requests, including time spent waiting for an available JRuby instance. -- `puppetlabs.<server-id>.http.puppet-v3-environment-/*/-requests`: The time Puppet Server has spent handling environment requests, including time spent waiting for an available JRuby instance. +- `puppetlabs.<server-id>.http.puppet-v3-environment-/*/-requests`: The time OpenVox Server has spent handling environment requests, including time spent waiting for an available JRuby instance. -- `puppetlabs.<server-id>.http.puppet-v3-environment_classes-/*/-requests`: The time spent handling requests to the [`environment_classes` API endpoint](./puppet-api/v3/environment_classes.html), which - the Node Classifier uses to refresh classes. +- `puppetlabs.<server-id>.http.puppet-v3-environment_classes-/*/-requests`: The time spent handling requests to the + [`environment_classes` API endpoint](./puppet-api/v3/environment_classes.html), which the Node Classifier uses to refresh classes. -- `puppetlabs.<server-id>.http.puppet-v3-environments-requests`: The time spent handling requests to the - [`environments` API endpoint](https://puppet.com/docs/puppet/latest/http_api/http_environments.html) requests. +- `puppetlabs.<server-id>.http.puppet-v3-environments-requests`: The time spent handling requests to the `environments` API endpoint. - The following metrics measure the time spent handling file-related API endpoints: - `puppetlabs.<server-id>.http.puppet-v3-file_bucket_file-/*/-requests` @@ -300,38 +296,38 @@ Puppet Server exports each metric in the lists below by default.
- `puppetlabs.<server-id>.http.puppet-v3-file_metadatas-/*/-requests` - `puppetlabs.<server-id>.http.puppet-v3-node-/*/-requests`: The time spent handling node requests, which are sent to the Node Classifier. A bottleneck here might indicate an issue with the Node - Classifier or PuppetDB. + Classifier or OpenVoxDB. -- `puppetlabs.<server-id>.http.puppet-v3-report-/*/-requests`: The time spent handling report requests. A bottleneck here might indicate an issue with PuppetDB. +- `puppetlabs.<server-id>.http.puppet-v3-report-/*/-requests`: The time spent handling report requests. A bottleneck here might indicate an issue with OpenVoxDB. -- `puppetlabs.<server-id>.http.puppet-v3-static_file_content-/*/-requests`: The time spent handling requests to the [`static_file_content` API endpoint](./puppet-api/v3/static_file_content.html) used by - Direct Puppet with file sync. +- `puppetlabs.<server-id>.http.puppet-v3-static_file_content-/*/-requests`: The time spent handling requests to the + [`static_file_content` API endpoint](./puppet-api/v3/static_file_content.html) used by Direct Puppet with file sync. #### JRuby metrics -Puppet Server uses an embedded JRuby interpreter to execute Ruby code. By default, JRuby spawns parallel instances known as JRubies to execute Ruby code, which occurs during most Puppet Server activities. When -`multithreaded` is set to `true`, a single JRuby is used instead to process a limited number of threads in parallel. For each of these metrics, they refer to JRuby instances by default and JRuby threads in -multithreaded mode. +OpenVox Server uses an embedded JRuby interpreter to execute Ruby code. By default, JRuby spawns parallel instances known as JRubies to execute Ruby code, which occurs during most OpenVox Server activities. +When `multithreaded` is set to `true`, a single JRuby is used instead to process a limited number of threads in parallel. Each of these metrics refers to JRuby instances by default, and to JRuby threads +in multithreaded mode. -See [Tuning JRuby on Puppet Server](./tuning_guide.html) for details on adjusting JRuby settings. +See [Tuning JRuby on OpenVox Server](./tuning_guide.html) for details on adjusting JRuby settings. - `puppetlabs.<server-id>.jruby.borrow-timer`: The time spent with a borrowed JRuby. -- `puppetlabs.<server-id>.jruby.free-jrubies-histo`: A histogram of free JRubies over time. This metric's average value should greater than 1; if it isn't, [more JRubies](./tuning_guide.html) or another - compile master might be needed to keep up with requests. +- `puppetlabs.<server-id>.jruby.free-jrubies-histo`: A histogram of free JRubies over time. This metric's average value should be greater than 1; if it isn't, [more JRubies](./tuning_guide.html) or + another compiler might be needed to keep up with requests. - `puppetlabs.<server-id>.jruby.lock-held-timer`: The time spent holding the JRuby lock. - `puppetlabs.<server-id>.jruby.lock-wait-timer`: The time spent waiting to acquire the JRuby lock. -- `puppetlabs.<server-id>.jruby.requested-jrubies-histo`: A histogram of requested JRubies over time. This increases as the number of free JRubies, or the `free-jrubies-histo` metric, decreases, which can - suggest that the server's capacity is being depleted. +- `puppetlabs.<server-id>.jruby.requested-jrubies-histo`: A histogram of requested JRubies over time. This increases as the number of free JRubies, or the `free-jrubies-histo` metric, decreases, + which can suggest that the server's capacity is being depleted. - `puppetlabs.<server-id>.jruby.wait-timer`: The time spent waiting to borrow a JRuby.
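+If you want to check JRuby pool health without a Graphite pipeline, the same figures are exposed through the status API's `jruby-metrics` service. The sketch below is an illustration, not shipped tooling: it assumes Python 3 with the third-party `requests` library, a hypothetical hostname, and field names as observed on recent releases (`average-free-jrubies`, `num-free-jrubies`), so confirm them against your own debug output.
+
+```python
+import requests
+
+resp = requests.get(
+    "https://openvox.example.com:8140/status/v1/services/jruby-metrics",
+    params={"level": "debug"},
+    verify="/etc/puppetlabs/puppet/ssl/certs/ca.pem",  # add client certs if required
+)
+resp.raise_for_status()
+jr = resp.json()["status"]["experimental"]["metrics"]
+
+# Same rule of thumb as the free-jrubies-histo metric above: a long-run
+# average below 1 free JRuby means the pool is saturated.
+if jr.get("average-free-jrubies", 0) < 1:
+    print("JRuby pool saturated: add JRubies or another compiler")
+print("free now:", jr.get("num-free-jrubies"))
+```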
-#### PuppetDB metrics +#### OpenVoxDB metrics -The following metrics measure the time that Puppet Server spends sending or receiving data from PuppetDB. +The following metrics measure the time that OpenVox Server spends sending or receiving data from OpenVoxDB. The metric names use the `puppetdb` identifier for compatibility with existing tooling. - `puppetlabs.<server-id>.puppetdb.catalog.save` @@ -380,7 +376,7 @@ The following metrics measure the time that Puppet Server spends sending or rece - `puppetlabs.<server-id>.http.puppet-v3-status-/*/-percentage` -- `puppetlabs.<server-id>.http.total-requests`: The total requests handled by Puppet Server. +- `puppetlabs.<server-id>.http.total-requests`: The total requests handled by OpenVox Server. #### JRuby metrics @@ -396,11 +392,11 @@ The following metrics measure the time that Puppet Server spends sending or rece - `puppetlabs.<server-id>.jruby.return-count`: The number of JRubies successfully returned to the pool. -- `puppetlabs.<server-id>.jruby.num-free-jrubies`: The number of free JRuby instances. If this number is often 0, more requests are coming in than the server has available JRuby instances. To alleviate - this, increase the number of JRuby instances on the Server or add additional compile masters. +- `puppetlabs.<server-id>.jruby.num-free-jrubies`: The number of free JRuby instances. If this number is often 0, more requests are coming in than the server has available JRuby instances. To + alleviate this, increase the number of JRuby instances on the server or add additional compilers. -- `puppetlabs.<server-id>.jruby.num-jrubies`: The total number of JRuby instances on the server, governed by the `max-active-instances` setting. See [Tuning JRuby on Puppet Server](./tuning_guide.html) - for details. +- `puppetlabs.<server-id>.jruby.num-jrubies`: The total number of JRuby instances on the server, governed by the `max-active-instances` setting. See + [Tuning JRuby on OpenVox Server](./tuning_guide.html) for details. ### Other metrics @@ -408,7 +404,7 @@ These metrics measure raw resource availability and capacity. - `puppetlabs.<server-id>.num-cpus`: The number of available CPUs on the server. -- `puppetlabs.<server-id>.uptime`: The Puppet Server process's uptime. +- `puppetlabs.<server-id>.uptime`: The OpenVox Server process's uptime. - Total, heap, and non-heap memory that's committed (`committed`), initialized (`init`), and used (`used`), and the maximum amount of memory that can be used (`max`). - `puppetlabs.<server-id>.memory.total.committed` @@ -435,9 +431,9 @@ These metrics measure raw resource availability and capacity. - `puppetlabs.<server-id>.memory.non-heap.max` -For details about HTTP client metrics, which measure performance of Puppet Server's requests to other services, see [their documentation][HTTP client metrics]. +For details about HTTP client metrics, which measure performance of OpenVox Server's requests to other services, see [their documentation][HTTP client metrics]. -### Modifying Puppet Server's exported metrics +### Modifying OpenVox Server's exported metrics In addition to the above default metrics, you can also export metrics measuring specific environments and nodes.
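+As a sketch of what that can look like, the fragment below follows the `metrics.registries.puppetserver` conventions in [`metrics.conf`][] to allow two extra Graphite metrics for the `production` environment. The `metrics-allowed` setting name and the unprefixed entry format (the `puppetlabs.<server-id>.` prefix is added automatically) should be confirmed against the [`metrics.conf`][] reference for your release, and the node name is a hypothetical example.
+
+```
+metrics: {
+    registries: {
+        puppetserver: {
+            # Exported in addition to the default set; entries omit the
+            # "puppetlabs.<server-id>." prefix.
+            metrics-allowed: [
+                "compiler.compile.production",
+                "compiler.compile.production.mynode.example.com"
+            ]
+        }
+    }
+}
+```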
diff --git a/docs/_openvox-server_8x/puppet_server_metrics_performance.markdown b/docs/_openvox-server_8x/puppet_server_metrics_performance.markdown index 672f7c938..6263dee5e 100644 --- a/docs/_openvox-server_8x/puppet_server_metrics_performance.markdown +++ b/docs/_openvox-server_8x/puppet_server_metrics_performance.markdown @@ -1,7 +1,6 @@ --- layout: default title: "Applying metrics to improve performance" -canonical: "/puppetserver/latest/puppet_server_metrics_performance.html" --- [metrics]: ./puppet_server_metrics.html @@ -11,25 +10,18 @@ canonical: "/puppetserver/latest/puppet_server_metrics_performance.html" [puppetserver.conf]: ./config_file_puppetserver.html [HTTP client metrics]: ./http_client_metrics.html -Puppet Server produces [several types of metrics][metrics] that administrators can use to identify performance bottlenecks or capacity issues. Interpreting this data is largely up to you and depends on many -factors unique to your installation and usage, but there are some common trends in metrics that you can use to make Puppet Server function better. +OpenVox Server produces [several types of metrics][metrics] that administrators can use to identify performance bottlenecks or capacity issues. Interpreting this data is largely up to you and depends on many +factors unique to your installation and usage, but there are some common trends in metrics that you can use to make OpenVox Server function better. -> **Note:** This document assumes that you are already familiar with Puppet Server's [metrics tools][metrics], which report on relevant information, and its [tuning guide][], which provides instructions for -> modifying relevant settings. To put it another way, this guide attempts to explain questions about "why" Puppet Server performs the way it does for you, while your servers are the "who", Server [metrics][] -> help you track down exactly "what" is affecting performance, and the [tuning guide][] explains "how" you can improve performance. +> **Note:** This document assumes that you are already familiar with OpenVox Server's [metrics tools][metrics], which report on relevant information, and its [tuning guide][], which provides instructions for +> modifying relevant settings. To put it another way: this guide explains "why" OpenVox Server performs the way it does, your servers are the "who", Server [metrics][] +> help you track down exactly "what" is affecting performance, and the [tuning guide][] explains "how" to improve it. > -> **If you're using Puppet Enterprise (PE),** consult its documentation instead of this guide for PE-specific requirements, settings, and instructions: > -> - [Large environment installations (LEI)](https://puppet.com/docs/pe/latest/installing/hardware_requirements.html#large-environment-hardware-requirements) > - [Compile masters](https://puppet.com/docs/pe/latest/installing/installing_compile_masters.html) > - [Load balancing](https://puppet.com/docs/pe/latest/installing/installing_compile_masters.html#using-load-balancers-with-compile-masters) > - [High availability](https://puppet.com/docs/pe/latest/high_availability/high_availability_overview.html) ## Measuring capacity with JRubies -Puppet Server uses JRuby, which rations server resources in the form of JRuby instances in default mode, and JRuby threads in multithreaded mode. Puppet Server consumes these as it handles requests. A simple -way of explaining Puppet Server performance is to remember that your Server infrastructure must be capable of providing enough JRuby instances or threads for the amount of activity it handles.
Anything that -reduces or limits your server's capacity to produce JRubies also degrades Puppet Server's performance. +OpenVox Server uses JRuby, which rations server resources in the form of JRuby instances in default mode, and JRuby threads in multithreaded mode. OpenVox Server consumes these as it handles requests. A simple +way of explaining OpenVox Server performance is to remember that your Server infrastructure must be capable of providing enough JRuby instances or threads for the amount of activity it handles. Anything that +reduces or limits your server's capacity to produce JRubies also degrades OpenVox Server's performance. Several factors can limit your Server infrastructure's ability to produce JRubies. @@ -38,56 +30,56 @@ Several factors can limit your Server infrastructure's ability to produce JRubie > **Note:** These guidelines for interpreting metrics generally apply to both default and multithreaded mode. However, threads are much cheaper in terms of system resources, since they do not need to duplicate > all of Puppet's runtime, so you may have more vertical scalability in multithreaded mode. -If your free JRubies are 0 or fewer, your server is receiving more requests for JRubies than it can provide, which means it must queue those requests to wait until resources are available. Puppet Server +If your free JRubies are 0 or fewer, your server is receiving more requests for JRubies than it can provide, which means it must queue those requests to wait until resources are available. OpenVox Server performs best when the average number of free JRubies is above 1, which means Server always has enough resources to immediately handle incoming requests. -There are two indicators in Puppet Server's metrics that can help you identify a request-handling capacity issue: +There are two indicators in OpenVox Server's metrics that can help you identify a request-handling capacity issue: -- **Average JRuby Wait Time:** This refers to the amount of time Puppet Server has to wait for an available JRuby to become available, and increases when each JRuby is held for a longer period of time, which reduces the overall number of free JRubies and forces new requests to wait longer for available resources. +- **Average JRuby Wait Time:** This refers to the amount of time OpenVox Server waits for a JRuby to become available, and increases when each JRuby is held for a longer period of time, which reduces the overall number of free JRubies and forces new requests to wait longer for available resources. -- **Average JRuby Borrow Time:** This refers to the amount of time that Puppet Server "holds" a JRuby as a resource for a request, and increases because of other factors on the server. +- **Average JRuby Borrow Time:** This refers to the amount of time that OpenVox Server "holds" a JRuby as a resource for a request, and increases because of other factors on the server. If wait time increases but borrow time stays the same, your Server infrastructure might be serving too many agents. This indicates that Server can easily handle requests but is receiving too many at one time to keep up. -If both wait and borrow times are increasing, something else on your server is causing requests to take longer to process. The longer borrow times suggest that Puppet Server is struggling more than before to +If both wait and borrow times are increasing, something else on your server is causing requests to take longer to process. The longer borrow times suggest that OpenVox Server is struggling more than before to process requests, which has a cascading effect on wait times.
Correlate borrow time increases with other events whenever possible to isolate what activities might cause them, such as a Puppet code change. -If you are setting up Puppet Server for the first time, start by increasing your Server infrastructure's capacity through additional JRubies (if your server has spare CPU and memory resources) or compile -masters until you have more than 0 free JRubies, and your average number of free JRubies are at least 1. After your system can handle its request volume, you can start looking into more specific performance improvements. +If you are setting up OpenVox Server for the first time, start by increasing your Server infrastructure's capacity through additional JRubies (if your server has spare CPU and memory resources) or compilers +until you have more than 0 free JRubies, and your average number of free JRubies is at least 1. After your system can handle its request volume, you can start looking into more specific performance improvements. #### Adding more JRubies -If you must add JRubies, remember that Puppet Server is tuned by default to use one fewer than your total number of CPUs, with a maximum of 4 CPUs, for the number of available JRubies. You can change this by +If you must add JRubies, remember that OpenVox Server is tuned by default to use one fewer than your total number of CPUs, with a maximum of 4 CPUs, for the number of available JRubies. You can change this by setting `max-active-instances` in [`puppetserver.conf`][puppetserver.conf], under the `jruby-puppet` section. In the default mode, increasing `max-active-instances` creates whole independent JRuby instances. In multithreaded mode, this setting instead controls the number of threads that the single JRuby instance will process concurrently, and therefore has different scaling characteristics. Tuning recommendations for -this mode are under development, see [SERVER-2823](https://tickets.puppetlabs.com/browse/SERVER-2823). +this mode are under development. When running in the default mode, follow these guidelines for allocating resources when adding JRubies: Each JRuby also has a certain amount of persistent memory overhead required in order to load both Puppet's Ruby code and your Puppet code. In other words, your available memory sets a baseline limit to how much -Puppet code you can process. Catalog compilation can consume more memory, and Puppet Server's total memory usage depends on the number of agents being served, how frequently those agents check in, how many +Puppet code you can process. Catalog compilation can consume more memory, and OpenVox Server's total memory usage depends on the number of agents being served, how frequently those agents check in, how many resources are being managed on each agent, and the complexity of the manifests and modules in use. With the `jruby-puppet.compile-mode` setting in [`puppetserver.conf`][puppetserver.conf] set to `off`, a JRuby requires at least 40MB of memory under JRuby 1.7 and at least 60MB under JRuby9k in order to compile a nearly empty catalog. This includes memory for the scripting container, Puppet's Ruby code and additional memory overhead. -For real-world catalogs, you can generally add an absolute minimum of 15MB for each additional JRuby. We calculated this amount by comparing a minimal catalog compilation to compiling a catalog for a -[basic role](https://github.com/puppetlabs/puppetlabs-puppetserver_perf_control/blob/production/site/role/manifests/by_size/small.pp) that installs Tomcat and Postgres servers.
+For real-world catalogs, you can generally add an absolute minimum of 15MB for each additional JRuby, based on comparing a minimal catalog +compilation to compiling a catalog for a basic role that installs Tomcat and Postgres servers. Your Puppet-managed infrastructure is probably larger and more complex than that test scenario, and every complication adds more to each additional JRuby's memory requirements. (For instance, we recommend -assuming that Puppet Server will use [at least 512MB per JRuby](https://puppet.com/docs/pe/latest/configuring/config_puppetserver.html) while under load.) You can calculate a similar value unique to your +assuming that OpenVox Server will use [at least 512MB per JRuby](https://puppet.com/docs/pe/latest/configuring/config_puppetserver.html) while under load.) You can calculate a similar value unique to your infrastructure by measuring `puppetserver` memory usage during your infrastructure's catalog compilations and comparing it to compiling a minimal catalog for a similar number of nodes. The `jruby-metrics` section of the [status API][] endpoint also lists the `requested-instances`, which shows what requests have come in that are waiting to borrow a JRuby instance. This part of the status endpoint lists the lock's status, how many times it has been requested, and how long it has been held for. If it is currently being held and has been held for a while, you might see requests starting to stack up in the `requested-instances` section. -#### Adding compile masters +#### Adding compilers -If you don't have the additional capacity on your master to add more JRubies, you'll want to add another compile master to your Server infrastructure. See -[Scaling Puppet Server with compile masters](./scaling_puppet_server.html). +If you don't have the additional capacity on your server to add more JRubies, you'll want to add another compiler to your Server infrastructure. See +[Scaling OpenVox Server with compilers](./scaling_puppet_server.html). ### HTTP request delays @@ -104,21 +96,21 @@ things could cause catalog compilation lengthen JRuby borrow times. also list the lengthiest function calls (showing the top 10 and top 40, respectively) based on aggregate execution times. - Adding many file resources at one time. -In cases like these, there might be more efficient ways to author your Puppet code, you might be extending Puppet to the point where you need to add JRubies or compile masters even if you aren't adding more +In cases like these, there might be more efficient ways to author your Puppet code, or you might be extending Puppet to the point where you need to add JRubies or compilers even though you aren't adding more agents. -Slowdowns in PuppetDB can also cause catalog compilations to take more time: if you use exported resources or the `puppetdb_query` function and PuppetDB has a problem, catalog compilation times will increase. +Slowdowns in OpenVoxDB can also cause catalog compilations to take more time: if you use exported resources or the `puppetdb_query` function and OpenVoxDB has a problem, catalog compilation times will increase. -Puppet Server also sends agents' facts and the compiled catalog to PuppetDB during catalog compilation. The [status API][] for the master service reports metrics for these operations under +OpenVox Server also sends agents' facts and the compiled catalog to OpenVoxDB during catalog compilation.
The [status API][] for the master service reports metrics for these operations under [`http-client-metrics`][HTTP client metrics], and in the Grafana dashboard in the "External HTTP Communications" graph. -Puppet Server also requests facts as HTTP requests while handling a node request, and submits reports via HTTP requests while handling of a report request. If you have an HTTP report processor set up, the +OpenVox Server also requests facts as HTTP requests while handling a node request, and submits reports via HTTP requests while handling a report request. If you have an HTTP report processor set up, the Grafana dashboard shows metrics for `Http report processor`, as does the [status API][] endpoint under `http-client-metrics` in the master service, for metric ID `['puppet', 'report', 'http']`. Delays in the -report processor are passed on to Puppet Server. +report processor are passed on to OpenVox Server. ### Memory leaks and usage -A memory leak or increased memory pressure can stress Puppet Server's available resources. In this case, the Java VM will spend more time doing garbage collection, causing the GC time and GC CPU % metrics to +A memory leak or increased memory pressure can stress OpenVox Server's available resources. In this case, the Java VM will spend more time doing garbage collection, causing the GC time and GC CPU % metrics to increase. These metrics are available from the [status API][] endpoint, as well as in the mbeans metrics available from both the [`/metrics/v1/mbeans`](./metrics-api/v1/metrics_api.html) and [`/metrics/v2/`](./metrics-api/v2/metrics_api.html) endpoints.
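+To watch the memory and GC pressure this section describes, you can read the standard JVM MBeans through the v2 (Jolokia) metrics API. The following is a hedged sketch, not shipped tooling: it assumes Python 3 with the third-party `requests` library, a hypothetical hostname, and that your `auth.conf` permits the query; the MBean and attribute names are standard JVM ones, and the `read` URL form follows Jolokia's documented protocol.
+
+```python
+import requests
+
+BASE = "https://openvox.example.com:8140/metrics/v2"  # hypothetical host
+CA = "/etc/puppetlabs/puppet/ssl/certs/ca.pem"
+
+# Jolokia "read" of the JVM's heap usage.
+heap = requests.get(f"{BASE}/read/java.lang:type=Memory/HeapMemoryUsage",
+                    verify=CA).json()["value"]
+print(f"heap used: {heap['used'] / 1e6:.0f} MB of {heap['max'] / 1e6:.0f} MB")
+
+# Pattern read across all garbage collectors; CollectionTime rising sharply
+# between samples corresponds to the increasing GC time discussed above.
+gcs = requests.get(f"{BASE}/read/java.lang:name=*,type=GarbageCollector",
+                   verify=CA).json()["value"]
+for name, attrs in gcs.items():
+    print(name, "count:", attrs["CollectionCount"], "ms:", attrs["CollectionTime"])
+```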