Run a test

To run tests, a user needs the Run permission. Users with the Team owner or Manage users permission manage access. For more information, see Permissions.

To run a test, use one of the following methods:

  • Click Run test immediately after creating or editing the test.

  • On the Tests page, click run_button for the required test. The Test overview step opens. Click Run test.

The test starts after a short delay.

Note

If there are no available resources, the current test will be queued.

View test details

The running test is displayed in the Test runs in progress pane of the Test Runs page:

../_images/um_running_test.en.png

To follow the test progress, click Test Details. The Test run summary page opens and contains:

  • Settings. Type of load, a comment, and total test run duration:

    ../_images/um_run_test_parametres.en.png
  • Current stats. Average test run metrics for the last 10 seconds:

    ../_images/um_run_current_performance.en.png
  • Throughput. Total requests per second and Errors per second plotted against Running VUsers.

    ../_images/um_run_throughput.en.png
  • Response Time. The 95% response time (95th percentile) of all requests plotted against Running VUsers; a sketch of how these values can be computed follows this list. For a transaction breakdown, see Detailed stats.

    ../_images/um_run_response_time.en.png
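
These graphs can be reproduced from raw request samples. The sketch below shows the arithmetic only; the 10-second window and the nearest-rank percentile method are assumptions for illustration, not PFLB Platform internals:

    import math
    from dataclasses import dataclass

    @dataclass
    class Sample:
        timestamp: float   # Unix time, seconds
        elapsed_ms: float  # response time, milliseconds
        success: bool      # True if the request succeeded

    def window_stats(samples: list[Sample], now: float, window: float = 10.0) -> dict:
        """Aggregate raw samples over the last `window` seconds, the way
        the Current stats pane averages the last 10 seconds."""
        recent = [s for s in samples if now - window <= s.timestamp <= now]
        if not recent:
            return {"rps": 0.0, "errors_per_sec": 0.0, "p95_ms": None}

        rps = len(recent) / window                                    # Throughput
        errors_per_sec = sum(not s.success for s in recent) / window  # Errors per second

        # 95% response time: the value below which 95% of request
        # durations fall (nearest-rank method).
        times = sorted(s.elapsed_ms for s in recent)
        p95_ms = times[min(len(times) - 1, math.ceil(0.95 * len(times)) - 1)]

        return {"rps": rps, "errors_per_sec": errors_per_sec, "p95_ms": p95_ms}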

Anomaly detection

An anomaly is a large deviation of key metrics from the average in a moving window during the test. Anomaly detection lets you see non-standard behaviour of the system under test in real time and simplifies the analysis of metrics after the test ends.

PFLB Platform analyzes response times for the entire test and for individual transactions using an AI model and detects two types of anomalies:

  • Response Time deviation is an extreme deviation of the response time from its average values in a sliding window.

  • VUser and Response Time inverse correlation is an inverse correlation between the response time and the number of virtual users. For example, the response time decreases significantly as the number of virtual users increases. Both checks are sketched after this list.
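
Both checks can be approximated with standard statistics. The sketch below illustrates the two anomaly types only; the platform’s actual AI model, window size, and thresholds are not documented here, and the threshold parameter merely mimics the sensitivity setting:

    import statistics

    def deviation_anomalies(response_times: list[float],
                            window: int = 30, threshold: float = 3.0) -> list[int]:
        """Flag points whose response time deviates from the sliding-window
        average by more than `threshold` standard deviations. A lower
        threshold (higher sensitivity) reports more, possibly false, anomalies."""
        anomalies = []
        for i in range(window, len(response_times)):
            recent = response_times[i - window:i]
            mean = statistics.fmean(recent)
            stdev = statistics.pstdev(recent)
            if stdev > 0 and abs(response_times[i] - mean) > threshold * stdev:
                anomalies.append(i)
        return anomalies

    def inverse_correlation(vusers: list[float], response_times: list[float],
                            limit: float = -0.7) -> bool:
        """Report an anomaly when the response time is strongly inversely
        correlated with the number of running VUsers, e.g. it drops while
        the load grows. statistics.correlation requires Python 3.10+."""
        return statistics.correlation(vusers, response_times) < limit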

To enable anomaly detection, follow these steps:

  1. In the upper-right corner of the window, click anomaly_detection_button. The dialog opens:

    ../_images/um_anomaly_detection.png
  2. Turn on the Detect anomalies within a test toggle.

  3. Set the anomaly detection sensitivity for your test.

    Note

    Using an AI model doesn’t rule out false anomalies. Adjust the sensitivity to reduce their number.

The anomalies are displayed:

  • on the Response Time graph for the entire test:

    ../_images/um_response_time_anomaly.png
  • on the Response Time graph for the selected transaction, for example:

    ../_images/um_anomaly_detection_transaction1.png
    ../_images/um_anomaly_detection_transaction2.png
    ../_images/um_anomaly_detection_transaction3.png

To analyze anomalies in Grafana, click an anomaly point, and then click the Go to Grafana link.

Stop a test

To stop a test, use one of the following methods:

  • In the Test runs in progress pane of the Test Runs page, click stop_button.

  • On the test details page, click Stop.

The test stops after a short delay.

View results

After the test is completed, the test details page contains the following data:

  • JMeter LOG file.

  • The Throughput and Response Time graphs.

  • Grafana dashboard.

  • SLA execution status. You can also add an SLA after the test is completed. For more information, see Configure SLA.

Note

If the completed test meets the conditions of the trend report, the trend report will be updated. For more information, see Trend reports.

For more information, see Load Test Analysis.

Import system metrics

You can import system metrics obtained from a third-party source, such as Grafana, into a test run. You can then display the imported metrics on PFLB charts in reports.

To import a metric into the test run, follow these steps:

  1. Export the metric data from the third-party source to a CSV file and convert it to the following format (a conversion sketch follows these steps):

    <Unix time>,<Metric value>
    
    Time, System - Processes executing in kernel mode
    1749217420,0.018375
    1749217440,0.018236
    1749217460,0.018467
    1749217480,0.020124
    1749217500,0.021355
    ...
    
  2. In the System metrics pane, click Add.

  3. Click to upload CSV file and select the file. The file size shouldn’t exceed 10 MB.

    Note

    If you have multiple CSV files containing metrics, upload them one at a time.

  4. Click Upload. The metric settings appear:

    ../_images/um_import_system_metric.en.png
  5. Fill in the fields:

    • Name. The name of the metric. For example, Memory usage.

    • Type. The type of the metric. For example, RAM.

    • Measure. The unit of measurement.

    • Host/docker-container. The host of the tested system or name of the Docker container.

    • Field delimiter. The character or string used to separate the fields in the uploaded CSV file, for example, a comma (,).

    • The first row contains field’s names. If the first row of the CSV file contains a table header, turn on the toggle. The header will not be imported as a metric value.

    • Time zone. Select the time zone in which the third-party source collected the test run data. For example, specify -0600 for Central Standard Time (North America, Central America).

  6. Click Import.

The System metrics pane displays the number of imported metrics. To view the list of imported metrics, click chevron_button.
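
If the third-party export contains human-readable timestamps, convert them to Unix time before uploading. The following is a minimal sketch, assuming a two-column export named grafana_export.csv with a header row and a YYYY-MM-DD HH:MM:SS time format; adjust the file names and parsing to your actual export:

    import csv
    from datetime import datetime, timezone

    SOURCE_TZ = timezone.utc  # the time zone in which the source collected the data

    with open("grafana_export.csv", newline="") as src, \
         open("metric_unix.csv", "w", newline="") as dst:
        reader = csv.reader(src)
        writer = csv.writer(dst)
        # Keep a header row and turn on "The first row contains field's names" on import.
        writer.writerow(["Time", "System - Processes executing in kernel mode"])
        next(reader)  # skip the source header row
        for timestamp, value in reader:
            dt = datetime.strptime(timestamp, "%Y-%m-%d %H:%M:%S")
            writer.writerow([int(dt.replace(tzinfo=SOURCE_TZ).timestamp()), value])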

Run a debugging test

Debugging allows you to verify a test before a full run. PFLB Platform overrides the user’s test settings:

  • The test runs with 1 VUser in each group.

  • The test runs for 5 minutes or 10 iterations of the test, whichever comes first.

To run a debugging test, on the Load profile tab, turn on the Debug run toggle and click Run.

View detailed data about requests and responses

The debug.jtl file contains detailed information about requests and responses. The link to the file becomes available after the debugging test is completed:

../_images/um_run_results_of_debug.en.png
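
Since debug.jtl is a standard JMeter results file, usually in CSV form, you can inspect it with a few lines of code. The column names below are JMeter’s default CSV fields; your file may be configured with a different set:

    import csv

    # Print the failed requests from a JMeter results file.
    with open("debug.jtl", newline="") as f:
        for row in csv.DictReader(f):
            if row["success"] != "true":
                print(row["timeStamp"], row["label"],
                      row["responseCode"], row["responseMessage"])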

For more information, see Apache JMeter Wiki.