Common and Important Terms in Performance Testing – JMeter – Part 2

In this video I explained the most commonly used terms in performance testing.

Terms covered:

Connection time – The time taken to establish a connection from the client to the server.
Response time – A measure of how quickly an application or subsystem responds to a client request.
Throughput – The number of transactions per second that an application can process, or more generally the amount of work generated during a test: requests per second, calls per day, hits per second, reports per year, etc. (see the sketch after this list).
Scenario – In the context of performance testing, a scenario is a sequence of steps in your application. A scenario can represent a use case or a business function, such as browsing a product catalog, adding an item to a shopping cart, or placing an order.
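
To make response time and throughput concrete, here is a minimal Java sketch that fires a small batch of HTTP requests, records the elapsed time of each one, and derives the average response time and the throughput. The endpoint URL and the sample count are illustrative placeholders, not taken from the video; a tool like JMeter reports these metrics for you.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.List;

public class ResponseTimeDemo {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://example.com/"))      // placeholder endpoint
                .GET().build();

        List<Long> responseTimesMs = new ArrayList<>();
        long testStart = System.currentTimeMillis();

        for (int i = 0; i < 20; i++) {                   // 20 sample requests
            long start = System.currentTimeMillis();
            client.send(request, HttpResponse.BodyHandlers.discarding());
            responseTimesMs.add(System.currentTimeMillis() - start);  // per-request response time
        }

        double elapsedSec  = (System.currentTimeMillis() - testStart) / 1000.0;
        double avgResponse = responseTimesMs.stream().mapToLong(Long::longValue).average().orElse(0);
        double throughput  = responseTimesMs.size() / elapsedSec;     // requests per second

        System.out.printf("Average response time: %.1f ms%n", avgResponse);
        System.out.printf("Throughput: %.2f requests/sec%n", throughput);
    }
}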

Bottleneck – Used to describe a single part of a system that prevents further processing or significantly degrades the performance of the entire system.
Capacity – The amount of data processing a system can perform before its performance degrades; for example, the number of new customers that can be added to a database.
Concurrency – This usually refers to the number of concurrent virtual users transacting across user journeys in a given performance test scenario, but it can also refer to the number of synchronized transactions occurring at exactly the same time (see the sketch after this block).
Key Performance Indicators (KPIs) – The objectives that define the expected performance goals of the production system. These can include page response times, user concurrency, batch processing times, data throughput volumes, transaction error rates, and underlying infrastructure behavior (e.g., maximum average CPU usage, minimum free memory, physical memory/disk usage thresholds, logging space, etc.).
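
As a rough illustration of concurrency and a KPI check, the following Java sketch runs a fixed number of virtual users in parallel, measures each transaction, and compares the 95th-percentile response time against a threshold. The 25-user count, the simulated transaction, and the 500 ms KPI are assumptions made for the example, not figures from the video.

import java.util.*;
import java.util.concurrent.*;

public class ConcurrencyKpiDemo {

    // Stand-in for one real transaction (e.g. an HTTP request); returns its elapsed time.
    static long doTransaction() throws InterruptedException {
        long start = System.currentTimeMillis();
        Thread.sleep(100 + ThreadLocalRandom.current().nextInt(200));
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) throws Exception {
        int concurrentUsers = 25;                       // assumed number of virtual users
        ExecutorService pool = Executors.newFixedThreadPool(concurrentUsers);
        Callable<Long> transaction = ConcurrencyKpiDemo::doTransaction;

        List<Future<Long>> futures = new ArrayList<>();
        for (int i = 0; i < concurrentUsers; i++) {
            futures.add(pool.submit(transaction));      // all users transact concurrently
        }

        List<Long> times = new ArrayList<>();
        for (Future<Long> f : futures) {
            times.add(f.get());
        }
        pool.shutdown();

        Collections.sort(times);
        long p95 = times.get((int) Math.ceil(times.size() * 0.95) - 1); // 95th-percentile response time
        long kpiThresholdMs = 500;                                      // assumed KPI threshold
        System.out.println("p95 response time: " + p95 + " ms, KPI "
                + (p95 <= kpiThresholdMs ? "met" : "breached"));
    }
}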

Load testing – A type of performance testing used to evaluate the behavior of a system or component as the load on the system (via users and transactions) gradually increases to peak levels (a ramp-up sketch follows this block).
Non-functional requirements (NFRs) – Requirements that relate not to the functionality of the system but to other aspects such as reliability, usability, and performance.
Performance Engineering – Activities to ensure that a system is designed and implemented to meet specified non-functional requirements. It often takes place after testing activities have revealed weaknesses in the design or implementation.
Performance Test Plan – Typically a written document that details the objectives, scope, approach, deliverables, schedule, risks, data, and test environment requirements for testing in a specific project.
Performance testing – Testing carried out to determine the performance levels of a system.
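
The gradual ramp-up that characterizes a load test can be sketched in plain Java as follows. Each thread stands in for a virtual user, and the peak user count, ramp-up period, and per-transaction sleep are illustrative values only; in JMeter the same idea is configured through a thread group's number of threads and ramp-up period.

import java.util.concurrent.atomic.AtomicBoolean;

public class RampUpDemo {
    public static void main(String[] args) throws Exception {
        int peakUsers = 10;                              // assumed peak number of virtual users
        long rampUpMs = 5_000;                           // spread user start-up over 5 seconds
        long stepMs = rampUpMs / peakUsers;
        AtomicBoolean running = new AtomicBoolean(true);

        for (int u = 1; u <= peakUsers; u++) {
            String name = "vuser-" + u;
            new Thread(() -> {
                while (running.get()) {
                    try {
                        Thread.sleep(200);               // stand-in for one transaction
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            }, name).start();
            System.out.println("Started " + name);
            Thread.sleep(stepMs);                        // gradual ramp-up to peak load
        }

        Thread.sleep(3_000);                             // hold at peak load briefly
        running.set(false);                              // stop the test
    }
}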

Reliability – Closely related to stability, reliability is the degree to which a system under stress produces the same result for the same action over a period of time.
Scalability – The extent to which the performance and capacity of a system can be increased, typically by adding hardware resources to existing servers (vertical scaling) or by increasing the number of servers available to handle requests (horizontal scaling).
Soak Test – A type of performance test used to evaluate the behavior of a system or component when it is subjected to the expected load for an extended period of time.
Spike Testing – A type of performance testing used to evaluate the behavior of a system or component during large, short-term changes in demand, such as a surge of user logins or Black Friday-style sales events (both soak and spike profiles are sketched after this block).
Stability – The extent to which a system resists failures and errors under normal use. For example, errors that occur while registering new users under heavy load indicate poor stability.
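
To contrast the two shapes, here is a small Java sketch that models soak and spike tests as lists of (name, duration, users) stages. All durations and user counts are made-up placeholders; real values would come from your Transaction Volume Model.

import java.util.List;

public class LoadProfiles {

    // One stage of a load profile: a label, how long it lasts, and how many virtual users run.
    record Stage(String name, long durationMinutes, int users) {}

    public static void main(String[] args) {
        // Soak: hold the expected load steady for an extended period.
        List<Stage> soak = List.of(
                new Stage("ramp-up", 10, 100),
                new Stage("steady soak", 480, 100),   // e.g. 8 hours at the expected load
                new Stage("ramp-down", 10, 0));

        // Spike: a short, sharp jump in demand, then back to normal.
        List<Stage> spike = List.of(
                new Stage("baseline", 10, 100),
                new Stage("spike", 2, 1000),          // sudden surge, e.g. a sale event
                new Stage("recovery", 10, 100));

        soak.forEach(System.out::println);
        spike.forEach(System.out::println);
    }
}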

Stress testing – A type of performance testing used to evaluate the behavior of a system or component when it is subjected to a load beyond the expected workload or when the resources available to the system, such as CPU or memory, are reduced.
Transaction Volume Model (TVM) – A document that details the user journeys to be simulated, the click path steps that make up the user journeys, and the associated load/transaction volume models to be tested. This should include information about the geographic location from which users are expected to interact with the system and the method of interaction, e.g. mobile or desktop.
User Journey – The path through the system under test that a set of virtual users takes to simulate real users. Note that the journeys with the greatest performance and volume impact should be chosen, as it is impractical to performance test every possible user journey. A good rule of thumb is to use the 20% of user journeys that generate 80% of the volume.
Virtual User – A simulated user that performs actions like a real user while a test runs (see the sketch after this block).
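
Putting the last two terms together, the sketch below shows a single virtual user scripted in Java to walk through one user journey, with think time between steps. The step names and timings are illustrative only; in JMeter the same idea is expressed as samplers and timers inside a thread group.

public class VirtualUserJourney {

    // One step of the journey: a stand-in for a real request, followed by think time.
    static void step(String name) throws InterruptedException {
        System.out.println("Executing step: " + name);
        Thread.sleep(300);                       // think time between steps
    }

    public static void main(String[] args) throws InterruptedException {
        // A single business journey: browse -> add to cart -> place order.
        step("Browse product catalog");
        step("Add item to shopping cart");
        step("Place order");
    }
}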

If you find this video useful, please share it with your friends and family.