This page pulls together sets of tips on how to perform load testing / performance testing on uPortal.  Generally, load testing uPortal is like load testing any other web application, but because uPortal aggregates content and caches extensively, there are a few differences compared to other web applications.

...

  • uPortal does extensive caching for the guest user, and also per username.  Do not use a single user account (or a small number of accounts) for your simulated users. Size the pool of test accounts to match your user community; in any case, use 10k or more.  Also, if you have multiple types of users (students, staff, faculty), include all user types among your test accounts, ideally in the same proportions as your actual community (see the account-generation sketch after this list).

  • Don't forget to include pauses (think time between actions) in the scripts. Hundreds of threads sending requests to the portal as fast as they possibly can is not a load test -- it's a denial-of-service attack.  Use at least 5 seconds between actions.  Real users will typically pause much longer than that; 5 seconds is pretty aggressive.

  • If you're using a load balancer with sticky connections (based on client IP address), you will need to use multiple injectors and configure your load balancer to route each injector to a different server. Consider bypassing the load balancer and configuring each injector to make requests directly to a real server; modifying the hosts file on the injector machines is a good way to avoid issues caused by redirects, certificates, cookies, etc. (e.g. the CAS authentication service URL).  See the hosts-file example after this list.
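
A minimal sketch of preparing a large test-account pool: the Python script below writes a users.csv suitable for feeding JMeter's CSV Data Set Config element. The username pattern (loadtest00000), password, and user-type proportions are assumptions; substitute the accounts that actually exist in your test environment.

    import csv

    # Hypothetical user types and their share of the community; match yours.
    USER_TYPES = [("student", 0.70), ("staff", 0.20), ("faculty", 0.10)]
    TOTAL_USERS = 10000  # 10k or more, per the tip above

    with open("users.csv", "w", newline="") as f:
        writer = csv.writer(f)
        i = 0
        for user_type, share in USER_TYPES:
            for _ in range(int(TOTAL_USERS * share)):
                # Assumed naming convention for pre-provisioned test accounts.
                writer.writerow([f"loadtest{i:05d}", "changeit", user_type])
                i += 1

Point a CSV Data Set Config at this file with variable names such as USERNAME,PASSWORD,USERTYPE, and each JMeter thread will log in as a different account.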
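
For example, to pin an injector to one node behind the balancer, hosts entries like the following (hostnames and addresses are placeholders) make the portal and CAS hostnames resolve directly to specific servers:

    # /etc/hosts on injector 1 (hypothetical names and addresses)
    10.0.1.21   portal.example.edu
    10.0.1.31   cas.example.edu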

Tips for preparing your plan

  • Typically the expensive operation is logging in. Most folks find that the concurrent-user limit is constrained by the number of logins the portal can perform.  A reasonable rate for a warmed-up portal is one login every 4 seconds.  A well-tuned, properly functioning portal on adequate hardware, including the dependent systems it accesses, can handle a faster rate, especially if you allow more time between user actions.
    • Give the script a ramp-up time, i.e. on the first pass through the script, space the logins out. JMeter's thread groups have a ramp-up period that starts script-execution threads over a window of time rather than all at once. A practical approach: pick a target login rate, say one login every 4 seconds, then calculate how many script-execution threads you need based on the number of JMeter machines (you will likely need several) and the number of uPortal servers in the cluster (see the thread-count sketch after this list).
    • Optionally configure random variability in the time between user actions.  A common approach is 5 seconds plus a random 0-3 seconds.  This adds a bit more randomness to the test and is closer to real-world behavior.
  • Using JMeter or a similar tool, you record the browser traffic to capture the flow of steps you want to test. It is usually a good idea to clear the browser cache before recording, so that all requests are captured, if you are trying to simulate a 'new user' rather than someone who has accessed the portal some time in the past (or to get more of a 'worst case' vs. 'best case' comparison). Whatever you do, record your process so you can get parity if you plan to compare the numbers against previous or future results. JMeter has options for clearing its cache between runs, but you have to capture all the browser requests by clearing the cache at recording time if you want that 'worst case' scenario.
  • In the JMeter configuration you can have JMeter request external resources (images from institutional web-asset servers, etc.), or you can apply a filter so it requests only from a particular domain. Typically you do the latter, to minimize external dependencies that add variability to results. It's not a bad idea to do one run with all external dependencies included for comparison, because real users will hit all of them and it might reveal a downstream issue your real users could run into; but don't do it for every run, and it is mostly 'static' web assets that would be fetched, which are generally not a big resource impact on the dependent systems. (See the embedded-resource filter example after this list.)
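
A rough back-of-the-envelope sketch of the thread-count arithmetic described above (the session length, machine count, and rates below are assumptions, not recommendations):

    import random

    # Assumed figures; substitute your own.
    actions_per_session = 20     # recorded steps per login
    think_time_base     = 5.0    # minimum seconds between actions
    think_time_jitter   = 3.0    # extra random 0-3 s, as suggested above
    login_interval      = 4.0    # target: one login every 4 seconds
    jmeter_machines     = 4      # injectors available

    # Average session duration: actions times average think time.
    avg_session = actions_per_session * (think_time_base + think_time_jitter / 2)

    # Each looping thread produces one login per session, so to sustain one
    # login every `login_interval` seconds you need about this many threads:
    total_threads = round(avg_session / login_interval)
    threads_per_machine = -(-total_threads // jmeter_machines)  # ceiling division

    # Ramp-up: start threads at the target login rate so the first pass
    # through the script already spaces the logins out.
    ramp_up_seconds = total_threads * login_interval

    print(f"~{total_threads} threads total, {threads_per_machine} per injector, "
          f"ramp-up {ramp_up_seconds:.0f}s")

    # Per-action think time with randomness (5 s plus 0-3 s):
    delay = think_time_base + random.uniform(0, think_time_jitter)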
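
In JMeter, the domain filter lives on the HTTP sampler (or HTTP Request Defaults): enable "Retrieve All Embedded Resources" and set "URLs must match" to a pattern covering only your own hosts; the hostname below is a placeholder:

    https?://portal\.example\.edu/.*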

...