Load Balancing
There are a number of techniques you can employ to achieve load balancing across a cluster of portal servers. Most of them are not specific to uPortal, so look to your load balancer's documentation for the full range of options and details; a number of examples and general guidelines are covered below.
Software-based Load Balancing
Apache Configuration
Hardware-based Load Balancing
Nortel Networks Alteon 184
F5
Load Balancing general guidelines
Session Collision Risk
It is critical to configure sticky sessions and user IP handling correctly to avoid session collisions.
Session IDs are generated by combining the requester's IP and a small random number. The design assumes that each user has a unique IP; the random bits are there to distinguish between tabs/windows on the client. If Tomcat receives the load balancer's IP for every client, the clients will share sessions and see other clients' data!
Sticky Sessions
uPortal caches a significant amount of data for each user session. For best performance a user must maintain a persistent connection to the same server for the duration of their session. This is often called a sticky session.
- Some users have had problems using a load-balancer-assigned cookie for request routing. One approach that works well is to route based on the JSESSIONID cookie assigned by Tomcat.
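As a minimal sketch (not a drop-in configuration), JSESSIONID-based routing with Apache httpd and mod_proxy_balancer can look roughly like the following. The hostnames, ports, and route names are placeholders, and each route value must match the jvmRoute attribute on the corresponding Tomcat <Engine>:
# Requires mod_proxy, mod_proxy_http, mod_proxy_balancer and a
# mod_lbmethod_* module to be loaded.
<Proxy "balancer://uportal">
    # route= must match the jvmRoute on each Tomcat <Engine> so the
    # ".node1"/".node2" suffix appended to JSESSIONID maps back to the
    # member that created the session.
    BalancerMember "http://portal-node1.example.edu:8080" route=node1
    BalancerMember "http://portal-node2.example.edu:8080" route=node2
    ProxySet stickysession=JSESSIONID|jsessionid
</Proxy>
ProxyPass        "/uPortal" "balancer://uportal/uPortal"
ProxyPassReverse "/uPortal" "balancer://uportal/uPortal"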
Logging User IPs instead of Load Balancer IPs
One issue that may arise when configuring a load-balanced uPortal service is logs filled with the load balancer's IPs rather than users' IPs. This is frustrating because the logs lose valuable information. It can be corrected if the load balancer has a mechanism to add the user's remote IP in a request header; the X-Forwarded-For header is commonly used for this purpose. Once the load balancer is configured to add this header, Apache httpd can use a module, or Tomcat a Valve, to replace the logged IP address with the value of this header.
<Valve className="org.apache.catalina.valves.RemoteIpValve"
       internalProxies="169\.236\.45\.28|169\.236\.89\.28"
       remoteIpHeader="x-forwarded-for"
       protocolHeader="x-forwarded-proto" />
In the example above, the load balancers have IPs of 169.236.45.28 and 169.236.89.28. Note that internalProxies is a regular expression, so multiple addresses are separated with |.
Apache httpd - See https://httpd.apache.org/docs/current/mod/mod_remoteip.html.
Tomcat - See https://tomcat.apache.org/tomcat-7.0-doc/api/org/apache/catalina/valves/RemoteIpValve.html or https://tomcat.apache.org/tomcat-8.0-doc/config/valve.html#Remote_IP_Valve.
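For Apache httpd, a minimal mod_remoteip sketch using the same example load balancer IPs might look like the following (the log format name and log path are illustrative):
LoadModule remoteip_module modules/mod_remoteip.so
RemoteIPHeader X-Forwarded-For
RemoteIPInternalProxy 169.236.45.28 169.236.89.28
# %a logs the client IP as rewritten by mod_remoteip; logging the raw
# X-Forwarded-For header as well can help with troubleshooting.
LogFormat "%a %l %u %t \"%r\" %>s %b \"%{X-Forwarded-For}i\"" proxied_combined
CustomLog logs/access_log proxied_combined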
Load Distribution
There are a number of algorithms for load distribution, none of them perfect. Refer to your load balancer documentation for supported methods and additional guidance, including using pool groups (for example, clusters at different data centers) or weighting multiple factors.
Some of the load balancing algorithms, at a high level, are:
Least connections
- Assumes each connection has an equivalent impact on a server. If your campus supports unauthenticated access, this mechanism does not take into account that guest access is heavily cached; a guest session generally has lower impact and requires fewer resources than an authenticated session.
Number of active HTTP sessions (retrieved from Tomcat)
- Can result in imbalances due to long HTTP session timeouts (30 minutes by default). Guest users, or authenticated users who hit the landing page and then branch off to other campus systems, will appear to be active until the HTTP session times out. As with least connections, this mechanism does not account for the lower impact of heavily cached guest sessions if your campus supports unauthenticated access.
Response time
- Distributing load based on the response time of the node operational health check (described below) or another test can provide a reasonable indication of a node's performance.
Target node metrics (such as avg CPU load)
- Assumes something like average CPU has a rough correlation to response time and load.
Round robin
- One of the least desirable algorithms as it does not take into account target node performance or load.
Regardless of which algorithm you choose, configure a slow ramp-up time if your load balancer supports it, so a node that has just been added to the cluster does not get hammered with connections. uPortal has a heavy ramp-up cost to initialize the system, and the first few requests take a significant hit filling some of the in-memory caches. User login is also a very heavyweight operation, with substantial computation and database activity to create the user's authenticated environment and home page, so you want logins spread out across nodes. Failure to configure a ramp-up time for a new node will typically result in poor performance for users on that node until its behavior stabilizes.
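As one illustration, a software load balancer such as HAProxy can express both the distribution algorithm and the ramp-up in a few lines. This is only a sketch; the node names, addresses, and timings are placeholders to adjust for your environment:
backend uportal
    mode http
    # Least-connections distribution, as discussed above.
    balance leastconn
    # Health check URL; see the node operational health check section below.
    option httpchk GET /uPortal/layout.json
    # slowstart ramps a newly added or recovering node gradually up to its
    # full weight over the given period instead of flooding it at once.
    server node1 portal-node1.example.edu:8080 check slowstart 300s
    server node2 portal-node2.example.edu:8080 check slowstart 300s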
Node operational health check
Currently uPortal does not have a dedicated health check page that the load balancer can use to validate that a node is operational. In lieu of that, the following approaches allow the load balancer to determine some level of operational capability on the uPortal servers.
GET /uPortal/layout.json
- Preferred approach. Returns HTTP 200 if the layout can be returned. Returns HTTP 500 if uPortal is unable to connect to the database (by default, reads occur from the UP_MESSAGE table and render event writes occur to UP_RAW_EVENTS, unless event aggregation has been disabled in configuration). The data (the guest layout) is heavily cached and rarely pulled from the database, so this is a moderately low-load health check. There is still a fair bit of computation to generate the response, so it can also provide a rough indication of target system response time for load leveling.
If the load balancer has trouble following HTTP 302 redirects, configure it to send a fixed cookie value that Tomcat/Java would not create in the request. For example:
wget --header="Cookie: JSESSIONID=23485898E75DB49-LoadBalancer" http://localhost:8080/uPortal/layout.json
The first request will be redirected through /Login with an HTTP 302 as normal, but subsequent requests will return immediately with an HTTP 200. It is still better to follow the HTTP 302 redirects when possible (you might need to enable Connection: keep-alive for this).
Configuring the load balancer to send a fixed cookie value re-uses a single HTTP session rather than creating many unnecessary HTTP sessions in Tomcat just for health checks. Though not a big operational impact, this strategy is a useful optimization that minimizes the heap memory impact of operational health checks.
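If the health check is driven by a script or monitoring tool that can follow redirects, a rough wget-based equivalent keeps the session cookie in a small cookie jar so only the first probe pays the /Login redirect cost (the cookie file path is arbitrary):
touch /tmp/uportal-healthcheck-cookies.txt
wget --max-redirect=5 --keep-session-cookies \
     --load-cookies /tmp/uportal-healthcheck-cookies.txt \
     --save-cookies /tmp/uportal-healthcheck-cookies.txt \
     -O /dev/null http://localhost:8080/uPortal/layout.json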
GET /uPortal/f/welcome/normal/render.uP
- More load-intensive approach that returns the uPortal guest page. Indicates a greater level of uPortal operation than layout.json (it verifies that the guest page renders). However, this URL returns multiple HTTP 302 redirects as part of the authentication process, so the load balancer must be configured to automatically follow HTTP redirects.
uPortal 4.2.0+: If your load balancer has trouble with the cookie check process, you can configure specific User-Agent strings that skip the cookie check and configure your load balancer to send that HTTP header. See the bean 'remoteCookieCheckFilter' in uportal-war/src/main/resources/properties/contexts/mvcContext.xml.
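As a rough illustration of such a probe (the User-Agent string below is a placeholder and must match whatever value you configure for the remoteCookieCheckFilter bean):
wget --max-redirect=5 --user-agent="uPortal-LB-HealthCheck" \
     -O /dev/null http://localhost:8080/uPortal/f/welcome/normal/render.uP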