public class WeightedRoundRobinRouter extends RouterImpl
Nested classes inherited from class RouterImpl:
`RouterImpl.AnswerEntry`, `RouterImpl.RedirectEntry`

Fields inherited from class RouterImpl:
`ALL_APPLICATION`, `ALL_HOST`, `ALL_REALM`, `ALL_SESSION`, `ALL_USER`, `concurrentFactory`, `container`, `DONT_CACHE`, `isStopped`, `metaData`, `REALM_AND_APPLICATION`, `realmTable`, `REDIRECT_TABLE_SIZE`, `redirectTable`, `redirectTableLock`, `REQUEST_TABLE_CLEAR_SIZE`, `REQUEST_TABLE_SIZE`, `requestEntryMap`, `requestEntryTableLock`

| Modifier | Constructor and Description |
|---|---|
|  | `WeightedRoundRobinRouter(IContainer container, IConcurrentFactory concurrentFactory, IRealmTable realmTable, Configuration config, MetaData aMetaData)` |
| `protected` | `WeightedRoundRobinRouter(IRealmTable table, Configuration config)` |
| Modifier and Type | Method and Description |
|---|---|
| `protected int` | `gcd(int a, int b)` Return the greatest common divisor of two integers, using Euclid's algorithm: https://en.wikipedia.org/wiki/Greatest_common_divisor#Using_Euclid.27s_algorithm |
| `IPeer` | `selectPeer(List<IPeer> availablePeers)` Select a peer by weighted round-robin scheduling, as documented in http://kb.linuxvirtualserver.org/wiki/Weighted_Round-Robin_Scheduling |
Methods inherited from class RouterImpl:
`destroy`, `garbageCollectRequestRouteInfo`, `getPeer`, `getPeerPredProcessing`, `getRealmTable`, `getRequestRouteInfo`, `loadConfiguration`, `processRedirectAnswer`, `registerRequestRouteInfo`, `start`, `stop`, `updateRoute`

protected WeightedRoundRobinRouter(IRealmTable table, Configuration config)
public WeightedRoundRobinRouter(IContainer container, IConcurrentFactory concurrentFactory, IRealmTable realmTable, Configuration config, MetaData aMetaData)
public IPeer selectPeer(List<IPeer> availablePeers)
Weighted round-robin scheduling is designed to better handle servers with different processing capacities. Each server can be assigned a weight, an integer value that indicates its processing capacity. Servers with higher weights receive new connections before those with lower weights and get proportionally more connections; servers with equal weights get an equal number of connections. The pseudo code of weighted round-robin scheduling is as follows:
Suppose there is a server set S = {S0, S1, …, Sn-1}; W(Si) denotes the weight of Si; i denotes the server selected last time, initialized to -1; cw is the current weight in scheduling, initialized to zero; max(S) is the maximum weight over all servers in S; gcd(S) is the greatest common divisor of all server weights in S.
    while (true) {
        i = (i + 1) mod n;
        if (i == 0) {
            cw = cw - gcd(S);
            if (cw <= 0) {
                cw = max(S);
                if (cw == 0)
                    return NULL;
            }
        }
        if (W(Si) >= cw)
            return Si;
    }
For example, if the real servers A, B and C have weights 4, 3 and 2 respectively, the scheduling sequence in one scheduling period (of length sum(Wi) = 9) is AABABCABC.
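The pseudo code above can be sketched in Java as follows. This is a simplified stand-in, not the actual jDiameter implementation: the `Peer` class here substitutes for jDiameter's `IPeer`, and the fields `lastSelected` and `currentWeight` mirror the router's internal state only conceptually.

```java
import java.util.List;

public class WrrDemo {
    // Hypothetical peer holder standing in for jDiameter's IPeer.
    static final class Peer {
        final String name;
        final int weight;
        Peer(String name, int weight) { this.name = name; this.weight = weight; }
    }

    private int lastSelected = -1; // index i in the pseudo code, -1 initially
    private int currentWeight = 0; // cw in the pseudo code

    // Euclid's algorithm for the greatest common divisor.
    static int gcd(int a, int b) {
        return b == 0 ? a : gcd(b, a % b);
    }

    // Select the next peer by weighted round-robin scheduling.
    synchronized Peer selectPeer(List<Peer> peers) {
        int n = peers.size();
        if (n == 0) return null;
        // max(S) and gcd(S) over all peer weights.
        int max = 0, g = 0;
        for (Peer p : peers) {
            max = Math.max(max, p.weight);
            g = gcd(g, p.weight);
        }
        while (true) {
            lastSelected = (lastSelected + 1) % n;
            if (lastSelected == 0) {
                currentWeight -= g;
                if (currentWeight <= 0) {
                    currentWeight = max;
                    if (currentWeight == 0) return null; // all weights are zero
                }
            }
            if (peers.get(lastSelected).weight >= currentWeight) {
                return peers.get(lastSelected);
            }
        }
    }

    public static void main(String[] args) {
        WrrDemo router = new WrrDemo();
        List<Peer> peers = List.of(
            new Peer("A", 4), new Peer("B", 3), new Peer("C", 2));
        StringBuilder seq = new StringBuilder();
        for (int k = 0; k < 9; k++) seq.append(router.selectPeer(peers).name);
        System.out.println(seq); // one full scheduling period: AABABCABC
    }
}
```

Running the sketch with weights 4, 3 and 2 reproduces the AABABCABC sequence from the example above; note that, as in the real router, the selection state is kept across calls, which is why `selectPeer` is synchronized.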
In an optimized implementation of the weighted round-robin scheduling, a scheduling sequence will be generated according to the server weights after the rules of IPVS are modified. The network connections are directed to the different real servers based on the scheduling sequence in a round-robin manner.
Weighted round-robin scheduling is better than plain round-robin scheduling when the processing capacities of the real servers differ. However, it may lead to dynamic load imbalance among the real servers if the load of the requests varies highly; in short, a majority of requests requiring large responses may end up directed to the same real server.
Actually, the round-robin scheduling is a special instance of the weighted round-robin scheduling, in which all the weights are equal.
This method is internally synchronized due to concurrent modifications to lastSelectedPeer and currentWeight. Keep this in mind when relying on high throughput. Please note: if the list of availablePeers changes between calls (e.g. if a peer becomes active or inactive), the balancing algorithm is disturbed and the distribution may become uneven. This is likely to happen if peers are flapping.
Overrides:
`selectPeer` in class `RouterImpl`

Parameters:
`availablePeers` - list of peers that are in OKAY state

protected int gcd(int a, int b)
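For reference, an iterative variant of Euclid's algorithm behind this method can be sketched as below; the class name `GcdDemo` is illustrative, not part of the API. The router uses the gcd of all peer weights as the step by which the current weight is decremented on each full cycle.

```java
public class GcdDemo {
    // Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)
    // until b reaches zero; a then holds the greatest common divisor.
    static int gcd(int a, int b) {
        while (b != 0) {
            int t = a % b;
            a = b;
            b = t;
        }
        return a;
    }

    public static void main(String[] args) {
        System.out.println(gcd(4, 3));              // 1
        // gcd of a whole weight set is computed by folding pairwise:
        System.out.println(gcd(gcd(12, 8), 20));    // 4
    }
}
```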
Parameters:
`a` - the first integer
`b` - the second integer

Copyright © 2016. All Rights Reserved.