Package jline.api.cache
Class Cache_ttl_treeKt
public final class Cache_ttl_treeKt
Method Summary
final static Matrix cache_ttl_tree(Array<Matrix> lambda, Array<Array<Matrix>> R, Matrix m, Long seed)
Solves tree-based TTL (Time-To-Live) cache models using analytical approximation.
Method Detail
cache_ttl_tree
final static Matrix cache_ttl_tree(Array<Matrix> lambda, Array<Array<Matrix>> R, Matrix m, Long seed)
Solves tree-based TTL (Time-To-Live) cache models using analytical approximation.
This function analyzes hierarchical cache systems where items are organized in tree-structured lists with TTL-based eviction policies. The analysis computes steady-state probabilities for items at different cache levels using Discrete Time Markov Chain (DTMC) modeling combined with numerical optimization.

The TTL approximation assumes that items have exponentially distributed residence times in each cache level, with rates determined by the lambda parameters. Items transition between cache levels according to the routing matrices R, which define the tree structure.

Algorithm Overview:
1. Initializes the optimization variables (list times) randomly.
2. Uses Powell optimization to solve the capacity constraint equations.
3. For each optimization iteration (sketched below):
   - Constructs transition matrices for each item based on TTL rates
   - Solves the DTMC for steady-state probabilities
   - Computes random-time probabilities (cache occupancy)
   - Evaluates capacity constraint violations
4. Returns the steady-state probability matrix at the optimum.
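As a rough orientation for step 3, the snippet below sketches one iteration for a single item in plain Java, independent of the jline Matrix class. The class, helper, and variable names, and the use of 1 - exp(-rate × listTime) to weight the routing probabilities, are assumptions made here for illustration; they are not the exact construction used by the solver.

import java.util.Arrays;

public class TtlIterationSketch {

    // Steady-state distribution of a row-stochastic matrix P via power iteration.
    static double[] steadyState(double[][] P) {
        int n = P.length;
        double[] pi = new double[n];
        Arrays.fill(pi, 1.0 / n);
        for (int iter = 0; iter < 10_000; iter++) {
            double[] next = new double[n];
            for (int j = 0; j < n; j++)
                for (int k = 0; k < n; k++)
                    next[k] += pi[j] * P[j][k];
            pi = next;
        }
        return pi;
    }

    public static void main(String[] args) {
        // 2 cache levels plus level 0 (not cached); illustrative numbers only.
        double[] rate = {2.0, 1.5, 1.0};     // per-level rates for one item (a row of lambda)
        double[][] route = {                  // routing probabilities between levels (one R[u][i])
            {0.0, 0.7, 0.3},                  // miss -> level 1 or level 2
            {1.0, 0.0, 0.0},                  // level 1 -> evicted
            {1.0, 0.0, 0.0}                   // level 2 -> evicted
        };
        double[] listTime = {0.0, 0.8, 0.5};  // current optimization variables x (list times)

        // Build a DTMC over levels: the move from level j to level k is weighted by
        // the probability that the item is requested again before its TTL expires;
        // the remaining mass falls back to level 0 (miss state).
        int n = rate.length;
        double[][] P = new double[n][n];
        for (int j = 0; j < n; j++) {
            double stay = 0.0;
            for (int k = 1; k < n; k++) {
                double hitProb = 1.0 - Math.exp(-rate[k] * listTime[k]);
                P[j][k] = route[j][k] * hitProb;
                stay += P[j][k];
            }
            P[j][0] = 1.0 - stay;
        }

        double[] pi = steadyState(P);         // steady-state probability per level for this item
        System.out.println("occupancy per level: " + Arrays.toString(pi));
    }
}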
Input Matrix Formats:
- lambda[u]: Matrix of size (n × h+1) where lambda[u][i,j] is the arrival rate of user u for item i at cache level j
- R[u][i]: Matrix of size (h+1 × h+1) where R[u][i][j,k] is the routing probability from level j to level k for user u and item i
- m: Matrix of size (1 × h) where m[0,j] is the capacity of cache level j+1
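The dimension conventions above can be summarized in code. This allocation sketch is illustrative only: it generalizes the concrete example further below to n items, h levels and U user classes, and uses only the Matrix(rows, cols) constructor that appears in that example.

int n = 3, h = 2, U = 1;

Matrix[] lambda = new Matrix[U];
Matrix[][] R = new Matrix[U][n];
for (int u = 0; u < U; u++) {
    lambda[u] = new Matrix(n, h + 1);        // one row per item, one column per level (column 0 = external)
    for (int i = 0; i < n; i++) {
        R[u][i] = new Matrix(h + 1, h + 1);  // level-to-level routing for user u, item i
    }
}
Matrix m = new Matrix(1, h);                  // capacities of levels 1..h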
Output Matrix Format:

Returns a matrix of size (n × h+1) where result[i,j] represents the steady-state probability that item i is at cache level j. Level 0 represents items not cached (miss state).

Example Usage:
// 3 items, 2 cache levels, 1 user
Matrix[] lambda = new Matrix[1];
lambda[0] = new Matrix(3, 3);      // 3 items × (2 levels + 1 external)
lambda[0].set(0, 0, 2.0);          // Item 0, external arrival rate
lambda[0].set(0, 1, 1.5);          // Item 0, level 1 rate
lambda[0].set(0, 2, 1.0);          // Item 0, level 2 rate

Matrix[][] R = new Matrix[1][3];
for (int i = 0; i < 3; i++) {
    R[0][i] = new Matrix(3, 3);
    R[0][i].set(0, 1, 0.7);        // External → Level 1
    R[0][i].set(0, 2, 0.3);        // External → Level 2
    R[0][i].set(1, 0, 1.0);        // Level 1 → External (eviction)
    R[0][i].set(2, 0, 1.0);        // Level 2 → External (eviction)
}

Matrix m = new Matrix(1, 2);
m.set(0, 0, 2.0);                  // Level 1 capacity
m.set(0, 1, 1.0);                  // Level 2 capacity

Matrix result = cache_ttl_tree(lambda, R, m, 12345L);
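The returned matrix can then be read off per item to obtain miss and hit probabilities. The post-processing below is a sketch that assumes Matrix exposes a get(row, col) accessor; that accessor name is an assumption, not confirmed by this page.

// Hypothetical post-processing of result.  Column 0 is the miss state;
// columns 1..h are the cache levels.
for (int i = 0; i < 3; i++) {
    double miss = result.get(i, 0);
    double hit = 1.0 - miss;   // probability that item i is cached at some level
    System.out.printf("item %d: miss=%.3f hit=%.3f level1=%.3f level2=%.3f%n",
            i, miss, hit, result.get(i, 1), result.get(i, 2));
}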
Theoretical Background:

This implementation is based on the TTL approximation for cache analysis, where each item's residence time in a cache level follows an exponential distribution. The method extends traditional TTL analysis to tree-structured hierarchies, allowing for complex routing patterns between cache levels.

The optimization problem solved is:
    min ||F(x)||²
where F(x) = m - c(x) and c(x) is the computed cache occupancy for list times x.
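For concreteness, the objective can be written as a plain function of the candidate list times x. In the sketch below the occupancy c(x) is a stand-in (the per-item 1 - exp(-lambda·x) TTL occupancy, summed over items), not the solver's exact DTMC-based computation; the method name and signature are assumptions for illustration.

// Illustrative objective ||F(x)||^2 with F(x) = m - c(x).
static double objective(double[][] lambda, double[] capacity, double[] x) {
    int n = lambda.length;           // items
    int h = capacity.length;         // cache levels
    double sumSq = 0.0;
    for (int j = 0; j < h; j++) {
        double occupancy = 0.0;      // expected number of items resident in level j+1
        for (int i = 0; i < n; i++) {
            occupancy += 1.0 - Math.exp(-lambda[i][j + 1] * x[j]);
        }
        double residual = capacity[j] - occupancy;   // F_j(x) = m_j - c_j(x)
        sumSq += residual * residual;
    }
    return sumSq;                    // Powell (or any derivative-free method) minimizes this
}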
- Parameters:
lambda - Array of matrices indexed by user u, where lambda[u] is an (n × h+1) matrix of arrival rates for user u, with lambda[u][i,j] being the rate for item i at cache level j (j=0 is external)
R - Array of routing matrices indexed by user u and item i, where R[u][i] is an (h+1 × h+1) transition probability matrix, with R[u][i][j,k] being the probability of moving from level j to level k
m - Matrix of size (1 × h) containing cache capacities, where m[0,j] is the capacity of cache level j+1
seed - Optional random seed for reproducible optimization initialization.
- Returns:
Matrix of size (n × h+1) containing steady-state probabilities, where result[i,j] is the probability that item i is at cache level j
- Since:
3.0.1