pagerank

Task 1

Subtask a.

In [20]:
import numpy as np
from numpy import linalg as la
np.set_printoptions(precision=3,suppress=True)
In [21]:
A1=np.array([[1,0,0,1/2,0,0],[0,0,1,1/2,1/3,1/2],[0,1,0,0,0,0],[0,0,0,0,1/3,0],[0,0,0,0,0,1/2],[0,0,0,0,1/3,0]])
print(A1)
la.det(A1)
[[1.    0.    0.    0.5   0.    0.   ]
 [0.    0.    1.    0.5   0.333 0.5  ]
 [0.    1.    0.    0.    0.    0.   ]
 [0.    0.    0.    0.    0.333 0.   ]
 [0.    0.    0.    0.    0.    0.5  ]
 [0.    0.    0.    0.    0.333 0.   ]]
Out[21]:
0.0

$A$ is a stochastic matrix because all its columns sum up to 1. We apply the Power method to see where the Markov chain converges.
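As a quick sanity check (not part of the original notebook, using the matrix A1 defined above), we can verify the column sums numerically:

print(A1.sum(axis=0))                  # every column should sum to 1.0
print(np.allclose(A1.sum(axis=0), 1))  # True for a column-stochastic matrix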

  • if it starts at page 6 and takes an even number of iterations
In [22]:
# Power method with Euclidean (L_2 norm) Scaling:
x_0=np.array([0,0,0,0,0,1])
x_n=x_0
for i in range(14):
    x_n = np.dot(A1,x_n)
    x_n = x_n/np.linalg.norm(x_n,ord=2)
    print(x_n)
x_n/np.linalg.norm(x_n,1) # we normalise to obtain probabilities again
[0.    0.707 0.    0.    0.707 0.   ]
[0.    0.289 0.866 0.289 0.    0.289]
[0.12  0.956 0.239 0.    0.12  0.   ]
[0.119 0.278 0.952 0.04  0.    0.04 ]
[0.134 0.954 0.267 0.    0.019 0.   ]
[0.133 0.273 0.953 0.006 0.    0.006]
[0.136 0.953 0.271 0.    0.003 0.   ]
[0.136 0.272 0.953 0.001 0.    0.001]
[0.136 0.953 0.272 0.    0.001 0.   ]
[0.136 0.272 0.953 0.    0.    0.   ]
[0.136 0.953 0.272 0.    0.    0.   ]
[0.136 0.272 0.953 0.    0.    0.   ]
[0.136 0.953 0.272 0.    0.    0.   ]
[0.136 0.272 0.953 0.    0.    0.   ]
Out[22]:
array([0.1, 0.2, 0.7, 0. , 0. , 0. ])
In [23]:
# Power method with sum (L_1 norm) Scaling:
x_0=np.array([0,0,0,0,0,1])
x_n=x_0
for i in range(14):
    x_n = np.dot(A1,x_n)
    x_n = x_n/np.linalg.norm(x_n,ord=1)
    print(x_n)
x_n # /np.linalg.norm(x_n,1) # we do not need to normalise 
[0.  0.5 0.  0.  0.5 0. ]
[0.    0.167 0.5   0.167 0.    0.167]
[0.083 0.667 0.167 0.    0.083 0.   ]
[0.083 0.194 0.667 0.028 0.    0.028]
[0.097 0.694 0.194 0.    0.014 0.   ]
[0.097 0.199 0.694 0.005 0.    0.005]
[0.1   0.699 0.199 0.    0.002 0.   ]
[0.1   0.2   0.699 0.001 0.    0.001]
[0.1 0.7 0.2 0.  0.  0. ]
[0.1 0.2 0.7 0.  0.  0. ]
[0.1 0.7 0.2 0.  0.  0. ]
[0.1 0.2 0.7 0.  0.  0. ]
[0.1 0.7 0.2 0.  0.  0. ]
[0.1 0.2 0.7 0.  0.  0. ]
Out[23]:
array([0.1, 0.2, 0.7, 0. , 0. , 0. ])
In [24]:
# Power method with maximum entry (L_\infty norm) Scaling:
x_0=np.array([0,0,0,0,0,1])
x_n=x_0
for i in range(14):
    x_n = np.dot(A1,x_n)
    x_n = x_n/np.linalg.norm(x_n,ord=np.inf)
    print(x_n)
x_n/np.linalg.norm(x_n,1) # we need to normalise to obtain probabilities
[0. 1. 0. 0. 1. 0.]
[0.    0.333 1.    0.333 0.    0.333]
[0.125 1.    0.25  0.    0.125 0.   ]
[0.125 0.292 1.    0.042 0.    0.042]
[0.14 1.   0.28 0.   0.02 0.  ]
[0.14  0.287 1.    0.007 0.    0.007]
[0.142 1.    0.285 0.    0.003 0.   ]
[0.142 0.286 1.    0.001 0.    0.001]
[0.143 1.    0.286 0.    0.001 0.   ]
[0.143 0.286 1.    0.    0.    0.   ]
[0.143 1.    0.286 0.    0.    0.   ]
[0.143 0.286 1.    0.    0.    0.   ]
[0.143 1.    0.286 0.    0.    0.   ]
[0.143 0.286 1.    0.    0.    0.   ]
Out[24]:
array([0.1, 0.2, 0.7, 0. , 0. , 0. ])

Normalization is useful to avoid numbers growing large and creating numerical problems. However, since no eigenvalue of a stochastic matrix is larger than one in absolute value, the iterates cannot blow up, and in our specific case we can also skip the normalization throughout the iterations. In fact, since the matrix is column-stochastic, a probability starting vector remains a probability vector at every iteration, as the output below confirms.

In [25]:
x_0=np.array([0,0,0,0,0,1])
x_n=x_0
for i in range(14):
    x_n = np.dot(A1,x_n)
    print(x_n)
x_n/np.linalg.norm(x_n,1) # we need to normalise to obtain probabilities
[0.  0.5 0.  0.  0.5 0. ]
[0.    0.167 0.5   0.167 0.    0.167]
[0.083 0.667 0.167 0.    0.083 0.   ]
[0.083 0.194 0.667 0.028 0.    0.028]
[0.097 0.694 0.194 0.    0.014 0.   ]
[0.097 0.199 0.694 0.005 0.    0.005]
[0.1   0.699 0.199 0.    0.002 0.   ]
[0.1   0.2   0.699 0.001 0.    0.001]
[0.1 0.7 0.2 0.  0.  0. ]
[0.1 0.2 0.7 0.  0.  0. ]
[0.1 0.7 0.2 0.  0.  0. ]
[0.1 0.2 0.7 0.  0.  0. ]
[0.1 0.7 0.2 0.  0.  0. ]
[0.1 0.2 0.7 0.  0.  0. ]
Out[25]:
array([0.1, 0.2, 0.7, 0. , 0. , 0. ])
In [26]:
def power_method(A, x_0, niter, scaling=2):
    x_n = x_0/np.linalg.norm(x_0, ord=scaling)
    for i in range(niter):
        x_n = np.dot(A, x_n)
        x_n = x_n/np.linalg.norm(x_n, ord=scaling)
        print(x_n)
    # the caller can divide by the 1-norm to obtain probabilities again
    return x_n
  • if it starts at page 6 and takes an odd number of iterations
In [27]:
x_0=np.array([0,0,0,0,0,1])
power_method(A1,x_0,15)
[0.    0.707 0.    0.    0.707 0.   ]
[0.    0.289 0.866 0.289 0.    0.289]
[0.12  0.956 0.239 0.    0.12  0.   ]
[0.119 0.278 0.952 0.04  0.    0.04 ]
[0.134 0.954 0.267 0.    0.019 0.   ]
[0.133 0.273 0.953 0.006 0.    0.006]
[0.136 0.953 0.271 0.    0.003 0.   ]
[0.136 0.272 0.953 0.001 0.    0.001]
[0.136 0.953 0.272 0.    0.001 0.   ]
[0.136 0.272 0.953 0.    0.    0.   ]
[0.136 0.953 0.272 0.    0.    0.   ]
[0.136 0.272 0.953 0.    0.    0.   ]
[0.136 0.953 0.272 0.    0.    0.   ]
[0.136 0.272 0.953 0.    0.    0.   ]
[0.136 0.953 0.272 0.    0.    0.   ]
Out[27]:
array([0.136, 0.953, 0.272, 0.   , 0.   , 0.   ])
  • if it starts at page 4 and takes an even number of iterations
In [28]:
x_0=np.array([0,0,0,1,0,0])
power_method(A1, x_0, 14)
[0.707 0.707 0.    0.    0.    0.   ]
[0.707 0.    0.707 0.    0.    0.   ]
[0.707 0.707 0.    0.    0.    0.   ]
[0.707 0.    0.707 0.    0.    0.   ]
[0.707 0.707 0.    0.    0.    0.   ]
[0.707 0.    0.707 0.    0.    0.   ]
[0.707 0.707 0.    0.    0.    0.   ]
[0.707 0.    0.707 0.    0.    0.   ]
[0.707 0.707 0.    0.    0.    0.   ]
[0.707 0.    0.707 0.    0.    0.   ]
[0.707 0.707 0.    0.    0.    0.   ]
[0.707 0.    0.707 0.    0.    0.   ]
[0.707 0.707 0.    0.    0.    0.   ]
[0.707 0.    0.707 0.    0.    0.   ]
Out[28]:
array([0.707, 0.   , 0.707, 0.   , 0.   , 0.   ])
  • if it starts at page 4 and takes an odd number of iterations
In [29]:
x_0=np.array([0,0,0,1,0,0])
power_method(A1, x_0, 15)
[0.707 0.707 0.    0.    0.    0.   ]
[0.707 0.    0.707 0.    0.    0.   ]
[0.707 0.707 0.    0.    0.    0.   ]
[0.707 0.    0.707 0.    0.    0.   ]
[0.707 0.707 0.    0.    0.    0.   ]
[0.707 0.    0.707 0.    0.    0.   ]
[0.707 0.707 0.    0.    0.    0.   ]
[0.707 0.    0.707 0.    0.    0.   ]
[0.707 0.707 0.    0.    0.    0.   ]
[0.707 0.    0.707 0.    0.    0.   ]
[0.707 0.707 0.    0.    0.    0.   ]
[0.707 0.    0.707 0.    0.    0.   ]
[0.707 0.707 0.    0.    0.    0.   ]
[0.707 0.    0.707 0.    0.    0.   ]
[0.707 0.707 0.    0.    0.    0.   ]
Out[29]:
array([0.707, 0.707, 0.   , 0.   , 0.   , 0.   ])

So the Markov process does not converge: depending on the starting vector, the iterates keep oscillating between two vectors.

Remember that a Markov chain converges to a steady-state vector $\vec x$ if $\lambda_1=1$ is a dominant eigenvalue of $A$.

Let's have a look at the eigenvalues of the transition matrix:

In [30]:
la.eig(A1)
Out[30]:
(array([ 1.   ,  1.   , -1.   ,  0.   ,  0.408, -0.408]),
 array([[ 1.   ,  0.   ,  0.   , -0.408, -0.308, -0.173],
        [ 0.   ,  0.707, -0.707,  0.   , -0.251,  0.141],
        [ 0.   ,  0.707,  0.707, -0.408, -0.615, -0.346],
        [ 0.   ,  0.   ,  0.   ,  0.816,  0.364,  0.487],
        [ 0.   ,  0.   ,  0.   ,  0.   ,  0.446, -0.597],
        [ 0.   ,  0.   ,  0.   ,  0.   ,  0.364,  0.487]]))

There are three eigenvalues with absolute value equal to 1, i.e., $|\lambda_i|=1$ for $i=1,2,3$. A stochastic matrix always has 1 as an eigenvalue, but in this case it is not a dominant eigenvalue, since other eigenvalues have the same absolute value.

A stochastic matrix has the following properties. The largest absolute value of an eigenvalue of a stochastic matrix is at most 1, by the Gershgorin circle theorem (not discussed in class). Additionally, every column-stochastic matrix has an "obvious" left (row) eigenvector associated with the eigenvalue 1: the vector ${\boldsymbol {1}}$, whose coordinates are all equal to 1, since every column sums to 1.
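As a small check of the second property, using the matrix A1 defined above:

print(np.ones(6) @ A1)  # gives the column sums, i.e. the all-ones vector: 1^T A = 1^T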

On the other hand, Perron's theorem applied to stochastic matrices tells us that if the stochastic matrix is positive, then it has a dominant eigenvalue $\lambda = 1$. More generally, the Perron–Frobenius theorem tells us that if the stochastic matrix is nonnegative, irreducible and aperiodic, then again it has a dominant eigenvalue $\lambda = 1$. However, in general stochastic matrices need not be positive, irreducible or aperiodic.
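For instance, A1 above is reducible: page 1 only links to itself, so the associated directed graph is not strongly connected. A possible check (a sketch using networkx, which is only imported later in the notebook):

import networkx as nx
G1 = nx.from_numpy_array(A1, create_using=nx.DiGraph)  # edge i -> j whenever A1[i, j] != 0
print(nx.is_strongly_connected(G1))                    # False, hence A1 is reducible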

In the next subtask we modify the transition matrix so that all its entries are positive (no zeros); then Perron's theorem applies and the corresponding process converges.

A Markov process with transition matrix $A$ is said to be regular if all the entries of some power of $A$ are positive. It can be shown that if this happens, then $A$ has a dominant eigenvalue equal to 1 as well.
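We can check that A1 is not regular (a small sketch): since page 1 is absorbing, every power of A1 keeps zeros in its first column.

print(np.count_nonzero(np.linalg.matrix_power(A1, 50) == 0))  # positive, so A1^50 still contains zeros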

Subtask b.

In [31]:
A2 = 1/6*np.ones((6,6))
A2
Out[31]:
array([[0.167, 0.167, 0.167, 0.167, 0.167, 0.167],
       [0.167, 0.167, 0.167, 0.167, 0.167, 0.167],
       [0.167, 0.167, 0.167, 0.167, 0.167, 0.167],
       [0.167, 0.167, 0.167, 0.167, 0.167, 0.167],
       [0.167, 0.167, 0.167, 0.167, 0.167, 0.167],
       [0.167, 0.167, 0.167, 0.167, 0.167, 0.167]])
In [32]:
x_0=np.array([0,0,0,1,0,0])
x=power_method(A2,x_0,niter=14)
x/np.linalg.norm(x,ord=1)
[0.408 0.408 0.408 0.408 0.408 0.408]
[0.408 0.408 0.408 0.408 0.408 0.408]
[0.408 0.408 0.408 0.408 0.408 0.408]
[0.408 0.408 0.408 0.408 0.408 0.408]
[0.408 0.408 0.408 0.408 0.408 0.408]
[0.408 0.408 0.408 0.408 0.408 0.408]
[0.408 0.408 0.408 0.408 0.408 0.408]
[0.408 0.408 0.408 0.408 0.408 0.408]
[0.408 0.408 0.408 0.408 0.408 0.408]
[0.408 0.408 0.408 0.408 0.408 0.408]
[0.408 0.408 0.408 0.408 0.408 0.408]
[0.408 0.408 0.408 0.408 0.408 0.408]
[0.408 0.408 0.408 0.408 0.408 0.408]
[0.408 0.408 0.408 0.408 0.408 0.408]
Out[32]:
array([0.167, 0.167, 0.167, 0.167, 0.167, 0.167])
In [33]:
ell, P = la.eig(A2)
ell, P
Out[33]:
(array([ 1.,  0., -0.,  0.,  0.,  0.]),
 array([[ 0.408,  0.   , -0.107, -0.   , -0.   , -0.   ],
        [ 0.408,  0.894,  0.91 , -0.082, -0.082, -0.082],
        [ 0.408, -0.224, -0.201, -0.478, -0.478, -0.478],
        [ 0.408, -0.224, -0.201,  0.85 , -0.146, -0.146],
        [ 0.408, -0.224, -0.201, -0.146,  0.85 , -0.146],
        [ 0.408, -0.224, -0.201, -0.146, -0.146,  0.85 ]]))

The matrix is positive, hence it has a dominant eigenvalue equal to 1. The corresponding eigenvector is the vector whose entries are all equal to 1, or its normalized version shown in the first column of the matrix $P$ above.

When computing the eigenvalues is computationally feasible, as in these small examples, we can also find the steady-state vector by applying the theory seen on slide 20.

In [34]:
P_inv = la.inv(P) # this is also computationally demanding
z_0 = P_inv @ x_0
x_n = z_0[0] * P[:,0]
print(x_n)
[0.167 0.167 0.167 0.167 0.167 0.167]

Subtask c.

In [35]:
A = 0.85 * A1 + 0.15 * A2
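As a quick sanity check (not in the original notebook), the combined matrix is still column-stochastic and now strictly positive, so Perron's theorem applies:

print(np.allclose(A.sum(axis=0), 1), (A > 0).all())  # True True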
In [36]:
x_0=np.array([0,0,0,1,0,0])
power_method(A,x_0,niter=30)
[0.705 0.705 0.039 0.039 0.039 0.039]
[0.707 0.126 0.689 0.054 0.06  0.054]
[0.682 0.707 0.153 0.061 0.067 0.061]
[0.681 0.256 0.676 0.065 0.072 0.065]
[0.654 0.699 0.264 0.066 0.073 0.066]
[0.649 0.358 0.66  0.068 0.076 0.068]
[0.626 0.686 0.35  0.068 0.076 0.068]
[0.62  0.433 0.642 0.07  0.077 0.07 ]
[0.602 0.672 0.413 0.069 0.077 0.069]
[0.596 0.486 0.626 0.07  0.078 0.07 ]
[0.582 0.659 0.459 0.07  0.078 0.07 ]
[0.577 0.524 0.613 0.071 0.078 0.071]
[0.567 0.649 0.492 0.07  0.078 0.07 ]
[0.563 0.551 0.603 0.071 0.078 0.071]
[0.555 0.641 0.515 0.07  0.078 0.07 ]
[0.552 0.57  0.595 0.071 0.078 0.071]
[0.546 0.635 0.531 0.07  0.078 0.07 ]
[0.544 0.584 0.59  0.071 0.078 0.071]
[0.539 0.631 0.543 0.07  0.078 0.07 ]
[0.538 0.593 0.585 0.071 0.078 0.071]
[0.535 0.627 0.552 0.071 0.078 0.071]
[0.533 0.6   0.582 0.071 0.078 0.071]
[0.531 0.625 0.558 0.071 0.078 0.071]
[0.53  0.605 0.58  0.071 0.078 0.071]
[0.529 0.623 0.562 0.071 0.078 0.071]
[0.528 0.609 0.578 0.071 0.078 0.071]
[0.527 0.622 0.566 0.071 0.078 0.071]
[0.526 0.611 0.577 0.071 0.078 0.071]
[0.526 0.621 0.568 0.071 0.078 0.071]
[0.525 0.613 0.576 0.071 0.078 0.071]
Out[36]:
array([0.525, 0.613, 0.576, 0.071, 0.078, 0.071])

Hence page 2 has the highest ranking.
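To read this off programmatically, a small sketch reusing the steady-state vector returned above:

x = np.array([0.525, 0.613, 0.576, 0.071, 0.078, 0.071])  # power-method output from the cell above
print(np.argmax(x) + 1)                                    # pages are numbered from 1, so this gives 2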

In [37]:
ell, P = la.eig(A)
ell, P
Out[37]:
(array([ 1.   ,  0.85 , -0.85 ,  0.347, -0.347, -0.   ]),
 array([[ 0.522,  0.816, -0.   ,  0.308, -0.173,  0.408],
        [ 0.618, -0.408,  0.707,  0.251,  0.141, -0.   ],
        [ 0.574, -0.408, -0.707,  0.615, -0.346,  0.408],
        [ 0.071, -0.   , -0.   , -0.364,  0.487, -0.816],
        [ 0.078,  0.   ,  0.   , -0.446, -0.597, -0.   ],
        [ 0.071, -0.   , -0.   , -0.364,  0.487, -0.   ]]))
In [38]:
P_inv = la.inv(P) # this is also computationally demanding
z_0 = P_inv @ x_0
x_n = z_0[0] * P[:,0]
print(x_n)
[0.27  0.32  0.297 0.036 0.041 0.036]

The power-method output above is scaled to unit 2-norm, while this vector is a probability vector (unit 1-norm). Once both are normalised in the same way they essentially coincide; the remaining slight discrepancies are due to rounding errors and the finite number of iterations of the power method.
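A small check of this, using the values printed above:

x_pm = np.array([0.525, 0.613, 0.576, 0.071, 0.078, 0.071])  # power-method output (2-norm scaling)
print(x_pm / np.linalg.norm(x_pm, ord=1))                     # approximately [0.271 0.317 0.298 0.037 0.040 0.037]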

Task 2

In [39]:
with open("../assets/top250movies.txt", encoding="utf-8") as f:
    lines = f.readlines()
In [40]:
db = {} 
for line in lines:
    entries = line.strip().split("/")
    db[entries[0]] = entries[1:]

To handle the weights arising from repeated co-appearances of actors in movies, we can first create a multi-directed graph and then convert it into a directed graph with weights on the arcs. A multi-digraph is a graph that allows multiple arcs between the same pair of nodes. Alternatively, as hinted by the text of the exercise, we could build an adjacency dictionary in which, for every actor, we list the actors reached by that actor (i.e., the more expensive actors), allowing repeated entries, and then construct the digraph from this adjacency dictionary (a minimal sketch of this alternative is shown after the weighted digraph construction below). However, networkx graphs maintain adjacency structures automatically, and since library functions are generally preferable in Python for efficiency, we use the first alternative: a multi-digraph that is then converted into a digraph.

In [41]:
import networkx as nx

MDG = nx.MultiDiGraph()
for k in db:
    for i in range(len(db[k])):
        actor = db[k][i]
        MDG.add_edges_from([(cheaper_actor,actor) for cheaper_actor in db[k][(i+1):]])
In [42]:
MDG.number_of_nodes(), MDG.number_of_edges()
Out[42]:
(14882, 886259)
In [43]:
DG = nx.DiGraph()
for node, outgoing_neighbors in MDG.adjacency():
    for neighbor, arc_dict in outgoing_neighbors.items():
        value = len(arc_dict.values())
        DG.add_edge(node, neighbor, weight = value)
In [44]:
DG.number_of_nodes(), DG.number_of_edges()
Out[44]:
(14882, 880639)
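For comparison, here is a minimal sketch of the adjacency-dictionary alternative mentioned earlier (arc_counts and DG_alt are hypothetical names); it should produce the same weighted digraph:

from collections import Counter

arc_counts = Counter()
for actors in db.values():
    for i, actor in enumerate(actors):
        for cheaper_actor in actors[i+1:]:
            arc_counts[(cheaper_actor, actor)] += 1  # count repeated co-appearances

DG_alt = nx.DiGraph()
DG_alt.add_weighted_edges_from((u, v, w) for (u, v), w in arc_counts.items())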
In [45]:
PR = nx.pagerank(DG, alpha=0.7)
In [46]:
sorted(PR, key=PR.get, reverse=True)[0:5]
Out[46]:
['Leonardo DiCaprio', 'Robert De Niro', 'Tom Hanks', 'Jamie Foxx', 'Al Pacino']