$k$-colorings of a graph $G$ with maximum degree
$\Delta$. Hence, we require $k \geq \Delta + 1$. To perform this sampling,
we use MCMC: we sample from a Markov chain whose states are the $k$-colorings of $G$,
and whose stationary distribution is the uniform distribution over valid colorings.
The issue with MCMC techniques is that we never know when our chain has reached
the stationary distribution. To ensure we receive uniformly distributed samples,
we build a "bounding chain" $W$ which has the following properties:
def process(S, T):
    assert set(T).issubset(set(S)) # precondition
    s = np.random.choice(list(S))  # uniform choice over S.
    if s in T:
        return s # |T|/|S| prob. to enter `if`.
    else:
        # (1 - |T|/|S|) prob. to enter `else`.
        Snext = [x for x in S if x != s] # note: S.remove(s) would mutate and return None.
        return process(Snext, T)
We claim that the probability that process(S, T) = t0, for a fixed t0 in T,
is $1/|T|$. We create a new function indicator to express this:
def indicator(t0, S, T):
    assert t0 in T                 # precondition
    assert set(T).issubset(set(S)) # precondition
    return t0 == process(S, T)
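As a sanity check on this claim, here is a minimal simulation sketch (my own, using the stdlib random module rather than numpy, with an arbitrary choice of S and T):

```python
import random

def process(S, T):
    # Uniformly pick from S; recurse on S minus the pick until we land in T.
    s = random.choice(S)
    if s in T:
        return s
    return process([x for x in S if x != s], T)

def indicator(t0, S, T):
    # True iff process picked t0.
    return t0 == process(S, T)

random.seed(0)
S, T, t0 = [0, 1, 2, 3, 4], [0, 1], 0
trials = 20000
hits = sum(indicator(t0, S, T) for _ in range(trials))
print(round(hits / trials, 2))  # should be close to 1/|T| = 0.5
```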
Let's push the comparison t0 == into the definition of process, after inlining process.
This gives us:
def indicator(t0, S, T):
    assert t0 in T                 # precondition
    assert set(T).issubset(set(S)) # precondition
    s = np.random.choice(list(S))
    if s in T:
        # |T|/|S| prob. that s in T.
        return t0 == s # 1/|T| prob. that t0 == s, given s in T.
    else:
        # (1 - |T|/|S|) prob. to reach the `else` branch.
        Snext = [x for x in S if x != s]
        return indicator(t0, Snext, T)
Now, we write down the recurrence for the probability that we are trying
to compute: $P(t_0, S, T)$ is the probability that indicator(t0, S, T)
returns True. Alternatively, it's the probability that process(S, T)
returns t0.
P(t0, S, T) = prob. that process(S, T) = t0
P(t0, T, T) = |T|/|T| * 1/|T| = 1/|T|                    [base case]
P(t0, S, T) = 1/|S| + (1 - |T|/|S|) * P(t0, |S|-1, T)    [induction]
We assume for induction that P(t0, |S|-1, T) = 1/|T|. On substitution
into [induction], we get:
P(t0, S, T)
  = 1/|S| + (1 - |T|/|S|) * P(t0, |S|-1, T)   [induction]
  = 1/|S| + (1 - |T|/|S|) * 1/|T|
  = 1/|S| + (1/|T| - 1/|S|)
  = 1/|T|
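As a concrete instance of this algebra (sizes chosen arbitrarily), the recurrence can be evaluated exactly with fractions:

```python
from fractions import Fraction

def P(size_S, size_T):
    # Recurrence for P(t0, S, T), parametrized only by |S| and |T|.
    if size_S == size_T:  # base case: S = T
        return Fraction(1, size_T)
    # P(t0, S, T) = 1/|S| + (1 - |T|/|S|) * P(t0, |S|-1, T)
    return Fraction(1, size_S) + (1 - Fraction(size_T, size_S)) * P(size_S - 1, size_T)

print(P(5, 2))  # 1/2, i.e. 1/|T|
```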
This is indeed the same probability as (1): choosing an element uniformly
at random from a subset $T$, which has probability 1/|T|.
Recall the definition of process:
def process(S, T):
    s = np.random.choice(list(S))
    if s in T:
        return s
    else:
        return process([x for x in S if x != s], T)
Notice that the last return is a tail call. This program can be rewritten iteratively as:
def process(S, T):
    while True:
        s = np.random.choice(list(S))
        if s in T:
            return s
        S = [x for x in S if x != s] # S.remove(s) would set S to None.
As previously, we create an indicator function and study its behaviour:
def indicator(t0, S, T):
    assert t0 in T                 # precondition
    assert set(T).issubset(set(S)) # precondition
    while True:
        s = np.random.choice(list(S))
        if s in T:
            return t0 == s
        S = [x for x in S if x != s]
We know that this program only returns a value from the line:

    if s in T: return t0 == s

and that P(indicator(t0, S, T) = True) is exactly P(process(S, T) == t0).
Whatever the return value of indicator, we can assume that it occurred within
the if condition. We can use this to compute:

P(indicator(t0, S, T) = True)
  = P(t0 == s | s in T)   [program only returns in this case]
  = 1/|T|
In general, a function of the form:

def f():
    return cond2 if cond1 else cond3

will return True with probability:

P(cond1) * P(cond2 | cond1) + P(not cond1) * P(cond3 | not cond1)
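To see this formula in action on a toy example (my own, not from the text): let the random input be a fair die roll d, with cond1 = "d is even", cond2 = "d <= 2", cond3 = "d == 5". We can check the formula against brute-force enumeration:

```python
from fractions import Fraction

outcomes = [1, 2, 3, 4, 5, 6]    # fair die
cond1 = lambda d: d % 2 == 0     # even
cond2 = lambda d: d <= 2         # used when cond1 holds
cond3 = lambda d: d == 5         # used when cond1 fails

def f(d):
    return cond2(d) if cond1(d) else cond3(d)

# Brute force: P(f = True) by enumeration.
brute = Fraction(sum(f(d) for d in outcomes), len(outcomes))

# Formula: P(cond1)*P(cond2|cond1) + P(not cond1)*P(cond3|not cond1).
evens = [d for d in outcomes if cond1(d)]
odds = [d for d in outcomes if not cond1(d)]
formula = (Fraction(len(evens), len(outcomes)) * Fraction(sum(cond2(d) for d in evens), len(evens))
           + Fraction(len(odds), len(outcomes)) * Fraction(sum(cond3(d) for d in odds), len(odds)))

print(brute == formula, brute)  # True 1/3
```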
Using this, we can analyze indicator as:

P(indicator(t0, S, T) = True)
  = P(s in T) * P(t0 == s | s in T)
    + P(s not in T) * P(indicator(t0, S.remove(s), T) = True | s not in T)
  = |T|/|S| * 1/|T|
    + (|S|-|T|)/|S| * P(indicator(t0, S.remove(s), T) = True)
  = 1/|S| + (|S|-|T|)/|S| * 1/|T|   [by induction]
  = 1/|T|
w. Note that this is an over-approximation: that is, there may be colors that
are not blocked for v, which we believe to be blocked from w's point of view.
O was an over-approximation. That is: a color in A is definitely a legal color
for v in x[n], but there may be a color that is legal for v in x[n] that is
not in A. We sample colors one after another until we find a color that is in A;
call this sequence $S$.
We will at worst pick $\Delta + 1$ elements for $S$, since the max-degree
of the graph is $\Delta$.
X only cares about S[:i]; W cares about the entire sequence. Picking colors
in a sequence till we get a color that is legal for $v$ at index $i$
is the same as picking uniformly at random from the set of colors that are legal for
$v$.
X | p:2 --- q:4
W | p:{1, 2} --- q:{4, 5}
We are sampling q:
O = {1, 2}
A = {3, 4, 5}
S = [2, 1, 3]
X | p:1 -- q:2
W | p:{1, 2} -- q:{1, 2, 3}
If we analyze S = [2, 1, 3]
we notice that:
2: invalid for W(p:[1, 2]), invalid for X(p:2)
1: invalid for W, valid for X
3: valid for W, valid for X
So, what we are really sampling is:
SX = [1]    (for X's transition)
SW = [2, 3] (for W's transition)
To transition X, we can safely drop SW. However, to transition W correctly,
we generate more elements in the sequence, till we hit a "safe" element.
v might have more colors that are blocked than what w sees. Rather than
sampling colors from C till we get an element of A, we can sample colors
from C \ B. We know that the colors in B can never be used by $X$, since
the colors in B are those that we know are blocked for sure.
This is used in the theoretical analysis of the paper.
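A quick sketch (with toy sets C, B, A of my own choosing) to illustrate that sampling from C till we hit A, and sampling from C \ B till we hit A, agree in distribution over A:

```python
import random

def sample_till_hit(pool, A):
    # Draw without replacement from pool until we land in A (the process from above).
    pool = list(pool)
    while True:
        c = random.choice(pool)
        if c in A:
            return c
        pool.remove(c)

random.seed(1)
C = [1, 2, 3, 4, 5, 6]
B = [1, 2]                            # known blocked for sure
A = [5, 6]                            # definitely allowed
CminusB = [c for c in C if c not in B]

trials = 20000
freq_C = sum(sample_till_hit(C, A) == 5 for _ in range(trials)) / trials
freq_CB = sum(sample_till_hit(CminusB, A) == 5 for _ in range(trials)) / trials
print(round(freq_C, 2), round(freq_CB, 2))  # both should be close to 1/|A| = 0.5
```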
import Data.List ((\\)) -- list difference

-- | colors, vertices, edges
cs :: [C]; cs = ...
vs :: [V]; vs = ...
es :: [(V, V)]; es = ...
-- | definitely not allowed
blocked :: (V -> [C]) -> (V -> [C])
blocked v2cs v0 = concat [v2cs w | (v, w) <- es, v == v0, length (v2cs w) == 1]
-- | maybe not allowed
occluded :: (V -> [C]) -> (V -> [C])
occluded v2cs v0 = concat [v2cs w | (v, w) <- es, v == v0]
-- | definitely allowed from Ws point of view
allowed :: (V -> [C]) -> (V -> [C])
allowed v2cs v = cs \\ occluded v2cs v
-- | perturb the color function to another function
perturb :: (V -> [C]) -> Rand (V -> [C])
perturb v2cs = do
  randv <- uniform_rand vs
  randc_for_v <- uniform_rand (allowed v2cs randv)
  return $ \v -> if v == randv then [randc_for_v] else v2cs v
-- | check if we are done
terminated :: (V -> [C]) -> Bool
terminated v2cs = and [length (v2cs v) == 1 | v <- vs]
-- | generate chain
chain :: (V -> [C]) -> Rand [V -> [C]]
chain f = do
if terminated f
  then return [f] -- keep the terminated state, so the chain is non-empty
  else do
    f' <- perturb f; fs <- chain f'; return (f:fs)
-- | return a sample
sample :: (V -> [C]) -> Rand (V -> [C])
sample = fmap last . chain
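As a cross-check of occluded/allowed against the earlier p—q example, a Python sketch (assuming the color set C = {1, ..., 5}, which is my inference from O = {1, 2} and A = {3, 4, 5}):

```python
cs = [1, 2, 3, 4, 5]               # colors (assumed from the example)
vs = ['p', 'q']                    # vertices
es = [('p', 'q'), ('q', 'p')]      # edges, stored in both directions
v2cs = {'p': [1, 2], 'q': [4, 5]}  # bounding-chain state W

def occluded(v2cs, v0):
    # maybe not allowed: every color some neighbor might hold
    return {c for (v, w) in es if v == v0 for c in v2cs[w]}

def blocked(v2cs, v0):
    # definitely not allowed: colors of neighbors pinned to a single color
    return {c for (v, w) in es if v == v0 and len(v2cs[w]) == 1 for c in v2cs[w]}

def allowed(v2cs, v0):
    # definitely allowed from W's point of view
    return set(cs) - occluded(v2cs, v0)

print(sorted(occluded(v2cs, 'q')))  # [1, 2]     (= O)
print(sorted(allowed(v2cs, 'q')))   # [3, 4, 5]  (= A)
```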