§ Randomized SharpSAT
- First idea: sample assignments uniformly at random; the fraction that satisfy ϕ, scaled by 2ⁿ, estimates the total number of models.
- Next idea: take deterministic partial assignments, then extend each partial assignment randomly. This uses the randomness on a 'smaller domain', and exploits linearity of expectation.
- Next refinement: a partial assignment is just a predicate A that we adjoin to our formula. So can we randomly pick an A such that an estimate of ∣{M : M ⊨ ϕ ∧ A}∣ yields an estimate of ∣{M : M ⊨ ϕ}∣? Yes: pick a symmetric A that cuts down the number of models by a factor of 2 in expectation. For example, pick A to be something like xᵢ = xⱼ.
- More generally, we can pick xᵢ = xⱼ with probability 1/2 and xᵢ = ¬xⱼ with probability 1/2, and this gives the same guarantee.
- This can be written as xᵢ ⊕ xⱼ = b, where b is a random bit.
- So: pick constraints {Aₖ} of the form xᵢ ⊕ xⱼ = b, and estimate ∣{M : M ⊨ ϕ ∧ ⋀ₖ Aₖ}∣ for the full set of k constraints by random sampling. This gives us 2ᵏ · ∣{M : M ⊨ ϕ ∧ ⋀ₖ Aₖ}∣ as an estimate of ∣{M : M ⊨ ϕ}∣!
- Why does this work so well in practice? I don't know! I should read Kuldeep Meel's paper to find out.
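The two estimators above — naive sampling, then scaling a count under random XOR constraints by 2ᵏ — can be sketched in Python on a toy formula. Everything concrete here (the formula `phi`, the variable count, sample sizes, and trial counts) is my own illustrative choice, not from the notes; the residual count is taken by brute force since the toy instance is tiny.

```python
import itertools
import random

# Toy setup (illustrative, not from the notes): phi is a predicate over
# n Boolean variables; brute force gives its exact model count.
n = 10

def phi(x):
    return (x[0] or x[1]) and (x[2] != x[3]) and ((not x[4]) or x[5])

exact = sum(phi(x) for x in itertools.product([False, True], repeat=n))

# First idea: sample assignments uniformly; the hit rate times 2^n
# estimates |{M : M |= phi}|.
def naive_estimate(samples=20000, rng=random.Random(0)):
    hits = sum(phi([rng.random() < 0.5 for _ in range(n)])
               for _ in range(samples))
    return hits / samples * 2 ** n

# Refinement: adjoin k random XOR constraints A_k of the form
# x_i (+) x_j = b. Since b is a fair coin, each constraint keeps any
# fixed model with probability 1/2, so 2^k times the count of models of
# phi /\ /\_k A_k is an unbiased estimate of the model count.
def xor_estimate(k=3, rng=None):
    rng = rng or random.Random()
    cons = [(rng.randrange(n), rng.randrange(n), rng.random() < 0.5)
            for _ in range(k)]
    count = sum(phi(x) and all((x[i] ^ x[j]) == b for i, j, b in cons)
                for x in itertools.product([False, True], repeat=n))
    return 2 ** k * count

rng = random.Random(1)
trials = [xor_estimate(k=3, rng=rng) for _ in range(500)]
print(exact, naive_estimate(), sum(trials) / len(trials))
```

At realistic scale the residual count ∣{M : M ⊨ ϕ ∧ ⋀ₖ Aₖ}∣ would of course not be brute-forced; it is itself estimated by sampling, or handed to a SAT oracle once k is large enough that few models survive.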