This reminds me of pruning in machine learning: when trimming weights from a neural network, one greedy strategy is to repeatedly remove the weights that contribute the least to the overall output. For reference, here's a summary of the algorithm from the original paper:
"1. Compute the Q matrices for all the initial vertices.
2. Select all valid pairs.
3. Compute the optimal contraction target v¯ for each valid pair
(v1, v2 ). The error v¯T(Q1 +Q2 )v¯ of this target vertex becomes
the cost of contracting that pair.
4. Place all the pairs in a heap keyed on cost with the minimum
cost pair at the top.
5. Iteratively remove the pair (v1, v2 ) of least cost from the heap,
contract this pair, and update the costs of all valid pairs involving v1."
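If it helps, here's a rough Python sketch of those five steps (assuming numpy; the function names are mine). It deviates from the paper in two ways I should flag: valid pairs are restricted to mesh edges (the paper also admits non-edge pairs closer than a threshold t), and step 5's "update the costs" is done by lazily invalidating stale heap entries rather than re-keying the heap in place.

```python
import heapq
import itertools

import numpy as np


def vertex_quadrics(vertices, faces):
    # Step 1: each vertex's Q is the sum of K_p = p p^T over its incident
    # face planes p = (a, b, c, d), where ax + by + cz + d = 0 with unit normal.
    Q = [np.zeros((4, 4)) for _ in vertices]
    for i, j, k in faces:
        n = np.cross(vertices[j] - vertices[i], vertices[k] - vertices[i])
        norm = np.linalg.norm(n)
        if norm == 0:
            continue  # degenerate face contributes no plane
        n = n / norm
        p = np.append(n, -n.dot(vertices[i]))
        Kp = np.outer(p, p)
        for v in (i, j, k):
            Q[v] += Kp
    return Q


def optimal_target(Qsum, p1, p2):
    # Step 3: v̄ minimizes v̄^T Q v̄, found by solving the paper's linear
    # system; fall back to the pair's midpoint when Q is not invertible.
    A = Qsum.copy()
    A[3] = (0.0, 0.0, 0.0, 1.0)
    try:
        vbar = np.linalg.solve(A, np.array([0.0, 0.0, 0.0, 1.0]))
    except np.linalg.LinAlgError:
        vbar = np.append((p1 + p2) / 2.0, 1.0)
    return vbar[:3], float(vbar @ Qsum @ vbar)


def simplify(vertices, faces, target_vertex_count):
    vertices = [np.asarray(v, dtype=float) for v in vertices]
    Q = vertex_quadrics(vertices, faces)

    # Step 2 (simplified here): valid pairs are just the mesh edges.
    adj = {i: set() for i in range(len(vertices))}
    for f in faces:
        for a, b in zip(f, tuple(f[1:]) + tuple(f[:1])):
            adj[a].add(b)
            adj[b].add(a)

    # Step 4: heap keyed on cost. `tick` breaks ties; `stamp` marks an
    # entry stale once either endpoint has been moved by a contraction.
    tick = itertools.count()
    stamp = [0] * len(vertices)
    heap = []

    def push(v1, v2):
        vbar, cost = optimal_target(Q[v1] + Q[v2], vertices[v1], vertices[v2])
        heapq.heappush(heap, (cost, next(tick), v1, v2, stamp[v1], stamp[v2], vbar))

    for v1 in adj:
        for v2 in adj[v1]:
            if v1 < v2:
                push(v1, v2)

    parent = list(range(len(vertices)))  # v2 -> v1 after contraction
    alive = len(vertices)

    # Step 5: pop the cheapest valid pair, contract v2 into v1, and re-push
    # every surviving pair involving v1 (lazy deletion handles the old ones).
    while alive > target_vertex_count and heap:
        cost, _, v1, v2, s1, s2, vbar = heapq.heappop(heap)
        if parent[v1] != v1 or parent[v2] != v2:
            continue  # an endpoint was already contracted away
        if s1 != stamp[v1] or s2 != stamp[v2]:
            continue  # stale entry: an endpoint moved since this was pushed
        vertices[v1] = vbar
        Q[v1] = Q[v1] + Q[v2]
        parent[v2] = v1
        stamp[v1] += 1
        alive -= 1
        adj[v1] |= adj.pop(v2)
        adj[v1] -= {v1, v2}
        for n in adj[v1]:
            adj[n].discard(v2)
            adj[n].add(v1)
            push(min(v1, n), max(v1, n))

    def find(v):  # follow the contraction chain to the surviving vertex
        while parent[v] != v:
            v = parent[v]
        return v

    # Remap faces onto surviving vertices; drop faces that collapsed.
    new_faces = [g for f in faces
                 if len(set(g := tuple(find(v) for v in f))) == 3]
    return vertices, new_faces
```

The lazy-invalidation trick (push new entries, skip stale ones on pop) is just the usual workaround for Python's heapq having no decrease-key; a real implementation would also guard against mesh fold-overs, which this sketch ignores.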