Random generation of capacities

Michel Grabisch

University of Paris I Panthéon-Sorbonne and Paris School of Economics

October 20, 2022, 12:20 in S6


Capacities, introduced by Choquet in 1953, are monotone set functions vanishing on the empty set. They are widely used in decision making under risk and uncertainty, in multicriteria decision making, and in game theory (monotone TU-games), and they are related to many combinatorial problems in Operations Research. When determining or learning a model based on capacities, it is often important to generate random capacities uniformly. The problem turns out to be surprisingly difficult, due to the monotonicity constraints, and naive methods give poor results, with extremely biased distributions for some coefficients of the capacity. The exact theoretical solution to this problem is known, however: the set of capacities on a finite set N is an order polytope, associated with the Boolean lattice (2^N, ⊆), and generating uniformly random elements in an order polytope amounts to generating all linear extensions of the underlying poset, that is, in our case, the Boolean lattice. Unfortunately, the number of linear extensions of (2^N, ⊆) grows tremendously with the size of N, and is not even known beyond |N| = 7. This obliges one to resort to approximate methods. The literature offers many such methods, among which the Markov chain approach seems to give the best results.
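For very small N, the exact order-polytope method is easy to sketch: enumerate all linear extensions of the poset of proper nonempty subsets, pick one uniformly at random, draw as many uniform values as there are free coordinates, sort them, and assign them along the chosen extension. The following sketch (names and structure are my own, not from the talk) illustrates this; it is feasible only for tiny N, precisely because the number of linear extensions explodes.

```python
import itertools
import random

def subsets(n):
    """All proper nonempty subsets of {0, ..., n-1}, as frozensets."""
    elems = range(n)
    return [frozenset(c)
            for k in range(1, n)
            for c in itertools.combinations(elems, k)]

def linear_extensions(elems):
    """Enumerate all linear extensions of the inclusion order on elems
    by repeatedly removing a minimal element (backtracking)."""
    if not elems:
        yield []
        return
    for e in elems:
        # e is minimal if no other element is a proper subset of it
        if not any(other < e for other in elems):
            rest = [x for x in elems if x != e]
            for tail in linear_extensions(rest):
                yield [e] + tail

def random_capacity(n, exts, rng=random):
    """Uniform random capacity on 2^{0..n-1}: pick a linear extension
    uniformly, then assign sorted uniform values along it (Stanley's
    simplex decomposition of the order polytope)."""
    ext = rng.choice(exts)
    vals = sorted(rng.random() for _ in ext)
    mu = {frozenset(): 0.0, frozenset(range(n)): 1.0}
    for s, v in zip(ext, vals):
        mu[s] = v
    return mu
```

For |N| = 3 the poset of proper nonempty subsets already has 48 linear extensions, and the count for the full lattice grows so fast that this enumeration is hopeless beyond the first few values of |N|, which is what motivates the approximate methods below.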

In this talk, after reviewing the most representative methods, we propose a new approximate method that counts only a limited number of linear extensions, eliminating those with very low probability. The basic idea is to concentrate on the two top and two bottom layers of the Boolean lattice, eliminating one by one the maximal elements in the two top layers and the minimal elements in the two bottom layers. For this reason, the method is called the 2-layer approximation method. In a second part, we explain how to measure the performance of a given random capacity generator. Our experiments show that the 2-layer approximation method performs as well as the Markov chain method, while taking much less computation time.
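To give an idea of what a Markov chain generator looks like in this setting, one common variant (not necessarily the one benchmarked in the talk) is a Gibbs sampler on the order polytope: it repeatedly resamples a single coordinate μ(S) uniformly between the largest value among the immediate subsets of S and the smallest value among its immediate supersets, which preserves monotonicity at every step. A minimal sketch, with hypothetical names:

```python
import itertools
import random

def gibbs_capacity(n, sweeps=1000, rng=random):
    """Approximately uniform capacity on 2^{0..n-1} via Gibbs sampling:
    each coordinate is resampled uniformly within its monotonicity
    bounds, so every state of the chain is a valid capacity."""
    elems = range(n)
    subs = [frozenset(c)
            for k in range(1, n)
            for c in itertools.combinations(elems, k)]
    # Feasible starting point: mu(S) proportional to |S|.
    mu = {frozenset(): 0.0, frozenset(elems): 1.0}
    for s in subs:
        mu[s] = len(s) / n
    for _ in range(sweeps):
        for s in subs:
            lo = max(mu[s - {i}] for i in s)                   # subsets of s
            hi = min(mu[s | {i}] for i in elems if i not in s)  # supersets
            mu[s] = rng.uniform(lo, hi)
    return mu
```

The cost of this approach is the number of sweeps needed for the chain to mix, which is the computation-time disadvantage, relative to the 2-layer approximation method, reported in the abstract.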

(joint work with Christophe Labreuche and Peiqi Sun)