Two's Company, Three's a Crowd: Consensus-Halving for a Constant Number of Agents

We consider the ε-Consensus-Halving problem, in which a set of heterogeneous agents aim at dividing a continuous resource into two (not necessarily contiguous) portions that all of them simultaneously consider to be of approximately the same value (up to ε). This problem was recently shown to be PPA-complete, for n agents and n cuts, even for very simple valuation functions. In a quest to understand the root of the complexity of the problem, we consider the setting where there is only a constant number of agents, and we consider both the computational complexity and the query complexity of the problem. For agents with monotone valuation functions, we show a dichotomy: for two agents the problem is polynomial-time solvable, whereas for three or more agents it becomes PPA-complete. Similarly, we show that for two monotone agents the problem can be solved with polynomially-many queries, whereas for three or more agents, we provide exponential query complexity lower bounds. These results are enabled via an interesting connection to a monotone Borsuk-Ulam problem, which may be of independent interest. For agents with general valuations, we show that the problem is PPA-complete and admits exponential query complexity lower bounds, even for two agents.


Introduction
The topic of fair division, founded in the work of Steinhaus [64], has been at the centre of the literature in economics, mathematics, and more recently computer science and artificial intelligence. Classic examples include the well-known fair cake-cutting problem for the division of divisible resources (e.g., see [43], [20] or [58]), as well as the fair division of indivisible resources (e.g., see [19]). The earlier works in economics and mathematics were mainly concerned with the questions of whether fair solutions exist, and whether they can be found constructively, i.e., via finite-time protocols. In the more recent literature in computer science, a plethora of works has been concerned with the computational complexity of finding such solutions, either by providing polynomial-time algorithms, or by proving computational hardness results. Currently, it would be no exaggeration to say that fair division is one of the most vibrant and important topics at the intersection of these areas.
Besides the classic fair division settings mentioned above, another well-known problem is the Consensus-Halving problem, whose origins date back to the 1940s and the work of Neyman [56]. In this problem, a set of n agents with heterogeneous valuation functions over the unit interval [0, 1] aim at finding a partition of the interval into pieces labelled either "+" or "−", using at most n cuts, such that the total value of every agent for the portion labelled "+" and for the portion labelled "−" is the same. Very much like for other well-known problems in fair division, the existence of a solution can be proven via fixed-point theorems, in particular the Borsuk-Ulam theorem [18]; the existence result can also be seen as a generalisation of the Hobby-Rice Theorem [48].
The problem has applications in the context of the well-known Necklace Splitting problem of Alon [3] and was studied in conjunction with this latter problem [5,44]. Other applications were highlighted by Simmons and Su [63], who were the first to study the problem in isolation. For example, consider two families that are dividing a piece of land into two regions, such that every member of each family considers the split to be equal. Another example is the 1994 Convention of the Law of the Sea (see [20]), which regards the protection of developing countries in the event that an industrialised nation is planning to mine resources in international waters. In such cases, a representative of the developing nations reserves half of the seabed for future use by them, and a consensus-halving solution would correspond to a partition of the seabed into two portions that all the developing nations consider to be of equal value.
Simmons and Su [63] in fact studied the approximate version of Consensus-Halving, coined the ε-Consensus-Halving problem, where the requirement is that the total value of every agent for the portion labelled "+" and for the portion labelled "−" is approximately the same, up to an additive parameter ε. For this version, Simmons and Su [63] provided a constructive solution via an exponential-time algorithm. The ε-Consensus-Halving problem received considerable attention in the literature of computer science over the past few years, as it was proven to be the first "natural" PPA-complete problem [38], i.e., a problem that does not have a polynomial-sized circuit explicitly in its definition, answering a decade-old open question [47,57]. Additionally, Filos-Ratsikas and Goldberg [39] reduced from this problem to establish the PPA-completeness of Necklace Splitting with two thieves; these PPA-completeness results provided the first definitive evidence of intractability for these two classic problems, establishing for instance that solving them is at least as hard as finding a Nash equilibrium of a strategic game [26,29]. Filos-Ratsikas et al. [41] improved on the results for the ε-Consensus-Halving problem, by showing that the problem remains PPA-complete even if one restricts attention to very small classes of agents' valuations, namely piecewise uniform valuations with only two valuation blocks. Very recently, the ε-Consensus-Halving problem was shown to be PPA-complete even for constant ε, namely any ε < 1/5 [32]. This latter result falls under the general umbrella of imposing restrictions on the structure of the problem, to explore whether the computational hardness persists or whether we can obtain polynomial-time algorithms. Filos-Ratsikas et al.
[41] applied this approach along the axis of the valuation functions, while considering a general number of agents, similarly to [38,39]. In this paper, we take a different approach, and we restrict the number of agents to be constant. This is in line with most of the theoretical work on fair division, which is also concerned with solutions for a small number of agents,1 and it is also quite relevant from a practical standpoint, as fair division among a few participants is quite common. We believe that such investigations are necessary in order to truly understand the computational complexity of the problem. To this end, we state our first main question: What is the computational complexity of ε-Consensus-Halving for a constant number of agents?
Since the number of agents is now fixed, any type of computational hardness must originate from the structure of the valuation functions. We remark that the existence results for ε-Consensus-Halving are fairly general, and in particular do not require assumptions like additivity or monotonicity of the valuation functions. For this reason, the sensible approach is to start from valuations that are as general as possible (for which hardness is easier to establish), and gradually constrain the domain to more specific classes, until eventually polynomial-time solvability becomes possible. Indeed, in a paper that is conceptually similar to ours, Deng et al. [34] studied the computational complexity of the contiguous envy-free cake-cutting problem2 and proved that the problem is PPAD-complete, even for three agents, when agents have ordinal preferences over the possible pieces. These types of preferences induce no structure on the valuation functions and are therefore as general as possible. In contrast, the authors showed that for three agents and monotone valuations, the problem becomes polynomial-time solvable, leaving the case of four or more agents as an open problem. We adopt a similar approach in this paper for ε-Consensus-Halving. This brings us to our second main question: What is the query complexity of ε-Consensus-Halving for a constant number of agents?
We develop appropriate machinery that allows us to answer both of our main questions at the same time. In a nutshell, for the positive results, we design algorithms that run in polynomial time and can be recast as query-based algorithms that only use a polynomial number of queries. For the negative results, we construct reductions from "hard" computational problems which allow us to simultaneously obtain computational hardness results and query complexity lower bounds.

Our results
In this section, we list our main results regarding the computational complexity and the query complexity of the ε-Consensus-Halving problem.
Computational Complexity: We start from the computational complexity of the problem for a constant number of agents. We prove the following main results, parameterised by (a) the number of agents and (b) the structure of the valuation functions.
- For a single agent and general valuations, the problem is polynomial-time solvable. The same result applies to the case of any number of agents with identical general valuations.
- For two or more agents and general valuations, the problem is PPA-complete.
- For two agents and monotone valuations, the problem is polynomial-time solvable. This result holds even if one of the two agents has a general valuation.
- For three or more agents and monotone valuations, the problem is PPA-complete.
Finally, the ε-Consensus-Halving problem with 2 agents coincides with the well-known ε-Perfect Division problem for cake-cutting (e.g., see [21,22]), and thus naturally our results imply that ε-Perfect Division with 2 agents with monotone valuations can be done in polynomial time, whereas it becomes PPA-complete for 2 agents with general valuations.
Before we proceed, we offer a brief discussion of the different cases that are covered by our results. The distinction on the number of agents is straightforward. For the valuation functions, we consider mainly general valuations and monotone valuations. Note that neither of these classes is necessarily additive, meaning that the value that an agent has for the union of two disjoint intervals [a, b] and [c, d] is not necessarily the sum of her values for the two intervals. For monotone valuations, the requirement is that for any two subsets I and I′ of [0, 1] such that I ⊆ I′, the agent values I′ at least as much as I, whereas for general valuations there is no such requirement.
We remark that for agents with piecewise constant valuations (i.e., the valuations used in [38] to obtain the PPA-completeness of the problem for many agents), the problem can be solved rather straightforwardly in polynomial time for a constant number of agents, using linear programming (see Appendix A). In terms of the classification of the complexity of the problem in order of increasing generality of the valuation functions, this observation provides the "lower bound", whereas our PPA-hardness results for monotone valuations provide the "upper bound"; see Fig. 1. While the precise point of the phase transition has not yet been identified, our results make considerable progress towards this goal.

Query Complexity:
Besides the computational complexity of the problem, we are also interested in its query complexity. In this setting, one can envision an algorithm which interacts with the agents via a set of queries, and aims to compute a solution to ε-Consensus-Halving using the minimum number of queries possible. In particular, a query is a question from the algorithm to an agent about a subset of [0, 1], who then responds with her value for that set. We provide the following results, where L denotes the Lipschitz parameter of the valuation functions:
- For a single agent and general valuations, the query complexity of the problem is Θ(log(L/ε)). The same result applies for any number of agents with identical general valuations.
- For n ≥ 2 agents and general valuations, the query complexity of the problem is exponential in the size of the input.
- For two agents and monotone valuations, the query complexity of the problem is O(log²(L/ε)). This result holds even if one of the two agents has a general valuation.
- For n ≥ 3 agents and monotone valuations, we provide exponential lower bounds on the query complexity of the problem.
To put these results into context, we remark that when studying the query complexity of the problem, the input consists of the error parameter ε and the Lipschitz parameter L, given by their binary representation. In that sense, for some constant k, a Θ(log^k(L/ε)) number of queries is polynomial in the size of the input. On the contrary, a Θ(L/ε) number of queries is exponential in the size of the input. Not surprisingly, our PPA-hardness results give rise to exponential query complexity lower bounds, whereas our algorithms can be transformed into query-based algorithms of polynomial query complexity. We remark, however, that beyond this connection, our query complexity analysis is in fact quantitative, as we provide tight or almost tight bounds on the query complexity as a function of the number of agents n, for both general and monotone valuation functions.
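For intuition, the single-agent upper bound can be matched by plain bisection: with one agent, a single cut at t splits the interval into [0, t] (labelled "+") and [t, 1] (labelled "−"), and the discrepancy between the two portions changes sign as t moves from 0 to 1. The following is a minimal sketch, not the paper's algorithm; `v_prefix` is a hypothetical query oracle returning the agent's value for [0, t], and the agent is assumed additive so that one query determines both portions.

```python
def halving_cut(v_prefix, total, eps):
    """Bisection for a single additive agent: find a cut t with
    |v([0,t]) - v([t,1])| <= eps.

    v_prefix(t) is a (hypothetical) query oracle for the value of [0, t];
    the discrepancy g(t) = 2*v_prefix(t) - total satisfies g(0) <= 0 <= g(1)
    and is 2L-Lipschitz, so bisection needs O(log(L/eps)) queries.
    """
    lo, hi = 0.0, 1.0
    queries = 0
    while True:
        mid = (lo + hi) / 2
        g = 2 * v_prefix(mid) - total  # one query per iteration
        queries += 1
        if abs(g) <= eps:
            return mid, queries
        if g < 0:
            lo = mid
        else:
            hi = mid

# Example: a uniform agent -- the value of [0, t] is t, total value 1.
t, q = halving_cut(lambda s: s, total=1.0, eps=1e-6)
```

For the uniform agent the very first midpoint t = 1/2 is already an exact halving cut; for skewed densities the loop halves the search interval each round, which is where the logarithmic query bound comes from.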
Finally, for the case of monotone valuations, we consider a more expressive query model, which is an appropriate extension of the well-known Robertson-Webb query model [61,67], the predominant query model in the literature of fair cake-cutting [20]; we refer to this extension as the Generalised Robertson-Webb (GRW) query model. We show that our bounds extend to this model as well, up to logarithmic factors.

Related work
As we mentioned in the introduction, the origins of the Consensus-Halving problem can be traced back to the 1940s and the work of Neyman [56], who studied a generalisation of the problem with k labels instead of two ("+", "−"), and proved the existence of a solution when the valuation functions are probability measures and there is no constraint on the number of cuts used to obtain the solution. The existence theorem for two labels is known as the Hobby-Rice Theorem [48] and has been studied rather extensively in the context of the famous Necklace Splitting problem [3,5,44]. In fact, most of the proofs of existence for Necklace Splitting (with two thieves) were established via the Consensus-Halving problem, which was at the time referred to as Continuous Necklace Splitting [3]. The term "Consensus-Halving" was coined by Simmons and Su [63], who studied the continuous problem in isolation, and provided a constructive proof of existence which holds for very general valuation functions, including all of the valuation functions that we consider in this paper. Interestingly, the proof of Simmons and Su [63] reduces the problem to finding edges of complementary labels on a triangulated n-sphere, labelled as prescribed by Tucker's lemma, a fundamental result in topology.
While not strictly a reduction in the sense of computational complexity, the ideas of [63] certainly paved the way for subsequent work on the problem in computer science. The first computational results were obtained by Filos-Ratsikas et al. [40], who proved that the associated computational problem, ε-Consensus-Halving, lies in PPA (adapting the constructive proof of Simmons and Su [63]) and that the problem is PPAD-hard, for n agents with piecewise constant valuation functions. Filos-Ratsikas and Goldberg [38] proved that the problem is in fact PPA-complete, establishing for the first time the existence of a "natural" problem complete for PPA, i.e., a problem that does not contain a polynomial-sized circuit explicitly in its definition, answering a long-standing open question of Papadimitriou [57]. In a follow-up paper, [39] used the PPA-completeness of Consensus-Halving to prove that the Necklace Splitting problem with two thieves is also PPA-complete. Very recently, Filos-Ratsikas et al. [41] strengthened the PPA-hardness result to the case of very simple valuation functions, namely piecewise constant valuations with at most two blocks of value. Deligkas et al. [31] studied the computational complexity of the exact version of the problem, and obtained among other results its membership in a newly introduced class BU (for "Borsuk-Ulam" [18]) and its computational hardness for the well-known class FIXP of Etessami and Yannakakis [36]. Batziou et al. [12] showed that the corresponding strong approximation problem (with measures represented by algebraic circuits) is complete for BU. A version of Consensus-Halving with divisible items was studied by Goldberg et al. [46], who proved that the problem is polynomial-time solvable for additive utilities, but PPAD-hard for slightly more general utilities. Very recently, Deligkas et al. [30] showed the PPA-completeness of the related Pizza Sharing problem [50], via a reduction from Consensus-Halving.
Importantly, none of the aforementioned results apply to the case of a constant number of agents, which was, prior to this paper, completely unexplored. Additionally, none of these works consider the query complexity of the problem. A recent work [4] studies ε-Consensus-Halving in a hybrid computational model (see the full version of their paper) which includes query access to the valuations, but contrary to our paper, their investigations are not targeted towards a constant number of agents, and the agents have additive valuation functions.
A relevant line of work is concerned with the query complexity of fair cake-cutting [20,59], a related but markedly different fair-division problem. Conceptually closest to ours is the paper by Deng et al. [34], who study both the computational complexity and the query complexity of contiguous envy-free cake-cutting, for agents with either general3 or monotone valuations. For the latter case, the authors obtain a polynomial-time algorithm for three agents, and leave open the complexity of the problem for four or more agents. In our case, for ε-Consensus-Halving, we completely settle the computational complexity of the problem for agents with monotone valuations.
In the literature of fair cake-cutting, most of the related research (e.g., see [6,8,9,20,21]) has focused on the well-known Robertson-Webb (RW) query model, in which agents interact with the protocol via two types of queries, evaluation queries (eval) and cut queries (cut). As the name suggests, this query model is due to Robertson and Webb [60,61], but the work of Woeginger and Sgall [67] has been rather instrumental in formalising it in the form in which it is used today. Given the conceptual similarity of fair cake-cutting with Consensus-Halving, it certainly makes sense to study the latter problem under this query model as well, and in fact, the queries used by Alon and Graur [4] are essentially RW queries. As we show in Section 6, our bounds are qualitatively robust when migrating to this more expressive model, i.e., they are preserved up to logarithmic factors.
Related to our investigation is also the work of Brânzei and Nisan [21], who among other settings study the problem of ε-Perfect Division, which stipulates a partition of the cake into n pieces, such that each of the n agents considers all pieces to be of approximately equal value (up to ε). For the case of n = 2, this problem coincides with ε-Consensus-Halving, and thus one can interpret our results for n = 2 as extensions of the results in [21] (which are only for additive valuations) to the case of monotone valuations (for which the problem is solvable with polynomially many queries) and to the case of general valuations (for which the problem admits exponential query complexity lower bounds). Besides the aforementioned results, there is a plethora of works in computer science and artificial intelligence related to computational aspects of fair cake-cutting and fair division in general, e.g., see [2,10,14-17,23,27,35,45,49,52,62].

Preliminaries
In the ε-Consensus-Halving problem, there is a set of n agents with valuation functions v_i (or simply valuations) over the interval [0, 1], and the goal is to find a partition of the interval into subintervals labelled either "+" or "−", using at most n cuts. This partition should satisfy that for every agent i, the total value for the union I^+ of subintervals labelled "+" and the total value for the union I^− of subintervals labelled "−" is the same up to ε, i.e., |v_i(I^+) − v_i(I^−)| ≤ ε. In this paper we will assume n to be a constant, and therefore the inputs to the problem will only be ε and the valuation functions. We will be interested in fairly general valuation functions; intuitively, these will be functions mapping measurable subsets A ⊆ [0, 1] to non-negative real numbers. Formally, let M([0, 1]) denote the set of Lebesgue-measurable subsets of the interval [0, 1] and λ : M([0, 1]) → [0, 1] the Lebesgue measure. We consider valuation functions v_i : M([0, 1]) → R≥0, with the interpretation that agent i has value v_i(A) for the subset A ∈ M([0, 1]) of the resource. Similarly to [4,11,21,22,34], we also require that the valuation functions be Lipschitz-continuous. Following [34], a valuation function v_i is said to be Lipschitz-continuous with Lipschitz parameter L ≥ 0, if for all A, B ∈ M([0, 1]), it holds that |v_i(A) − v_i(B)| ≤ L · λ(A △ B). Here △ denotes the symmetric difference, i.e., A △ B = (A \ B) ∪ (B \ A).
Valuation Classes: We will be particularly interested in the following three valuation classes, in terms of decreasing generality:
- General valuations, in which there is no further restriction on the functions v_i.
- Monotone valuations, in which v_i(A) ≤ v_i(A′) for any two Lebesgue-measurable subsets A and A′ such that A ⊆ A′.
Intuitively, for this type of function, when comparing two sets such that one is a subset of the other, an agent cannot have a smaller value for the set that contains the other.
- Additive valuations, in which v_i is a function from individual intervals in [0, 1] to R≥0, and for a set S of disjoint intervals it holds that v_i(S) = Σ_{I ∈ S} v_i(I). Note that if v_i is an additive valuation function, then it is in fact a measure. We will not prove any results for this type of valuation function in this paper, but we define them for reference and comparison (e.g., see Fig. 1).
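As a small illustration of additivity, the sketch below builds a valuation from a piecewise-constant density; the representation and the helper name `make_piecewise_constant` are purely illustrative, not notation from the paper.

```python
def make_piecewise_constant(breakpoints, heights):
    """Build an additive valuation from a piecewise-constant density.

    The density equals heights[j] on [breakpoints[j], breakpoints[j+1]).
    The value of a set of disjoint intervals is the sum of the integrals
    of the density over them, so the valuation is additive by construction.
    """
    def value(intervals):
        total = 0.0
        for a, b in intervals:
            for j, h in enumerate(heights):
                lo = max(a, breakpoints[j])
                hi = min(b, breakpoints[j + 1])
                if hi > lo:  # overlap of [a, b] with the j-th density piece
                    total += h * (hi - lo)
        return total
    return value

# Density 2 on [0, 0.25) and 2/3 on [0.25, 1): then v([0, 1]) = 1 (normalised).
v = make_piecewise_constant([0.0, 0.25, 1.0], [2.0, 2.0 / 3.0])
```

Note that such a valuation is also L-Lipschitz with L equal to the maximum density, since changing the input set by measure δ changes the value by at most L·δ.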
Normalisation: We will also be interested in valuation functions that satisfy some standard normalisation properties. A valuation function v_i is normalised if the following two properties hold: (i) v_i(A) ∈ [0, 1] for every Lebesgue-measurable subset A of [0, 1], and (ii) v_i([0, 1]) = 1. In other words, we require the agents' values to lie in [0, 1] and their value for the whole interval to be normalised to 1. These are the standard assumptions in the literature of the problem for additive valuations [3], as well as in the related problem of fair cake-cutting [59]. We will only consider normalised valuation functions for our lower bounds and hardness results, whereas for the upper bounds and polynomial-time algorithms we will not impose any normalisation; this only makes both sets of results even stronger.
With regard to the valuation classes defined above, we will often be referring to their normalised versions as well, e.g., normalised general valuations or normalised monotone valuations.

Input models:
Given the fairly general nature of the valuation functions, we need to specify the manner in which they will be accessed by an algorithm for ε-Consensus-Halving. Since we are interested in both the computational complexity and the query complexity of the problem, we will consider the following two standard ways of accessing these functions.
- In the black-box model, the valuation functions v_i can be arbitrary functions, and are accessed via queries (sometimes also referred to as oracle calls). A query to the function v_i inputs a Lebesgue-measurable subset A (intuitively, a set of subintervals) of [0, 1] and outputs v_i(A) ∈ R≥0. This input model is appropriate for studying the query complexity of the problem, where the complexity is measured as the number of queries to the valuation functions v_i. We will also consider the following weaker version of the black-box model, which we will use in our query complexity upper bounds, thus making them stronger: in the weak black-box model, the input to a valuation function v_i is some set I of intervals, obtained by using at most n cuts, where n is the number of agents.
- In the white-box model, the valuation functions v_i are polynomial-time algorithms, mapping sets of intervals to non-negative rational numbers. These polynomial-time algorithms are given explicitly as part of the input, including the Lipschitz parameter L.4 This input model is appropriate for studying the computational complexity of the problem, where the complexity is measured as usual by the running time of the algorithm.
We now provide the formal definitions of the problem in the black-box and the white-box model. Note that the Lipschitz parameter L is part of the input of the problem and thus will appear in the bounds we obtain. Some of the previous works take L to be bounded by a constant, and as a result it does not appear in their bounds.

Definition 1 (ε-Consensus-Halving (black-box model)).
For any constant n ≥ 1, the problem ε-Consensus-Halving with n agents is defined as follows:
- Input: ε > 0, the Lipschitz parameter L, query access to the functions v_i.
- Output: A partition of [0, 1] into two sets of intervals I^+ and I^− such that, for each agent i, it holds that |v_i(I^+) − v_i(I^−)| ≤ ε, using at most n cuts.
Definition 2 (ε-Consensus-Halving (white-box model)).
For any constant n ≥ 1, the problem ε-Consensus-Halving with n agents is defined as follows:
- Input: ε > 0, the Lipschitz parameter L, polynomial-time algorithms computing the functions v_i.
- Output: A partition of [0, 1] into two sets of intervals I^+ and I^− such that, for each agent i, it holds that |v_i(I^+) − v_i(I^−)| ≤ ε, using at most n cuts.
Terminology: When the valuation functions are normalised, we will refer to the problem as ε-Consensus-Halving with n normalised agents. When the valuation functions are monotone, we will refer to the problem as ε-Consensus-Halving with n monotone agents. If both conditions hold, we will use the term ε-Consensus-Halving with n normalised monotone agents.
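While finding a solution is the hard part, verifying a candidate partition against Definition 1 needs only one query per agent (two in the strict black-box interface). A hypothetical checker, assuming each valuation oracle accepts a list of disjoint intervals:

```python
def is_consensus_halving(valuations, plus, minus, num_cuts, eps, n):
    """Check a candidate partition against the definition:
    at most n cuts are used, and every agent values the "+" and "-"
    portions equally up to eps.

    valuations: list of (hypothetical) query oracles, each mapping a
    list of disjoint intervals to a non-negative value.
    """
    if num_cuts > n:
        return False
    return all(abs(v(plus) - v(minus)) <= eps for v in valuations)

# Two illustrative additive agents: uniform density, and density 2t on [0, 1].
uniform = lambda I: sum(b - a for a, b in I)
skewed = lambda I: sum(b * b - a * a for a, b in I)

# The middle cut halves the cake for the uniform agent...
ok = is_consensus_halving([uniform], [(0.0, 0.5)], [(0.5, 1.0)], 1, 0.01, 2)
```

...but the same single cut fails once the skewed agent is added, which is exactly why one cut per agent is needed in general.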

Borsuk-Ulam and Tucker
For our PPA-hardness results and query complexity lower bounds, we will reduce from a well-known problem, the computational version of the Borsuk-Ulam Theorem [18], which states that for any continuous function F from S^n to R^n there is a pair of antipodal points (i.e., x, −x) which are mapped to the same point. There are various equivalent versions of the problem (e.g., see [53]); we will provide a definition that is most appropriate for our purposes. In fact, we will include several "optional" properties of the function F in our definition, which will map to properties of the valuation functions v_i when we construct our reductions in subsequent sections. Specifically, we will impose conditions for normalisation and monotonicity, which will correspond to normalised valuation functions for our lower bounds/hardness results of Section 4, and to normalised monotone valuation functions for our lower bounds/hardness results of Section 5. Let B^n = [−1, 1]^n and let ∂(B^n) denote its boundary. As before, we will require that the functions we consider be Lipschitz-continuous; we say that F : B^{n+1} → R^n is Lipschitz-continuous with Lipschitz parameter L ≥ 0 if ‖F(x) − F(y)‖ ≤ L · ‖x − y‖ for all x, y ∈ B^{n+1}.
Definition 3 (nD-Borsuk-Ulam). For any constant n ≥ 1, the problem nD-Borsuk-Ulam is defined as follows:
- Input: ε > 0, the Lipschitz parameter L, access to a Lipschitz-continuous function F : B^{n+1} → R^n.
- Output: A point x ∈ ∂(B^{n+1}) such that F(x) and F(−x) are equal up to ε in every coordinate.
- Optional Properties: Normalisation: the values of F lie in [0, 1]^n. Monotonicity: F(x) ≤ F(y) whenever x ≤ y, where "≤" denotes coordinate-wise comparison.
In the normalised nD-Borsuk-Ulam problem, where F is normalised, we instead ask for a point satisfying the corresponding normalised condition. We will also use the term normalised monotone nD-Borsuk-Ulam to refer to the problem when both the normalisation and monotonicity properties are satisfied for F.
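For intuition about why a solution exists, consider the one-dimensional case: for F defined on the circle, the difference g(θ) = F(x(θ)) − F(x(θ + π)) satisfies g(0) = −g(π), so a zero of g is always bracketed and bisection finds an approximate antipodal pair. A toy sketch; the function F below is an arbitrary illustrative example, not an instance from our reductions.

```python
import math

def antipodal_pair(F, eps):
    """One-dimensional Borsuk-Ulam via bisection on the circle.

    g(t) = F(x(t)) - F(x(t + pi)) satisfies g(0) = -g(pi), so a zero of g
    is bracketed in [0, pi]; bisection homes in on a point where F takes
    (almost) equal values at a pair of antipodal points.
    """
    x = lambda t: (math.cos(t), math.sin(t))
    g = lambda t: F(*x(t)) - F(*x(t + math.pi))
    lo, hi = 0.0, math.pi
    if g(lo) > 0:            # orient the bracket so that g(lo) <= 0 <= g(hi)
        lo, hi = hi, lo
    while True:
        mid = (lo + hi) / 2
        gm = g(mid)
        if abs(gm) <= eps:
            return mid
        if gm < 0:
            lo = mid
        else:
            hi = mid

# An arbitrary continuous example function on the circle.
F = lambda a, b: a + 0.3 * b * b
theta = antipodal_pair(F, 1e-9)
```

In higher dimensions no such one-dimensional bracketing argument is available, which is one way to see why the problem becomes hard for n ≥ 2.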
In the black-box version of nD-Borsuk-Ulam, we can query the value of the function F at any point x ∈ B^{n+1}. In the white-box version of this problem, we are given a polynomial-time algorithm that computes F. Since the number of inputs of F is fixed, we can assume that we are given an arithmetic circuit with n + 1 inputs and n outputs that computes F. Following the related literature [28], we will consider circuits that use the arithmetic gates +, −, ×, max, min, < and rational constants.5 Another related problem that will be of interest to us is the computational version of Tucker's Lemma [66]. Tucker's lemma is a discrete analogue of the Borsuk-Ulam theorem; its computational counterpart, nD-Tucker, asks to find two adjacent points with complementary labels in a labelling of the grid [N]^n that satisfies the boundary conditions of Tucker's lemma.
In the black-box version of this problem, we can query the labelling function at any point p ∈ [N]^n and retrieve its label. In the white-box version, the labelling function is given in the form of a Boolean circuit with the usual gates ∧, ∨, ¬.
In the white-box model, nD-Tucker was recently proven to be PPA-hard for any n ≥ 2 by Aisenberg et al. [1]; the membership of the problem in PPA was already known from [57].
The computational class PPA was defined by Papadimitriou [57], among several subclasses of the class TFNP [54], the class of problems with a guaranteed solution which is verifiable in polynomial time.PPA is defined with respect to a graph of exponential size, which is given implicitly as input, via the use of a circuit that outputs the neighbours of a given vertex, and the goal is to find a vertex of odd degree, given another such vertex as input.
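The totality of PPA-type problems rests on the handshaking lemma: the degree sum of any graph is even, so odd-degree vertices come in pairs, and given one such vertex another must exist. In actual PPA instances the graph is exponentially large and given only implicitly by a circuit; the brute-force toy below merely illustrates the parity argument on a small explicit graph.

```python
from collections import defaultdict

def another_odd_vertex(edges, start):
    """Given an odd-degree vertex `start`, return a different odd-degree
    vertex.  One always exists: every edge contributes 2 to the degree
    sum, so the number of odd-degree vertices is even -- the parity
    argument behind the class PPA."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    assert deg[start] % 2 == 1, "start must have odd degree"
    return next(w for w, d in deg.items() if d % 2 == 1 and w != start)

# In the path 0-1-2-3, the endpoints 0 and 3 are the odd-degree vertices.
other = another_odd_vertex([(0, 1), (1, 2), (2, 3)], start=0)
```

The computational difficulty in PPA is that, with the graph given implicitly, no efficient analogue of this exhaustive degree count is available.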
Given the close connection between Tucker's Lemma and the Borsuk-Ulam Theorem, the PPA-hardness of some appropriate computational version of Borsuk-Ulam follows as well [1]. However, this does not apply to the version of nD-Borsuk-Ulam defined above, especially when one considers the additional properties of the function F required for normalisation and monotonicity, as discussed earlier. We will reduce from nD-Tucker to our version of nD-Borsuk-Ulam, to obtain its PPA-hardness (even for the normalised monotone version), which will then imply the PPA-hardness of ε-Consensus-Halving, via our main reduction in Section 3.
In the black-box model, Deng et al. [33], building on the results of Chen and Deng [25], proved both query complexity lower bounds and upper bounds for nD-Tucker.

Theorem 2.2 ([33]). For any constant n ≥ 2, the query complexity of nD-Tucker is Θ(N^{n−1}).
We remark that Deng et al. [33] use a version of nD-Tucker that is slightly different from the one that we defined above, but their results apply to this version as well.
Remark 1. Some connections between the aforementioned problems, namely nD-Borsuk-Ulam, nD-Tucker, and ε-Consensus-Halving, are known from the previous literature. First, nD-Borsuk-Ulam and nD-Tucker are known to be computationally equivalent due to Papadimitriou [57] and Aisenberg et al. [1], although, technically speaking, none of the aforementioned papers proved this result formally, or even defined nD-Borsuk-Ulam formally. Yet, even the implicit reduction between those problems is insufficient for our purposes, because, in order to achieve our results for Consensus-Halving, we need a version of nD-Borsuk-Ulam that exhibits several properties, as described in Definition 3 (namely, normalisation and primarily monotonicity). Indeed, proving the PPA-completeness of monotone nD-Borsuk-Ulam is the main technical result of our work.
In terms of the completeness of Consensus-Halving, the works of Filos-Ratsikas and Goldberg [38,39] indeed establish reductions from nD-Tucker, but crucially, these reductions are white-box, i.e., they have access to the Boolean circuit that encodes the labelling function. On the contrary, our reductions are black-box (see Section 2.2 below), which allows us to obtain both computational complexity results and query complexity bounds at the same time. Also, quite importantly, all the previous reductions do not work when there is a constant number of agents.

Efficient black-box reductions
The reductions that we will construct (from nD-Tucker to nD-Borsuk-Ulam to ε-Consensus-Halving) will be black-box reductions, and therefore they will also allow us to obtain query complexity lower bounds for ε-Consensus-Halving in the black-box model, given the corresponding lower bounds of Theorem 2.2. For the upper bounds, we will reduce directly from ε-Consensus-Halving to nD-Tucker, again via a black-box reduction.
Roughly speaking,6 a black-box reduction from Problem A to Problem B is a procedure by which we can answer oracle calls (queries) for an instance of Problem B by using an oracle for some instance of Problem A, such that a solution to the instance of Problem B yields a solution to the instance of Problem A. For example, a black-box reduction from nD-Borsuk-Ulam to ε-Consensus-Halving is a procedure that simulates an instance for the latter problem by accessing the function F of the former problem a number of times, and such that a solution of ε-Consensus-Halving can easily be translated to a solution of nD-Borsuk-Ulam.The name "black-box" comes from the fact that this type of reduction does not need to know the structure of the functions v i of ε-Consensus-Halving or F of nD-Borsuk-Ulam.
In order to prove lower bounds on the query complexity of some Problem B, it suffices to construct a black-box reduction from some Problem A for which query complexity lower bounds are known; the obtained bounds will depend on the number k of oracle calls to the input of Problem A that are needed to answer an oracle call to the input of Problem B. A black-box reduction is efficient if k is a constant, and therefore the query complexity lower bounds of Problems A and B are of the same asymptotic order. To obtain upper bounds on the query complexity, we can construct a reduction in the opposite direction (from Problem B to Problem A), assuming that query complexity upper bounds for Problem A are known.
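To make this query counting concrete, the following sketch shows the skeleton of such a reduction. All names (the oracles, the combining rule, the factor k = 2) are illustrative rather than taken from the paper; the point is only how a black-box reduction transfers a query lower bound from Problem A to Problem B up to a constant factor.

```python
# A minimal sketch of an "efficient black-box reduction": queries to the
# simulated instance of Problem B are answered using a constant number
# (here k = 2) of queries to the oracle of Problem A, and a solution of
# the B-instance maps back to a solution of the A-instance.
from typing import Callable

def make_b_oracle(oracle_a: Callable[[float], float]) -> Callable[[float], float]:
    """Simulate an oracle for Problem B; each B-query costs k = 2 A-queries."""
    def oracle_b(q: float) -> float:
        # an illustrative B-value defined as a fixed combination of two A-values
        return (oracle_a(q) - oracle_a(-q)) / 2.0
    return oracle_b

def recover_solution(b_solution: float) -> float:
    """Translate a solution of the B-instance back to one of the A-instance."""
    return b_solution  # identity here; in general some cheap post-processing

# Usage: any query lower bound for A transfers to B up to the factor k = 2.
oracle_a = lambda x: x ** 3            # stand-in for the hidden input of A
oracle_b = make_b_oracle(oracle_a)     # the simulated input of B
assert oracle_b(1.0) == 1.0            # (1 - (-1)) / 2
```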
Ideally, we would like to use the same reduction to also obtain computational complexity results in the white-box model. For this to be possible, the procedure described above should actually be a polynomial-time algorithm. Slightly abusing terminology, we will use the term "efficient" to describe such a reduction in the white-box model as well.
Definition 5. We say that a black-box reduction from Problem A to Problem B is efficient if:
- in the black-box model, it uses a constant number of queries (oracle calls) to the function (oracle) of Problem A, for each query (oracle call) to the function of Problem B;
- in the white-box model, the condition above holds, and the reduction is also a polynomial-time algorithm.
Concretely for our case, all of our reductions will be efficient black-box reductions, thus allowing us to obtain both PPA-completeness results and query complexity bounds matching those of the problems that we reduce from/to. We remark that the reductions constructed for proving the PPA-hardness of the problem in previous works (for a non-constant number of agents) [38,39,41] are not black-box reductions, and therefore have no implications on the query complexity of the problem.

Black-box reductions to and from consensus-halving
In this section we develop our main machinery for proving both PPA-completeness results and query complexity upper and lower bounds for ε-Consensus-Halving. We summarise our general approach for obtaining positive and negative results below.
For our impossibility results (i.e., computational hardness results in the white-box model and query complexity lower bounds in the black-box model), we will construct an efficient black-box reduction from nD-Borsuk-Ulam to ε-Consensus-Halving with n agents (Proposition 3.1). This reduction will preserve the optional properties of Definition 3, meaning that if the instance of nD-Borsuk-Ulam is normalised (respectively monotone), the valuation functions of the corresponding instance of ε-Consensus-Halving will be normalised (respectively monotone) as well. This will allow us in subsequent sections to reduce the problem of proving impossibility results for ε-Consensus-Halving to proving impossibility results for the versions of nD-Borsuk-Ulam with those properties. We will obtain these latter results via reductions from nD-Tucker, which for n ≥ 2 is known to be PPA-hard (Theorem 2.1) and to admit exponential query complexity lower bounds (Theorem 2.2).

For our positive results (i.e., membership in PPA in the white-box model and query complexity upper bounds in the black-box model), we will construct an efficient black-box reduction from ε-Consensus-Halving to nD-Tucker (Proposition 3.2).
We remark here that a similar reduction already exists in the related literature [42], but only applied to the case of additive valuation functions. The extension to the case of general valuations follows along the same lines, and we provide it here for completeness. We also note that some of our positive results, namely the results for one general agent and two monotone agents, will not be obtained via reductions, but rather directly via the design of polynomial-time algorithms in the white-box model or algorithms of polynomial query complexity in the black-box model. Related to the discussion above, we have the following two propositions. Their proofs are presented in the sections below.

Proposition 3.1. There is an efficient black-box reduction from nD-Borsuk-Ulam to ε-Consensus-Halving with n agents. Moreover, if the nD-Borsuk-Ulam instance is normalised (respectively monotone), then the resulting valuation functions are normalised (respectively monotone) as well.

Proposition 3.2. There is an efficient black-box reduction from ε-Consensus-Halving to nD-Tucker.

Proof of Proposition 3.1
Description of the reduction. Let n ≥ 1 be a fixed integer. Let ε > 0 and let F : B^{n+1} → B^n be a Lipschitz-continuous function with Lipschitz parameter L. We now construct valuation functions v_1, …, v_n for a Consensus-Halving instance.
Let x(A) ∈ B^{n+1} denote the encoding of a measurable set A ⊆ [0, 1] as a point in the domain of F; in particular, x([0, 1]) = (1, 1, …, 1) and x(A^c) = −x(A). For i ∈ [n], the valuation function v_i of the ith agent is defined as v_i(A) = (1 + F_i(x(A)))/2.

Lipschitz-continuity. For any measurable sets A and B, the encoding satisfies ‖x(A) − x(B)‖_∞ ≤ (n + 1) · λ(A Δ B); since F is Lipschitz-continuous with parameter L, each v_i is Lipschitz-continuous with parameter (n + 1)L.
Thus, y is a solution to the original nD-Borsuk-Ulam instance.
White-box model. This reduction yields a polynomial-time many-one reduction from nD-Borsuk-Ulam to ε-Consensus-Halving with n agents. Thus, if we show that nD-Borsuk-Ulam is PPA-hard for some n, then we immediately obtain that ε-Consensus-Halving with n agents is also PPA-hard.
Black-box model. It is easy to see that this is a black-box reduction. It can be formulated as follows: given access to an oracle for an instance of nD-Borsuk-Ulam with parameters (ε, L), we can simulate an oracle for an instance of ε-Consensus-Halving (with n agents) with parameters (ε/2, (n + 1)L), such that any solution of the latter yields a solution to the former.
Furthermore, in order to answer a query to some v_i, we only need to perform a single query to F. Thus, we obtain the following query lower bound: solving an instance of ε-Consensus-Halving (with n agents) with parameters (ε, L) requires at least as many queries as solving an instance of nD-Borsuk-Ulam with parameters (ε′, L′) = (2ε, L/(n + 1)). This means that if nD-Borsuk-Ulam has a query lower bound of Ω((L/ε)^{n−1}) for some n, then ε-Consensus-Halving (with n agents) has a query lower bound of Ω((L/(2ε(n + 1)))^{n−1}).

Additional properties of the reduction. Some properties of the Borsuk-Ulam function F carry over to the valuation functions v_1, …, v_n. In particular, normalisation and monotonicity are of interest to us. For normalisation, it remains to prove that v_i(∅) = 0 and v_i([0, 1]) = 1. It is easy to see that x([0, 1]) = (1, 1, …, 1) and thus F(x([0, 1])) = (1, 1, …, 1) since F is normalised, which yields v_i([0, 1]) = 1. On the other hand, we have x(∅) = −x([0, 1]) and thus F(x(∅)) = −(1, 1, …, 1), which yields v_i(∅) = 0; here we also used the fact that F is an odd function, since it is normalised. In fact, since F is odd, we also obtain that v_i(A^c) = 1 − v_i(A) for every set A. This can be shown by noting that x(A^c) = −x(A) (by using the same argument as for I+ and I− above) and then using the fact that F(−x(A)) = −F(x(A)). This means that if we are able to show (white- or black-box) hardness of nD-Borsuk-Ulam where F has additional properties, then the hardness will also hold for ε-Consensus-Halving with n agents that have the corresponding properties.
Furthermore, note that if F is a normalised nD-Borsuk-Ulam function, then an ε-approximate Consensus-Halving for v_1, …, v_n (i.e., |v_i(I+) − v_i(I−)| ≤ ε) yields an ε-approximate solution to F, in the sense that ‖F(x(I+))‖_∞ ≤ ε. This is due to the fact that, by definition, F is an odd function if it is normalised.
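As a rough illustration of the shape of this reduction, here is a schematic Python sketch. The concrete encoding x(A) used below (splitting [0, 1] into n + 1 equal blocks) and the formula v_i(A) = (1 + F_i(x(A)))/2 are assumptions for illustration and may differ from the paper's exact construction; the sketch only demonstrates the two properties used above: each valuation query costs exactly one query to F, and the v_i inherit normalisation from F.

```python
# Schematic sketch of the reduction from nD-Borsuk-Ulam to Consensus-Halving
# (in the spirit of Proposition 3.1).  Sets A are represented as lists of
# disjoint intervals.  The encoding and the formula below are illustrative.
from typing import Callable, List, Tuple

Interval = Tuple[float, float]

def encode(pieces: List[Interval], n: int) -> List[float]:
    """Map a union of intervals A ⊆ [0,1] to a point x(A) ∈ [-1,1]^{n+1}."""
    x = []
    for j in range(n + 1):
        lo, hi = j / (n + 1), (j + 1) / (n + 1)
        inside = sum(max(0.0, min(hi, b) - max(lo, a)) for a, b in pieces)
        block = hi - lo
        # (length of A in the block) - (length of the complement), rescaled
        x.append((inside - (block - inside)) * (n + 1))
    return x

def make_valuation(F: Callable[[List[float]], List[float]], i: int, n: int):
    """v_i(A) = (1 + F_i(x(A))) / 2 -- one F-query per valuation query."""
    return lambda pieces: (1.0 + F(encode(pieces, n))[i]) / 2.0

# Toy normalised odd F for n = 1: F(x1, x2) = ((x1 + x2) / 2,)
F = lambda x: [(x[0] + x[1]) / 2.0]
v0 = make_valuation(F, 0, 1)
assert abs(v0([(0.0, 1.0)]) - 1.0) < 1e-9   # v_i([0,1]) = 1
assert abs(v0([]) - 0.0) < 1e-9             # v_i(∅) = 0
```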

Proof of Proposition 3.2
The reduction presented in this section is based on a proof of existence for a generalization of Consensus-Halving with general valuations given in [42]. This existence proof was also used in [42] to provide a reduction for additive valuations. It can easily be extended to work for general valuations as well. We include the full reduction here for completeness.
Description of the reduction. Consider an instance of ε-Consensus-Halving with n agents with parameters ε, L. Let v_1, …, v_n denote the valuations of the agents. We consider the domain K_m^n, where K_m = {−1, −(m − 1)/m, …, −1/m, 0, 1/m, 2/m, …, (m − 1)/m, 1}, for m = ⌈2nL/ε⌉. A point in K_m^n corresponds to a way to partition the interval [0, 1] into two sets I+, I− using at most n cuts. A very similar encoding was also used by Meunier [55] for the Necklace Splitting problem. A point x ∈ K_m^n corresponds to the partition I+(x), I−(x) obtained as follows.
Note that subsequent assignments of a label to an interval "overwrite" previous assignments. One way of thinking about it is that we are applying a coat of paint on the interval [0, 1]. Initially the whole interval is painted with colour "+", and as the procedure is executed, various subintervals are painted over with colour "−" or "+". It is easy to check that the final partition into I+(x), I−(x) that is obtained uses at most n cuts. Furthermore, for any x ∈ ∂K_m^n, the partition I+(−x), I−(−x) obtained from −x corresponds to the partition I+(x), I−(x) with labels "+" and "−" switched. In other words, I+(−x) = I−(x) and I−(−x) = I+(x). For a more formal definition of this encoding, see [42].
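The "coat of paint" decoding can be sketched as follows. The specific painting rule used here (consecutive subintervals of lengths |x_j|, painted left to right) is an illustrative assumption; the paper defers the formal encoding to [42].

```python
# Sketch of decoding a point x ∈ K_m^n into a partition of [0,1]: start with
# everything painted "+", then for each coordinate x_j paint an interval of
# length |x_j| with the sign of x_j, later coats overwriting earlier ones.
from typing import List, Tuple

def decode(x: List[float], grid: int = 10_000) -> Tuple[List[int], int]:
    """Return a fine discretisation of the labels on [0,1] and the #cuts."""
    labels = [+1] * grid                      # initially all "+"
    pos = 0.0
    for xj in x:                              # later coordinates overwrite
        sign = +1 if xj >= 0 else -1
        end = min(1.0, pos + abs(xj))
        for k in range(int(pos * grid), int(end * grid)):
            labels[k] = sign                  # paint over
        pos = end
    cuts = sum(1 for k in range(1, grid) if labels[k] != labels[k - 1])
    return labels, cuts

labels, cuts = decode([0.25, -0.5])
assert cuts <= 2                              # at most n = 2 cuts, as claimed
```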
We define a labelling ℓ : K_m^n → {±1, ±2, …, ±n} as follows. For any x ∈ K_m^n:
1. Let i ∈ [n] be the agent that sees the largest difference between v_i(I+(x)) and v_i(I−(x)), where we break ties by picking the smallest such i.
2. Pick a sign s ∈ {+, −} according to which of the two pieces agent i (weakly) prefers, and set ℓ(x) = s·i.
With this definition, it is easy to check that ℓ(−x) = −ℓ(x) for all x ∈ ∂(K_m^n). By re-interpreting K_m^n as a grid [N]^n with N = 2m + 1, we thus obtain an instance ℓ : [N]^n → {±1, ±2, …, ±n} of nD-Tucker. In particular, note that ℓ is antipodally anti-symmetric on the boundary, as required.
Correctness. Any solution to the nD-Tucker instance yields x, y ∈ K_m^n with ‖x − y‖_∞ ≤ 1/m and ℓ(x) = −ℓ(y). Without loss of generality, assume that ℓ(x) = +i for some i ∈ [n]. Since ‖x − y‖_∞ ≤ 1/m, we have λ(I+(x) Δ I+(y)) ≤ n/m, and the same bound also holds for λ(I−(x) Δ I−(y)). Since v_i is Lipschitz-continuous with parameter L, it follows that |v_i(I+(x)) − v_i(I+(y))| ≤ nL/m ≤ ε/2, and similarly for the negative pieces. For the sake of contradiction, let us assume that v_i(I+(x)) > v_i(I−(x)) + ε. Then, it follows that v_i(I+(y)) > v_i(I−(y)). But this contradicts the fact that ℓ(y) = −i. Thus, it must hold that |v_i(I+(x)) − v_i(I−(x))| ≤ ε. This means that I+(x), I−(x) yields a solution to the original ε-Consensus-Halving instance.
Note that the reduction uses a grid of side length N = 2m + 1 ≤ 4nL/ε + 3 for the nD-Tucker instance. Furthermore, any query to the labelling function can be answered by performing 2n queries to the valuation functions v_1, …, v_n.

General valuations
We are now ready to prove our main results for the ε-Consensus-Halving problem, starting from the case of general valuations. First, for a single agent with a general valuation function, a simple binary search procedure is sufficient to solve ε-Consensus-Halving with a polynomial number of queries and in polynomial time, therefore obtaining an efficient algorithm both in the white-box and in the black-box model. We have the following theorem.

Theorem 4.1. For one agent with a general valuation function (or multiple agents with identical general valuations), ε-Consensus-Halving is solvable in polynomial time and has query complexity Θ(log(L/ε)).
Proof. We will prove the theorem for the case of n = 1, as a solution to ε-Consensus-Halving for this case is also straightforwardly a solution to the problem with multiple agents with identical valuations. Our algorithm essentially simulates binary search. We say that the label of a cut x ∈ [0, 1] is "+" if v([0, x]) > v([x, 1]) + ε, and "−" if v([x, 1]) > v([0, x]) + ε. In any other case, the label of the cut is 0; if a cut has label 0, then it is a solution to ε-Consensus-Halving. Observe that in order for an interval [a, b] ⊆ [0, 1] to contain a solution, it suffices that the label of a be "−" and the label of b be "+" (or vice-versa); then there is definitely a point x ∈ [a, b] where the label is 0 (by continuity of v).
Now let a = 0 and b = 1. If a or b has label 0, then we have immediately found a solution. Otherwise, note that if a has label "−", then b must have label "+", and vice-versa. For convenience, in what follows, we assume that a has label "−" and b has label "+". Our algorithm proceeds as follows in every iteration. Given an interval [a, b] with label "−" for a and label "+" for b, it computes the label of (a + b)/2. This can be done via two eval queries. Then, if the label of (a + b)/2 is "+", it sets b = (a + b)/2; if the label is "−", it sets a = (a + b)/2; and if the label is 0 it outputs this cut. We claim that the algorithm will always find a cut with label 0 after at most log(L/ε) iterations. For the sake of contradiction, assume that there is no such cut after log(L/ε) iterations. Observe that the length of [a, b] in this case will be ε/L. In addition, we know the labels of a and b. Cut a has label "−", thus v([a, 1]) > v([0, a]) + ε, and cut b has label "+", i.e., v([0, b]) > v([b, 1]) + ε. However, since b − a ≤ ε/L and v is Lipschitz-continuous with parameter L, we obtain v([0, b]) ≤ v([0, a]) + ε < v([a, 1]) ≤ v([b, 1]) + ε, which contradicts the assumption that cut b has label "+".
Since a polynomial-time algorithm that makes O(log(L/ε)) queries to the polynomial-time representation of the input is itself a polynomial-time algorithm, we immediately obtain the polynomial-time upper bound for the white-box model.
For the black-box model, the algorithm immediately gives us the upper bound, whereas the lower bound follows from our general reduction from nD-Borsuk-Ulam (Proposition 3.1), and the query lower bounds for the latter problem obtained through Lemma 4.2 below. In more detail, 1D-Borsuk-Ulam (and thus ε-Consensus-Halving with a single agent) inherits its query complexity lower bounds from 1D-Tucker, which can easily be seen to require at least Ω(log N) queries in the worst case. The latter bound naturally translates to an Ω(log(L/ε)) bound for ε-Consensus-Halving. We also remark that the upper bound holds for any version of the problem with general valuations, even in the weak black-box model, whereas the lower bound holds even for normalised general valuations and for the standard black-box model.
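The binary search of Theorem 4.1 can be sketched as follows. The interface v(a, b), returning the agent's value for the interval [a, b], is an illustrative simplification of an eval query, and the uniform and quadratic valuations below are assumed toy examples.

```python
# Sketch of the single-agent binary search: the label of a cut x is
# "+" / "-" / 0 depending on which side exceeds the other by more than eps,
# and the search maintains an interval whose endpoints have opposite labels.
from typing import Callable

def label(v: Callable[[float, float], float], x: float, eps: float) -> int:
    left, right = v(0.0, x), v(x, 1.0)        # two eval queries
    if left > right + eps:
        return +1
    if right > left + eps:
        return -1
    return 0                                   # x is a solution

def one_agent_ch(v, eps: float, L: float) -> float:
    a, b = 0.0, 1.0
    # after O(log(L/eps)) halvings, b - a <= eps / L, forcing a 0-label
    while True:
        m = (a + b) / 2.0
        lm = label(v, m, eps)
        if lm == 0:
            return m
        if label(v, a, eps) == lm:
            a = m
        else:
            b = m

# Example: an additive agent with v([a,b]) = b - a (Lipschitz with L = 1)
v = lambda a, b: b - a
x = one_agent_ch(v, eps=0.01, L=1.0)
assert abs(v(0.0, x) - v(x, 1.0)) <= 0.01
```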
We now move to our results for two or more agents with general valuations. Here we obtain a PPA-completeness result for ε-Consensus-Halving, as well as exponential bounds on the query complexity of the problem. Our results demonstrate that for general valuations, even in the case of two agents, the problem is intractable in both the black-box and the white-box model. The main technical result of the section is the following pivotal lemma, proved at the end of this section.

Lemma 4.2. For any constant n ≥ 1, nD-Tucker reduces to normalised nD-Borsuk-Ulam, via an efficient black-box reduction.
Now we state our main theorem about the computational/query complexity of the ε-Consensus-Halving problem, as well as a corresponding theorem for nD-Borsuk-Ulam. The proofs follow from Theorems 2.1 and 2.2 characterising the complexity of nD-Tucker, and the following chain of reductions (where "≤" denotes an efficient black-box reduction from the problem on the left-hand side to the problem on the right-hand side): nD-Tucker ≤ normalised nD-Borsuk-Ulam ≤ ε-Consensus-Halving ≤ nD-Tucker.
In both cases, the lower bounds hold even for the normalised versions of the problems, while the upper bounds hold even for the more general, non-normalised, versions.
Proof of Lemma 4.2. Let δ = min{2ε, 1}. Note that δ ∈ (0, 1] and ε < δ ≤ 2ε. Without loss of generality, we can assume that for p = (N, N, …, N) it holds that ℓ(p) = +1. Indeed, it is easy to see that we can rename the labels to achieve this, without introducing any new solutions. The remainder of the proof will proceed in three steps: in Step 1, we interpolate the nD-Tucker instance to obtain a continuous function on [−1/2, 1/2]^n; in Step 2, we extend this function to the whole domain [−1, 1]^n; and in Step 3 we further extend it to [−1, 1]^{n+1}, to obtain an instance of the normalised nD-Borsuk-Ulam problem.
Step 1: Interpolating the nD-Tucker instance. The first step is to embed the grid of the nD-Tucker instance into [−1/2, 1/2]^n, define the value of the function at every grid point according to the labelling function, and then interpolate to obtain a continuous function. Note that antipodal grid points exactly correspond to antipodal points in [−1/2, 1/2]^n. In other words, p and q are antipodal on the grid if and only if their embeddings satisfy p̂ = −q̂.
Next we define the value of the function f at the embedded grid points, for all p ∈ [N]^n, according to the label ℓ(p): for i ∈ [n], e_{+i} denotes the ith unit vector in R^n and e_{−i} := −e_{+i}, and the value at the embedding of p is δ·e_{ℓ(p)}. We then use Kuhn's triangulation on the embedded grid to interpolate between these values and obtain a continuous function f : [−1/2, 1/2]^n → R^n. We obtain:
• f is antipodally anti-symmetric on the boundary of [−1/2, 1/2]^n;
• any point x with ‖f(x)‖_∞ ≤ ε yields a solution to the nD-Tucker instance.
It is not hard to see that g is also antipodally anti-symmetric and Lipschitz-continuous with Lipschitz parameter 2n^2(N − 1)δ, that g(1/2, 1/2, …, 1/2) is determined by the corner label +1, and that any x with ‖g(x)‖_∞ ≤ ε again yields a solution to the nD-Tucker instance.

Step 2: Extending to [−1, 1]^n. The goal of the next step is to define a function h : [−1, 1]^n → [−1, 1]^n that extends g and ensures that h(1, 1, …, 1) = (1, 1, …, 1), while maintaining its other properties. The extension interpolates between g, applied to the rescaled point T(x), and the corner value 1, where 1 ∈ R^n denotes the all-ones vector, i.e., 1 = (1, 1, …, 1). Clearly, it holds that h(1, 1, …, 1) = 1. It is also easy to see that h(−x) = −h(x) for all x ∈ ∂([−1, 1]^n), in particular because T(−x) = −T(x). Furthermore, if ‖h(x)‖_∞ ≤ ε, then it must be that ‖g(T(x))‖_∞ ≤ ε, which yields a solution to the nD-Tucker instance; indeed, if x_i ≥ 1/2 for all i, then the interpolation towards the corner value 1 ensures that ‖h(x)‖_∞ > ε, so this case cannot occur. As a result, h is Lipschitz-continuous on [−1, 1]^n with Lipschitz parameter max{2, 2n^2(N − 1)δ}. Indeed, consider any x, y ∈ [−1, 1]^n. If x_i ≥ 1/2 and y_i ≤ −1/2 for all i, then ‖x − y‖_∞ ≥ 1, and thus ‖h(x) − h(y)‖_∞ ≤ 2 ≤ 2‖x − y‖_∞. By symmetry, the only remaining case that we need to check is when x_i ≥ 1/2 for all i, and y is such that there exists i with y_i < 1/2 and there exists i with y_i > −1/2. In that case, we consider the segment [x, y] from x to y, and let z ∈ [x, y] be the point that is the furthest away from x but such that z_i ≥ 1/2 for all i. Note that there must exist i such that z_i = 1/2. This means h(z) = g(T(z)), and thus ‖h(z) − h(y)‖_∞ ≤ 2n^2(N − 1)δ · ‖z − y‖_∞. On the other hand, we have ‖h(x) − h(z)‖_∞ ≤ 2‖x − z‖_∞. Putting these two expressions together, we obtain that ‖h(x) − h(y)‖_∞ ≤ max{2, 2n^2(N − 1)δ} · (‖x − z‖_∞ + ‖z − y‖_∞). Here we used the fact that z ∈ [x, y],
which means that there exists t ∈ [0, 1] such that z = x + t(y − x), and thus ‖x − z‖_∞ + ‖z − y‖_∞ = ‖x − y‖_∞. Since F is an odd function, we can assume that x_{n+1} ≥ 0 (otherwise just use −x instead of x). If x_{n+1} = 1, then F(x′, x_{n+1}) = h(x′), and thus ‖h(x′)‖_∞ ≤ ε, which yields a solution to the nD-Tucker instance. If x_{n+1} < 1, a solution is obtained in this case too.
Finally, let us determine the Lipschitz parameter of F. Let x, y ∈ [−1, 1]^{n+1}. Bounding the contribution of the first n coordinates and of the last coordinate separately, and putting the two resulting expressions together, yields a Lipschitz bound for F. In the black-box model, in order to answer one query to F, we have to answer two queries to h, i.e., two queries to g.
In order to answer a query to g, we have to answer one query to f, i.e., n + 1 queries to the labelling function ℓ (in order to interpolate). Thus, one query to F requires 2(n + 1) queries to ℓ. Since n is a constant, the query lower bounds from nD-Tucker carry over to normalised nD-Borsuk-Ulam.
In the white-box model, the reduction actually gives us a way to construct an arithmetic circuit that computes F, if we are given a Boolean circuit that computes ℓ. Indeed, using standard techniques [26,29], the execution of the Boolean circuit on some input can be simulated by the arithmetic circuit. Furthermore, the input bits for the Boolean circuit can be obtained by using the < gate. All the other operations that we used to construct F can be computed by the arithmetic gates +, −, ×, max, min, < and rational constants. Thus, we obtain a polynomial-time many-one reduction from nD-Tucker to normalised nD-Borsuk-Ulam for all n ≥ 1.
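The standard simulation of Boolean gates by arithmetic gates mentioned above can be sketched as follows; the particular gate encodings (NOT as 1 − x, AND as multiplication, OR as max, bit extraction via the < gate) are the usual textbook choices and are shown here only for illustration.

```python
# Sketch: simulating Boolean gates with the arithmetic gates
# {+, -, *, max, min, <} and rational constants, with bits encoded as 0/1.
def lt(a, b):          # the "<" comparison gate, returning a 0/1 value
    return 1 if a < b else 0

def NOT(x):            # 1 - x
    return 1 - x

def AND(x, y):         # multiplication on 0/1 inputs
    return x * y

def OR(x, y):          # max on 0/1 inputs
    return max(x, y)

# Extracting an input bit for the Boolean circuit from a real value t,
# here as the threshold test [t >= 1/2], expressed with the "<" gate.
def bit(t):
    return NOT(lt(t, 0.5))

assert AND(1, OR(0, 1)) == 1 and NOT(1) == 0
assert bit(0.7) == 1 and bit(0.2) == 0
```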

Monotone valuations
In this section, we present our results for agents with monotone valuations. In contrast to the results of Section 4, here we prove that for two agents with monotone valuations, the problem is solvable in polynomial time and with a polynomial number of queries, and in fact this result holds even if only one of the two agents has a monotone valuation and the other has a general valuation. For three or more agents, however, the problem becomes PPA-complete once again, and we obtain a corresponding exponential lower bound on its query complexity.

An efficient algorithm for two monotone agents
We start with our efficient algorithm for the case of two agents, which is a polynomial-time algorithm in the white-box model, as well as an algorithm of polynomial query complexity in the black-box model; see Algorithm 1. The algorithm is based on a nested binary search procedure. At the higher level, we perform a binary search on the position of the left cut of a solution. At the lower level, for any fixed position of the left cut, we perform another binary search in order to find a right cut such that the pair of cuts forms a solution for the first agent; as we have already seen, this can be done efficiently if the agent has a monotone valuation. Intuitively, we are moving on the "indifference curve" of the valuation function of the agent with the monotone valuation (see the red zig-zag line in Fig. 3) until we reach a solution. We decide how to move on this curve by checking the preferences of the second agent.
Before we proceed, we draw an interesting connection with Austin's moving-knife procedure [7], an Exact-Division procedure for two agents with general valuations. The procedure is based on two moving knives which one of the two agents simultaneously and continuously slides across the cake, maintaining that the corresponding cut positions ensure that she is satisfied with the partition. At some point during this process, the other agent becomes satisfied as well, which is guaranteed by the intermediate value theorem. Our algorithm can be interpreted as a discrete-time implementation of this idea and, quite interestingly, it results in a polynomial-time algorithm when one of the two agents has a monotone valuation, whereas the problem is computationally hard when both agents have general valuations, as shown in Section 4. On a more fundamental level, this demonstrates the intricacies of transforming moving-knife fair division protocols into discrete algorithms.
The main theorem of this section is the following.

Theorem 5.1. For two agents with monotone valuation functions, ε-Consensus-Halving is solvable in polynomial time and has query complexity O(log²(L/ε)), even in the weak black-box model. This result holds even if one of the two agents has a general valuation.
Before we proceed with the description of our algorithm and its analysis, let us begin with some conventions that will make the presentation easier. Since we have to make two cuts, we denote by x the position of the leftmost cut and by y the position of the rightmost cut, so 0 ≤ x ≤ y ≤ 1. In addition, the labels of the corresponding intervals are as follows: intervals [0, x] and [y, 1] have label "+", forming the positive piece, and interval [x, y] has label "−", forming the negative piece. Given a pair of cuts (x, y), we say that agent i weakly prefers the positive (respectively negative) piece if the agent's value for that piece is at least her value for the other piece, and that she is indifferent if the two values differ by at most ε. Let p be a cut position such that Agent 1 is indifferent under the pair of cuts (0, p); note that we can efficiently compute p using Theorem 4.1. The final assumption we need to make is regarding the preferences of Agent 2 for the two special pairs of cuts (0, p) and (p, 1). Observe that both pairs of cuts create the same pieces over [0, 1] and only change the labels of the pieces. Hence, if Agent 2 weakly prefers the positive piece under the pair of cuts (0, p), he has to weakly prefer the negative piece under the pair of cuts (p, 1). In the description of the algorithm we will assume that this is indeed the case, i.e., he weakly prefers the positive piece under (0, p) and the negative piece under (p, 1). (The other case can be handled analogously.) Using the above notation and assumptions we can now state Algorithm 1.
ALGORITHM 1: ε-Consensus-Halving for two agents with monotone valuations.

Proof of Theorem 5.1. To prove the correctness of Algorithm 1 we will prove that the following invariants are maintained through all iterations of the algorithm.
1. Agent 1 is indifferent under the pair of cuts (x_ℓ, y_ℓ), and also under the pair of cuts (x_r, y_r).
2. Agent 2 weakly prefers the positive piece under the pair of cuts (x_ℓ, y_ℓ), and weakly prefers the negative piece under the pair of cuts (x_r, y_r).
Assuming that Invariants 1 and 2 hold, it follows that the algorithm outputs a correct solution. Indeed, Agent 1 is indifferent under the pair of cuts (x_ℓ, y_ℓ) by Invariant 1. By Invariant 2, Agent 2 weakly prefers the positive piece under the pair of cuts (x_ℓ, y_ℓ), i.e., v_2([0, x_ℓ] ∪ [y_ℓ, 1]) ≥ v_2([x_ℓ, y_ℓ]). By Invariant 2, Agent 2 weakly prefers the negative piece under the pair of cuts (x_r, y_r), i.e., v_2([0, x_r] ∪ [y_r, 1]) ≤ v_2([x_r, y_r]). Since upon termination the two pairs of cuts are close to each other, the Lipschitz-continuity of v_2 implies that the output pair is an ε-approximate solution for Agent 2 as well. Next, we prove that Invariants 1 and 2 hold. First of all, note that they hold at the start of the algorithm by the choice of p and from the fact that Agent 2 weakly prefers the positive piece under (0, p) and the negative piece under (p, 1). Both invariants are then automatically maintained by construction of the algorithm. We just have to argue that Steps 4 and 12 are possible, i.e., that such y (respectively x) exists. Consider Step 4 first. We apply the intermediate value theorem as follows: for the pair of cuts ((x_ℓ + x_r)/2, y), Agent 1 weakly prefers the positive piece when y = y_ℓ, and weakly prefers the negative piece when y = y_r. Thus, by continuity of the valuation, there exists y ∈ [y_ℓ, y_r] such that Agent 1 is indifferent between the two pieces. For the first point, note that Agent 1 is indifferent for the pair of cuts (x_ℓ, y_ℓ) (by Invariant 1 in the previous iteration), and thus weakly prefers the positive piece for the pair of cuts ((x_ℓ + x_r)/2, y_ℓ) by monotonicity, since the positive piece has increased. For the second point, note that Agent 1 is indifferent for the pair of cuts (x_r, y_r) (again by Invariant 1 in the previous iteration), and thus weakly prefers the negative piece for the pair of cuts ((x_ℓ + x_r)/2, y_r) by monotonicity, since the negative piece has increased. The same argument also applies to Step 12. Finally, observe that we have not made use of the monotonicity of Agent 2's valuation anywhere in our proof. Thus, the algorithm also works if Agent 2 has a general valuation.
In order to bound the running time of Algorithm 1, we need to bound the number of iterations the algorithm needs in each while loop, and the time we need to perform Step 4 and Step 12. Firstly, observe that Steps 4 and 12 can be tackled using the algorithm from Theorem 4.1. This is because we can view the problem as a special case where we have one agent and we need to place a single cut in a specific subinterval where we know that a solution exists. Thus, each of these steps requires O(log(L/ε)) time. In addition, observe that in every while loop we essentially perform a binary search. Thus, after O(log(L/ε)) iterations we get that x_r − x_ℓ ≤ ε/(8L), since in every iteration the distance between x_r and x_ℓ decreases by a factor of 2. Similarly, after O(log(L/ε)) iterations we get that |y_r − y_ℓ| ≤ ε/(8L) as well. Hence, every while loop requires O(log²(L/ε)) time. So, Algorithm 1 terminates in O(log²(L/ε)) time. Finally, observe that our algorithm just requires the preferences of Agent 2 for specific pairs of cuts, which can be obtained via two evaluations of v_2.
Next we argue that we can implement Algorithm 1 using O(log²(L/ε)) queries in the black-box model. Observe that Steps 4 and 12 can be simulated with O(log(L/ε)) queries each, as we have already explained in Theorem 4.1. In addition, observe that every time we ask Agent 2 for his preferences we only need two evaluation queries. Since we need O(log(L/ε)) iterations in every while loop, Algorithm 1 needs O(log²(L/ε)) queries in total.
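A simplified sketch of the nested binary search behind Algorithm 1 is given below. The difference-oracle interface diff_i(x, y) = v_i([0, x] ∪ [y, 1]) − v_i([x, y]), the fixed iteration count, and the concrete additive example are illustrative assumptions; the paper's Algorithm 1 maintains the invariants exactly as stated above, which this sketch only mirrors.

```python
# Outer loop: binary search on the left cut x.  Inner loop: binary search
# for the y that keeps Agent 1 (near-)indifferent, which is valid because
# diff_1 is monotone (non-increasing in y, non-decreasing in x).
def indifferent_y(diff1, x, ylo, yhi, eps):
    """Find y with |diff1(x, y)| <= eps; assumes diff1 decreases in y."""
    while True:
        y = (ylo + yhi) / 2.0
        d = diff1(x, y)
        if abs(d) <= eps:
            return y
        ylo, yhi = (y, yhi) if d > 0 else (ylo, y)

def two_agents_ch(diff1, diff2, eps, iters=60):
    p = indifferent_y(diff1, 0.0, 0.0, 1.0, eps / 4)  # single-cut start
    xl, yl = 0.0, p        # Agent 2 weakly prefers the positive piece here
    xr, yr = p, 1.0        # ... and the negative piece here
    for _ in range(iters):
        xm = (xl + xr) / 2.0
        ym = indifferent_y(diff1, xm, yl, yr, eps / 4)  # keep Agent 1 happy
        if diff2(xm, ym) >= 0:
            xl, yl = xm, ym
        else:
            xr, yr = xm, ym
    return xl, yl

# Additive example: v_1 uniform, v_2 with density 2t on [0,1]
diff1 = lambda x, y: (x + 1 - y) - (y - x)            # = 1 + 2x - 2y
diff2 = lambda x, y: (x*x + 1 - y*y) - (y*y - x*x)    # = 1 + 2x^2 - 2y^2
x, y = two_agents_ch(diff1, diff2, eps=0.05)
assert abs(diff1(x, y)) <= 0.05 and abs(diff2(x, y)) <= 0.05
```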

Results for three or more monotone agents
We now move on to the case of three or more monotone agents, for which we show that the problem becomes computationally hard and has exponential query complexity. Our results thus show a clear dichotomy in the complexity of ε-Consensus-Halving with monotone agents, between the case of two agents and the case of three or more agents.
Again we employ our general approach, but this time we need to prove computational and query-complexity hardness of the monotone nD-Borsuk-Ulam problem; the corresponding impossibility results for ε-Consensus-Halving with agents with monotone valuations then follow from our property-preserving reduction (Proposition 3.1). To this end, we in fact construct an efficient black-box reduction from (n − 1)D-Tucker to monotone nD-Borsuk-Ulam, i.e., we reduce from the corresponding version of nD-Tucker of one lower dimension. In order to achieve this, we once again interpolate the (n − 1)D-Tucker instance to obtain a continuous function, but, this time, we embed it in a very specific lower-dimensional subset of the nD-Borsuk-Ulam domain. We then show that the function can be extended to a monotone function on the whole domain.
The "drop in dimension" which is featured in our reduction has the following effects:
- Since 1D-Tucker is solvable in polynomial time, we can only obtain the PPA-hardness of monotone nD-Borsuk-Ulam for n ≥ 3, and therefore the PPA-hardness of ε-Consensus-Halving for three or more monotone agents.
- The query complexity lower bounds that we "inherit" from (n − 1)D-Tucker do not exactly match our upper bounds, obtained via the reduction from ε-Consensus-Halving to nD-Tucker (Proposition 3.2).
The main technical contribution of this section is the following lemma, proved at the end of this section. Similarly to Section 4, we then obtain the following two theorems.
The proofs of the theorems follow from Theorems 2.1 and 2.2, together with the chain of efficient black-box reductions established in this section.
Step 3: Extending to a function [−1, 1]^{n+1} → R^{n−1}. In the next step, we extend H to a function F on [−1, 1]^{n+1}. Let Π denote the orthogonal projection onto D. Here ⟨·, ·⟩ denotes the scalar product in R^{n+1}, and 1_{n+1} ∈ R^{n+1} is the all-ones vector. F is defined in terms of S, Π and H, with C = 1 + 2(n + 1)^4 Nε. It is easy to check that F is an odd function by using the facts that S(−x) = −S(x), Π(−x) = −Π(x) and H(−x) = −H(x).
It is easy to see that F is continuous. Let us determine an upper bound on its Lipschitz parameter. For any x, y ∈ [−1, 1]^{n+1}, the bound follows from the Lipschitz-continuity of the maps used to define F. Let us now show that F is monotone. For this, it is enough to show that F(x) ≤ F(y) for all x, y ∈ [−1, 1]^{n+1} with x ≤ y and S(x) ≥ 0, S(y) ≥ 0. Indeed, since F is odd, this implies that the statement also holds if S(x) ≤ 0 and S(y) ≤ 0. Finally, if S(x) ≤ 0 and S(y) ≥ 0, then there exists z with x ≤ z ≤ y and S(z) = 0, which implies that F(x) ≤ F(z) ≤ F(y).
In order to show that Π(x) is a solution to H, it remains to prove that there exists i ∈ [n] with the required property. If j < n + 1, then let i := j. Otherwise, if j = n + 1, then, since S(Π(x)) = 0, there necessarily exists i ∈ [n] such that the corresponding coordinate condition holds. Thus, from any x ∈ ∂([−1, 1]^{n+1}) with ‖F(x)‖_∞ ≤ ε, we can obtain a solution to the original (n − 1)D-Tucker instance.
Note that the reduction is polynomial-time and we have only used operations allowed by the gates of the arithmetic circuit.
In particular, we have only used division by constants, which can be performed by multiplying by the inverse of that constant.
The reduction is black-box and any query to F can be answered by at most 2n queries to the labelling function of the original (n − 1)D-Tucker instance.Note that this is a constant, since n is constant.

Relations to the Robertson-Webb query model
The black-box query model that we used in the previous sections is the standard model used in the related literature on query complexity, where the nature of the input functions depends on the specific problems at hand. For example, for nD-Borsuk-Ulam the function F takes points of the domain as input and returns other points, whereas in nD-Tucker the function takes points as input and outputs their labels. At the same time, in the literature on the cake-cutting problem, the predominant query model is in fact a more expressive one, known as the Robertson-Webb (RW) model [61,67]. The RW model has been defined only for the case of additive valuations, and consists of the following two types of queries: eval queries, where the agent is given an interval [a, b] and she returns her value for that interval, and cut queries, where the agent is given a point x ∈ [0, 1] and a real number α, and she designates the smallest interval [x, y] for which her value is exactly α.
In fact, in the literature on envy-free cake-cutting, the query complexity in the RW model has been one of the most important open problems [20,59], with breakthrough results coming from the computer science literature fairly recently [8,9]. Since ε-Consensus-Halving and ε-fair cake-cutting [21] are conceptually closely related, it would make sense to consider the query complexity of the former problem in the RW model as well. A potential hurdle in this investigation is that the RW model has not been defined for valuation functions beyond the additive case. To this end, we propose the following generalisation of the RW model, which we call the Generalised Robertson-Webb (GRW) model, and which is appropriate for monotone valuation functions that are not necessarily additive. Intuitively, in the GRW model the agent is essentially given sets of intervals A rather than single intervals, and the queries are defined accordingly (see also Fig. 4).
Definition 6 (Generalised Robertson-Webb (GRW) query model). In the GRW query model, there are two types of queries: eval queries, where agent i is given any Lebesgue-measurable subset A of [0, 1] and she returns her value v_i(A) for that set, and cut queries, where agent i is given two disjoint Lebesgue-measurable subsets A_1 and A_2 of [0, 1], an interval I = [a, b] disjoint from A_1 and A_2, and a real number γ ≥ 0, and she designates some point x ∈ I such that her values for A_1 ∪ [a, x] and A_2 ∪ [x, b] are in proportion γ, i.e., v_i(A_1 ∪ [a, x]) = γ · v_i(A_2 ∪ [x, b]), if such a point exists.

Let us discuss why this model is the most appropriate generalisation of the RW model. First, the definition of eval queries is in fact the natural extension, as the agent needs to specify her value for sets of intervals; note that in the additive case, it suffices to elicit an agent's value for only single intervals, as her value for unions of intervals is then simply the sum of the elicited values. This is not the case in general for monotone valuations, and therefore we need a more expressive eval query. We also remark that the eval query is exactly the same as a query in the black-box model, as defined in Section 2, and therefore the GRW model is stronger than the black-box query model. Brânzei and Nisan [21] in fact studied the restriction of the RW model (for the cake-cutting problem and for additive valuations) in which only eval queries are allowed, and they coined this the RW− query model. To put our results into context, we offer the following definition of the GRW− query model, which is, as discussed, equivalent to the black-box query model of Section 2. By the discussion above, all of our query complexity bounds in Section 4 and Section 5 apply verbatim to the GRW− query model.

Definition 7 (Generalised Robertson-Webb− (GRW−) query model). In the GRW− query model, only eval queries are allowed; there, agent i is given a Lebesgue-measurable subset A of [0, 1] and she returns her value v_i(A) for that set.
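To see why the richer eval query is needed beyond the additive case, consider a toy monotone, non-additive valuation such as v(A) = √λ(A) (our own illustrative choice, not an instance from the paper): the value of a union of intervals is not the sum of the single-interval values, so single-interval eval queries no longer determine the agent's preferences.

```python
# A toy illustration of a GRW eval query for a monotone, non-additive
# valuation v(A) = sqrt(total length of A). For such a valuation, the value
# of a union of intervals differs from the sum of the individual values.
import math

def measure(intervals):
    """Total length of a disjoint union of intervals [(a, b), ...]."""
    return sum(b - a for a, b in intervals)

def grw_eval(intervals):
    """GRW eval query for the illustrative valuation v(A) = sqrt(lambda(A))."""
    return math.sqrt(measure(intervals))

A = [(0.0, 0.25), (0.5, 0.75)]
whole = grw_eval(A)                    # value of the union: sqrt(0.5)
parts = sum(grw_eval([I]) for I in A)  # sum of single-interval values: 1.0
# whole != parts: single-interval eval queries do not pin down v.
```

This valuation is also normalised (v([0, 1]) = 1) and monotone, so it falls squarely within the class for which the GRW model is designed.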
While the extension of eval queries from the RW model to the GRW model is relatively straightforward, the generalisation of cut queries is somewhat more intricate. Upon closer inspection of a cut query in the (standard) RW model for additive valuations, it is clear that one can equivalently define this query as follows: the agent is given an interval [a, b] and a real number γ ≥ 0, and she designates the point x ∈ [a, b] for which v_i([a, x]) = γ · v_i([x, b]). This is because one can easily find the value of the agent for [a, b] with one eval query, and then for any value of α used in the standard definition of a cut query, there is an appropriate value of γ in the modified definition above which results in exactly the same position x of the cut, and vice versa. The simplicity of the cut queries in the RW model is enabled by the fact that for additive valuations, the value of any agent i for an interval I does not depend on how the remainder of the interval [0, 1] has been cut. This is no longer the case for monotone valuations, as now the agent needs to specify a different value for sets of intervals. We believe that our definition of the cut query in the GRW model is the appropriate generalisation, which captures the essence of the original cut queries in RW, but also allows for enough expressiveness to make this type of query useful for monotone valuations beyond the additive case. Finally, we remark that for general valuations (beyond monotone), any sensible definition of cut queries seems to be too strong, in the sense that it conveys unrealistically much information (in contrast to the RW and GRW models, where the cut queries are intuitively "shortcuts" for binary search). For example, assume that the agent is asked to place a cut at some point x in an interval [a, b], for which (a) if the cut is placed at a, the agent "sees" an excess of "+", and (b) if the cut is placed at b, the agent still "sees" an excess of "+". By the boundary conditions of the interval, there is no guarantee that a cut that "satisfies" the agent exists within that interval, and we would need to exhaustively search through the whole interval to find such a cut position, if it exists, meaning that binary search does not help us here. On the other hand, a single cut query would either find the position or return that there is no such x within the interval.
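The failure of binary search for general (non-monotone) valuations can be illustrated with a toy "excess" function of the cut position (our own example, not from the paper): both endpoints of the interval exhibit a "+" excess, yet a satisfying cut hides strictly inside, so bisection gets no sign information and only an exhaustive sweep can locate it.

```python
# Toy illustration: a non-monotone excess function where both boundary cuts
# show a "+" excess, yet a satisfying cut exists strictly inside the interval.
def excess(t):
    """Non-monotone excess of the '+' piece as a function of the cut position t."""
    return 0.2 if not (0.45 < t < 0.55) else (abs(t - 0.5) - 0.05)

a, b = 0.0, 1.0
assert excess(a) > 0 and excess(b) > 0  # both boundary cuts show a '+' excess
# Yet excess(0.5) = -0.05 < 0, so a near-balanced cut exists inside (a, b);
# bisection cannot bracket it, and only a fine sweep finds it.
found = min((k / 1000 for k in range(1001)), key=lambda t: abs(excess(t)))
```

A single cut query in such a model would hand over the position `found` (or report that none exists) in one step, which is exactly the "unrealistically much information" concern raised above.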
We are now ready to state our results for this section, which we summarise in the following theorem. Qualitatively, we prove that ε-Consensus-Halving with three normalised monotone agents still has exponential query complexity in the GRW model (with logarithmic savings compared to the black-box model), whereas for two normalised monotone agents, the problem becomes "easier" by a logarithmic factor.

Theorem 6.1. In the Generalised Robertson-Webb model: (a) ε-Consensus-Halving with n ≥ 3 normalised monotone agents requires Ω((L/ε)^{n−2} / log(L/ε)) queries; (b) ε-Consensus-Halving with n = 2 monotone agents can be solved with O(log(L/ε)) queries.
Proof. The upper bound for n = 2 can be obtained relatively easily, by observing that in the proof of Theorem 5.1, Step 4 and Step 12 of Algorithm 1 were obtained via binary search, using O(log(L/ε)) queries, which resulted in a query complexity of O(log²(L/ε)), since these steps were executed O(log(L/ε)) times. In the GRW model, we can simply replace each of those binary searches by a single cut query (as these only apply to the monotone agents) and obtain a query complexity of O(log(L/ε)). For example, Step 4 can be simulated by a cut query with γ = 1, since the indifference of Agent 1 corresponds to the two pieces having equal value. For the lower bound when n ≥ 3, we will show how to construct an instance of ε-Consensus-Halving with n normalised monotone agents, such that the Ω((L/ε)^{n−2}) lower bound for eval queries still holds, but we can additionally answer any cut query by performing at most O(log(L/ε)) eval queries. We will again use our reduction from normalised monotone nD-Borsuk-Ulam to ε-Consensus-Halving with n normalised monotone agents (Proposition 3.1).
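The idea of simulating a cut query with logarithmically many eval queries can be sketched as follows. The function φ below compares the agent's value for A_1 ∪ [a, t] against γ times her value for A_2 ∪ [t, b]; this difference form, and the valuation used, are our own illustrative reading of Definition 6 and Fig. 4, not the hard instance constructed in the proof.

```python
# Hedged sketch: answering one GRW cut query with O(log((b-a)/tol)) eval
# queries via binary search on a non-decreasing function phi (monotone agent).
import math

eval_count = 0

def v(intervals):
    """Monotone, non-additive valuation: sqrt of total length (illustrative)."""
    global eval_count
    eval_count += 1  # each call stands for one GRW eval query
    return math.sqrt(sum(b - a for a, b in intervals))

def answer_cut_query(A1, A2, a, b, gamma, tol=1e-6):
    """Find t in [a, b] with v(A1 + [a, t]) ~ gamma * v(A2 + [t, b]), or None."""
    # phi is non-decreasing: the first term grows with t, the second shrinks.
    phi = lambda t: v(A1 + [(a, t)]) - gamma * v(A2 + [(t, b)])
    if phi(a) > 0 or phi(b) < 0:  # boundary check: no solution bracketed in [a, b]
        return None
    lo, hi = a, b
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if phi(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

t = answer_cut_query([(0.0, 0.1)], [(0.9, 1.0)], 0.2, 0.8, 1.0)
# Each phi evaluation costs two eval queries, matching the counting in the proof.
```

The boundary check is exactly the step in the proof that lets us "immediately answer that there is no t ∈ I that satisfies the query".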
Given a cut query (A_1, A_2, I = [a, b], γ) to agent i, let φ(t) = v_i(A_1 ∪ [a, t]) / v_i(A_2 ∪ [t, b]) for t ∈ I. Note that we are looking for t* ∈ I such that φ(t*) = γ, and that φ can be evaluated by using two eval queries. Furthermore, since v_i is monotone, φ is non-decreasing. We begin by checking that φ(a) ≤ γ and φ(b) ≥ γ. If one of these two conditions does not hold, then we can immediately answer that there is no t ∈ I that satisfies the query. In what follows, we assume that these two conditions hold. In that case, we can query φ(t) for some t ∈ I to determine whether the solution t* lies in [a, t] or in [t, b]. We denote by J ⊆ I the current interval for which we know that t* ∈ J. At the beginning, we have J := I. Using at most ⌈log_2(n + 1)⌉ + 2 queries to φ, we can shrink J such that J ⊆ R_j for some j ∈ [n + 1]; the remainder of the argument relies on the piecewise-linear structure of the construction, which we now describe.

Consider a normalised monotone nD-Borsuk-Ulam function F : [−1, 1]^{n+1} → [−1, 1]^n with Lipschitz parameter L ≥ 3 and some ε ∈ (0, 1). We first discretize the domain to be K_m^{n+1} := {−1, −(m − 1)/m, …, −1/m, 0, 1/m, 2/m, …, (m − 1)/m, 1}^{n+1}, where m = ⌈2nL/ε⌉. We let f : K_m^{n+1} → [−1, 1]^n be defined by f(x) = F(x). Note that f is antipodally antisymmetric (f(−x) = −f(x) for all x ∈ K_m^{n+1}), monotone (f(x) ≤ f(y) whenever x ≤ y), and satisfies f(1, 1, …, 1) = (1, 1, …, 1). Furthermore, any x ∈ ∂(K_m^{n+1}) with ‖f(x)‖_∞ ≤ ε yields a solution to the original instance F. We extend f back to a function f : [−1, 1]^{n+1} → [−1, 1]^n by using Kuhn's triangulation on the grid K_m^{n+1} and interpolating (see Appendix C for a description of the triangulation and interpolation). By the arguments presented in Appendix C, it holds that f is a continuous, monotone, odd function, and f(1, 1, …, 1) = (1, 1, …, 1). Furthermore, if x_i is fixed for all i ∈ [n + 1] \ {j} and x_j is constrained such that x lies in some fixed simplex σ of Kuhn's triangulation, then f(x) can be expressed as an affine function of x_j.
Since the parameters for f are L′ = nL and ε′ = ε/2, and n is constant, the query lower bound for F carries over to f.
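The discretisation step above can be illustrated in the smallest non-trivial dimension. The sketch below (our own toy example, for n = 1; F is a made-up function, not the hard instance from the proof) restricts a monotone odd function F : [−1, 1]² → [−1, 1] to the grid K_m² and checks that the grid function f inherits antipodal antisymmetry, monotonicity, and f(1, 1) = 1.

```python
# Toy sketch of the discretisation step for n = 1: restrict a monotone odd
# function F to the grid K_m^2 and verify the inherited properties.
import itertools

m = 4
grid = [k / m for k in range(-m, m + 1)]  # one coordinate of K_m^2

def F(x):
    # Odd, coordinate-wise monotone, with F(1, 1) = 1 (illustrative choice).
    return max(-1.0, min(1.0, 0.5 * x[0] + 0.5 * x[1]))

f = {p: F(p) for p in itertools.product(grid, repeat=2)}

# Antipodal antisymmetry on the grid: f(-x) = -f(x).
assert all(abs(f[(-x, -y)] + f[(x, y)]) < 1e-12 for (x, y) in f)

# Monotonicity along each positive cardinal direction of the grid.
for (x, y) in f:
    if x < 1:
        assert f[(x + 1 / m, y)] >= f[(x, y)]
    if y < 1:
        assert f[(x, y + 1 / m)] >= f[(x, y)]

assert f[(1.0, 1.0)] == 1.0
```

Extending f back to the whole cube via Kuhn's triangulation (Appendix C) is what additionally makes the interpolated function piecewise linear, which the cut-query simulation exploits.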

Conclusion and future directions
In this paper, we completely settled the computational complexity of the ε-Consensus-Halving problem for a constant number of agents with either general or monotone valuation functions. We also studied the query complexity of the problem and provided exponential lower bounds corresponding to our hardness results, and polynomial upper bounds corresponding to our polynomial-time algorithms. We also defined an appropriate generalisation of the Robertson-Webb query model for monotone valuations and showed that our bounds are qualitatively robust to the added expressiveness of this model. The main open problem associated with our work is the following.
What is the computational complexity and the query complexity of ε-Consensus-Halving with a constant number of agents and additive valuations?
Our approach in this paper suggests a recipe for answering this question: one can construct a black-box reduction to this version of ε-Consensus-Halving from a computationally hard problem like nD-Tucker, for which we also know query complexity lower bounds, and obtain answers to both questions at the same time. Alternatively, one might be able to construct polynomial-time algorithms for solving this problem; concretely, attempting to do that for three agents with additive valuations would be the natural starting point, as this is the first case for which the problem becomes computationally hard for agents with monotone valuations. It is unclear whether one should expect the problem to remain hard for additive valuations.
Another line of work would be to study the query complexity of the related fair cake-cutting problem using the GRW model that we propose. In fact, while the fundamental existence results for the problem (e.g., see [65]) apply to quite general valuation functions, most of the work in computer science has restricted its attention to the case of additive valuations, with a few notable exceptions [24,34]. We believe that our work can therefore spark some interest in the study of the query complexity of fair cake-cutting with valuations beyond the additive case.

Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
It is easy to check that the interpolated function f̂ : [0, 1]^n → [−M, M] thus obtained is continuous. Indeed, if x lies on a common face of two Kuhn simplices, then the value f̂(x) obtained by interpolating in either simplex is the same. It can be shown that f̂ is Lipschitz-continuous with Lipschitz parameter 2M · n · m with respect to the ℓ_∞-norm. Furthermore, if f is antipodally anti-symmetric, i.e., f(−x) = −f(x) for all x ∈ D_m^n, then so is f̂, i.e., f̂(−x) = −f̂(x) for all x ∈ [0, 1]^n. Finally, if f is monotone, i.e., f(x) ≤ f(y) for all x, y ∈ D_m^n with x ≤ y, then so is f̂, i.e., f̂(x) ≤ f̂(y) for all x, y ∈ [0, 1]^n with x ≤ y. Consider any point x ∈ [0, 1]^n that lies in some simplex σ = {y_0, y_1, …, y_n}. Then for any j ∈ [n] and t ≥ 0 such that x + t · e_j lies in the simplex σ, we have f̂(x + t · e_j) − f̂(x) = tm(f(y_j) − f(y_{j−1})) ≥ 0, since y_j ≥ y_{j−1} and f is monotone. Using this, it is easy to show that f̂ is monotone within any simplex σ, since for any x ≤ y in σ we can construct a path that goes from x to y (and lies in σ) that only uses the positive cardinal directions. Since monotonicity holds for any segment of the path, it also holds for x and y. Finally, for any x ≤ y that lie in different simplices, we can just use the straight path that goes from x to y, and the fact that f̂ is monotone in each simplex that we traverse.

Fig. 1. A classification of ε-Consensus-Halving for a constant number n of agents, in terms of increasing generality of the valuation functions.

Fig. 3. Visualisation of Algorithm 1. (a) depicts our initial assumptions. The red line shows where Agent 1 is indifferent. The blue signs on (0, p) and (p, 1) show the (weak) preferences of Agent 2 under these pairs of cuts. (b) shows a possible position for the cuts when x_r − x_ℓ ≤ ε/(8L). The arrows show how the difference between the values of the positive piece and the negative piece changes between the four possible combinations of pairs of cuts. (c) depicts the actual cuts on the cake: the green parts have label "+" and the yellow parts have label "−".

Fig. 4. Visualisation of cut and eval queries. (a) The input A to an eval query is denoted by the green intervals. (b) The inputs A_1 and A_2 to the cut query are denoted by the green and yellow intervals respectively, and the interval I = [a, b] is denoted in blue. The agent places a cut (if possible) at a position x ∈ I such that her value for A_1 ∪ [a, x] and her value for A_2 ∪ [x, b] are in a specified proportion.

Algorithm 1 (fragment):
1: Set x_ℓ ← 0 and x_r ← p
2: Set y_ℓ ← p and y_r ← 1
3: while x_r − x_ℓ > ε do
4: Find y ∈ [y_ℓ, y_r] such that Agent 1 is indifferent under the pair of cuts ((x_ℓ + x_r)/2, y)
…
12: Find x ∈ [x_ℓ, x_r] such that Agent 1 is indifferent under the pair of cuts (x, (y_ℓ + y_r)/2)
…
19: Output (x_ℓ, y_ℓ)

Proof of Theorem 6.1 (continued). Recall that [x(A)]_i = 2(n + 1) · λ(A ∩ R_i) − 1 for any i ∈ [n + 1] and any Lebesgue-measurable A ⊆ [0, 1]. It follows that for any t ∈ J, [x(A_1 ∪ [a, t])]_i and [x(A_2 ∪ [t, b])]_i are fixed for all i ∈ [n + 1] \ {j}. Furthermore, with an additional ⌈log_2 m⌉ + 2 queries, we can ensure that for all t ∈ J, [x(A_1 ∪ [a, t])]_j ∈ [k/m, (k + 1)/m] and [x(A_2 ∪ [t, b])]_j ∈ [ℓ/m, (ℓ + 1)/m] for some k, ℓ ∈ Z. Next, with an additional 2⌈log_2(n + 1)⌉ queries, we can shrink J so that for all t ∈ J, x(A_1 ∪ [a, t]) and x(A_2 ∪ [t, b]) each lie in some fixed simplex of Kuhn's triangulation of the domain K_m^{n+1} (defined above). In that case, by our construction, v_i(x(A_1 ∪ [a, t])) and v_i(x(A_2 ∪ [t, b])) can be expressed as affine functions of t ∈ J, and thus we can exactly determine the value of t*. In order for this to hold, we will ensure that our normalised monotone nD-Borsuk-Ulam function is piecewise linear. Furthermore, we will pick m = ⌈2nL/ε⌉, and thus we have used 2(3⌈log_2(n + 1)⌉ + 2 + ⌈log_2⌈2nL/ε⌉⌉ + 2) eval queries to answer one cut query. Note that this expression is O(log(L/ε)), since n is constant.