Class EPEC

Inheritance Relationships

Base Type

public Game::AbstractGame<Data::EPEC::DataObject>

Derived Type

Models::EPEC::EPEC

Class Documentation

class EPEC : public Game::AbstractGame<Data::EPEC::DataObject>

Class to handle a Nash game between leaders of Stackelberg games.

Subclassed by Models::EPEC::EPEC

Public Functions

inline EPEC(GRBEnv *env)

The standard constructor. Initialize an empty instance.

Parameters

env – The Gurobi environment pointer.

void finalize()

Finalizes the creation of a Game::EPEC object.

Performs the remaining setup tasks once all the data for a Game::EPEC object has been provided, namely:

  • Computes the location of each leader’s variables. Calls Game::EPEC::computeLeaderLocations.

  • Adds the required dummy variables to each leader’s problem so that a game among the leaders can be defined. Calls Game::EPEC::addDummyLead.

  • Makes the market clearing constraints in each country. Calls Game::EPEC::makeMCConstraints.

virtual void findNashEq() override

Computes Nash equilibrium using the Algorithm set in Game::EPEC::Algorithm. Checks the value of Game::EPEC::Algorithm and delegates the task to appropriate Algorithm wrappers.
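Conceptually, this method is a dispatcher: it inspects the configured algorithm and hands the computation to the matching wrapper class (the wrapper classes appear in the Friends section of this page). The sketch below is an illustrative stand-in, not the library’s code; the enum mirrors Data::EPEC::Algorithms and the dispatch function is hypothetical:

```cpp
#include <stdexcept>
#include <string>

// Illustrative mirror of Data::EPEC::Algorithms (the real enum lives in the library).
enum class Algorithms { FullEnumeration, InnerApproximation, CombinatorialPNE, CutAndPlay };

// Hypothetical sketch of the delegation in findNashEq(): select the wrapper
// responsible for the chosen algorithm.
std::string dispatchAlgorithm(Algorithms algo) {
  switch (algo) {
  case Algorithms::FullEnumeration:    return "Algorithms::EPEC::FullEnumeration";
  case Algorithms::InnerApproximation: return "Algorithms::EPEC::InnerApproximation";
  case Algorithms::CombinatorialPNE:   return "Algorithms::EPEC::CombinatorialPNE";
  case Algorithms::CutAndPlay:         return "Algorithms::EPEC::CutAndPlay";
  }
  throw std::invalid_argument("Unknown algorithm");
}
```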

virtual bool isSolved(double tol = 1e-5) const override

Calls the delegated Algorithm method and returns true if there exists a feasible equilibrium.

Parameters

tol – A numerical tolerance

Returns

True if the incumbent solution is an equilibrium

virtual bool isPureStrategy(double tol = 1e-5) const override

Calls the delegated Algorithm method and returns true if the equilibrium is pure.

Parameters

tol – A numerical tolerance

Returns

True if the incumbent equilibrium is pure

std::unique_ptr<GRBModel> bestResponseProgram(const unsigned int i, const arma::vec &x, MathOpt::PolyLCP *customLCP = nullptr) const

Given a player id i, the incumbent solution x, and optionally a custom player LCP customLCP, this method returns the model corresponding to the best response of player i, given the other players’ decisions in x.

Parameters
  • i – The player id

  • x – The incumbent solution

  • customLCP – An optional parameter with a custom LCP

Returns

A pointer to the (solved) Gurobi model for the best response

double bestResponse(arma::vec &sol, unsigned int player, const arma::vec &x, const arma::vec &prevDev = {}, MathOpt::PolyLCP *customLCP = nullptr) const

Returns the best-response value for player player, given the decisions x of all the other players.

Calls Game::EPEC::bestResponseProgram to obtain the std::unique_ptr to the GRBModel of the best response by player player. Then solves the model and returns the appropriate objective value.

Parameters
  • sol – The optimal response for player

  • player – The player id

  • x – The solution vector

  • prevDev – An optional previous deviation encountered. This is useful to normalize any unbounded best response

  • customLCP – An optional pointer to a custom player LCP.

Returns

The optimal objective value for the player player.
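The role of the best-response value in certifying an equilibrium can be shown on a toy finite game, independent of the library’s API: a profile is an equilibrium when each player’s incumbent payoff is within a tolerance of its best-response value. The helper names below are hypothetical:

```cpp
#include <algorithm>
#include <vector>

// Best-response value for the row player of a bimatrix game, given the
// column player's fixed choice c; payoff[r][c] is the row player's payoff.
double bestResponseValue(const std::vector<std::vector<double>> &payoff, int c) {
  double best = payoff[0][c];
  for (const auto &row : payoff)
    best = std::max(best, row[c]);
  return best;
}

// The incumbent row r is a best response to c if its payoff is within tol of
// the best-response value; this is the kind of test isSolved() delegates.
bool isBestResponse(const std::vector<std::vector<double>> &payoff, int r, int c,
                    double tol = 1e-5) {
  return bestResponseValue(payoff, c) - payoff[r][c] <= tol;
}
```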

inline const arma::vec getX() const

Getter for the incumbent solution (x)

Returns

The incumbent x

inline const arma::vec getZ() const

Getter for the incumbent solution (z)

Returns

The incumbent z

void setAlgorithm(Data::EPEC::Algorithms algorithm)

Decides the Algorithm to be used for solving the given instance of the problem. The available choices are documented in Data::EPEC::Algorithms.

Parameters

algorithm – The input algorithm

void setRecoverStrategy(Data::EPEC::RecoverStrategy strategy)

Decides the Algorithm to be used for recovering a PNE out of the InnerApproximation procedure.

Parameters

strategy – The input recovery strategy

void setBranchingStrategy(Data::EPEC::BranchingStrategy strategy)

Sets the branching strategy for the CutAndPlay.

Parameters

strategy – The input strategy

inline void setAggressiveness(unsigned int a)
inline void setAddPolyMethod(Data::LCP::PolyhedraStrategy add)

Set the Data::LCP::PolyhedraStrategy for the inner approximation.

unsigned int getPositionLeadFoll(unsigned int i, unsigned int j) const

Gets the position of the j-th follower variable in the i-th leader. Querying Game::EPEC::LCPModel for the variable x[return-value] gives the appropriate variable.

Parameters
  • i – Leader number

  • j – Follower number

Returns

The queried position

unsigned int getPositionLeadLead(unsigned int i, unsigned int j) const

Gets the position of the j-th non-follower (leader) variable in the i-th leader. Querying Game::EPEC::LCPModel for the variable x[return-value] gives the appropriate variable.

Parameters
  • i – Leader number

  • j – Non-follower (leader) variable number

Returns

The queried position
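Both position getters resolve to offset arithmetic over the concatenated variable vector: each leader owns a contiguous block, so a leader’s starting offset is the prefix sum of the preceding block sizes. The sketch below illustrates this under that assumed layout (the actual internal layout is the library’s concern; the helper names are hypothetical):

```cpp
#include <cstddef>
#include <vector>

// Starting offset of each leader's block: prefix sums of the block sizes.
// This mirrors the role of Game::EPEC::computeLeaderLocations / LeaderLocations.
std::vector<unsigned int> leaderLocations(const std::vector<unsigned int> &sizes) {
  std::vector<unsigned int> loc(sizes.size(), 0);
  for (std::size_t i = 1; i < sizes.size(); ++i)
    loc[i] = loc[i - 1] + sizes[i - 1];
  return loc;
}

// Global position of the j-th variable in leader i's block, analogous to the
// value returned by getPositionLeadFoll(i, j) under the assumed layout.
unsigned int positionOf(const std::vector<unsigned int> &loc, unsigned int i,
                        unsigned int j) {
  return loc[i] + j;
}
```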

double getValLeadFoll(unsigned int i, unsigned int j) const

Gets the value of the j-th follower variable in the i-th leader.

Parameters
  • i – Leader number

  • j – Follower number

Returns

The queried value

double getValLeadLead(unsigned int i, unsigned int j) const

Gets the value of the j-th non-follower variable in i-th leader.

Parameters
  • i – Leader number

  • j – Non-follower (leader) variable number

Returns

The queried value

double getValProbab(unsigned int i, unsigned int k)

Returns the probability associated with the k -th polyhedron of the i -th leader.

Parameters
  • i – Leader index

  • k – The polyhedron index

Returns

The queried attribute

double getValLeadFollPoly(unsigned int i, unsigned int j, unsigned int k, double tol = 1e-5) const

For the i -th leader, gets the k -th pure strategy at position j.

Parameters
  • i – The leader index

  • j – The position index

  • k – The pure strategy index

  • tol – A numerical tolerance

Returns

The queried attribute

double getValLeadLeadPoly(unsigned int i, unsigned int j, unsigned int k, double tol = 1e-5) const

For the i -th leader, gets the k -th pure strategy at leader position j.

Parameters
  • i – The leader index

  • j – The position index

  • k – The pure strategy index

  • tol – A numerical tolerance

Returns

The queried attribute

std::vector<unsigned int> mixedStrategyPoly(unsigned int i, double tol = 1e-5) const

Returns the indices of the polyhedra feasible for leader i whose strategies are played with probability greater than the tolerance tol.

Parameters
  • i – Leader index

  • tol – Tolerance

Returns

The queried attribute
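The filtering done here, and the purity test behind isPureStrategy, reduce to simple operations on a probability vector. The following stand-alone sketch shows both (hypothetical helper names; the library reads the probabilities from the solved LCP model):

```cpp
#include <vector>

// Indices of the polyhedra played with probability above tol: the role of
// mixedStrategyPoly for one leader.
std::vector<unsigned int> activePolyhedra(const std::vector<double> &prob,
                                          double tol = 1e-5) {
  std::vector<unsigned int> active;
  for (unsigned int k = 0; k < prob.size(); ++k)
    if (prob[k] > tol)
      active.push_back(k);
  return active;
}

// A strategy is pure when a single polyhedron carries all the probability
// mass, which is the idea behind isPureStrategy.
bool isPure(const std::vector<double> &prob, double tol = 1e-5) {
  return activePolyhedra(prob, tol).size() == 1;
}
```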

inline const MathOpt::LCP &getLCPDescription() const

Gets the MathOpt::LCP object solved in the last iteration, either to solve the problem or to prove the non-existence of a Nash equilibrium. The object is returned by constant reference.

inline const GRBModel &getLCPModel() const

Gets the GRBModel solved in the last iteration, either to solve the problem or to prove the non-existence of a Nash equilibrium. The object is returned by constant reference.

inline void writeLCPModel(const std::string &filename) const

Writes to a file the GRBModel solved in the last iteration, either to solve the problem or to prove the non-existence of a Nash equilibrium.

void getXWithoutHull(const arma::vec &x, arma::vec &xWithoutHull) const

Given the solution x, the method returns in xWithoutHull the x vector without the convex-hull variables. No MC (market clearing) variables are included either.

Parameters
  • x – The EPEC’s solution

  • xWithoutHull – The output solution without convex hull variables

void getXofI(const arma::vec &x, const unsigned int &i, arma::vec &xOfI, bool hull = false) const

Given the player id i and the solution x, the method returns in xOfI the solution vector for the given player, including the convex-hull variables when hull is true. No MC variables are included.

Parameters
  • x – The EPEC’s solution

  • i – The player id

  • xOfI – Output solution vector for i

  • hull – True if convex-hull variables need to be included in the output

void setWelfareObjective(bool linear, bool quadratic)

Given the object Game::EPEC::LCPModel, this method sets its objective to the social welfare. Specifically, one can decide whether to include the linear and/or the quadratic part of the welfare.

Parameters
  • linear – True if the linear part is to be included, false otherwise.

  • quadratic – True if the quadratic part is to be included, false otherwise.
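The flag logic amounts to summing the enabled welfare components. Using scalars as stand-ins for the actual Gurobi expressions, the behavior can be sketched as follows (hypothetical function, for illustration only):

```cpp
// Scalar stand-in for the welfare objective: the two flags enable or disable
// the linear and quadratic parts, mirroring setWelfareObjective(linear, quadratic).
double welfareObjective(double linearPart, double quadraticPart, bool linear,
                        bool quadratic) {
  double objective = 0.0;
  if (linear)
    objective += linearPart;    // sum of the players' linear objective terms
  if (quadratic)
    objective += quadraticPart; // sum of the players' quadratic objective terms
  return objective;
}
```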

Protected Functions

bool warmstart(const arma::vec &x)

Warmstarts the EPEC with the solution x.

Todo:

Complete this implementation

Parameters

x – The warmstart solution

Returns

True if the warmstart was successful

virtual void makeObjectivePlayer(const unsigned int i, MathOpt::QP_Objective &QP_obj) = 0

Pure virtual function, to be implemented by the derived class: builds the objective of player i into QP_obj.

Parameters
  • i – The player id

  • QP_obj – The QP_object with the objective

virtual void preFinalize() = 0

Pure virtual function, to be implemented by the derived class.

The code in this method will be run before calling Game::EPEC::finalize().

virtual void postFinalize() = 0

Pure virtual function, to be implemented by the derived class.

The code in this method will be run after calling Game::EPEC::finalize().

virtual void updateLocations() = 0

Pure virtual function, to be implemented by the derived class.

If any location-tracking system is implemented, it can be called from here.

inline virtual void makeMCConstraints(arma::sp_mat &MC, arma::vec &RHS) const

Empty function - optionally re-implementable in derived class.

Builds the Market Clearing constraints

Parameters
  • MC – MC constraints matrix

  • RHS – RHS for the constraints

Protected Attributes

std::vector<std::shared_ptr<Game::NashGame>> PlayersLowerLevels = {}

The lower level Nash Games among the followers.

std::vector<std::shared_ptr<MathOpt::LCP>> PlayersLCP = {}

The LCP of each leader, encompassing both the Stackelberg and the lower-level Nash games.

std::vector<std::shared_ptr<MathOpt::QP_Param>> PlayersQP = {}

The QP corresponding to each player.

std::vector<std::shared_ptr<MathOpt::QP_Objective>> LeaderObjective = {}

Objective of each leader.

std::vector<std::shared_ptr<MathOpt::QP_Objective>> LeaderObjectiveConvexHull = {}

Objective of each leader, given the convex hull computation.

std::unique_ptr<Game::NashGame> TheNashGame

The EPEC Nash game.

std::vector<unsigned int> LeaderLocations = {}

Starting location of each leader’s variables, including any convex-hull variables present at the current moment. The user, i.e., the inheritor of Game::EPEC, has the responsibility to keep this correct by implementing an override of Game::EPEC::updateLocations.

std::vector<const unsigned int*> LocEnds = {}

The ending locations for the leaders’ variables.

std::vector<unsigned int> ConvexHullVariables = {}

Number of convex hull variables for each leader.

unsigned int numMCVariables = {0}

Number of market clearing variables.

bool Finalized = {false}

A flag indicating whether the EPEC has been finalized.

arma::vec SolutionZ

Solution equation values.

arma::vec SolutionX

Solution variable values.

Private Functions

void addDummyLead(unsigned int i)

Adds dummy variables for the leaders.

Adds dummy variables to a leader of an EPEC; useful after computing the convex hull.

Parameters

i – The leader id to whom dummy variables should be added

void makePlayerQP(unsigned int i)

Makes the MathOpt::QP_Param corresponding to the i-th country.

Note

Overloaded as Models::EPEC::makePlayerQP()

Parameters

i – The player’s id

void makePlayersQPs()

Makes the MathOpt::QP_Param for all the countries. Calls are made to Models::EPEC::makePlayerQP(const unsigned int i) for each valid player id.

void makeTheLCP()

Formulates the LCP to compute an equilibrium. In this LCP, each player’s feasible region is given by the PlayersQP associated entry. If the QP stems from a full enumeration, for instance, the solution will be exact.

void computeLeaderLocations(unsigned int addSpaceForMC = 0)

Assigns the values of LeaderLocations for each player.

Parameters

addSpaceForMC – contains the number of Market Clearing variables

void getXMinusI(const arma::vec &x, const unsigned int &i, arma::vec &xMinusI) const

Given a solution x, computes the other players’ joint solution (x minus i) and stores it in xMinusI.

Parameters
  • x – The incumbent solution

  • i – The player id

  • xMinusI – The output strategy
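Extracting the other players’ decisions amounts to copying every leader’s contiguous block except the i-th from the concatenated solution, assuming each leader owns a contiguous slice of x. A sketch with std::vector standing in for arma::vec (illustrative only, hypothetical helper name):

```cpp
#include <cstddef>
#include <vector>

// Copy all leaders' blocks except the i-th from the concatenated solution x;
// sizes[p] is the number of variables owned by leader p.
std::vector<double> xMinusI(const std::vector<double> &x,
                            const std::vector<unsigned int> &sizes,
                            unsigned int i) {
  std::vector<double> out;
  std::size_t offset = 0;
  for (unsigned int p = 0; p < sizes.size(); ++p) {
    if (p != i)
      out.insert(out.end(), x.begin() + offset, x.begin() + offset + sizes[p]);
    offset += sizes[p];
  }
  return out;
}
```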

bool computeNashEq(bool pureNE, double localTimeLimit, bool check, bool linearWelfare, bool quadraticWelfare)

Given that Game::EPEC::PlayersQP are all filled with each player’s MathOpt::QP_Param problem (either exact or approximate), computes the Nash equilibrium.

Parameters
  • pureNE – If true, checks for a pure Nash equilibrium. Does not work with EPEC::Algorithms::CutAndPlay

  • localTimeLimit – Sets the time limit for the solver; a negative value means infinite time

  • check – If true, the method calls the isSolved() method related to the active algorithm EPEC::Algorithms

  • linearWelfare – If true, the objective of the resulting LCP includes the sum of the linear objectives of the players

  • quadraticWelfare – If true, the objective of the resulting LCP includes the sum of the quadratic objectives of the players

Returns

true if a Nash equilibrium is found

Private Members

std::shared_ptr<Algorithms::EPEC::PolyBase> Algorithm = {}

The incumbent algorithm used to solve the instance.

std::vector<unsigned int> SizesWithoutHull = {}

Size of leaders’ variables without the convex hull variables.

std::unique_ptr<MathOpt::LCP> TheLCP

The EPEC Nash game written as an LCP.

std::unique_ptr<GRBModel> LCPModel

A Gurobi model object of the LCP form of the EPEC.

std::unique_ptr<GRBModel> LCPModelBase

A Gurobi model object of the LCP form of the EPEC. When searching for a pure NE, the LCP that is indifferent to pure or mixed NE is stored in this object.

Friends

friend class Algorithms::EPEC::PolyBase
friend class Algorithms::EPEC::InnerApproximation
friend class Algorithms::EPEC::CutAndPlay
friend class Algorithms::EPEC::CombinatorialPNE
friend class Algorithms::EPEC::FullEnumeration