Latest Articles

The Collective Logic of Teams

Zubrin takes up a recent community topic and argues that Hex teams are here to stay as long as Hex tournaments remain valuable.

In the last few weeks, Hexers have engaged in debates over the existence, utility, viability, anti-competitiveness, and hyper-competitiveness of teams. I have personally seen the conversation crop up on Reddit, in in-game chat, on the forums, and in at least one Hex stream. Given my small sample of public conversations, I can only imagine the discussion is even wider, with plenty of people arguing against the existence of teams. As such, this article is a theoretical introduction to why teams exist and persist in competitive games. If readers are interested, my next post will offer a practical guide to creating and running a successful Hex team.

Let us start with a standardized conception of a team so we are referring to the same thing. A team, in Hex, is a group of people who collectively exchange ideas, deckbuild, and test together. A team's collective information is considered private, and a team typically guards against having internal information spread externally. This information can be as small as a tweak or two to known meta decks or as large as an undiscovered, meta-changing deck that the team will unveil at the next tournament. Other information can include testing results, matchup probabilities, reserve selection and strategy, and so on. Teams are not necessarily guilds, though they can be. For the purposes of this article, I will be discussing teams that concentrate on competitive constructed, but there are also teams that focus on drafting or PVE (Player versus Environment, such as the forthcoming Frost Arena).

Reddit discusses the viability of teams

Anti-team sentiment seems to have peaked after the HexTechs Oberon tournament, where Future took down the event and, in addition to GoreKnight decks dominating the top 8, it was clear that half of the top 8 came from the same team (or were affiliated with it); though it has been called a half-dozen different names, the team calls itself "The Collective". Part of the backlash centered on the anti-community nature of teams, as they hold information and strategy to the exclusion of the rest of the Hex community. If the knowledge a team developed were public, it could enrich the community and make many players better off rather than just those on the team. However, the incentive structure and the competitive nature of Hex foreclose the possibility of teams disappearing. We will turn to some game theory and political science to understand why teams are here to stay.

Game Theory and Prisoner’s Dilemma

Game theory is a branch of mathematics that scholars commonly use in both the physical sciences (biology, physics, etc.) and the social sciences (political science, economics, psychology, etc.). The main premise of game theory is that if we know the actors in a particular situation, the strategies they can choose, and their preferences over the outcomes of the game, then we can discern the optimal strategy for each actor. For a game to be a game, there must be at least two actors (otherwise we have the less exciting field of decision theory) and each actor must have choices available to them. Additionally, each actor knows that the other actors exist and knows their preferences over the outcomes of the game.

One of the most overused scenarios in game theory is the prisoner's dilemma; however, it is used so much because it has powerful implications, and it happens to capture the scenario we care about here. In the prisoner's dilemma, we have the stereotypical situation you might come across in a crime drama on television or in a movie. The police have identified two robbery suspects whom they are interrogating; they are all but certain the suspects committed the crime but lack the physical evidence required to link them to the armed robbery. As such, if the police do not get either suspect to turn on the other, they will only be able to convict each of them on a weapons possession charge (say, one year in jail). They interrogate the suspects in separate rooms, hoping that they will turn on each other. The suspects have a choice: they can either cooperate with their criminal partner by staying quiet (cooperate) or betray them by talking to the police (defect). If both stay quiet (cooperate), they will each go to jail for only a year. If one cooperates and the other defects, the defector will avoid any jail time while the suspect who remained quiet will serve a full seven-year sentence. If, however, they both rat each other out (defect), then they both get convicted and will each serve three years in jail.

To figure out the optimal solution, we assume our players are rational (which has a straightforward meaning in game theory: with the information they have, the actors will pursue actions that allow them to get the things they like and avoid the things they do not like) and that they are functionally making this decision at the same time; the police will not inform either suspect of their partner's decision until the interrogation is over. Given this setup, we have a simultaneous game (we could make this a sequential game where the players make decisions one after the other, but it is far less interesting). This gives us the following setup:

                          Suspect 2
                    Cooperate      Defect
Suspect 1  Cooperate   -1, -1      -7,  0
           Defect       0, -7      -3, -3

Each cell represents a pair of decisions, and the payoffs read in the order of the actors. For example, if Suspect 1 chooses Defect and Suspect 2 chooses Cooperate, Suspect 1 gets a payoff of 0 and Suspect 2 gets a payoff of -7. The payoffs are negative here because they represent jail time. At this point, we have all the information we need to solve the problem; however, to make things simpler, we will modify the payoffs to rank-order the preferences (in teaching this, I find that first-time students can get tripped up on the negative values).

                          Player 2
                    Cooperate      Defect
Player 1   Cooperate    3, 3        1, 4
           Defect       4, 1        2, 2

With this representation, the payoffs are ordered from least to most desired (a "1" represents seven years in jail while a "4" represents no jail time). Each player chooses their action independently and without communication with the other player. To solve this, we go square by square and see if either player would independently choose a different outcome. For example, in the Cooperate-Cooperate square, we see that player 1 would be better off defecting if their partner chose to cooperate; likewise, player 2 would be better off defecting if their partner chose to cooperate. In the Defect-Cooperate and Cooperate-Defect squares, the cooperating player would be better off switching to defect. Finally, in the Defect-Defect square, neither player can do better by unilaterally changing their choice. This is the functional definition of a Nash Equilibrium, which is a powerful concept in game theory.

Another way to think about this is to ask, "given my partner's choice, what am I best off doing?" If my partner cooperates, I should defect, as I get no jail time. If my partner defects, then I am better off defecting as well, serving three years instead of seven. In game-theoretic terms, we would say that Defect is strictly dominant over Cooperate for both players.
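
For readers who want to see the mechanics spelled out, here is a small Python sketch (my own illustration, not part of the formal theory) that encodes the ordinal payoff table above, brute-forces each player's best responses, and confirms both that Defect-Defect is the only pure-strategy Nash Equilibrium and that Defect strictly dominates Cooperate.

```python
# Ordinal payoffs from the table above: (player 1, player 2)
# Actions: "C" = Cooperate, "D" = Defect
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (1, 4),
    ("D", "C"): (4, 1),
    ("D", "D"): (2, 2),
}
ACTIONS = ("C", "D")

def best_responses(opponent_action, player):
    """Return the set of actions that maximize a player's payoff,
    holding the opponent's action fixed."""
    def payoff(my_action):
        profile = (my_action, opponent_action) if player == 1 else (opponent_action, my_action)
        return PAYOFFS[profile][player - 1]
    best = max(payoff(a) for a in ACTIONS)
    return {a for a in ACTIONS if payoff(a) == best}

# A profile is a pure-strategy Nash equilibrium when each action is a
# best response to the other player's action.
equilibria = [
    (a1, a2)
    for a1 in ACTIONS
    for a2 in ACTIONS
    if a1 in best_responses(a2, player=1) and a2 in best_responses(a1, player=2)
]
print("Pure-strategy Nash equilibria:", equilibria)  # [('D', 'D')]

# Defect is strictly dominant for player 1 if it beats Cooperate
# against every possible action by player 2 (the game is symmetric,
# so the same holds for player 2).
defect_dominant = all(
    PAYOFFS[("D", a2)][0] > PAYOFFS[("C", a2)][0] for a2 in ACTIONS
)
print("Defect strictly dominates Cooperate for player 1:", defect_dominant)  # True
```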

The prisoner’s dilemma is an interesting game because the dominant strategy for both players leads to an inefficient outcome; as a whole, both players would be better off in Cooperate-Cooperate, but the incentive to defect trumps the collectively optimal outcome (what we would call the Pareto optimal outcome). Much of social science, math, and even computer science has attempted to find ways to thwart this game and get players to cooperate. One lesson we have learned after decades of research is that pre-game communication does not help, as you have every incentive to lie to get your partner to cooperate while you defect. However, if there is some type of outside enforcement that punishes players for defection or rewards them for cooperation, then that can overcome the inefficiency in the dilemma. In our example, if our suspects belong to organized crime, then the threat of broken legs or being killed would squelch the desire to rat out their partner. Early artificial intelligence simulation research has also shown that if the players play several rounds of the prisoner’s dilemma and do not know when the game will end, then cooperation is possible.
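
To illustrate that last point about repeated play, here is a rough Python sketch (illustrative only, reusing the ordinal payoffs from above) that pits a simple tit-for-tat strategy against unconditional defection over many rounds; two tit-for-tat players sustain cooperation and come out well ahead of two defectors, while a lone defector gains surprisingly little against tit-for-tat.

```python
# Ordinal payoffs again: higher is better.
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (1, 4),
           ("D", "C"): (4, 1), ("D", "D"): (2, 2)}

def tit_for_tat(history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not history else history[-1]

def always_defect(history):
    return "D"

def play(strategy_1, strategy_2, rounds=100):
    """Run an iterated prisoner's dilemma and return total payoffs."""
    history_1, history_2 = [], []   # each player's record of the opponent's moves
    score_1 = score_2 = 0
    for _ in range(rounds):
        move_1 = strategy_1(history_1)
        move_2 = strategy_2(history_2)
        p1, p2 = PAYOFFS[(move_1, move_2)]
        score_1, score_2 = score_1 + p1, score_2 + p2
        history_1.append(move_2)    # player 1 remembers what player 2 did
        history_2.append(move_1)
    return score_1, score_2

print(play(tit_for_tat, tit_for_tat))      # (300, 300): sustained cooperation
print(play(always_defect, always_defect))  # (200, 200): mutual defection
print(play(tit_for_tat, always_defect))    # (199, 202): the defector gains little
```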

Game theory itself offers lots of different situations, and we can map out basically any interaction where we have actors making decisions with known preferences. One of my more popular posts on my academic blog does this with one of the game theory puzzles offered by The Dark Knight; most social interactions can be modeled along similar lines (though we need not limit ourselves to two actors with two choices; we can have as many actors and choices as we need to model the situation). Naturally, some of you are asking how this applies to the topic at hand (we were talking about the existence of teams); to get there, we need to take one more step built upon this game-theoretic foundation.

The Logic of Collective Action

Stemming from this, Mancur Olson Jr.'s foundational work, The Logic of Collective Action, provides a political science/economics answer to a related class of problems. One of the continual problems we have as a society is understanding why things that benefit the whole of society do not always happen. We call this class of goods "public goods": goods that are non-excludable (you cannot prevent people from consuming them) and non-rival (one person's consumption does not diminish another's). Classic public goods include infrastructure (public roads), environmental goods (clean air, fresh water), and security (national defense), and may include things like good governance (from having people participate in elections to having a responsive government) or functioning domestic and international economic systems.

The problem with collective action over public goods stems from the very nature of a public good: they are non-excludable. If some people or organizations pay the costs necessary to provide them, then other individuals can free-ride on that provision. This problem is even more stark when we consider public goods that require the cooperation of several individuals to provide (such as managing environmental waste), as they become similar to the prisoner's dilemma. The societally optimal outcome is for everyone to participate and provide the good; however, there is a larger benefit to you if everyone provides it but you (that is, other people pay the costs while you free-ride and enjoy the benefits). Of course, you may not be the only person who thinks this way, so other individuals will free-ride as well and, if enough people free-ride, no one will provide the public good. In that scenario, if you are paying the cost of the public good but it never comes to fruition, then you are doubly worse off: you do not get the benefit of the public good and yet you are still paying the cost for it. The cost can be especially dicey when we talk about mobilizing people to risk their lives; for example, whether a revolution succeeds or fails, you are personally better off not going to the battlefield.
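
If you prefer numbers to prose, here is a minimal sketch of a linear public goods game (the group size, endowment, and multiplier are my own illustrative assumptions, not anything from Olson) showing why contributing is individually irrational even though universal contribution beats universal free-riding.

```python
# A minimal linear public goods game: each of N players has an endowment
# and chooses how much to contribute. The pooled contributions are scaled
# by a multiplier and shared equally, so each contributed unit returns
# less than a full unit to the contributor.

N = 4             # number of players
ENDOWMENT = 10
MULTIPLIER = 1.6  # social return on the pool; 1 < MULTIPLIER < N creates the dilemma

def payoff(my_contribution, others_contributions):
    """What one player keeps plus their equal share of the public good."""
    pot = my_contribution + sum(others_contributions)
    return (ENDOWMENT - my_contribution) + MULTIPLIER * pot / N

# If everyone else contributes their full endowment, your marginal return on
# contributing is MULTIPLIER / N = 0.4 per unit, so free-riding pays more:
others = [ENDOWMENT] * (N - 1)
print(payoff(ENDOWMENT, others))  # 16.0 if you also contribute everything
print(payoff(0, others))          # 22.0 if you free-ride

# ...yet if everyone reasons this way, nobody contributes and everyone ends
# up with just their endowment, worse than the all-contribute outcome:
print(payoff(0, [0] * (N - 1)))   # 10.0
```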

Olson offers three solutions to collective action problems. First, if you are in a small enough group, then providing the public good and deterring free-riders becomes easier, as you can spot defectors and try to coerce them into participation (e.g., social or economic coercion); large groups allow for anonymous defection, which increases the likelihood of free-riding. Second, if one or a few members have an asymmetric interest in the public good existing, then those individuals may provide the good themselves. Finally, and most potently, a political entrepreneur or organization with an interest in seeing the public good exist can offer selective incentives for participation. Selective incentives are usually positive incentives for participation that can be material or social, such as direct payments (you can pay people to participate in your revolution), access to unique goods (NPR and PBS offer tote bags, mugs, and other goodies for contributing to the public good of public broadcasting), or goods that provide social status, such as the ubiquitous "I Voted" sticker in the United States.

Hex and Public Goods

To recap, public goods suffer from free-rider problems. Providing the community with unique insights and tier 1, meta-shaping decks is a kind of public good, as it would make the community better off. However, if some people worked (spent time and resources) to research and share that information publicly, then others would certainly free-ride by consuming that information without contributing their own research and testing to the community. As such, there is an incentive for individuals to keep their information private until the next big tournament so that they can use their informational advantage to secure higher returns.

Teams are a temporal solution to the collective action problem: they keep information private only temporarily (one's own work was already private and not inherently public) while also offering the benefit of collaboration. By keeping the group small, the information becomes excludable in the short term, and small-group dynamics help overcome the free-rider problem within the team. Those who participate, research, theorycraft, and test gain access to the information produced by the group; those who do not cooperate can be removed. Also, the information does eventually reach the community, though often after a delay of two to three weeks (there is an analogy to the patent process here as well, but I do not want to bog the article down with asides).

Of course, those in a team can always contribute to both the private and public goods. Often, the members of the best testing teams are also the people who write articles to try to foster a stronger and better community. The writers for FiveShards represent at least three or four different testing teams, but we come together to publicly share what we have learned in constructed, draft, and soon PVE; some of us also like to share insights from our other pursuits as they apply to the Hex community and economy.

Final Thoughts

Since there are rewards for keeping some information private (through tournament winnings), there is really no chance of teams going away, as they overcome a collective action problem with regard to information. Even if we made an uber-team of all Hex players, there would be subgroups that tested with each other and held some information back. Given that teams are the reality for both physical and digital trading card games, the question becomes whether you stick with your own casual testing or your group of friends, join a team, or create your own. As I mentioned in the introduction, this piece is meant as a theoretical introduction to why teams will continue to exist; if I get enough interest or feedback, a practical guide to creating, building, and maintaining a team will be my next article.

Michael Allen is a competitive HexTCG player, co-host of the 2 Turns Ahead podcast, and founder and moderator of the Hex Subreddit.

6 Comments on The Collective Logic of Teams

  1. Thank you so much for this article, I’ve been trying to join/form a playtest group for as long as I’ve known of their existence, around 2002 when a French team almost completely upended an MTG invitational with a meta-changing deck. I would be eternally grateful if you would delve deeper into the finer points of creating and maintaining such a group, keeping track of what matters, and how to test properly within a particular meta. Thanks again for your time and insight for the public good =D

  2. TL:MS (too long, mostly skimmed)
    One thing to remember is that teams can collectively make bad decisions too. It’s less likely, but still possible. I also feel obliged to point out that any group collectively playing a deck based on a meta-defining card has a good chance of being dominant.

    Any competitive endeavor will spawn groups of people seeking to gain advantage by accumulating information not available to the community at large. What needs to be guarded against is the tendency towards elitism that this can promote. I’ve almost stopped listening to 2TA on more than one occasion because of a smugness when you talk about competing. Phil Hellmuth was a poker wunderkind at one time, now he’s pretty much a joke.

  3. And when I say “you”, it’s meant collectively. You are all pretty good deck pilots, but I’ve watched your streams, and you’re definitely mortals, not primals.

    • I know I speak for myself at least when I say that any hubris exhibited is purely facetious. We tend to do a good job as a group but, individually, we all make mistakes (I myself, plenty) and our knowledge and skill has limits. Typically any chest-beating will be egging each other on or being tongue-in-cheek. All in the spirit of friendly competition.

  4. That’s why you’re my favorite 😀

  5. Great read! It’s always nice when someone brings his expertise in other areas to Hex.

    I somehow missed this discussion in the community and I don’t really see a problem with teams. From a competitive standpoint it’s a good thing, and for the community the drawback isn’t that great. The community will see the decks some time later. But sure, a team has a lot more inside information than just the decklist.

    @Judgenaut

    I recently read “Next Level Magic” by Patrick Chapin and there is a chapter about teams.
