
Posts 1 - 28 of 28   
Optimal play: 11/9/2019 01:46:22


Norman 
Level 58
Hi,

we had some discussions recently about what optimal WarLight play looks like when attacking and defending. You can find the results in my strategy guide:
https://docs.google.com/document/d/1NyhCpIQKShAbWGXicO_ph9whV_UyMS-3a7_re0C7R8Y/edit#heading=h.y6pv3soufztp

Here is my final conclusion:
The defender usually wins in a symmetric situation and the attacker in an asymmetric situation. By this I mean that the defender wins if both players choose the same territory to focus on while the attacker wins if both players choose different territories. If you find yourself winning in a symmetric situation you want to prefer the option which gives you more value and if you find yourself winning in an asymmetric situation you want to prefer the option which gives you less value.

Of course an exact calculation of the Nash equilibrium is probably infeasible for most "real life" WarLight scenarios. However for me it's not about the exact numbers; I'd rather have a rough understanding of which strategies to prefer over others. What is your take on the matter, guys? For example the situation during the picking stage seems interesting when you have an almost dominant set of picks which only loses to a bizarre counterpick, while the counterpick itself loses to everything else.
Optimal play: 11/9/2019 01:58:41


Norman 
Level 58
https://cgi.csc.liv.ac.uk/~rahul/bimatrix_solver/

This was the only site I found after a quick Google search which claimed to calculate mixed strategies. However I always get an error when trying it out.
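As a fallback when such solver sites error out: for any finite zero-sum game, a small fictitious-play loop (each side repeatedly best-responds to the opponent's empirical mix) approximates the game value and the optimal mixed strategies. A stdlib Python sketch, nothing to do with how that site computes its answers; convergence is slow but guaranteed:

```python
# Fictitious play for two-player zero-sum games: each side repeatedly
# best-responds to the opponent's empirical frequencies (Robinson, 1951).
def fictitious_play(A, iterations=50_000):
    """A[i][j] = payoff to the row player; row maximizes, column minimizes."""
    m, n = len(A), len(A[0])
    row_counts = [0] * m
    col_counts = [0] * n
    row_counts[0] += 1  # arbitrary opening moves
    col_counts[0] += 1
    for _ in range(iterations):
        # Row best-responds to the column player's empirical play so far.
        row_payoffs = [sum(A[i][j] * col_counts[j] for j in range(n)) for i in range(m)]
        row_counts[row_payoffs.index(max(row_payoffs))] += 1
        # Column best-responds to the row player's empirical play so far.
        col_payoffs = [sum(A[i][j] * row_counts[i] for i in range(m)) for j in range(n)]
        col_counts[col_payoffs.index(min(col_payoffs))] += 1
    t = iterations + 1
    x = [c / t for c in row_counts]  # approximate optimal row strategy
    y = [c / t for c in col_counts]  # approximate optimal column strategy
    # Security levels of the empirical mixes bracket the true game value.
    lower = min(sum(x[i] * A[i][j] for i in range(m)) for j in range(n))
    upper = max(sum(A[i][j] * y[j] for j in range(n)) for i in range(m))
    return x, y, lower, upper

# Rock-paper-scissors: value 0, optimal strategy (1/3, 1/3, 1/3).
rps = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]
x, y, lower, upper = fictitious_play(rps)
# lower <= 0 <= upper is guaranteed, and the gap shrinks as iterations grow.
```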
Optimal play: 11/9/2019 06:26:07


Njord 
Level 62
that seems to be an old version... there is a newer one on their site
Optimal play: 11/9/2019 06:54:57


Glass
Level 59
Disclaimer: I am relatively new at this game, and clearly there are many who are better than I at it.

In my opinion there are two components to optimal play. The first is putting yourself in a position where you do not have to make any reads in order to win. This will usually happen when you get superior picks or are simply a superior player to your opponent. It will not happen against a player of commensurate skill with commensurate picks.

This brings us to the second part: making the right reads in an even game. This can't be optimized in a handbook, because it depends entirely on predicting a player correctly on any given turn. Yes, most people will defend the greater value target, and thus it is advantageous to attack the lesser value target. But if you know that, and your opponent knows that, and your opponent knows that you know that, it becomes a lot like the battle of wits in The Princess Bride. https://www.youtube.com/watch?v=EZSx3zNZOaU

Predicting an opponent is more art than science. To go off your bizarre counter-pick example: say the semi-dominant set of picks is the one that wins the most games, making it objectively the best set of picks. Against an average or even good player who cannot divine the best set of picks, it is the best choice, as they are highly unlikely to make that pick set or the bizarre counter-pick. But say you're playing a great player, someone you know will have discerned those picks and the bizarre counter. Then once again we are in the battle of wits. What does your opponent think you will do? It's poker, not chess.
Optimal play: 11/9/2019 07:21:34


ℳℛᐤƬrαńɋℰ✕
Level 59
It's great to see that someone wants to bring some Game Theory* into Warzone. But I don't fully understand what you are doing or what your exact aim is. Nor do I understand what you mean by a symmetric situation: in theory it would imply that an equilibrium has been found (vote-to-end) or that the result is a blind gamble (rock-paper-scissors). Talking about mixed strategies implies that all possible winning strategies are known and that you vary them from game to game so opponents can't pick a counter-strategy to beat you. Considering the settings Warzone offers, you do not have a fixed set of strategies that works from template to template.

The easiest and most reasonable way to start would be to look at the total game (with its settings) from a theoretical perspective:
  • Symmetric/asymmetric information
  • Complete/Incomplete information
  • Perfect/Imperfect information
  • Finite/infinite game
From there it's all about predicting your opponent's moves and trying to figure out your best results. I recommend looking at some of the topics listed below to read about game theory, taking into account that Warzone is a simultaneous game! Applying Game Theory to Warzone would look much more like the TED* talk in the notes than like a results or pay-off matrix.

If you want a pay-off matrix of Warzone strategies, then even in a one-template setting this seems bizarre, precisely because moves in Warzone are simultaneous. If it were sequential you could work backwards and get a roughly measurable number for each strategy. But the vast number of possible actions, plus turn order, still makes the table thousands of rows large. Secondly, cards make things a whole lot more complicated.

If you want to start easy and compute optimal, dominant/dominated and equilibrium pay-offs, then choose a relatively small map (< 20 territories) with different bonus sizes and values, set full map picks and limit them to 3 for example, no cards, no luck, and you have an environment where you can reduce 90% of the stuff to numbers up to the first movable turn. If you have enough games from that, you can deduce which picking territories lead to the most victories or are superior to some set of others. You can't simply reduce a full match to a game-theory pay-off matrix, because it has X turns where each turn offers Y actions. The possibility/scenario tree after the third turn would already be too large for the eye. It's work for AI tools.

Basically each turn past the picking phase ought to have a separate pay-off table with all possible move options written down (army deployment variants x territory takings x other changeable factors like turn order) on one axis, and the same for the opponent. The amount of stuff that can vary is huge for just one turn. Then put each possible turn together and you could have a full-game pay-off telling you which strategy (picking, expanding, attack/defense, order) would be optimal or superior relative to X other strategies. That's already enough work for a simple 6-territory game with a few armies. Therefore I suggest sticking with just the picking phase and determining which sets of territories are optimal relative to other sets.


https://www.warzone.com/Forum/103038-steinitz-game-theoryin
https://www.warzone.com/Forum/5138-game-theory-warlight
https://www.warzone.com/Forum/150441-strategy-vs-foresight-warlight
https://www.warzone.com/Forum/34646-warlight-ai-challenge
https://www.warzone.com/Forum/133813-impure-skill-vs-pure-skill

http://gametheorysociety.org/
http://game-theory-class.org/
https://plato.stanford.edu/entries/game-theory/
Ted https://www.youtube.com/watch?v=0bFs6ZiynSU
Practical application of Game Theory in games/practice https://www.youtube.com/watch?v=hZDxLi6Xc40

tl;dr
Theoretical aspects of Game Theory can help the average Warzone player much more than its practical application, because Warzone is a simultaneous turn-based game where each turn already offers enormous action sets, which tend to be too large for humans to analyze.

Edited 11/9/2019 07:31:41
Optimal play: 11/9/2019 11:18:06


Rufus 
Level 62
Oh man, you definitely chose a hard topic, Norman. There are so many things to talk about when it comes to optimal play and strategies. I do not even know where to begin.

But since I started to type something, I'll just put one trivial fact here:
Defence (if its kill rate is greater than offence's) is the best strategy/"move" when you use it right.
Optimal play: 11/9/2019 18:39:57


Norman 
Level 58
Hi,

@Njord + all:
I found a site that actually works:
https://www.math.ucla.edu/~tom/gamesolve.html

Here is what I get for the rock paper scissors game:
-----
The matrix is
0 -1 1
1 0 -1
-1 1 0
The value is 0.
An optimal strategy for Player I is:
(0.33333,0.33333,0.33333)
An optimal strategy for Player II is:
(0.33333,0.33333,0.33333)
-----
0 means a tie here, 1 that Player I wins and -1 that Player II wins. In the first row Player 1 picks rock and in the first column Player 2 picks rock, etc. So the -1 in the first row comes from Player 1 picking rock and Player 2 picking paper. As expected, the optimal strategy is a 33% split.
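This 33% split can be sanity-checked directly: against the uniform mix, every pure reply earns exactly the game value 0, which is precisely why it cannot be exploited. A quick check in stdlib Python:

```python
from fractions import Fraction

# Rock-paper-scissors payoffs for the row player (0 tie, 1 row wins, -1 column wins).
A = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]
uniform = [Fraction(1, 3)] * 3

# Expected payoff of each pure column reply against the uniform row mix.
for j in range(3):
    assert sum(uniform[i] * A[i][j] for i in range(3)) == 0  # every reply earns exactly the value 0
```

By symmetry the same holds with the roles of the two players swapped.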


@Glass + all:
Disclaimer: I am relatively new at this game, and clearly there are many who are better than I at it.
You have to distinguish 2 phases here. The first phase is you gathering all possible moves + opponent answers and evaluating them properly. If you say that you are a poor player then your evaluation is way off the charts. However you still end up with an understanding of your winning chances when you make your move and your opponent answers accordingly. I'm not interested in your numbers here but only in the next step, where you have to make a decision according to your evaluation:

With the matrix game solver I'd like to make another symmetric example called "Attack - Defend - Expand" with the following input:
- If both players make the same move, they both end up equal.
- If a player expands and the other attacks, the attacker immediately wins.
- If a player expands and the other defends, the expander increases his otherwise equal chances of winning the game by 25%.
- If a player attacks and the other defends, the defender increases his otherwise equal chances of winning the game by 50%.

---
The matrix is
0 -0.5 1
0.5 0 -0.25
-1 0.25 0
The value is 0.
An optimal strategy for Player I is:
(0.14286,0.57143,0.28571)
An optimal strategy for Player II is:
(0.14286,0.57143,0.28571)
---
So basically in this example you usually want to defend, which doesn't come as a surprise. Also you want to expand twice as often as you attack.
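The solver's decimals are the fractions 1/7, 4/7 and 2/7; a quick exact-arithmetic check confirms that this mix earns the game value 0 no matter which pure reply the opponent picks:

```python
from fractions import Fraction

# "Attack - Defend - Expand" payoffs for the row player, as entered above.
half, quarter = Fraction(1, 2), Fraction(1, 4)
A = [[0, -half, 1], [half, 0, -quarter], [-1, quarter, 0]]
mix = [Fraction(1, 7), Fraction(4, 7), Fraction(2, 7)]  # attack, defend, expand

# Against every pure reply, the mix earns exactly the game value 0,
# so no opponent deviation can gain anything.
for j in range(3):
    assert sum(mix[i] * A[i][j] for i in range(3)) == 0
```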



@MrX:
But I don´t fully understand what you are doing or exact aim is? Neither I can understand what you mean by symmetric situation

I have only posted my most abstract conclusion here. For the rest you have to read the link I posted. I have basically calculated that (at least for my simplified WarLight example), the attacker has to play differently than the defender.

Edited 11/9/2019 18:42:50
Optimal play: 11/9/2019 18:52:27


Glass
Level 59
@Norman

In Rock Paper Scissors there is a psychological bias of most people to pick rock or scissors. Therefore it is advantageous to pick rock against most general opponents on the first turn. Also, people tend to switch when they lose and repeat when they win, so again against most people it is advantageous to switch. Knowing these tendencies makes the true mathematical equilibrium different from the basic rules of the game.

https://www.psychologytoday.com/us/blog/the-blame-game/201504/the-surprising-psychology-rock-paper-scissors

Also I never said I was a poor player. I am objectively much better than the vast majority on this site. Merely acknowledging that there is a substantial player base of people who are better than me. Furthermore, while I have only been playing this game for 10 months, I am very good at a number of other games and know a thing or two about game strategy.
Optimal play: 11/9/2019 18:57:46

Lasermancer
Level 25
I think that it depends on kill rates.
Optimal play: 11/9/2019 19:18:05


Norman 
Level 58
@Glass:
Also I never said I was a poor player. I am objectively much better than the vast majority on this site.

Excuse me please, I didn't want to insult you. I was just trying to make the point that I'd like to see the playing strength of a player as his ability to come up with the correct win chances given his play and the possible opponent answers.

Knowing these tendencies makes the true mathematical equilibrium different from the basic rules of the game.

Ah no, this isn't true. Perfect play is still 1/3 for each. As for your described pattern, you can of course abuse it by always starting with rock. Rock Paper Scissors is very simple, so everybody immediately sees that if his opponent isn't choosing with a 1/3 split, he can abuse him. However you can very easily increase the difficulty of Rock Paper Scissors to make it much harder to immediately see whether your opponent's strategy is flawed. Let's say the winner gets $1, with the following exception: if Player 1 wins against Player 2 with rock > scissors, then he gets $2.

The matrix is
0 -1 2
1 0 -1
-1 1 0
The value is 0.08333.
An optimal strategy for Player I is:
(0.25,0.41667,0.33333)
An optimal strategy for Player II is:
(0.33333,0.41667,0.25)

As you see, both players are now supposed to prefer paper. What you were referring to was abusing the fact that your opponent doesn't play optimally. However before you can abuse that, you have to understand what your opponent's optimal strategy looks like, else you might think you found a flaw where there is none.
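The decimals here are exact twelfths (1/4, 5/12, 1/3 for Player I; 1/3, 5/12, 1/4 for Player II) and the value is 1/12; this equilibrium can be verified directly in Python:

```python
from fractions import Fraction

# Modified RPS: a rock-over-scissors win by Player 1 pays $2 instead of $1.
A = [[0, -1, 2], [1, 0, -1], [-1, 1, 0]]
x = [Fraction(1, 4), Fraction(5, 12), Fraction(1, 3)]  # Player I: the solver's 0.25, 0.41667, 0.33333
y = [Fraction(1, 3), Fraction(5, 12), Fraction(1, 4)]  # Player II: 0.33333, 0.41667, 0.25
value = Fraction(1, 12)                                # the solver's 0.08333

# Player I's mix earns exactly the value against every pure column reply...
assert all(sum(x[i] * A[i][j] for i in range(3)) == value for j in range(3))
# ...and Player II's mix holds every pure row reply to exactly the value.
assert all(sum(A[i][j] * y[j] for j in range(3)) == value for i in range(3))
```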

Edited 11/9/2019 19:18:26
Optimal play: 11/9/2019 19:24:49

DrApe
Level 60
I think the problem with using such an "optimal" strategy is that you are unable to exploit the weakness of an unoptimal opponent strategy. If the opponent is playing their optimal strategy, then you can by definition also play a non-optimal strategy and perform just the same as the optimal strategy. So then the only way to possibly gain any sort of advantage is to deviate from the optimal strategy. So while this discussion is interesting, it is not very useful for practical Warlight play.
Optimal play: 11/9/2019 20:36:26


Norman 
Level 58
@DrApe

If the opponent is playing their optimal strategy, then you can by definition also play a non-optimal strategy and perform just the same as the optimal strategy.
The problem with classic Rock Paper Scissors is that it's overly simplified. As a counter example let's say we have a game called Heads or Tails with the following rules: Heads wins against Tails, and if both players choose the same then nobody wins.

The matrix is
0 1
-1 0
The value is 0.
An optimal strategy for Player I is:
(1,0)
An optimal strategy for Player II is:
(1,0)

As you see, both players have to always choose heads here. If one player deviates from the optimal strategy then he loses.

I think the problem with using such an "optimal" strategy is that you are unable to exploit the weakness of an unoptimal opponent strategy.
It's the exact opposite. In order to be capable of exploiting a weakness of your opponent you first have to look for patterns in his gameplay. A "gameplay pattern" is nothing else than a deviation from the optimal play (which itself can't be exploited).
Optimal play: 11/9/2019 21:15:56


Glass
Level 59
@Norman I think I understand your position, but I think we have a fundamentally different view of gameplay.

For one, player ability is solely determined by ability to beat other players. You say you would like to see player ability as correctly evaluating one's chances of winning, and while this will undeniably help any player play better it does not define player ability.

For another, games are won or lost in finite quanta. Take your asymmetric rock paper scissors game. Say I'm Player 2, but I'm not playing 10,000 games against you. Instead I am playing 1 game against you. I can choose between 2 strategies: letting a weighted random generator choose my move according to the optimum distribution, or picking scissors. The former gives me the best expected value. The latter gives me the best chance of winning one game.

For a third, strategies are dynamic. Now imagine a rock paper scissors game where rock beating scissors wins $1 more if you win the previous round, and $1 less if you lose the previous round with a minimum value of $1. Your optimum strategy for best expected value is changing more or less every round. Furthermore, this optimum strategy can be exploited by non-optimal strategies that maximize the chance of winning the round and reversing the position.
Optimal play: 11/9/2019 22:22:53

DrApe
Level 60
@Norman
As a counter example let's say we have a game called Heads or Tails with the following rules: Heads wins against Tails, and if both players choose the same then nobody wins.

Of course in this most trivial of cases, choosing heads will always be better. In this game where heads is the only way one can win, knowing your opponent's move does not impact your choice in the slightest. My point holds when the opponent's decision actually matters, such as in the case of your original post (either flanking or hitting the bonus). Similarly, in the augmented rock paper scissors game ("Let's say the winner gets 1$ with the following exception that if Player 1 wins against Player 2 with rock > scissors, then he gets 2$."), if you were to adhere exactly to this strategy as Player 2, you will perform no better against me if I just choose rock as Player 1 every time.

My expected value with always playing Rock is 2 * 0.25 - 0.41667 = 0.08333. If I were to follow your optimal Player 1 strategy, my expected value is 0.25 * (2* 0.25 - 0.41667) + 0.41667 * (0.333 - 0.25) + 0.3333 * (0.41667 - 0.3333) = 0.08333. As my calculations show, your optimal Player 1 strategy and my strategy of only Rock performs exactly the same against the optimal Player 2 strategy.
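This is the indifference property of equilibria: because the optimal Player II mix holds every row to exactly 1/12, always-Rock and the optimal Player I mix really do score the same against it. A quick check of the arithmetic above:

```python
from fractions import Fraction

# The modified rock-paper-scissors game from earlier in the thread.
A = [[0, -1, 2], [1, 0, -1], [-1, 1, 0]]
y_opt = [Fraction(1, 3), Fraction(5, 12), Fraction(1, 4)]  # optimal Player II mix
x_opt = [Fraction(1, 4), Fraction(5, 12), Fraction(1, 3)]  # optimal Player I mix
always_rock = [Fraction(1), Fraction(0), Fraction(0)]

def expected(x, y):
    """Player I's expected payoff when I plays mix x and II plays mix y."""
    return sum(x[i] * A[i][j] * y[j] for i in range(3) for j in range(3))

# Both strategies earn exactly the game value 1/12 against the optimal opponent.
assert expected(always_rock, y_opt) == Fraction(1, 12)
assert expected(x_opt, y_opt) == Fraction(1, 12)
```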

In order to be capable to exploit the weakness of your opponent you first have to look for patterns in his gameplay. A "gameplay pattern" is nothing else than a deviation from the optimal play which can't get exploitet.

This I can agree with. But in order to actually exploit this you will have to deviate from the optimal play yourself. Therefore my argument that you should never actually make the optimal play is still valid. If you choose a non-optimal play, you will only lose against another non-optimal play. Therefore you can look at opponent game history to see what sort of non-optimal play he makes and make your move based on that. However, this (making decisions by looking for patterns in old games) in turn can be exploited by the opponent looking at his own previous games and determining what you as a pattern-seeking player might predict, and playing against that. Then this becomes a mind game of how many levels you go. If the opponent instead does use the optimal strategy, you will still perform no worse if you deviate from your own optimal strategy (as I showed earlier). While there is still some degree of luck, I believe this is much more interesting than relying on a simple coin toss by making the play with no weaknesses or strengths.

In any case, unless you believe your opponent to be stronger than you and that you cannot possibly out-predict him, you should never strictly adhere to the optimal strategy.

Edited 11/9/2019 22:35:52
Optimal play: 11/9/2019 23:34:58


Norman 
Level 58
You guys make a couple of interesting points and I have to pick some:

@Glass:
For a third, strategies are dynamic. Now imagine a rock paper scissors game where rock beating scissors wins $1 more if you win the previous round, and $1 less if you lose the previous round with a minimum value of $1. Your optimum strategy for best expected value is changing more or less every round. Furthermore, this optimum strategy can be exploited by non-optimal strategies that maximize the chance of winning the round and reversing the position.

The gamesolver site (https://www.math.ucla.edu/~tom/gamesolve.html) calculates a strategy where we aren't exploitable. For each and every problem (within certain boundaries fulfilled by 2-player WarLight) there always has to be a strategy where we aren't exploitable. As for your example where the payout changes during the game rounds, there also has to be a perfect mixed strategy. I'm not very good at this, so spontaneously I don't know how to set up the matrix here. In the case of a fixed 2 turns I believe that something like a 6x6 matrix, where each cell represents a tuple of the first 2 turns, should suffice for calculating the first turn, while the second turn is an easy 3x3 matrix again.

@DrApe:
You are probably right with what you are writing. However my point isn't really to use those bizarre matrices to set up our own random number generator. It's more that in some instances those matrices show that the optimal (= non-exploitable) play is something other than I would have initially expected, as in my initial example where the attacker has to flank 2/3 of the time. When I am now looking over past opponent games, I can use the result to find out whether he has deviated from that 2/3 rule. Without the exact calculation I wouldn't understand that him doing something different, like tossing a 50/50 coin, is a pattern which I can exploit. Similarly, if I were the defender I wouldn't understand that I'm also exposing an exploitable weakness by being indifferent whether to defend my bonus or to defend the flank.

Edited 11/9/2019 23:37:13
Optimal play: 11/10/2019 01:00:16


Benjamin628 
Level 59
Moved to Strategy Forum. Great thread.
Optimal play: 11/10/2019 05:51:21


Rufus 
Level 62
By the way, you forget that there is also another option for the attacker in your original scenario: neither attacking nor flanking, but defending. In Warzone it might be a good move.
Optimal play: 11/10/2019 05:53:01


Ares
Level 44
rufus, warzone's mourinho
Optimal play: 11/10/2019 10:04:06


ℳℛᐤƬrαńɋℰ✕
Level 59
@Norman
I read it, hence my long post. The situation described actually has very little to do with Game Theory. Optimal/dominant strategies depend on the environment or rules of the game. I see you understand the simultaneous thing from rock-paper-scissors, but you still forget about turn order, attack/defense ratio... A Warzone game can't be won by just defending: run it 1000 turns forward and a slight income advantage can be lost to the higher defense ratio if both players do nothing.

It's more of a random probability exercise: a mixed strategy if they play infinitely many times, or a large finite number (over 20/30 for example). On turn 1 the defender rolls dice on whether to defend B or C1, like you said. Turn two is the same: the defender rolls dice to defend C1 or C2, and vice versa for the attacker. So both players have three different action sets.

1. Defend B
2. Defend C1, Defend C1
3. Defend C1, Defend C2

1. Attack C1
2. Attack B, Attack C2
3. Attack B, Attack C1

The expected value is the opposite of the opponent's benefit. The defender should defend B 2/3 of the time and C1 1/3 of the time; the attacker should attack C1 2/3 of the time and B 1/3 of the time. Now this holds true if both players stick to these strategies and mix them up by playing randomly. But as you see, it's more about predicting whether your opponent sticks to that strategy or actually tries to counter your strategy, not about single-turn actions/options. It would be interesting if the first turn offered 3 options instead of two, and from there each territory led to 2 or 3 different scenarios with 6 to 9 different outputs. That would still be easy enough to analyse in your head, and it would be more interesting to see how people actually choose compared to what probability would recommend.
Optimal play: 11/10/2019 15:04:07


Norman 
Level 58
@Rufus:
By the way, you forget that there is also another option for attacker in your original scenario: attacker is not attacking nor flanking, but defending. In warzone it might be a good move.
Yes, however I had to keep stuff simple. That's why my puzzle had the boundary condition that the break has to happen within 2 turns. Also don't get too hung up on the calculated results in my example; they are only true under the prerequisites which I have set. For example, if you have the chance to break a small and a large opponent bonus, you could use my model in the sense that the attacker should prefer breaking the small bonus while the defender should prefer defending the large bonus. However this would be too short-sighted a conclusion, since if breaking either the small or the large bonus is enough to win the game, then both players have to be indifferent.


@Mr X:
Damn, you even blacklisted me in the time between your first and your second post here... and here I thought you were a cool dude with your clantag and after your nice first post.

Now this holds true if both players stick to these strategies and mix them up, by playing randomly. But as you see, its more of predicting whether your opponent sticks to that strategy or actually tries to counter your strategy and not taking account single-turns actions/options.
I really have covered the topic of you expecting your opponent not to play optimally in my guide:

-------------

Breaking C1 has double the value of breaking B1, since when you break B1 you still have a 50% chance of messing it up. That’s why we have to solve 2 * B1 = C1, which boils down to a 33% to 66% ratio.
Green has to defend B1 33% of the time and C1 66% of the time.
Red has to attack B1 66% of the time and C1 33% of the time.

If both players play optimal the attacker has a 33% chance of breaking the bonus.

Now let’s make some examples where one of the bots plays imperfect and the other bot abuses him:
If the defender throws a 50/50 dice on whether to defend B1 or C1 the attacker can always attack C1. This gives him a 50% chance of winning.
If the defender always defends C1 then the attacker can always attack B1. This also gives him a 50% chance of winning.
If the attacker instead hits C1 instead of B1 66% of the time the defender can abuse this by always defending C1. Then the attacker only ends up with a 33% * 50% = 16.5% chance of winning the game.

-----
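The quoted numbers can be reproduced by modelling the scenario exactly as the guide describes it: an attack on the undefended territory succeeds, breaking C1 wins outright, breaking B1 wins only half the time, and attacking into the defence wins nothing. A sketch under precisely those assumptions:

```python
from fractions import Fraction

F = Fraction

def attacker_win(p_attack_b1, p_defend_b1):
    """Attacker's win chance under the guide's assumptions: breaking C1 wins
    outright, breaking B1 wins half the time, an attack into the defence loses."""
    broke_b1 = p_attack_b1 * (1 - p_defend_b1)   # defender covered C1 instead
    broke_c1 = (1 - p_attack_b1) * p_defend_b1   # defender covered B1 instead
    return broke_b1 * F(1, 2) + broke_c1

# Both optimal (attacker 2/3 on B1, defender 1/3 on B1): exactly 1/3.
optimal = attacker_win(F(2, 3), F(1, 3))
# Defender flips a 50/50 coin, attacker always hits C1: 1/2.
coin_flip = attacker_win(F(0), F(1, 2))
# Defender always covers C1, attacker always hits B1: 1/2.
camping = attacker_win(F(1), F(0))
# Attacker reverses his mix (2/3 on C1) vs a defender camping C1: 1/6.
reversed_mix = attacker_win(F(1, 3), F(0))
```

The last case is exactly 1/6 ≈ 16.7%; the guide's 16.5% is the same quantity computed from the rounded 33% * 50%.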

Normally I would ask you how this doesn't cover your points.

Warzone game can´t be won by just defending. 1000 turns forward and slight income difference advantage can be lost to higher defense ratio if non-players do nothing.
I have kind of a hard time seeing how this has anything to do with my posts, and I feel like I'd just be derailing my own thread here.

Edited 11/10/2019 15:17:46
Optimal play: 11/10/2019 16:15:54


ℳℛᐤƬrαńɋℰ✕
Level 59
Thanks for the answer. You are on my blacklist due to spamming in chat! Nothing more.
Optimal play: 11/10/2019 16:25:19


Norman 
Level 58
Optimal play: 11/11/2019 20:37:40

Hergul
Level 60
@Norman
Regarding the first example, you should consider your conclusions in a wider frame. There is not a specific point about one being the attacker and one the defender.

The point is that there is a player that has an advantage (in your example the defender, as in order to win, he needs to do the correct move once in two turns) and the general rule is that:
“The player that has an advantage plays more often the strategy that grants the best worst expected result” (more simply plays more often the safer or obvious move)

Example: there is a choke where Player1 has stack advantage and needs to choose between:
A) Fulldeploy where he already has stack advantage and break his opponent bonus 100% granted, while compromising his secondary objective (expansion, another border, whatever…).
B) Try a smaller attack (and risk being defended), while pursuing also his secondary objective.

Assuming that Player 1 winning chances are as follows:
- Option A: 70% if Player2 defends and 60% if Player2 does not
- Option B: 50% if Player2 defends and 90% if Player2 does not

then Player 1, being in advantage, should more often play Option A, which “grants the best worst result” (i.e. a 60% win rate vs 50% for Option B).

This is true despite Option A seeming worse on average, and it would apply even with extreme percentages such as:
- Option A gives a 51% or 52% win rate depending on Player 2's move
- Option B gives 50% or 100%.
The strategy that cannot be outplayed is still picking Option A (actually surprisingly close to 100% of the time).
---

As for the picking question, under perfect information assumptions this is a plain rock/paper/scissors game, so the correct way to play is 1/3 for each choice: (1) dominant pick, (2) counter and (3) other picks (assuming these lose 100% vs the dominant pick, win 100% vs the counter and give 50% vs any combination of other picks).

Edited 11/11/2019 20:42:31
Optimal play: 11/11/2019 20:58:55

Hergul
Level 60
Aside from the theory above, coming to the point of "Optimal Play", this is by far more complicated.

The very first consideration is that playing the theoretical % calculated with Nash equilibrium will give nothing more than average results against weak or strong players.

E.g. a bot programmed this way may score below a strong human in a round robin tournament, being unable to take advantage by outpredicting weak players.

So a possible rule is: play the theoretical chances vs players that are stronger at predicting, and try to outpredict weaker players.
Optimal play: 11/12/2019 10:42:05


Norman 
Level 58
@Hergul: For your scenarios I get the following results:

Your first input:
-----
The matrix is
0.9 0.5
0.6 0.7
The value is 0.66.
An optimal strategy for Player I is:
(0.2,0.8)
An optimal strategy for Player II is:
(0.4,0.6)
-----

Your second input:
---
The matrix is
1 0.5
0.51 0.52
The value is 0.51961.
An optimal strategy for Player I is:
(0.01961,0.98039)
An optimal strategy for Player II is:
(0.03922,0.96078)
---

(The rows are Player 1's decisions and the columns Player 2's decisions. The first entry means no deployment and the second means deployment.)

The results here match what you wrote; for example the second result means that Player 1 has to deploy 98% of the time and Player 2 has to defend 96% of the time.
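For 2x2 games like these there is even a closed form: when there is no saddle point, each player mixes so that the opponent is indifferent between his two options. A small sketch reproducing both solver outputs above:

```python
from fractions import Fraction

F = Fraction

def solve_2x2(A):
    """Closed form for a 2x2 zero-sum game without a saddle point:
    each player mixes so that the opponent is indifferent."""
    (a, b), (c, d) = A
    denom = a - b - c + d
    p = (d - c) / denom            # row player's weight on row 1
    q = (d - b) / denom            # column player's weight on column 1
    value = (a * d - b * c) / denom
    return (p, 1 - p), (q, 1 - q), value

# Hergul's first example (win chances for Player 1): 0.9/0.5 and 0.6/0.7.
row1, col1, v1 = solve_2x2([[F(9, 10), F(1, 2)], [F(6, 10), F(7, 10)]])
# The second example: 1/0.5 and 0.51/0.52.
row2, col2, v2 = solve_2x2([[F(1), F(1, 2)], [F(51, 100), F(52, 100)]])
```

This gives exactly the solver's numbers: (0.2, 0.8), (0.4, 0.6) and value 0.66 for the first matrix, and 1/51 ≈ 0.01961, 2/51 ≈ 0.03922 and value 53/102 ≈ 0.51961 for the second.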



The very first consideration is that playing the theoretical % calculated with Nash equilibrium will give nothing more than average results against weak or strong players.

E.g. a bot programmed this way may score below a strong human in a round robin tournement, as being unable to take benefit outpredicting weak players.

So a possible rule is: play the theoretical chances vs players that are stronger at predicts and try outpredict weaker players.


Exploiting weaknesses can obviously increase your win chances. However I'm not completely sold on your first point, that playing optimally without exploiting the opponent only gives average win chances. I'm thinking for example of 3v3 Europe, which is liked by strong players due to it not being very "rock paper scissors" like. However the optimal picking strategy is very difficult to see on this template. I got outpicked quite a few times here where we had the first 12 picks completely identical and only slight differences in the order of the later picks. In this scenario my picks were just plain worse than my opponents', so an optimal player would never go for my picks. However a team playing optimally will maybe never prioritize Spain high but might from time to time go for a sneaky Poland.

Also there are players who have been capable of winning the Seasonal Ladder up to 3 times. This confirms my assumption that in an average WarLight turn you have a lot of moves which seem feasible to an average player; however, the higher the skill level goes, the more the players can narrow the moves down, since some moves are strictly dominant over others. For example a highly skilled player might understand after picks that if his opponent counters him in a certain way he has already lost, so for this reason the optimal play is to play as if the probability of this opponent counter were 0.
Optimal play: 11/12/2019 12:26:45

Hergul
Level 60
Thanks for giving the precise math of my examples. What I also mean is that your first example, where you mention the roles of defender and attacker, falls under the same wider logic of the "party in advantage", which in your example is the defender, having two options:
1) Defend the bonus: 100% win, 50% win
2) Defend the side: 100% win, 0% win
And the "optimal play" is to go for opt. 1 more often, according to the general rule I referred to.

Regarding the other point, i.e. that "Optimal Play" gives average chances, I expressed the concept poorly. What I mean is that in the rigid frame where there is a set of options none of which is dominant or losing, a bot programmed for what we call "Optimal Play" will win an average number of times according to the specific situation. E.g. "Optimal Play" in a 50/50 situation will win 50% of the times, even vs a weak player that always picks an option and is easily outplayed by any decent player.

Hence the second rule stands, i.e. "use Optimal Play vs players stronger than you (e.g. flip the coin for 50/50 decisions), try to outpredict weaker players".

Your example about the 3vs3 Europe map does not fall within the rigid boundaries of the theoretical examples, where I assumed perfect information and no dominant/losing option.

I actually fully agree that stronger players outplay others because of two reasons:
1) Are stronger in evaluating possible alternatives (moves, picks, whatever) and the asymmetry in the information
2) Are stronger at predicting

And in my view point 1 is by far the most important. I have analyzed many games I lost vs strong opps, and very often I learned they just used superior strategies and not predicts.
Optimal play: 11/22/2019 18:10:29


Glass
Level 59
"I actually fully agree that stronger players outplay others because of two reasons:
1) Are stronger in evaluating possible alternatives (moves, picks, whatever) and the asymmetry in the information
2) Are stronger at predicting

And in my view point 1 is by far the most important. I have analyzed many games I lost vs strong opps, and very often I learned they just used superior strategies and not predicts."

Strongly agree with this. I've played a great many games, some at very high skill level. Some involved chance, some involved predictions, all involved strategy. In all those games winning with strategy is preferable to relying on chance or predictions. The latter can win games, but good players put themselves in positions where those don't matter, because it is far more consistent.

Edited 11/22/2019 18:11:12
Optimal play: 11/26/2019 11:57:11


astroporn
Level 54
My 2 cents (from 1v1 experience, mostly on the official ladder maps) to add to this wonderful conversation... which I only followed for 3-4 posts, so excuse me if the following has already been discussed...

Warlight resources (numbers) can be spent on two categories: the first one is FIGHT and the second is EXPAND. Given opponents of similar skill, including the ability to properly analyse the board and conclude which of the two aspects above they should spend their numbers on, the one who manages to spend LESS ON FIGHT => MORE ON EXPAND than the other should win.

I prefer to keep it as simple as possible so I'll leave it there, knowing that resource allocation is a very long story.