Multi-day ladder: 4/12/2017 17:19:06


Hog Wild
Level 58
Has the Multiday Ladder been growing consistently? Or was there a spike at some point? I'm fairly sure there were way fewer than 71 active players when I last played a game on there. More like there were fewer than 71 players there, period. :P

Edited 4/12/2017 17:19:21
Multi-day ladder: 4/12/2017 22:28:17


Deadman 
Level 64
MoD, when I look at the Ladder statistics on your player page, I see that you were first on 4 ladders, but not a single word about MDL ladder records. This could be good promotion. I would also like to have my record on MDL mentioned on my page, even though I have no idea how I got that high.
This is something I cannot control. It'll have to be done by Fizzer. The last time I spoke to him, he said he would add achievements if MDL keeps running for a longer period of time. I'll sync with him after MDL completes 6 months (in 13 days). I can add more stats on the MDL player profile page, which I am currently looking into.

Has the Multiday Ladder been growing consistently? Or was there a spike at some point? I'm fairly sure there were way fewer than 71 active players when I last played a game on there. More like there were fewer than 71 players there, period. :P
It has always had 65+ players after the first month or so. It peaked at about 85 active players around a month ago, but is down to 72 at the moment.
Multi-day ladder: 4/13/2017 23:31:42


l4v.r0v 
Level 59
I'm currently considering whether I should switch to Bayesian Elo (which is what Fizzer uses on the 1v1/2v2/3v3 ladders). If such a switch is made, the ratings may change a bit, but hopefully the impact is minimal as there are very few finished games.


If you go through with the switch to Bayeselo, how are you planning on implementing it? Coulom* has a Windows executable on his site, iirc, and the source code (in C++ or C#?) is there as well. Are you planning on just using the executable, creating a pipeline that uses the bayeselo source, rewriting Bayeselo as a Python library, or something else?

* Also, as a side note for anyone reading, it turns out that the guy behind Bayeselo has done some other pretty interesting things, like a pretty solid Go engine (https://en.wikipedia.org/wiki/Crazy_Stone_(software))

Edited 4/13/2017 23:32:55
Multi-day ladder: 4/13/2017 23:39:07


Deadman 
Level 64
If you go through with the switch to Bayeselo, how are you planning on implementing it? Coulom* has a Windows executable on his site, iirc, and the source code (in C++ or C#?) is there as well. Are you planning on just using the executable, creating a pipeline that uses the bayeselo source, rewriting Bayeselo as a Python library, or something else?
I was considering BayesElo almost 4 months ago and explored all the options you talked about. I believe there is only a C++ implementation.

However, the current rating algorithm really grew on me and I'm quite content with the way it's performed over the last 5 months. We had a good conversation on rating algorithms around that time as well, if I recall (krunx, memele, Math Wolf, etc.). I don't plan on switching to BayesElo unless someone gives me good reasons to do so at this point.
Multi-day ladder: 4/14/2017 00:02:08


l4v.r0v 
Level 59
:P Forgive me for getting spammy with technical questions, but then are you using some expiry mechanism with Elo?

I think Bayeselo's biggest advantage (ignoring all the technical measures of statistical rating systems) is that it makes it easy to expire games without storing any extra information. I don't know whether expiring games in a TrueSkill/Glicko/Elo system would be statistically sound, so that's about the biggest advantage of Bayeselo I can think of from a player or ladder-creator standpoint.

Expiring games isn't inherently valuable, though; that's just about the first reason I would think of for wanting to use Bayeselo in particular over Elo, Glicko, and TrueSkill.

On the technical end, there's the advantage that your rating will improve if a player you beat (or lost to) turns out to be much better than the system thought they were when they played you. But I wouldn't put it past you to have figured out some way to pull that off with the Elo system you currently have.




Also, the loading animation you have (not sure if it's intentional) for the rating/rank graphs on player pages is really mesmerizing. Even if it's just standard for jqPlot, I really like it, and man, the crisp UI on there is miles ahead of what I thought CLOTs would look like when the framework first came out. I used to be kind of judgmental about it because it's got tables and doesn't look like every other Bootstrap site (my views about good UI were really skewed thanks to all those startup sites that look the same), but you did an awesome job with information displays on that ladder across the board. Not that I'm the arbiter of UI quality, but yours has grown on me so fast and I figured I should tell you it's great just in case no one else did.


EDIT: Also, is the Weekly Report broken or does it just restart on Fridays? http://md-ladder.cloudapp.net/report

Edited 4/14/2017 00:11:04
Multi-day ladder: 4/14/2017 00:34:36


(deleted) 
Level 62
"As someone who recently joined the MDL, I'd say that only half of the issue of why low-mid tier guys aren't on the MDL was addressed here. You guys talked about the need to reach and educate the low-mid tier players, which is inherently the first hurdle that needs to be cleared if you want to have them join the MDL.

The issue that I didn't see mentioned is retaining low-mid tier players, which I guess is going to be MUCH trickier. IMO the MDL as it is right now isn't designed for mid-tier players. There are always exceptions, but for the most part, those guys have a tendency to teach themselves gameplay on a single map. Why do you think ROR is so popular? I know when I was starting out it took me a while to branch out to more than 3 maps. I didn't really branch out until I was confident in my understanding of how to play the basics. I didn't like feeling lost on a new map. Honestly, even now I'm sometimes just not in the mood to start with 0 knowledge and just want a game I know. For many newer players, that feeling is amplified for the reasons stated above.

If you want the MDL to succeed with mid-tier players, I think a restriction on what maps are played is going to be necessary. New players should start with the lowest rating possible, and as their rating improves, new maps are unlocked. If their rating drops, their map availability drops as well. As a side note, I also think the total number of maps needs to be reduced. I think even experienced players don't want to learn so many new maps. Variety is nice, but sometimes it feels like a chore when every game requires you to analyze the map in full.

Edit: Don't eat dinner mid-forum post...you'll come back and realize this is NOT the forum to post this idea. "

Post by Zack Fair

I thought it was a well-presented point and wanted to ensure it got noticed in the thread it should be in. So I copied and pasted.
Multi-day ladder: 4/14/2017 00:35:16


Deadman 
Level 64
I think Bayeselo's biggest advantage (ignoring all the technical measures of statistical rating systems) is that it makes it easy to expire games without storing any extra information. I don't know whether expiring games in a TrueSkill/Glicko/Elo system would be statistically sound, so that's about the biggest advantage of Bayeselo I can think of from a player or ladder-creator standpoint.

Expiring games isn't inherently valuable, though; that's just about the first reason I would think of for wanting to use Bayeselo in particular over Elo, Glicko, and TrueSkill.

On the technical end, there's the advantage that your rating will improve if a player you beat (or lost to) turns out to be much better than the system thought they were when they played you. But I wouldn't put it past you to have figured out some way to pull that off with the Elo system you currently have.
I think it is really important to have games expire, or you end up with something like the RT ladder, where people have ancient ratings which were acquired in a different time frame. Another advantage of expiring games is that it "gives" points back to the system. It's like a soft reset. There are a lot of 1500+ rated players who have left the ladder. If their games don't expire, the points which they have won off of someone else will never be returned to the system.

Recently MDL crossed the 5-month mark (which is when games expire). The CLOT has stopped considering the expired games when calculating ratings and I've been monitoring changes over the past 2 weeks. Surprisingly, the ratings don't change all that much even when you have a lot of games expire (e.g. MoD). I'm quite pleased with this as it means players are close to their "true" Elo ratings even if a few games are taken out of the equation.

Traditionally, Elo scores are computed incrementally off of the previous Elo scores. However, to be able to handle expiring games, I made the decision to compute Elo ratings from scratch on every update. Everyone starts with the base rating and the games are considered in chronological order, as long as they are within the 5-month window. This is not the best approach in terms of running time, but since the scale of this operation is quite small, the impact is negligible.
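In rough Python terms, the recompute looks something like this (the names and the exact cutoff here are illustrative, not the actual CLOT code):

```python
from datetime import datetime, timedelta

BASE_RATING = 1500
K = 32
EXPIRY = timedelta(days=150)  # roughly the 5-month window

def expected_score(r_a, r_b):
    # Standard Elo expected score of A against B.
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def recompute_ratings(games, now=None):
    # games: list of (finished_at, winner_id, loser_id) tuples, replayed in chronological order.
    now = now or datetime.utcnow()
    ratings = {}
    for finished_at, winner, loser in sorted(games, key=lambda g: g[0]):
        if now - finished_at > EXPIRY:
            continue  # expired games no longer influence ratings
        r_w = ratings.setdefault(winner, BASE_RATING)
        r_l = ratings.setdefault(loser, BASE_RATING)
        delta = K * (1 - expected_score(r_w, r_l))
        ratings[winner] = r_w + delta
        ratings[loser] = r_l - delta  # zero-sum per game, subject to rounding
    return ratings
```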

but you did an awesome job with information displays on that ladder across the board. Not that I'm the arbiter of UI quality, but yours has grown on me so fast and I figured I should tell you that just in case no one else did.
Although I made the initial design choices, lately Muli has been driving most of the charts and the general look and feel of the site. So credit to him. He has some cool new features planned as well, which I'm quite looking forward to.

EDIT: Also, is the Weekly Report broken or does it just restart on Fridays? http://md-ladder.cloudapp.net/report
That's a bug. The page lists the report from the database for the given day. Updates occur daily on the first "run" (which is between 00:00 and 02:00 UTC). So there is a brief time period when there is no recorded update yet and the page displays nothing. I'll fix it soon.

Edited 4/14/2017 00:38:54
Multi-day ladder: 4/14/2017 02:07:23


l4v.r0v 
Level 59
Thanks for explaining that. I think I'd agree that running time takes a backseat to rating system usability.

I think the point loss you mentioned in the context of players leaving might actually work the opposite way; taking a rough look at the player list for MDL, it looks like players that joined and left are disproportionately below average, so your system is actually gaining points when they join and leave, because their losses caused points from their initial rating to go to other players, increasing the system-wide total. Perhaps a hackier but less expensive approach to achieve the same effect would be checking the sum of remaining teams' ratings, comparing it to the default rating * # of teams, and multiplying each remaining team's rating to make the total equal the default rating * # of teams, every time a team leaves the ladder and does not return for some amount of time. Or you could just run that at the start of every iteration.
Multi-day ladder: 4/14/2017 02:44:04


Deadman 
Level 64
I ran a query to check the average rating of players who have left, and it is 1476. So you're correct that this causes inflation of the active players' ratings. I'll have a rethink about this, even though I strongly believe ancient ratings (akin to the RTL) are a bad thing.

Perhaps a hackier but less expensive approach to achieve the same effect would be checking the sum of remaining teams' ratings, comparing it to the default rating * # of teams, and multiplying each remaining team's rating to make the total equal the default rating * # of teams, every time a team leaves the ladder and does not return for some amount of time. Or you could just run that at the start of every iteration.
This will achieve equilibrium in the system. I'll consider it as well. I was thinking of introducing small inflation in the system over time (to combat complaints that players feel like they have to win too many games just to maintain a 1500 rating). I'll have to run it by Math Wolf, who is my rating consultant :p

p.s - These changes will probably take some time since I'm working on some other features for MDL at the moment.
Multi-day ladder: 4/14/2017 02:49:10


l4v.r0v 
Level 59
I think your current system works fine; was just throwing out ideas because, well, this is just interesting. :)

The multiplier approach I suggested wouldn't actually work because Elo and some other systems encode the win/loss/draw probability in the absolute difference between two teams' ratings, so you'd just be throwing off your system. I think subtracting/adding an adjustment might work, though. But, like you, I don't trust anything about rating statistics until Math Wolf confirms it, so I'm not that sure either. :P I was just looking into how other games do it, and noticed that League of Legends has Elo decay for inactive teams, so it does seem like absolute adjustments are at least used.
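For what it's worth, the additive version could be as simple as something like this - purely a sketch with made-up names; a uniform shift preserves rating differences, so predicted win probabilities stay intact:

```python
BASE_RATING = 1500  # assumed base/default rating

def renormalize_additively(ratings):
    # ratings: dict of team_id -> current rating for teams still on the ladder.
    # Shift everyone by the same constant so the pool's mean returns to the base rating.
    if not ratings:
        return ratings
    shift = BASE_RATING - sum(ratings.values()) / len(ratings)
    return {team: r + shift for team, r in ratings.items()}
```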

No rush at all to make these changes; I'm more than satisfied with what exists and I'm not even sure whether implementing them would be worth your while or beneficial on the whole.

(EDIT: I was asking those questions because this stuff is pretty interesting to me, not because I want to critique your methods or suggest changes; you're probably much better at figuring out these types of problems than I am, so I'm not in a position to judge your solutions. :P)

Edited 4/14/2017 03:02:12
Multi-day ladder: 4/14/2017 06:26:58


l4v.r0v 
Level 59
Actually, on the topic of improvements, I think you still use a greedy algo for pairing, based on the old GitHub code on Deadman1. If that's the case, I wrote a small utility library called pair (you can find it on PyPI and the source/basic documentation on my GitHub). You should be able to integrate it in seconds. It also lets you do more interesting things with non-binary template preferences. Let me know if you're interested in integrating it and would like help setting up. It's got full test coverage, but the codebase is not great on style or documentation (beyond Python docstrings), so I can't guarantee the absence of bugs; it's pretty small and easy to fix when things go wrong, though.

It just guarantees optimal pairings and template assignments at every batch (although, since combinatorial optimization is generally NP-hard for >2 dimensions, I can't guarantee both at once), so you can pair your teams better system-wide.

Edited 4/14/2017 06:27:43
Multi-day ladder: 4/14/2017 06:50:45


master of desaster 
Level 66
Can you please explain how the pairing works and why you think it's the "most optimal way" to do pairings?
Multi-day ladder: 4/14/2017 10:00:55

Mike
Level 59
I used to play a game online where ranking was as follows:

- everybody starts at 1000
- wins bring between 1 and 6 points depending on your current ranking and on the rating of your opponent
- losses cost between -6 and -1 points depending on the same criteria
- it seems points won or lost don't just depend on the rating differential; the system looks first at your rating to put you in a bracket of possible points to win or lose for the game, then looks at your opponent's rating to assign the high or low margin within the bracket. For example, I remember that a top player would systematically make either 1 or 2 points (the bracket), and it would be 1 or 2 depending on your opponent's ranking (1 if the opponent is too far behind, 2 if he has a decent ranking)
- Also, in every game, the points won and lost cancel each other out (if team A wins 3, team B loses 3)
- And the sum of possible points won (or lost) by both teams equals 7 (or -7): in a game where the top player can make either 1 or 2 points, the opponent can make either 6 or 5 points.
- Every 30 days (or was it 90? I forgot) rankings would reset to 1000.
- there was inflation of up to 2000/2200 for the top player every season, and it didn't take too long to top the league if you missed the start with this rating system (so yeah, it was probably a 90-day season rather than 30 days).
- Oh, and each game would notify you at the end how many points both teams made or lost (transparency), and rankings don't move over time, so they're relevant at any time you look at them.

All that to say that this rating system was awesome and made things very addictive, which you may need. Well, it may not be too far from your current rating system.

Is that a specific rating system which has a name, to your knowledge?

Edited 4/14/2017 10:04:33
Multi-day ladder: 4/14/2017 15:33:53


l4v.r0v 
Level 59
@mod: Whoops, thanks for bringing that up.

For assigning templates, the library treats the task as an isomorphism of the (linear) assignment problem. E.g., you have x players and y templates and costs for each combination (as well as some combinations that are disallowed). You can use one of a class of well-known algorithms to generate assignments such that the total cost is minimized. I modified an O(n**3) implementation of the classic algorithm for this problem (Kuhn-Munkres) to do this because it was the easiest to modify to support disallowed assignments (vetoes in this ladder). You're able to mathematically guarantee that, at each batch, the total (sum) cost is minimized.
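To make that concrete, here's a rough sketch of the same idea using SciPy's Hungarian-algorithm implementation instead of the pair library itself (so the function names and the big-penalty trick for vetoes here are just illustrative):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

FORBIDDEN = 10 ** 9  # large penalty cost standing in for a vetoed/disallowed combination

def assign(cost, disallowed):
    # cost: (n_matchups x n_templates) matrix of costs, lower = more preferred.
    # disallowed: set of (matchup_index, template_index) pairs that must not be assigned.
    cost = np.array(cost, dtype=float)
    for i, j in disallowed:
        cost[i, j] = FORBIDDEN
    rows, cols = linear_sum_assignment(cost)  # Kuhn-Munkres: minimizes the total cost
    if any(cost[i, j] >= FORBIDDEN for i, j in zip(rows, cols)):
        raise ValueError("no feasible assignment avoids all disallowed combinations")
    return {int(i): int(j) for i, j in zip(rows, cols)}  # matchup index -> template index

# 2 matchups, 3 templates; matchup 0 has vetoed template 1
print(assign([[1, 5, 3], [2, 1, 4]], {(0, 1)}))  # {0: 0, 1: 1}
```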

It's only super-worthwhile to use the templates part, though, if you're also interested in giving people more flexibility when it comes to expressing template preferences- but I explained it first since it helps explain the player pairing.

Pairing uses a similar principle, except you're working on connections within one group instead of connections between 2. So while template assignment is analogous to running a taxi company, getting 3 calls, having 3 taxis at different locations such that they don't have the same distance to each customer, and figuring out which taxi to send to which customer such that timeliness (or total customer satisfaction) is best maintained; player pairing is like running a student dormitory and trying to build roommate pairs such that, say, the total happiness of all these students over the year is maximized.

So we have a single graph (not bipartite) where each player is connected to any players that they're allowed to have games with, and each of these edges has a score. Using NetworkX's implementation of Edmonds' Blossom algorithm and some other network matching algorithms (suggested to me by Derfellios), we're able to find the player pairings that maximize the total score (and don't match players that you don't want matched, like players they just played).
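And the pairing side, sketched with NetworkX's blossom-based matching (the scoring function here is a toy one that just favors close ratings, not the ladder's actual scoring):

```python
import networkx as nx

def pair_players(ratings, forbidden):
    # ratings: dict of player_id -> rating.
    # forbidden: set of frozenset({a, b}) pairs that must not be matched (e.g. they just played).
    G = nx.Graph()
    players = list(ratings)
    for i, a in enumerate(players):
        for b in players[i + 1:]:
            if frozenset((a, b)) in forbidden:
                continue  # disallowed pairings simply aren't edges in the graph
            # Toy score: higher when the two ratings are closer together.
            G.add_edge(a, b, weight=1000 - abs(ratings[a] - ratings[b]))
    # Blossom-based maximum-weight matching; maxcardinality=True pairs as many players as possible.
    return nx.max_weight_matching(G, maxcardinality=True)

print(pair_players({"A": 1500, "B": 1520, "C": 1700, "D": 1690}, {frozenset(("A", "B"))}))
```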

You get to assign the weights (for templates) or supply a scoring function (for players/teams), so it doesn't have to be strictly linear - you can make the scores you supply exponential, for example, so that a matching that (based on Elo parity/quality calculations) is 2x as good gets weighted 4x as heavily.

But at the end of pairing and assignments, we're able to extend guarantees that whatever you chose was, given the scores/weights you supplied, the optimal set of assignments/pairings within the space you provided. If you're still using a greedy algorithm to pair teams, you likely cannot extend that same guarantee and will sometimes perhaps even have cases where your system either had to give up some pairings or create some bad pairings when you didn't actually need to do so- there's certainly at least a wide space of conceivable edge cases where a greedy algorithm only yields terrible results and just breaks down. You might also find this pairing guarantee useful for the more interesting problem space of the Team Ladder- pairing 2 players, then pairing 2 teams, and assigning each resulting matching to a template is exactly within the sweet spot of this utility library.

(Also, like I said, this whole space of algorithms tends to break down after 2 dimensions, so I do not offer any guarantees if you're grouping teams for FFA or building teams of 3 or more players using software. You *can* secure those guarantees, but unless you're familiar with combinatorial optimization and can find algorithms that retain polynomial time and space complexity as dimensions increase, your solution will likely be O(n!), O(n**n), or something else that leaves you wondering why you got the solution in .2s for 10 teams but your algorithm has yet to stop running for 11.)
Multi-day ladder: 4/15/2017 00:57:22

rouxburg
Level 61
@knyte: to sum up, you are saying MotD is using a greedy algorithm for pairings and there is a better way of doing it.
Multi-day ladder: 4/15/2017 01:10:56


l4v.r0v 
Level 59
@rouxburg: If MotD is using a greedy algorithm and wants to instead have optimization guarantees, then he can simply install (via pip) and import the pair library (https://pypi.python.org/pypi/pair) and add some helper functions to use its pair_teams, pair_players, and assign_templates functions.

Edited 4/15/2017 01:25:40
Multi-day ladder: 4/15/2017 05:26:38


Deadman 
Level 64
@knyte

Actually, on the topic of improvements, I think you still use a greedy algo for pairing, based on the old GitHub code on Deadman1. If that's the case, I wrote a small utility library called pair (you can find it on PyPI and the source/basic documentation on my GitHub). You should be able to integrate it in seconds. It also lets you do more interesting things with non-binary template preferences.

But at the end of pairing and assignments, we're able to extend guarantees that whatever you chose was, given the scores/weights you supplied, the optimal set of assignments/pairings within the space you provided. If you're still using a greedy algorithm to pair teams, you likely cannot extend that same guarantee and will sometimes perhaps even have cases where your system either had to give up some pairings or create some bad pairings when you didn't actually need to do so- there's certainly at least a wide space of conceivable edge cases where a greedy algorithm only yields terrible results and just breaks down. You might also find this pairing guarantee useful for the more interesting problem space of the Team Ladder- pairing 2 players, then pairing 2 teams, and assigning each resulting matching to a template is exactly within the sweet spot of this utility library.
I think this would have more application in cases like the seasonal ladder where you are trying to create a lot more pairings at a time. An update cycle on MDL has an average of 2 games finished (max 4-5). A greedy approach works quite well, as two opponents can only face each other once every 5 days (so the combinations disallowed are very few). Once a pair is picked, there is always a large pool of permissible templates (51 - 2 * max veto count - 2 * last 5 templates = 27 in the worst case). At this point, picking a template at random is the best option in order to ensure players can experience all the templates with equal probability.
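For anyone checking the worst-case arithmetic above (the per-player veto cap of 7 is inferred from the formula, not stated anywhere else):

```python
templates = 51
max_vetoes_per_player = 7      # inferred from the formula above
recent_per_player = 5          # each player's last 5 templates are excluded
print(templates - 2 * max_vetoes_per_player - 2 * recent_per_player)  # 27 in the worst case
```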

If we have 4-6 players who need to be allotted a game in a cycle, and the early pairings leave some later players unable to get a game that cycle, it is still not a big problem. The next update cycle will run in 2 hours and will most likely give a legitimate pairing to the ignored player.

This is just my gut feeling here. I'll implement some telemetry to figure out how many players are ignored in a cycle on average and let that guide my ultimate decision. Looks like a neat library.

Edited 4/15/2017 05:57:13
Multi-day ladder: 4/15/2017 05:56:31


Deadman 
Level 64
@Mike

I used to play a game online where ranking was as follows:
- everybody starts at 1000
Everyone starts with a rating of 1500 on MDL.
wins bring between 1 and 6 points depending on your current ranking and on the rating of your opponent
Wins bring between 1 and 32 points on MDL, depending on your current ranking and on the rating of your opponent.
losses cost between -6 and -1 points depending on the same criteria
Losses cost between -32 and -1 points depending on the same criteria.
it seems points won or lost don't just depend on the rating differential; the system looks first at your rating to put you in a bracket of possible points to win or lose for the game, then looks at your opponent's rating to assign the high or low margin within the bracket. For example, I remember that a top player would systematically make either 1 or 2 points (the bracket), and it would be 1 or 2 depending on your opponent's ranking (1 if the opponent is too far behind, 2 if he has a decent ranking)
The top players usually make about 3-4 points on a win, and lose about 28-29. Elo takes care of how that number is determined.
Also, in every game, the points won and lost cancel each other out (if team A wins 3, team B loses 3)
True on MDL as well (subject to rounding).
And the sum of possible points won (or lost) by both teams equals 7 (or -7): in a game where the top player can make either 1 or 2 points, the opponent can make either 6 or 5 points.
True on MDL as well (subject to rounding).
Every 30 days (or was it 90? I forgot) rankings would reset to 1000.
This is handled a bit differently on MDL. I don't think it is a rewarding experience to see something you have worked hard for get wiped every 30 days. To keep ratings relevant to the present, we have the concept of game expiration instead. Any game older than 5 months will not impact ratings. This is a concept which has been proven to work well on WL ladders.
- there was inflation of up to 2000/2200 for the top player every season, and it didn't take too long to top the league if you missed the start with this rating system (so yeah, it was probably a 90-day season rather than 30 days).
MDL's rating system can provide similar guarantees. This player cracked 1700+ with just 20 games - http://md-ladder.cloudapp.net/player?playerId=611489923
Oh, and each game would notify you at the end how many points both teams made or lost (transparency), and rankings don't move over time, so they're relevant at any time you look at them.
This is a bit tricky as the "worth" of a game will always change given our concept of game expiration. A game worth 3 points to me today may be worth 10 points two months later if some of my older games expire. Therefore it is hard to show such a number. However, if you track the rating charts, it is usually quite clear how much you gain or lose. You can also use this site (http://www.3dkingdoms.com/chess/elo.htm) to predict the impact of a game. Use K=32 and the ratings of the players at game creation time (there's a small worked example at the end of this post).


All that to say that this rating system was awesome and made things very addictive, which you may need. Well, it may not be too far from your current rating system.

Is that a specific rating system which has a name, to your knowledge?
You have described something very similar to the Elo rating. In short, MDL has everything your game had.
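To make the numbers above concrete, here is roughly what that calculator does under the hood (the example ratings are hypothetical, picked only to show where the "gain about 3-4, lose about 28-29" figures come from for a heavy favorite):

```python
K = 32

def expected_score(r_a, r_b):
    # Probability that A beats B under the Elo model.
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

favorite, underdog = 1850, 1490          # hypothetical ratings at game creation time
e = expected_score(favorite, underdog)   # ~0.89
print(round(K * (1 - e), 1))             # ~3.6  -> the favorite's gain on a win
print(round(K * (0 - e), 1))             # ~-28.4 -> the favorite's change on a loss
```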

Edited 4/15/2017 05:58:28
Multi-day ladder: 4/15/2017 12:40:04


Deadman 
Level 64
Added a new leaderboard page which contains the following:
  • Players with most all-time games
  • Players with most unexpired games
  • Players with most wins
  • Players with best win rate
  • Players who have spent the longest consecutive days ranked as #1 on MDL
  • Players who have spent the longest consecutive days ranked in the top 5 on MDL
  • Players who have spent the longest consecutive days ranked in the top 10 on MDL
  • Players who have spent the longest consecutive days ranked on MDL
  • Players with the longest consecutive win streak
  • Players with the fewest vetoed templates
If you spot any errors in the data or have any suggestions, let me know. I plan to add more charts and statistics in the next update. I'll also introduce "achievements" which can be earned on MDL and will be listed on your MDL profile.

Some of these leaderboards require continuity on the ladder, so hopefully they help keep people engaged!



Edited 4/15/2017 12:40:18
Multi-day ladder: 4/15/2017 13:13:27


Kezzo
Level 61
Wohooo fucking awesome!!