{"id":1125,"date":"2005-11-10T18:50:34","date_gmt":"2005-11-11T01:50:34","guid":{"rendered":"http:\/\/dabacon.org\/pontiff\/?p=1125"},"modified":"2005-11-10T18:50:34","modified_gmt":"2005-11-11T01:50:34","slug":"ranking-by-wins","status":"publish","type":"post","link":"https:\/\/dabacon.org\/pontiff\/2005\/11\/10\/ranking-by-wins\/","title":{"rendered":"Ranking By Wins"},"content":{"rendered":"<p>Yesterday I attended a talk by local CS graduate student Atri Rudra describing work he did with Don Coppersmith (IDA, Princeton) and Lisa Fleischer (IBM, NY) on rankings for weighted tournaments (see here and <a href=\"http:\/\/eccc.hpi-web.de\/eccc-reports\/2005\/TR05-131\/index.html\">here<\/a>.)  The result is pretty neat, so I thought I&#8217;d describe it here.<br \/>\nSuppose you have a tournament where every player plays every other player exactly once (the results are more general than this, but let&#8217;s keep it simple) and there are no ties.  From this tournament, you&#8217;d like to develop a ranking of the players which best respects who beat whom.  In other words, you&#8217;d like to rank the players from first place to last place in such a way that you minimize the number of times in your ranking that a lower-placed person beats a higher-placed person.  Now, it turns out that this problem, finding the optimal such ranking, is NP-hard (this was recently shown by Ailon, Charikar, and Newman).  I.e., as our tournaments get bigger and bigger we have an ice cube&#8217;s chance in hell of finding this optimal ranking before we die \ud83d\ude09<br \/>\nSo what do you do in the real world?  Well, a simple strategy you might take is to give a ranking simply according to the number of times each player has won.  Thus the first-ranked person is the person with the most wins, the second-ranked person has the second-most wins, etc., and we break ties in some arbitrary manner.  
A natural question to ask, then, is how close to the optimal ranking this simple strategy gets.  Let&#8217;s measure our rankings by counting the number of times that our ranking fails (i.e. the number of times in our ranking that a lower-ranked person beat a higher-ranked person.)  Then there is a value for this quantity for the optimal ranking and for a ranking obtained by the simple strategy of ranking by number of wins.  What is shown in the paper by Coppersmith, Fleischer, and Rudra is that the simple ranking by the number of wins has at most five times as many failings as the optimal ranking.  This is rather amazing, in my simple mind: we get within a factor of five (at worst) by simply using a ranking which orders by number of wins (with ties broken arbitrarily.)  In fact, what is also shown by these authors is that it doesn&#8217;t matter how you break ties: the worst case is always within a factor of five.<br \/>\nKind of neat, no?  A natural question, of course, is whether there are strategies which can be efficiently computed but which beat this factor-of-five bound.  Indeed!  It turns out that there are algorithms which achieve a factor of three.  These algorithms, however, are more complicated.  What is so nice is that the simplest ranking you could come up with has such a nice approximation bound!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Yesterday I attended a talk by local CS graduate student Atri Rudra describing work he did with Don Coppersmith (IDA, Princeton) and Lisa Fleischer (IBM, NY) on rankings for weighted tournaments (see here and here.) The result is pretty neat, so I thought I&#8217;d describe it here. 
Suppose you are &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/dabacon.org\/pontiff\/2005\/11\/10\/ranking-by-wins\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Ranking By Wins&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[20],"tags":[],"class_list":["post-1125","post","type-post","status-publish","format-standard","hentry","category-computer-science"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/posts\/1125","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/comments?post=1125"}],"version-history":[{"count":0,"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/posts\/1125\/revisions"}],"wp:attachment":[{"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/media?parent=1125"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/categories?post=112
5"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dabacon.org\/pontiff\/wp-json\/wp\/v2\/tags?post=1125"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}