An alternative ranking table
Posted: Fri Nov 30, 2018 4:55 pm
It was a pleasure to see A7D from DeepMind perform so well in CASP13. My congratulations! The Z-score-based ranking provided on the homepage, however, may not tell the entire story, because it introduces some bias toward new methods such as A7D. If you produce a good model using a method that differs from everyone else's, it is easier to earn a high Z-score; it is much harder to get a high Z-score with a good model built by traditional TBM methods, since other groups will likely generate similarly good (or slightly worse) models. Of course, novel methods are always welcome and encouraged by the CASP community. Just to provide more information, I simply added up the GDT scores over all 112 'all-groups' domains and obtained the table below, in which a couple of groups have a higher overall score than A7D. Indeed, there are quite a few targets where A7D did poorly compared to other groups, which lowers its overall GDT score. Nevertheless, there is no doubt that A7D is an extremely outstanding addition to the community.
Group                 #Domains  GDT_TS
Zhang                 112       71.720
MULTICOM              112       70.980
A7D                   112       70.530
Zhang-Server          112       69.330
QUARK                 112       69.240
McGuffin              112       68.840
wfAll-Cheng           112       67.790
RaptorX-DeepModeller  112       67.700
Grudinin              112       66.210
MESHI                 112       66.150
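To illustrate the point about the two ranking schemes, here is a small sketch with made-up toy numbers (the group names and GDT values below are hypothetical, not CASP13 data). It contrasts a plain summed-GDT ranking with a simplified Z-score ranking in which, following the usual CASP convention, negative per-target Z-scores are floored at zero; a group with one standout model on a single target can then outrank a group whose summed GDT is actually higher.

```python
# Sketch with hypothetical data: summed-GDT ranking vs. a simplified
# CASP-style summed-Z ranking (negative per-target Z floored at 0).
from statistics import mean, pstdev

# Toy per-target GDT_TS scores for three made-up groups over three targets.
gdt = {
    "GroupA": [70.0, 72.0, 71.0],  # consistently good, close to the field
    "GroupB": [69.0, 71.0, 70.5],  # consistently good, close to the field
    "GroupC": [55.0, 60.0, 95.0],  # one standout model, weaker elsewhere
}
n_targets = 3

# Ranking 1: sum GDT over all targets (as in the table above).
sum_rank = sorted(gdt, key=lambda g: -sum(gdt[g]))

# Ranking 2: per-target Z-scores, negatives floored at 0, summed per group.
z_total = {g: 0.0 for g in gdt}
for t in range(n_targets):
    scores = [gdt[g][t] for g in gdt]
    mu, sigma = mean(scores), pstdev(scores)
    for g in gdt:
        z = (gdt[g][t] - mu) / sigma if sigma else 0.0
        z_total[g] += max(z, 0.0)  # floor negative Z at zero

z_rank = sorted(gdt, key=lambda g: -z_total[g])

print("by summed GDT:", sum_rank)  # GroupA, GroupB, GroupC
print("by summed Z:  ", z_rank)    # GroupA, GroupC, GroupB
```

With these numbers GroupC ranks last by summed GDT (210.0 vs. 210.5 for GroupB) but second by summed Z, because its single unusual, field-beating model on the third target earns a large Z reward while its weak targets cost it nothing after flooring. This is the kind of bias toward distinctive methods described above.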