15th Community Wide Experiment on the
Critical Assessment of Techniques for Protein Structure Prediction
TS Analysis: Z-score based relative group performance (GDT_TS)

    Models:

    • Ranking on the models designated as "1"
    • Ranking on the models with the best scores

    Groups:

    • All groups on 'all groups' targets
    • Server groups on 'all groups' + 'server only' targets

    Formula and Domains:

      The ranking of groups is based on the analysis of z-scores for GDT_TS.
      Domain categories:
    • TBM-easy
    • TBM-hard
    • TBM/FM
    • FM
    Columns: #, GR code, GR name, Domains Count,
             SUM Zscore (>-2.0), Rank SUM Zscore (>-2.0), AVG Zscore (>-2.0), Rank AVG Zscore (>-2.0),
             SUM Zscore (>0.0), Rank SUM Zscore (>0.0), AVG Zscore (>0.0), Rank AVG Zscore (>0.0)
1 229 Yang-Server 108 82.5877 2 0.7832 2 92.5982 1 0.8574 1
2 162 UM-TBM 109 87.6934 1 0.8045 1 91.8410 2 0.8426 2
3 035 Manifold-E 109 46.3607 5 0.4253 7 63.2197 3 0.5800 4
4 475 MULTICOM_refine 109 47.7746 3 0.4383 5 54.8979 4 0.5037 6
5 120 MULTICOM_egnn 109 47.1138 4 0.4322 6 52.8480 5 0.4848 7
6 158 MULTICOM_deep 109 45.4041 6 0.4166 8 51.0686 6 0.4685 8
7 288 DFolding-server 109 34.0187 8 0.3121 11 49.9465 7 0.4582 10
8 086 MULTICOM_qa 109 43.1930 7 0.3963 9 49.4940 8 0.4541 11
9 462 MultiFOLD 109 31.6482 10 0.2904 13 47.2831 9 0.4338 12
10 446 ColabFold 109 26.8437 12 0.2463 15 46.8624 10 0.4299 13
11 166 RaptorX 109 33.5969 9 0.3082 12 45.8090 11 0.4203 14
12 125 UltraFold_Server 109 26.4331 13 0.2425 16 40.3241 12 0.3699 15
13 298 MUFold 109 28.7393 11 0.2637 14 39.7967 13 0.3651 16
14 131 Kiharalab_Server 109 10.7099 25 0.0983 29 39.5082 14 0.3625 17
15 098 GuijunLab-Assembly 109 21.7402 15 0.1995 19 36.6711 15 0.3364 19
16 188 GuijunLab-DeepDA 109 26.3224 14 0.2415 17 36.5916 16 0.3357 20
17 466 Shennong 105 13.6144 21 0.2059 18 35.9139 17 0.3420 18
18 383 server_124 109 17.0985 19 0.1569 24 35.5967 18 0.3266 21
19 403 server_126 109 19.5558 17 0.1794 21 35.5320 19 0.3260 22
20 270 NBIS-AF2-standard 109 21.4961 16 0.1972 20 34.1607 20 0.3134 24
21 245 FoldEver 109 15.2209 20 0.1396 26 33.8134 21 0.3102 25
22 353 hFold 106 12.9372 22 0.1787 22 33.3984 22 0.3151 23
23 151 IntFOLD7 109 6.2485 28 0.0573 33 32.9872 23 0.3026 26
24 018 server_123 109 9.5672 27 0.0878 31 31.1801 24 0.2861 28
25 261 server_122 109 10.9444 24 0.1004 28 30.8276 25 0.2828 29
26 264 server_125 109 10.3617 26 0.0951 30 30.2311 26 0.2773 30
27 481 GuijunLab-Meta 107 11.8944 23 0.1485 25 29.1492 27 0.2724 31
28 282 GuijunLab-Threader 109 17.2316 18 0.1581 23 28.1772 28 0.2585 32
29 239 Yang-Multimer 45 -102.7890 37 0.5602 3 27.4611 29 0.6102 3
30 089 GuijunLab-RocketX 108 5.8266 29 0.0725 32 27.3532 30 0.2533 33
31 073 DFolding-refine 106 -36.3592 34 -0.2864 39 25.3667 31 0.2393 35
32 133 ShanghaiTech-TS-SER 105 -13.0873 32 -0.0485 36 25.2661 32 0.2406 34
33 011 GinobiFold-SER 105 -11.1661 31 -0.0302 35 24.6080 33 0.2344 36
34 215 XRC_VU 80 -46.9128 35 0.1386 27 23.5756 34 0.2947 27
35 071 RaptorX-Multimer 45 -106.6032 38 0.4755 4 23.4830 35 0.5218 5
36 450 ManiFold-serv 109 -0.0786 30 -0.0007 34 23.0836 36 0.2118 37
37 390 NBIS-AF2-multimer 50 -98.9688 36 0.3806 10 23.0656 37 0.4613 9
38 443 BAKER-SERVER 109 -20.7476 33 -0.1903 37 20.9480 38 0.1922 38
39 427 MESHI_server 76 -117.1133 39 -0.6725 40 8.6120 39 0.1133 40
40 219 Pan_Server 104 -135.0695 40 -1.2026 41 4.7695 40 0.0459 41
41 315 Cerebra 109 -197.2832 46 -1.8099 46 2.9270 41 0.0269 43
42 370 wuqi 87 -153.9438 41 -1.2637 42 2.7157 42 0.0312 42
43 046 Manifold-LC-E 15 -191.8055 45 -0.2537 38 2.4712 43 0.1647 39
44 368 FALCON2 107 -171.3804 42 -1.5643 43 2.3896 44 0.0223 44
45 333 FALCON0 107 -171.3804 42 -1.5643 43 2.3896 44 0.0223 44
46 212 BhageerathH-Pro 103 -180.8593 44 -1.6394 45 1.2395 46 0.0120 46
47 280 ACOMPMOD 78 -210.5824 47 -1.9049 47 0.1265 47 0.0016 47
The cumulative z-scores in this table are calculated according to the following procedure (example for the "first" models):
1. Calculate z-scores from the raw scores of all "first" models (the corresponding values from the main result table);
2. Remove outliers: models with z-scores below the tolerance threshold (set to -2.0);
3. Recalculate z-scores on the reduced dataset;
4. Set z-scores below the penalty threshold (either -2.0 or 0.0) to the value of that threshold.
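The SUM and AVG columns in the table follow from this procedure. Below is a minimal Python sketch of one way to reproduce them; it assumes a hypothetical input raw_scores mapping each domain to a {group: GDT_TS} dict for the "first" models, uses the population standard deviation for the z-scores, and treats domains a group did not predict as contributing the penalty value to its SUM while its AVG is averaged only over the domains it did predict (the Domains Count column). Those last two conventions are inferred from the numbers in the table rather than stated on this page.

    # Sketch of the cumulative z-score procedure (assumptions noted above).
    from statistics import mean, pstdev

    OUTLIER_THRESHOLD = -2.0  # step 2: tolerance threshold for dropping outliers


    def zscores(values):
        """Map raw scores to z-scores: (x - mean) / stdev (population stdev assumed)."""
        mu, sigma = mean(values), pstdev(values)
        return [(v - mu) / sigma if sigma else 0.0 for v in values]


    def domain_zscores(scores_by_group):
        """Steps 1-3 for one domain: z-scores, outlier removal, recalculation."""
        groups = list(scores_by_group)
        first_pass = dict(zip(groups, zscores([scores_by_group[g] for g in groups])))
        kept = [g for g in groups if first_pass[g] >= OUTLIER_THRESHOLD]
        if not kept:
            return {}
        return dict(zip(kept, zscores([scores_by_group[g] for g in kept])))


    def group_totals(z_by_domain, penalty):
        """Step 4 plus the SUM/AVG columns for one penalty threshold (-2.0 or 0.0).
        Assumed convention: a domain the group did not predict adds the penalty to
        its SUM; AVG is taken only over the domains it did predict."""
        n_domains = len(z_by_domain)
        predicted_sum, counts = {}, {}
        for per_domain in z_by_domain.values():
            for g, z in per_domain.items():
                predicted_sum[g] = predicted_sum.get(g, 0.0) + max(z, penalty)
                counts[g] = counts.get(g, 0) + 1
        return {
            g: {
                "Domains Count": counts[g],
                "SUM Zscore": predicted_sum[g] + penalty * (n_domains - counts[g]),
                "AVG Zscore": predicted_sum[g] / counts[g],
            }
            for g in predicted_sum
        }


    # Example wiring (hypothetical raw_scores):
    # z_by_domain  = {d: domain_zscores(s) for d, s in raw_scores.items()}
    # table_minus2 = group_totals(z_by_domain, penalty=-2.0)  # SUM/AVG Zscore (>-2.0)
    # table_zero   = group_totals(z_by_domain, penalty=0.0)   # SUM/AVG Zscore (>0.0)

The Rank columns then simply order the groups by the corresponding SUM or AVG value in descending order.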
Protein Structure Prediction Center
Sponsored by the US National Institute of General Medical Sciences (NIH/NIGMS)
© 2007-2022, University of California, Davis