Examples for the CASP9 Asilomar Hands-On Solution Sessions

Postby jmoult on Mon Nov 29, 2010 10:39 am

Please use this discussion thread to post your examples for the Asilomar Hands-On Solution sessions.

Background:

It has been proposed that we take advantage of all being gathered together in Asilomar to really focus on specific problems in structure modeling and brainstorm our way through to a route to real progress.

The idea is to begin by identifying a set of structure features that nobody got right, and to use those as a guide to thrashing out methods that will fix those problems. The thrash-out will happen in as many formal and informal predictor-driven sessions as people want to organize. A room will be made available for this purpose. For this to work, the whole thing must be hands-on and predictor driven. Make sure you bring your laptop.

It is important to arrive at Asilomar with the problem examples already identified. To this end, we are asking that groups identify their own problem examples.

So, bearing in mind the goals, for prediction areas relevant to your CASP9 activities, please post details of three or so:

A. Cases where you failed to find the best single template.
B. Cases where you failed to combine two or more templates.
C. Cases where you made serious alignment failures.
D. Cases where modeling of loops failed (short and long).
E. Cases where refinement away from the template either was not spotted as necessary or did not succeed.
F. Cases where you thought a specific feature of a model was correct, but it turned out otherwise.

Feel free to add categories where appropriate. Of course, please also post your thoughts on how best to do this, as well as comments on other people's examples.

On Thursday the 2nd, we will begin trying to make sense of what has been posted, so please get your stuff up by end of Wednesday.

John Moult
For CASP organizers.

Re: Examples for the CASP9 Asilomar Hands-On Solution Sessions

Postby djones on Mon Nov 29, 2010 11:50 am

Hi John,

Not quite sure I 100% follow what is intended here - what exactly are we going to do with the laptops? Run software, make powerpoints, play FoldIt?

Looking at those categories - does anyone even bother looking at A-D by eye any more? I've hardly even bothered looking at our predictions post submission - I don't really have the emotional attachment that I used to have to my submissions. Everything is such a black box nowadays that I usually don't even know what template was used or what alignment - and when I do bother to look, errors are usually just bugs, lack of specific structures in the fold library - or just what I expect from the usual error distribution. It's all just error management these days rather than anything more interesting.

Loop modelling - happy to leave that to MODELLER for the most part. It would be nice to have some alternatives to MODELLER - but what else is out there that's so robust and easy to use? I can stick almost any old nonsense into MODELLER and it spits out (usually) something vaguely like a protein - which is generally quite sufficient for what I'm doing. My biggest bugbear with MODELLER is the restrictive licensing.

E. Refinement - I don't have any individual case studies (pick almost any of the ones I tried if you want some examples that can be improved :P ), but every "statistical potential" I've tried is a complete waste of time as soon as models get below 2A - none of these potentials (all the usual suspects - no names no pack-drill - plus all of my own) are capable of identifying close to native models. I'll be interested in hearing what did work in that category, however. Definitely short of new ideas here.
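
Since the complaint above is easy to state quantitatively, here is a minimal, self-contained sketch of the sanity check it implies: does a scoring function enrich for near-native (< 2 A) decoys at the top of its ranking? The data are synthetic and the "score" stands in for any statistical potential; with scores uncorrelated to RMSD, enrichment stays near the random baseline, which is exactly the failure mode described.

```python
# Toy decoy-discrimination check; all data are synthetic placeholders.
import random

random.seed(0)

# Synthetic decoy set: (rmsd_to_native, potential_score); lower score = "better".
decoys = [(random.uniform(0.5, 10.0), random.gauss(0.0, 1.0)) for _ in range(200)]

def near_native_enrichment(decoys, top_n=10, cutoff=2.0):
    """Fraction of the top_n best-scored decoys that lie under `cutoff` A RMSD."""
    ranked = sorted(decoys, key=lambda d: d[1])  # best (lowest) score first
    hits = sum(1 for rmsd, _ in ranked[:top_n] if rmsd < cutoff)
    return hits / top_n

# Fraction of near-native decoys in the whole set = expectation for a useless score.
baseline = sum(1 for rmsd, _ in decoys if rmsd < 2.0) / len(decoys)
print(f"enrichment in top 10: {near_native_enrichment(decoys):.2f}")
print(f"random baseline:      {baseline:.2f}")
```

Swapping in a real potential and real decoys (and checking enrichment well above baseline) is the test a useful refinement score would have to pass.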

F. As I was telling Torsten today, I do think we should give some thought to our ability to correctly identify correct oligomeric structures. Statistical potentials should be able to tell native oligomers from non-native with some ease. We should try to identify a decent set of decoys for this - though maybe CAPRI is a better source of data.

Domain swapping - that's an old bugbear. Can anyone predict domain-swapping reliably?

The target that particularly disappointed me in this category was T0605. I was convinced from the quite consistent folding results we obtained that this wasn't just a boring coiled-coil structure (of course it was). We predicted a turn in the helix which formed a kind of trimeric coiled-coil from the two J-shaped chains (the short ends came together in the dimer to form the third "chain"). It looked a really convincing model - made more convincing by the enforced symmetry we applied - and the various stat. potentials all loved it. I also noted that some other servers had predicted this same J-shaped fold. I was very disappointed to see the model was wrong - though if I'd been less gung-ho I would have built a boring old coiled-coil and then offered the more interesting predictions as 2nd-5th ranked models. However, I still prefer my model to the X-ray structure - much prettier. Shame it was wrong. :roll:

- David Jones -

Re: Examples for the CASP9 Asilomar Hands-On Solution Sessions

Postby RAPTOR on Mon Nov 29, 2010 2:28 pm

Hi David,

I agree that error management is important if you want to get a decent overall ranking in CASP.
The popular consensus method can be thought of as one of the reliable error management methods.
However, I do believe that there is something new to learn even for A-D.
For example, it is challenging to systematically improve alignments for proteins without a good sequence profile, and to accurately align a single target to multiple good templates when the target is not very close to the templates.
Some groups may not perform well on the whole target set because of a lack of error management, but may do well on a specific type of target. The issue is that it is hard to identify these methods by simply looking at the overall ranking.

Jinbo

Re: Examples for the CASP9 Asilomar Hands-On Solution Sessions

Postby djones on Mon Nov 29, 2010 3:17 pm

Jinbo - absolutely agree about the importance of those issues. This is just the kind of thing we want to hear more of in the TBM roundtable discussions.
I'm heartily sick of black box methods - I want to know who and what is generating good alignments etc.

However, what I doubted was whether anyone had actually looked at any of their CASP9 data in enough detail to come up with examples to
share with the expert groups. I certainly haven't looked at mine. I couldn't tell you which of my models were bad due to poor alignments
or which were due to bad templates. It makes me a bit sad to say that I'm not interested enough to want to look at those details
any more - but times change, and part of the problem is that there are so many more models to sift through these days.

In many cases I don't even have the alignments on file (though they may well be sitting in a log file somewhere).

I am certain John will be delighted if you do find examples to analyse, however!

Re: Examples for the CASP9 Asilomar Hands-On Solution Sessions

Postby arneelof on Tue Nov 30, 2010 2:09 am

I agree with David.

Our pipelines are completely automatic, and in hardly any case have we bothered to look at the models.

Now we looked at a few cases where one of our methods performs significantly better than another, and what did we learn?
Well...

The better model is ..... better. :o

A few interesting examples can be found where Free Modelling methods perform significantly better than any template-based models, but basically I think all of these are helical coiled-coil types.


Yours

Arne

Re: Examples for the CASP9 Asilomar Hands-On Solution Sessions

Postby RAPTOR on Tue Nov 30, 2010 8:20 am

The Baker group generated amazing FM models for some targets in both CASP8 and CASP9.
I haven't carefully looked at their FM results yet, but I remember that Baker generated an excellent FM model
for T0482, a mainly-beta protein. The Sosnick group also generated a very good model for T0482.

Re: Examples for the CASP9 Asilomar Hands-On Solution Sessions

Postby arneelof on Tue Nov 30, 2010 1:00 pm

Well, in the Prediction Center's GDT_TS-based evaluation, Baker is ranked #24 in free modeling....

Re: Examples for the CASP9 Asilomar Hands-On Solution Sessions

Postby djones on Wed Dec 01, 2010 10:32 am

arneelof wrote: "Well in the predictioncenter (GDT_TS) based evaluation baker is ranked #24 in free modeling...."

I think those rankings are only for first models, though. The FOLDIT ("infinite typewriters" :) ) group in particular seems to have done a good job on T0581, but I'm not sure if that's going to be FM - I can see some similar templates.

However, I don't envy the job of the free modeling assessor - FM looks a bit of a washout overall. There was a shortage of tractable targets this time, clearly - but even so, at first glance, the results look pretty dire. :|

Re: Examples for the CASP9 Asilomar Hands-On Solution Sessions

Postby ambrishroy on Wed Dec 01, 2010 9:25 pm

Following John's suggestion, I am looking at our first models submitted by Zhang-Server. Mainly, I consider all those targets where the Zhang-Server models had a significantly lower TM-score than the best models submitted by other servers (with a TM-score difference >0.1). It seems to me that there are three major sources for our failure in these cases:

(1) Failure to split domains: The individual domains of a protein cannot be appropriately modeled if the modeling is performed on the whole chain, due either to incorrect template recognition or to insufficient ab initio sampling.
(2) Inability to fold beta-proteins by ab initio programs: Ab initio folding programs (including QUARK) tend to generate paired strands of short-range contact order independent of the query sequence. As a result, threading almost always did a better job on beta-proteins than ab initio folding, no matter whether the target is TBM or FM.
(3) I-TASSER often has difficulty picking up the best template if the best template is ranked low or is hit by only a few alignments.
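
The target-selection step described above (flagging targets where our first model trails the best competing server model by more than 0.1 TM-score) can be sketched in a few lines. The scores for the first three targets come from this post; T0515's scores are made-up placeholders to show a target that is not flagged:

```python
# Flag targets whose first-model TM-score trails the best server model by > 0.1.
our_tm  = {"T0547_4": 0.232, "T0550_1": 0.238, "T0538": 0.726, "T0515": 0.610}
best_tm = {"T0547_4": 0.532, "T0550_1": 0.581, "T0538": 0.864, "T0515": 0.650}

def flag_failures(our_scores, best_scores, gap=0.1):
    """Return the sorted list of targets where the gap to the best model exceeds `gap`."""
    return sorted(t for t in our_scores
                  if best_scores.get(t, 0.0) - our_scores[t] > gap)

print(flag_failures(our_tm, best_tm))  # T0515 trails by only 0.04, so it is not flagged
```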

Below are all the cases where I found our first models are obviously worse than the best models by other programs.

T0547_4:
Description: T0547 is a 4-domain protein from a well-packed dimer complex. The 4th domain (T0547_4) is a small tail domain of three alpha-helices (57 residues).
TM-score of the Zhang-server model1: 0.232
TM-score of the best model by other servers: 0.532
Reason of failure: Domain-split.
There were no threading alignments in the region [556-611]. However, Zhang-Server did not model this tail domain separately because its size was smaller than the automated domain-length cutoff (>80 AA). In our human prediction (Zhang), however, the domain was modeled separately, which generated a model of TM-score=0.545.
Comments: Unaligned domains should be modeled separately.
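
A sketch of the domain check discussed here (the helper names are mine, not Zhang-Server internals): find target regions with no threading coverage, then apply the automated length cutoff. With the >80 AA cutoff from the post, the unaligned tail of T0547 is not split out, reproducing the failure described.

```python
def uncovered_regions(seq_len, covered):
    """covered: list of 1-based inclusive (start, end) aligned intervals.
    Returns the maximal intervals with no threading coverage."""
    mask = [False] * (seq_len + 1)
    for s, e in covered:
        for i in range(s, e + 1):
            mask[i] = True
    gaps, start = [], None
    for i in range(1, seq_len + 1):
        if not mask[i] and start is None:
            start = i
        elif mask[i] and start is not None:
            gaps.append((start, i - 1))
            start = None
    if start is not None:
        gaps.append((start, seq_len))
    return gaps

# T0547: no threading alignments in region [556-611].
for s, e in uncovered_regions(611, [(1, 555)]):
    length = e - s + 1
    # The hard >80 AA cutoff keeps this short tail in the whole-chain model.
    decision = "split into own domain" if length > 80 else "kept in whole-chain model"
    print(s, e, length, decision)
```

Lowering the cutoff (or always splitting fully unaligned regions, as the comment suggests) would have sent this tail to separate modeling.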

T0550_1:
Description: T0550 is a two-domain beta-protein. T0550_1 is the N-terminal domain of 179 residues.
TM-score of Zhang-server model1: 0.238
TM-score of the best model by other servers: 0.581
Reason of failure: Domain-split
model1.pdb was generated by the whole-chain modeling, where LOMETS failed to pick up the best template for the N-terminal domain (3e9t_A). model2.pdb was generated from individual domains and picked up the correct template, with a TM-score=0.419. Nevertheless, the final model has a lower TM-score than the best template 3e9t_A (TM-score=0.463), indicating a failure in I-TASSER refinement.
Comments: If the domain boundary is clear, modeling should be done based on individual domains.

T0550_2:
Description: T0550 is two-domain beta protein. T0550_2 is the C-terminal domain of 159 residues.
TM-score of Zhang-server model1: 0.173
TM-score of the best model by other servers: 0.356
Reason of failure: Domain-split
model1 was generated by the whole-chain modeling, but the C-terminal domain is an FM target without any good template. model4 was generated by ab initio QUARK simulation, with a more reasonable TM-score (0.344).
Comments: When domain boundaries are clear, model the domain structures separately.

T0538:
Description: A small alpha/beta-protein of 54 residues.
TM-score of Zhang-server model1: 0.726
TM-score of the best model by other servers: 0.864
Reason of failure: Model ranking
The best template (2kruA) by LOMETS has a TM-score=0.714. Model1 from I-TASSER has a slight refinement (TM-score=0.726). Surprisingly, model4 by ab initio QUARK simulation has a much better quality with TM-score=0.833.
Comments: For small TBM proteins, ab initio modeling can generate better models than threading for some alpha and alpha/beta proteins, although on average QUARK is still worse than LOMETS for TBM targets.

T0551:
Description: T0551 is a small beta-protein of 74 residues.
TM-score of Zhang-server model1: 0.341
TM-score of the best model by other servers: 0.564
Reason of failure: Template was missed
The best template (1pcfA) is missed by the majority of the LOMETS alignments.
Comments: It is difficult to pick up the best template if it is hit by only a minority of alignments or ranked very low in the template list.

T0555:
Description: T0555 is an FM target, an alpha-protein with 148 residues.
TM-score of Zhang-server model1: 0.315
TM-score of the best model by other servers: 0.478
Reason of failure: Model ranking and incorrect target category
It is an FM target without any good template, but model1 was generated by I-TASSER using LOMETS templates. model2 was generated by ab initio QUARK simulation followed by I-TASSER refinement, and has a TM-score=0.482.
Comments: none

T0565_1:
Description: T0565 is a three domain protein and T0565_1 has 103 residues.
TM-score of Zhang-server model1: 0.513
TM-score of the best model by other servers: 0.770
Reason of failure: Domain-split
This problem is the same as for T0550_1. Model1 was generated by the whole-chain modeling, where most of the LOMETS alignments align the query to 3h41A with the N-terminus shifted to avoid gaps. But the correct alignment should have a big gap at [49-50]. Model2 was modeled as separate domains, where all LOMETS alignments on 2kt8A are correct because this template has no insertion of the big loop (unlike 3h41A). It has a TM-score=0.765.
Comments: Model the domains separately if the domains are clearly defined.

T0564:
Description: This is a small beta-protein of 89 residues
TM-score of Zhang-server model1: 0.304
TM-score of the best model by other servers: 0.495
Reason of failure: Model ranking
It was judged by LOMETS as a hard target, but there were some reasonable templates (1h9rA, 1gutA, 1wjjA) with TM-score>0.4, which however were ranked very low. Model1 from I-TASSER used normal LOMETS templates, with a TM-score=0.304. Model2 is also from I-TASSER, but with all LOMETS alignments sorted by their TM-score to the ab initio QUARK model. Although the ab initio model itself is not good (TM-score=0.304), many good templates were ranked high. As a result, model2 has a TM-score=0.483.
Comments: Sorting LOMETS by ab initio models might be a solution to rank threading templates for distant-homologous targets.
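
The re-ranking idea in that comment can be sketched as follows. The TM-score values are illustrative placeholders (computing them for real requires a structure comparison tool such as the TM-score program), and "1abcA" is a made-up template ID; the other IDs appear elsewhere in this post:

```python
# Re-rank threading templates by structural similarity to an ab initio model,
# instead of by raw threading rank.  Scores below are illustrative only.
threading_rank = ["2kvzA", "1abcA", "1h9rA", "1gutA", "1wjjA"]  # original order
tm_to_quark    = {"2kvzA": 0.21, "1abcA": 0.18, "1h9rA": 0.44,
                  "1gutA": 0.42, "1wjjA": 0.47}  # TM-score of each template
                                                 # to the QUARK model

reranked = sorted(threading_rank, key=lambda t: tm_to_quark[t], reverse=True)
print(reranked)  # reasonable-but-low-ranked templates rise to the top
```

The point is that even a mediocre ab initio model can act as a tiebreaker that promotes good templates buried deep in the threading list.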

T0569:
Description: This is a small easy protein of beta-structure, 79 residues.
TM-score of Zhang-server model1: 0.459
TM-score of the best model by other servers: 0.720
Reason of failure: Refinement problem.
The best template (3i57A) is hit by only one threading program. The second-best template (2kvzA) dominates the LOMETS threading alignments. As a result, the beta-sheet (51-63) was shifted toward the C-terminus in the Zhang-Server model1.
Comments: same issue as seen in T0551

T0571_2:
Description: This is the second domain (beta-protein, 135 residues) from T0571.
TM-score of Zhang-server model1: 0.202
TM-score of the best model by other servers: 0.331
Reason of failure: Inability to model beta-protein.
All our models were generated by QUARK ab initio modeling. But ab initio programs have difficulty generating complicated beta-structures, e.g. beta-sheets with long-range contact order or crossed beta-strands.
Comments: For beta-proteins, threading did a better job than ab initio programs in almost all cases. Hope this will change soon.

T0639:
Description: A FM target (alpha-protein, 128 residue) from a well-packed dimer complex
TM-score of Zhang-server model1: 0.318
TM-score of the best model by other servers: 0.480
Reason of failure: Inability to model (it may not be a stable domain by itself)
It is a monomeric alpha-protein from a well-packed dimer complex. Its elongated shape may mean it is not a stable domain on its own. It might be better to fold the two chains together in ab initio modeling.
Comments: none

T0623:
Description: An easy target of 220 residues
TM-score of Zhang-server model1: 0.627
TM-score of the best model by other servers: 0.773
Reason of failure: Model/template ranking
LOMETS templates are predominantly from 1a0pA, which has an incorrect orientation of the C-tail. The best template (2a3vB) was ranked very low. model4 was built from 2a3vB with the correct C-tail orientation and has a TM-score=0.785.
Comments: It is difficult to pick up low-rank good templates.

Re: Examples for the CASP9 Asilomar Hands-On Solution Sessions

Postby jianlin.cheng on Wed Dec 01, 2010 10:18 pm

I identified a list of template-based modeling cases in which our servers (e.g. MULTICOM-CLUSTER, MULTICOM-NOVEL) encountered significant difficulties during the CASP9 experiment. :cry: Our servers' failures on these cases may be due to various reasons, including the ones listed by John. However, I notice that, no matter how hard these cases were, there were almost always a few servers that managed to do well on them. What a remarkable achievement of the community! :D I would appreciate any input from the community about how to solve these modeling problems. I also hope that these cases apply to other servers and will provide useful material for the community to improve its methods.

T0532: failed to select the best alignment or model
Our server MULTICOM-NOVEL generated its first two models from two alternative alignments on the same template. Model 1 has a score of 0.57, model 2 of 0.71, but MULTICOM-NOVEL failed to put the better model (model 2) at the top. Several servers such as FAMSD, BioSerf, Phyre2, ProQ2, ProfileCRF, and Zhang-Server did well on this target; they were probably able to generate and select good alignments and models. Is it possible to distinguish our two models at the alignment level or at the model quality assessment level?
Model 1: http://sysbio.rnet.missouri.edu/casp9_h ... 0532_1.pdb
Alignment file for model 1: http://sysbio.rnet.missouri.edu/casp9_h ... 0532_1.pir
Model 2: http://sysbio.rnet.missouri.edu/casp9_h ... 0532_2.pdb
Alignment file for model 2: http://sysbio.rnet.missouri.edu/casp9_h ... 0532_2.pir
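
Since the question is whether the two models could have been separated at the model quality assessment level, here is a minimal consensus-style QA sketch - one generic way such a choice is often made (and the consensus idea Jinbo mentions above), not MULTICOM's actual protocol. It scores each candidate by its mean distance-matrix agreement (dRMSD, which needs no superposition) with the other candidates; all coordinates are toy data.

```python
# Consensus-style model QA on toy CA traces: rank candidates by mean
# pairwise dRMSD (distance-matrix RMSD, superposition-free).
from itertools import combinations
from math import dist, sqrt

def drmsd(a, b):
    """dRMSD between two equal-length coordinate lists."""
    pairs = list(combinations(range(len(a)), 2))
    s = sum((dist(a[i], a[j]) - dist(b[i], b[j])) ** 2 for i, j in pairs)
    return sqrt(s / len(pairs))

def consensus_rank(models):
    """Rank model names by mean dRMSD to all other models (lowest first)."""
    scores = {name: sum(drmsd(xyz, other)
                        for n2, other in models.items() if n2 != name)
                    / (len(models) - 1)
              for name, xyz in models.items()}
    return sorted(scores, key=scores.get)

# Four toy 4-residue "models": three near-identical, one outlier fold.
base = [(0.0, 0.0, 0.0), (3.8, 0.0, 0.0), (7.6, 0.0, 0.0), (11.4, 0.0, 0.0)]
models = {
    "model_1": base,
    "model_2": [(x + 0.1, y, z) for x, y, z in base],
    "model_3": [(x, y + 0.1, z) for x, y, z in base],
    "outlier": [(0.0, 0.0, 0.0), (0.0, 3.8, 0.0), (3.8, 3.8, 0.0), (3.8, 0.0, 0.0)],
}
print(consensus_rank(models))  # the outlier fold ranks last
```

Of course, consensus scoring only separates a pair of alternative models when enough independent models are available, which is exactly why the question of alignment-level discrimination is also worth asking.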

T0540: Failed to find the best single template
The set of local sequence / profile alignment tools used by our servers failed to identify the best template (2KD2 ?). However, servers such as HHPredA, RaptorX, and even the aging SAM-T02 got this template right. What might have contributed to their success?

T0549: Our server (e.g. MULTICOM-NOVEL) found the best template (2KPM?), but failed to generate a good alignment using a number of tools including psi-blast and hhsearch. However, a few servers (e.g., Jiang_Assembly, RaptorX, Phyre2, ProfileCRF, SAM-T08, BioSerf) did well. How did these servers successfully generate a good alignment or model?
Our model 1:
http://sysbio.rnet.missouri.edu/casp9_h ... 0549_1.pdb
The alignment file for model 1: http://sysbio.rnet.missouri.edu/casp9_h ... 0549_1.pir

T0550: Failed to select a good template (2DPK) for the first domain of this target
This target has two domains - a hard template-based domain and an ab initio domain. The locally installed hhsearch alignment tool was able to find the 2DPK template in our own template profile database for the first domain, but with a high e-value (i.e. 40). However, it generated a very short alignment, which was not selected. My question is how to generate a longer, better alignment for this case. Here is the ranking and alignment file: http://sysbio.rnet.missouri.edu/casp9_h ... /T0550.hhr

T0551: got the best template, but failed to generate a good alignment.
The local HHSearch tool identified the potentially best template (1PCF) in our template profile database, but generated a short alignment that covers only half of the sequence. Thus, our modeling failed miserably. I noticed that some servers (HHPred, Raptor, Phyre2, GSMetaServer) did very well on this target. I would appreciate any input on how to use HHSearch or other tools better on very remotely homologous templates to generate better alignments in cases like this.
Our alignment file: http://sysbio.rnet.missouri.edu/casp9_h ... 0551_1.pir
Our model file: http://sysbio.rnet.missouri.edu/casp9_h ... 0551_1.pdb

T0557: our servers (e.g. MULTICOM-CLUSTER) used the best template, but failed to generate a good alignment or to use multiple templates. They identified the best template 3LMM, which was also used by other servers such as QUARK and BAKER-ROSETTASERVER, yet those two servers generated significantly better models. Was it because they used multiple templates, a better alignment, or both?
Our alignment file: http://sysbio.rnet.missouri.edu/casp9_h ... 0557_1.pir
The model file: http://sysbio.rnet.missouri.edu/casp9_h ... 0557_1.pdb

T0562: MULTICOM-NOVEL got the best template (3LWX), but failed to generate a good alignment and mistakenly used other, less similar templates (1SU0, 2QQ4). Other servers such as Bilab-ENABLE used the single template and generated the best alignment for this target. I am keen to learn how Bilab-ENABLE managed to generate a better alignment. What factors were taken into account? Did multiple templates cause a problem in our modeling in this case?
Our alignment file: http://sysbio.rnet.missouri.edu/casp9_h ... 0562_1.pir
The model file: http://sysbio.rnet.missouri.edu/casp9_h ... 0562_1.pdb

T0564: all of our profile alignment tools failed to find the good template (1WJJ). I am keen to learn how other servers such as Raptor, HHpred, and Seok-Server were able to select it. Was it due to higher-quality profiles, a better alignment strategy, or other features?

T0568: our servers got good templates (2PN5 & 2P9R), but weren't able to model the uncovered N-terminal region (54 residues) well. Our servers tried to refine this front tail, but it didn't seem to help. Some servers such as Phyre2, SAM-T08, Pcomb, GSmetaserver, QUARK, and BAKER-ROSETTASERVER did well on this target. I wonder what made the difference. Was it front-end refinement or a better alignment?
Our alignment file: http://sysbio.rnet.missouri.edu/casp9_h ... 0568_1.pir
Our model file: http://sysbio.rnet.missouri.edu/casp9_h ... 0568_1.pdb

T0579: all our profile alignment tools failed to find the best template (2QQR), which is a two-domain protein. But quite a few servers got this right. I wonder what approaches / information these servers used to successfully identify this template.

T0588: our servers used some reasonable templates (1QAZ, 1RW9), but were not able to use one of the best templates (3EV1 ?) that was also identified. I wonder how other servers (e.g. RaptorX, Zhang-Server) chose the better template such as 3EV1.

T0598: MULTICOM-CLUSTER used two good templates (2OSO, 2OSD), but generated a worse model than servers such as Zhang-Server, pro-sp3-TASSER, and gws, which used templates (2OSO, 2OSD, 2Z9F, 2C0J, 3CUE). Was it because these servers generated better loops or tails using some refinement protocol?
Our alignment file: http://sysbio.rnet.missouri.edu/casp9_h ... 0598_1.pir
The model file: http://sysbio.rnet.missouri.edu/casp9_h ... 0598_1.pdb

T0602: MULTICOM-NOVEL used a good template (3A7M), but failed to generate a good alignment and model. I am keen to learn how other servers such as Seok-server, Zhang-Server, and chunk-TASSER managed to generate some of the best models for this target using the same template. Which part (e.g. model generation, alignment, or loop modeling) contributed to the success?
The alignment file: http://sysbio.rnet.missouri.edu/casp9_h ... 0602_1.pir
The model file: http://sysbio.rnet.missouri.edu/casp9_h ... 0602_1.pdb

T0604 is a three-domain protein; most servers including ours failed on the first domain. I am keen to learn how servers such as Zhang-Server and pro-sp3-TASSER got the best template for this domain.

T0612: our servers were able to get the core of the two-layer beta sheets correct using templates (3FRP, 3FN9). However, none of these templates provides a good conformation for packing the first two strands with the rest of the beta sheets. A few servers such as Zhang-Server did very well on this target. I am keen to learn how Zhang-Server managed to pack the first two strands. Was it due to refinement or better alignments?

T0628: Our server did well on the first domain, but failed on the second domain using the single template 2E2O. Other servers did very well: Zhang-Server using multiple templates including 2E2O, and BAKER-ROSETTASERVER using the single template 1HUX. I wonder what went wrong in our case; the problem could be in the alignment or in template ranking. I am keen to learn what contributed to the success of the other two servers on this target.

T0630: there was a serious challenge in selecting the best template and generating a good alignment in this case. Several templates were available, such as 2IF6, 2JYX, 2HBW, 2EVR. The best template is 2IF6, which covers the entire target; the other templates cover only either the beta-barrel region or the helix regions. Our pairwise model selection mistakenly chose the models generated from 2JYX, 2HBW, and 2EVR because they were predominant. Another challenge lay in the alignment with 2IF6, which could lead to a model with a long loop from residue 38 to residue 62. All these challenges confused our servers, which even predicted that the target had two domains. I am keen to learn how other servers such as RaptorX and Jiang_Assembly managed to rank 2IF6 at the top out of many other templates and generate a good alignment.
Here are the five models predicted by MULTICOM-CLUSTER; model 5, based on 2IF6, is the best. The other models are based on other templates.
The five models are:
http://sysbio.rnet.missouri.edu/casp9_h ... 0630_1.pdb
http://sysbio.rnet.missouri.edu/casp9_h ... 0630_2.pdb
http://sysbio.rnet.missouri.edu/casp9_h ... 0630_3.pdb
http://sysbio.rnet.missouri.edu/casp9_h ... 0630_4.pdb
http://sysbio.rnet.missouri.edu/casp9_h ... 0630_5.pdb
The alignment file for model 5:
http://sysbio.rnet.missouri.edu/casp9_h ... 0630_5.pir
