[Dock-fans] RE: [Dockdev] Docking and Scoring algorithms
Vincent.Leroux at edam.uhp-nancy.fr
Thu Feb 24 08:58:47 PST 2005
Well if I would like to try to "compare" results from different docking methods I would take a MD program to apply a forcefield, minimize the results and then compute the interaction energies if I need them and compare the structures with X-Ray data if it is available... I would never use the scoring functions, which are not really correlated to the binding energies, and are not meant to. See the GOLD documentation for example, you will see that the GOLDscore function is tweaked to make the GA better "explore" the conformational space, rather than to match binding energies closer. The 37.5% boost for the VDW term over the HB term seems to be here (this is not very clear) to compensate for a tendency to favor HB too much over VDW... You don't want a "better" scoring function if it lowers your chances of finding a good structure. See, the people at the CCDC apparently made their scoring function "worse", but like this their program is more efficient... Docking can be good for quickly classifying a large number of ligands, but it is not meant to directly give reliable quantitative results (you have FEP methods for that, but the cost is not the same...).
When doing virtual screening from large molecular databases, you should use several methods of increasing precision (from rigid docking to flexible docking, typically), with the aim of filtering out "bad" molecules at each step rather than picking out "good" ones. At some point you have to stop the calculations even if too many "not yet bad" molecules remain, and pick, say, the 10 best results according to the scoring function (assuming at least a small probability that the corresponding compounds are on average "better" than the others), plus 20 more from among the rest, aiming for the highest diversity of binding mode without straying too far from the known ligands, which should be among the top solutions. The SILVER program bundled with the latest version of GOLD seems well suited to that selection step. Buy those structures or synthesize them, and then I hope you have an experimental protocol for testing them against your target. If you discover some unknown binders for your target this way, you should not complain about the lack of precision of docking scoring functions ;-)
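The "10 best plus 20 diverse" selection described above can be sketched in a few lines. Everything below is an illustrative stand-in, not SILVER or GOLD: the scores, the bit-set "fingerprints", and the greedy max-min diversity rule are all synthetic assumptions.

```python
# Sketch: keep the top 10 by docking score, then add 20 more compounds
# chosen greedily for binding-mode diversity (max-min Tanimoto).
# All data here are synthetic; no real docking program is involved.
import random

def tanimoto(a, b):
    """Tanimoto similarity between two sets of fingerprint bits."""
    union = len(a | b)
    return len(a & b) / union if union else 0.0

def select_candidates(mols, n_top=10, n_diverse=20):
    """mols: list of (name, score, fingerprint-set); lower score = better."""
    ranked = sorted(mols, key=lambda m: m[1])
    picked = ranked[:n_top]                     # best-scored compounds
    pool = ranked[n_top:]
    while pool and len(picked) < n_top + n_diverse:
        # greedy max-min: add the compound least similar to any pick so far
        best = min(pool, key=lambda m: max(tanimoto(m[2], p[2]) for p in picked))
        picked.append(best)
        pool.remove(best)
    return picked

random.seed(0)
mols = [(f"mol{i}", random.uniform(-60.0, -20.0),
         frozenset(random.sample(range(128), 24))) for i in range(200)]
picks = select_candidates(mols)
print(len(picks))  # 30
```

A real workflow would of course use chemical fingerprints (and perhaps binding-mode clustering) instead of random bit sets, but the shape of the selection step is the same.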
And afterwards you can still try to improve the new binders with MD, look for further clues with additional docking runs based on virtual databases built from the binders with small combinatorial-chemistry modifications, and so on...
********************** On 23/02/2005 12:46, John Irwin wrote:
Hi David cc dockdev at docking.org, dock-fans at docking.org
Thanks for your contribution to the DOCK developers' discussion
group. We welcome all comments and opinions! Arguably this thread fits more
in with the dock-fans mailing list, so I've copied them on this.
I wonder whether there is a misunderstanding about molecular docking
and virtual screening lurking behind what you've written. In our experience,
molecular docking (virtual screening in high throughput) is considered to be
doing well retrospectively if it can
a) enrich known binders 20-fold over random from a database of drug-like
molecules, and
b) reproduce the experimental binding geometries qualitatively (McGovern &
Shoichet, J Med Chem. 2003 Jul 3;46(14):2895-907).
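Criterion (a) is usually quantified as an enrichment factor: the hit rate among the top-scoring slice of the ranked database divided by the overall hit rate. A minimal sketch with toy data (the compound names and ranking below are invented for illustration):

```python
# Enrichment factor at a given fraction of the ranked database:
# (hit rate in the top slice) / (hit rate of the whole database).
def enrichment_factor(ranked_ids, known_binders, top_frac=0.01):
    n_top = max(1, int(len(ranked_ids) * top_frac))
    hits_top = sum(1 for m in ranked_ids[:n_top] if m in known_binders)
    overall_rate = len(known_binders) / len(ranked_ids)
    return (hits_top / n_top) / overall_rate

# Toy example: 1000 compounds, 10 known binders, 5 of them ranked
# inside the top 1% (i.e. the top 10 positions).
db = [f"m{i}" for i in range(1000)]
binders = set(db[:10])
ranking = db[:5] + db[100:] + db[5:10] + db[10:100]
print(enrichment_factor(ranking, binders))  # 50.0
```

An EF of 20 at some cutoff is the kind of retrospective performance described in (a); random ranking gives an EF of about 1.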
Prospectively, we consider docking a success if we purchase and test
50 compounds from among the top 500 of a database of purchasable, drug-like
compounds (e.g. ZINC http://zinc.docking.org/ ) and find 3 previously
unknown binders.
That's a pretty low bar, but it is considered the state of the art
in this field. If someone shows me a quantitative comparison between docking
energies and experimental binding affinities, unless it is within a narrow
SAR series (and therefore not very interesting), my instinct is to believe
it is an accidental correlation, and that people are fooling themselves into
believing the correlation is significant.
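One way to check whether such a correlation is more than accidental is a permutation test: shuffle the affinities many times and see how often a random pairing reaches the observed R^2. The sketch below uses purely synthetic numbers (nothing from a real docking run) to show that even unrelated data yield a nonzero R^2:

```python
# Sanity check for a claimed docking-score/affinity correlation:
# compare the observed R^2 against R^2 values from shuffled pairings.
# All data here are synthetic random numbers.
import random

def r_squared(x, y):
    """Squared Pearson correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov * cov / (vx * vy)

def permutation_pvalue(scores, affinities, n_perm=2000, seed=1):
    """Return (observed R^2, fraction of shuffles scoring at least as high)."""
    rng = random.Random(seed)
    observed = r_squared(scores, affinities)
    shuffled = list(affinities)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(shuffled)
        if r_squared(scores, shuffled) >= observed:
            hits += 1
    return observed, hits / n_perm

# 20 pairs of completely unrelated random numbers
rng = random.Random(42)
scores = [rng.gauss(0, 1) for _ in range(20)]
affinities = [rng.gauss(0, 1) for _ in range(20)]
obs, p = permutation_pvalue(scores, affinities)
print(f"R^2 = {obs:.3f}, permutation p = {p:.3f}")
```

With only a few dozen receptor-ligand pairs, an R^2 of 0.1-0.2 from pure noise is not unusual, which is exactly the "fooling themselves" trap described above.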
You can list a dozen reasons why docking shouldn't even work, much
less provide good correlations with experimental binding affinities.
Indeed, in our experience, 90+% of top docking hits are not actual binders.
Correlate that! Hardly worth repeating to this audience, the reasons docking
shouldn't work include but are not limited to the approximations of the
scoring function, the inadequate treatment of desolvation and entropy, and
the rigid or incomplete sampling of receptor structure.
We think of docking as a screen that sorts a database into "more
likely" (top scorers) and "less likely" (the rest) to actually bind
experimentally. Of course, we are actively working to improve docking, and
there is reason to hope that docking can be improved. One way to do this is
to focus on the decoys, and ask what makes molecules score well in the
computer when they do not bind experimentally. This is one area of research
in the lab, and the subject of a paper that will appear shortly from Graves
and Shoichet 2005.
You are right to be cautious, and I encourage you to perform due
diligence on DOCK5 or any other docking program you choose to use. We
certainly do (see McGovern 2003 as above). But I think you also need to have
realistic expectations of docking technology. As you point out, getting free
energy perturbation calculations to correlate with experiment has been
difficult enough. What do you expect with docking calculations that spend a
few seconds or even a few minutes per molecule?
John Irwin http://johnirwin.compbio.ucsf.edu
> It seems to me that there are a multitude of docking
> algorithms out there, all of which have individual quirks
> (Kitchen et al.), and none of which work perfectly for every
> type of interaction (since simplifying a thermodynamic binding
> potential energy calculation must obviously make assumptions).
> Dr. Kuntz recently wrote his own review of docking
> methodologies (Brooijmans & Kuntz).
> It seems clear in these reviews that the most challenging
> task of the docking function is to reproduce correct binding
> energies. Even in the MD community, it has been difficult to
> create force-fields that do this task, and P-Chemists are
> working toward quantum corrections to these methods (what
> seems like the opposite direction).
> I have recently tried scoring several receptor-ligand
> complexes (those that worked from the Gold validation set)
> with different scoring functions, and found that the average
> correlation (R^2) between different scoring functions is about
> 0.3, i.e. only about 30% shared variance. Dock5, as it is installed here,
> however, gave scores with a correlation of about 0.02, right
> around the limit of statistical validity for our dataset (~73
> receptor-ligand pairs).
> I have also tried changing the random number generation
> seed, and found that (with the parameters included in the
> methotrexate example) Dock's energy scores vary by +/- 0.5,
> which (I believe) is acceptable.
> Anyway, I am highly skeptical of Dock5's scoring algorithm,
> and uncertain about publishing any work based upon it until I
> have been able to reproduce a successful screening. This is,
> of course, difficult to do since assembling a list of relevant
> compounds with known binding affinities in the same conditions
> is time-consuming.
> Brooijmans, N. & Kuntz, I. D. (2003) Annu. Rev. Biophys.
> Biomol. Struct. 32, 335-373.
> Kitchen, D. B., Decornez, H., Furr, J. R. & Bajorath, J. (2004)
> Nature Rev. Drug Discovery 3, 935-949.
> Gold Validation Set:
> ~ David Rogers
> Graduate Student
> Department of Chemistry
> University of Cincinnati
Vincent LEROUX PhD student
Equipe de Dynamique des Assemblages Membranaires,
UMR-CNRS 7565, Université Henri Poincaré, Nancy I,
BP 239, 54506 Vandoeuvre-les-Nancy, France
e-mail: Vincent.Leroux at edam.uhp-nancy.fr