[Dock-fans] Fwd: Re: Fwd: Large ligand on Large grid Kill mpirun

Francesco Pietra chiendarret at yahoo.com
Mon Dec 17 09:45:24 PST 2007


A bad typing mistake below: I meant "cluster analysis" where I wrote
"average". I know that averaging would take things far from what I need to
compare.
Sorry
francesco

--- Francesco Pietra <chiendarret at yahoo.com> wrote:

> Date: Mon, 17 Dec 2007 00:11:12 -0800 (PST)
> From: Francesco Pietra <chiendarret at yahoo.com>
> Subject: Re: [Dock-fans] Fwd: Large ligand on Large grid Kill mpirun
> To: Scott Brozell <sbrozell at scripps.edu>
> 
> Hi Scott:
> This is not to bring up again the problem below with super-large ligands. Rather,
> it is to say that I have made good progress with Amber9 in treating the best-scored
> protein-ligand complex from DOCK6.1 amber-score (for a 118-atom ligand).
> 
> The complex was placed in an 80x80 A hydrated POPC membrane and treated along
> the lines of Amber tutorial A3. Everything went smoothly. Now I have the first
> production run (500 ps).
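> 
> (For what it is worth, a production mdin along the following lines is only a
> sketch with guessed values, not necessarily what tutorial A3 prescribes; for a
> membrane one may want anisotropic pressure scaling, ntp=2, instead of ntp=1:
> 
>   500 ps NPT production
>    &cntrl
>     imin=0, irest=1, ntx=5,
>     ntb=2, ntp=1, taup=2.0,
>     cut=10.0, ntc=2, ntf=2,
>     ntt=3, gamma_ln=2.0, temp0=300.0,
>     nstlim=250000, dt=0.002,
>     ntpr=1000, ntwx=1000, ntwr=10000,
>    /
> 
> i.e. 250,000 steps of 2 fs = 500 ps, with SHAKE on bonds to hydrogen and a
> Langevin thermostat at 300 K.)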
> 
> Before continuing with production runs, my question is how to compare the DOCK
> amber-score docking with the Amber MD. Broadly, MD has kept the ligand in the
> same area of the pore, while examination of a couple of MD snapshots shows
> differences in detail. I mean that, within a 2.6 A distance from the ligand,
> the protein residues are different in amber-scoring vs Amber MD.
> 
> With Chimera I have also carried out an RMSD analysis of the production run,
> but I guess that for the comparison of Amber MD with DOCK I should carry out
> "average" with ptraj.
> 
> If that is correct, it would only provide guesses about the closeness between
> protein residues and the ligand. What I would really like to calculate are
> binding free energies for the ligand in the complex vs free (both immersed in
> the membrane), but I find no way to do it. A post on the Amber list did not
> provide help. Aqvist in Uppsala carries out the analysis of binding free
> energies with his "linear interaction energy" method, which works in an
> explicit environment. Apparently, there is no similar software in Amber, where
> MM_PBSA, if I understand correctly, resorts to GB, which is not what I would
> like to do.
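> 
> (For reference, the linear interaction energy estimate mentioned above is
> usually written as
> 
>   dG_bind ~= alpha * ( <V_vdW>_bound - <V_vdW>_free )
>            + beta  * ( <V_elec>_bound - <V_elec>_free ) + gamma
> 
> where the <...> are MD averages of the ligand-surroundings interaction
> energies with the ligand bound and free, alpha and beta were about 0.18 and
> 0.33-0.5 in Aqvist's parameterizations, and gamma is a constant that is often
> zero. The interaction energies could in principle be extracted from sander
> output, but as far as I know there is no ready-made LIE tool in Amber9.)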
> 
> Thanks for your comments
> 
> francesco
> 
> 
> --- Scott Brozell <sbrozell at scripps.edu> wrote:
> 
> > Hi Francesco,
> > 
> > There may be some opportunities to reduce the memory use of
> > Orient::match_ligand.  However,  I'll be out of contact for
> > about two weeks.
> > 
> > Scott
> > 
> > On Sun, 18 Nov 2007, Francesco Pietra wrote:
> > 
> > > There are 4 GB of RAM (in 2 GB Kingston ECC 400 MHz modules) per processor
> > > (for a total of 16 GB of RAM), and Linux is set to let all of it be used.
> > > Perhaps you could suggest how to configure things better so that all the RAM
> > > is used by DOCK.
> > >
> > > With the slightly smaller ligands, where DOCK ran correctly, I detected
> > > through "top -i" a maximum of 24% memory use by one of the two processors in
> > > use. All four processors were used only during amber rescore, and only for
> > > the initial half of the procedure; then the number dropped to two even for
> > > amber rescore.
> > > Thanks
> > > francesco
> > >
> > > --- Scott Brozell <sbrozell at scripps.edu> wrote:
> > >
> > > > Hi,
> > > >
> > > > The failure occurs in Orient::match_ligand.
> > > > This is the focal point of much memory consumption.
> > > > The simplest patch would be additional hardware memory.
> > > >
> > > > Scott
> > > >
> > > > On Fri, 16 Nov 2007, Francesco Pietra wrote:
> > > >
> > > > > I forgot an important observation: docking procedures went on regularly
> > > > > even for the largest (155 and 165 atoms) ligands when, on previous runs,
> > > > > the spheres covered ca 1/4 of the protein.
> > > > >
> > > > > --- Francesco Pietra <chiendarret at yahoo.com> wrote:
> > > > >
> > > > > > Date: Fri, 16 Nov 2007 02:11:22 -0800 (PST)
> > > > > > From: Francesco Pietra <chiendarret at yahoo.com>
> > > > > > Subject: Large ligand on Large grid Kill mpirun
> > > > > > To: dock-fans <dock-fans at docking.org>
> > > > > >
> > > > > > Following careful comparative runs, I arrived at the conclusion that
> > > > > > too-large ligands on large grids kill mpirun.
> > > > > >
> > > > > > With grid files and selected_spheres.sph (10,536,750 grid points), for
> > > > > > ligands of 118 atoms or less, the docking and amber rescore procedures
> > > > > > went to completion OK. The selected spheres were centered (Magis'
> > > > > > sphere_select) symmetrically in the protein with a 25 A radius, whereby
> > > > > > the spheres covered most of the protein.
> > > > > >
> > > > > > mpirun failure occurred with two ligands of much the same shape as above,
> > > > > > though larger (155 and 165 atoms).
> > > > > >
> > > > > >
> > > > > > How mpirun was killed shortly after launching the rigid score procedure
> > > > > > is shown by the screen output captured with "mpirun -np 4 dock6.mpi
> > > > > > -i rigid.in -o rigid.out 2>&1 | tee screen.out":
> > > > > >
> > > > > > Initializing MPI Routines...
> > > > > > Initializing MPI Routines...
> > > > > > Initializing MPI Routines...
> > > > > > Initializing MPI Routines...
> > > > > > terminate called after throwing an instance of 'std::bad_alloc'
> > > > > >   what():  St9bad_alloc
> > > > > > [deb64:03725] *** Process received signal ***
> > > > > > [deb64:03725] Signal: Aborted (6)
> > > > > > [deb64:03725] Signal code:  (-6)
> > > > > > [deb64:03725] [ 0] /lib/libpthread.so.0 [0x2b21e08e0410]
> > > > > > [deb64:03725] [ 1] /lib/libc.so.6(gsignal+0x3b) [0x2b21e0a1807b]
> > > > > > [deb64:03725] [ 2] /lib/libc.so.6(abort+0x10e) [0x2b21e0a1984e]
> > > > > > [deb64:03725] [ 3] /usr/lib/libstdc++.so.6(_ZN9__gnu_cxx27__verbose_terminate_handlerEv+0x114) [0x2b21e0583424]
> > > > > > [deb64:03725] [ 4] /usr/lib/libstdc++.so.6 [0x2b21e05815a6]
> > > > > > [deb64:03725] [ 5] /usr/lib/libstdc++.so.6 [0x2b21e05815d3]
> > > > > > [deb64:03725] [ 6] /usr/lib/libstdc++.so.6 [0x2b21e05816ba]
> > > > > > [deb64:03725] [ 7] dock6.mpi(main+0) [0x42c180]
> > > > > > [deb64:03725] [ 8] /usr/lib/libstdc++.so.6(_Znwm+0x34) [0x2b21e0581954]
> > > > > > [deb64:03725] [ 9] /usr/lib/libstdc++.so.6(_Znam+0x9) [0x2b21e0581a49]
> > > > > > [deb64:03725] [10] dock6.mpi(_ZN6Orient12match_ligandER7DOCKMol+0x375) [0x447a85]
> > > > > > [deb64:03725] [11] dock6.mpi(main+0xaf5) [0x42cc75]
> > > > > > [deb64:03725] [12] /lib/libc.so.6(__libc_start_main+0xda) [0x2b21e0a054ca]
> > > > > > [deb64:03725] [13] dock6.mpi(__gxx_personality_v0+0xc2) [0x41b4ea]
> > > > > > [deb64:03725] *** End of error message ***
> > > > > > mpirun noticed that job rank 0 with PID 3724 on node deb64 exited on signal 15 (Terminated).
> > > > > > 3 additional processes aborted (not shown).
> > > > > >
> > > > > > File rigid.out showed correct reading of grid.nrg.
> > > > > >
> > > > > > _______
> > > > > > The procedure also failed on a serial run with
> > > > > >
> > > > > > dock6 -i rigid.in -o rigid.out 2>&1 | tee screen_serial.out
> > > > > >
> > > > > > screen_serial.out:
> > > > > >
> > > > > > terminate called after throwing an instance of 'std::bad_alloc'
> > > > > > what(): St9bad_alloc
> > > > > >
> > > > > >
> > > > > > i.e., as was already clear from the above, not a problem of MPI.
> > > > > >
> > > > > > ______
> > > > > >
> > > > > > I understand that I am largely outside the mainstream of docking
> > > > > > procedures (and probably of their scope). Nonetheless, I am interested
> > > > > > in how these large ligands behave. Therefore, how could I manage to
> > > > > > compare the above ligands of various sizes? I am wondering about
> > > > > > changing the grid spacing and seeing whether docking with the smaller
> > > > > > ligands changes much or not. If OK, I could try the larger ligands with
> > > > > > the new grid. Any better idea? I am not considering working without a
> > > > > > grid, because I want to compare several molecules. So far I have used
> > > > > > the defaults from the tutorials, i.e. for grid.in:
> > > > > >
> > > > > > compute_grids                  yes
> > > > > > grid_spacing                   0.3
> > > > > > output_molecule                no
> > > > > > contact_score                  no
> > > > > > energy_score                   yes
> > > > > > energy_cutoff_distance         9999
> > > > > > atom_model                     a
> > > > > > attractive_exponent            6
> > > > > > repulsive_exponent             12
> > > > > > distance_dielectric            yes
> > > > > > dielectric_factor              4
> > > > > > bump_filter                    yes
> > > > > > bump_overlap                   0.75
> > > > > > receptor_file    /home/francesco/dockwork/grid/myprotein.mol2
> > > > > > box_file            /home/francesco/dockwork/grid/rec_box.pdb
> > > > > > vdw_definition_file   /usr/local/dock6/parameters/vdw_AMBER_parm99.defn
> > > > > > score_grid_prefix              grid
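> > > > > >
> > > > > > (As a rough estimate of what changing the spacing would do: the number
> > > > > > of grid points scales roughly as (box edge / spacing)^3, so going from
> > > > > > 0.3 A to 0.4 A should cut the ~10.5 million points by a factor of about
> > > > > > (0.4/0.3)^3 ~= 2.4, to roughly 4.4 million. Note, though, that the
> > > > > > std::bad_alloc above comes from Orient::match_ligand, whose memory use
> > > > > > presumably grows with the number of spheres and ligand centers rather
> > > > > > than with the grid, so a coarser grid alone may not avoid the crash.)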
> > > > > >
> > > > > >
> > > > > > All that was carried out using A. Magis' sphgen_cpp and sphere_select.
> > > > > >
> > > > > > Thanks
> > > > > >
> > > > > > francesco pietra