Answers to SOLVE Frequently Asked Questions

Where is SYMINFO?
SOLVE/RESOLVE versions 2.08 and higher use the CCP4 version 5.0 libraries, which require both SYMOP and SYMINFO to be defined (they are both symmetry libraries). If you are using csh, for example, they can be defined with:

setenv SYMOP /usr/local/lib/solve/symop.lib
setenv SYMINFO /usr/local/lib/solve/syminfo.lib
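
If you use a Bourne-type shell such as sh or bash instead, the equivalent (assuming the same installation directory) is:

export SYMOP=/usr/local/lib/solve/symop.lib
export SYMINFO=/usr/local/lib/solve/syminfo.lib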

What should I try if SOLVE cannot solve my MAD dataset?
If using all your MAD data fails, try the SAD script right away. In many cases the data for the wavelengths collected later are severely affected by crystal decay, and in these cases using just the first wavelength collected (ideally the peak wavelength) may work much better than using all the data.

Why when I input MR phases doesn't SOLVE use the peaks it finds from the difference Fourier it calculated?
One thing that often happens with the MR phases script is that SOLVE does look at the solution found from the MR phases, but then rejects those sites because one of the solutions found from the Patterson gives a higher score. Try adding these two lines to your SOLVE input file:

ntophassp 0
icrmax 0

These should prevent SOLVE from finding solutions on its own and adding them to the solution found by your difference Fourier.

What is the best way to tell if my data are good?
For MAD data, have a look at the correlation of anomalous differences; see the table in the beta-catenin dataset for an example. For MIR data, check that the R-factors between derivative and native start out large at low resolution, get smaller, and then finally get bigger again (the last rise is due to errors in measurement and indicates where to cut off).

For SOLVE version 2.01 and higher, you can look at the analysis of signal-to-noise in the data listed in solve.prt. SOLVE estimates the noise from the sigmas in the data, and the signal from MAD anomalous or dispersive differences or from SAD anomalous differences. Note: if the sigmas in the data are clearly overestimated (rms(sigma) > rms(difference)), then SOLVE rescales all the sigmas so as to yield rms(sigma) = rms(difference) in the highest resolution shell.

Can I input sites that I already know into SOLVE? Yes, you can. Just put them right in under the correct wavelength or derivative, add the keyword "addsolve" or "analyze_solve" before the scaling command, and SOLVE will use them to find new sites (addsolve) or to refine and calculate phases (analyze_solve). See the addsolve and analyze_solve instructions for examples.
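
A rough sketch for a MAD dataset (the coordinates are made up, and scale_mad stands in for whatever scaling command your script already uses; the sites go under the correct wavelength or derivative as described above):

! fragment of a SOLVE input file (MAD); coordinates are hypothetical
xyz 0.041 0.979 0.103      ! known heavy-atom site (fractional)
xyz 0.237 0.415 0.338      ! another known site
addsolve                   ! use these sites and search for more
scale_mad                  ! the scaling command follows addsolve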

Can I use a solution at low resolution to run SOLVE at high resolution? Yes, you can. The easiest way is with addsolve and analyze_solve.

What do "checksolve" and "comparisonfile" doChecksolve tells SOLVE to compare all the solutions it gets with the one that you input.  SOLVE finds the origin (and hand, if you do not have anomalous data) that best matches its trial solutions with the one you entered, and reports the solution relative to this origin and hand.  Comparisonfile allows you to input an FFT that SOLVE has previously calculated (at the same resolution as SOLVE is working); in combination with checksolve, SOLVE will calculate the correlation coefficient of every map that it examines to the one you input. This is handy when you have used "generate" to create a dataset.

Will SOLVE give me the right hand for my structure? Usually, if you have good anomalous differences and MAD data, then yes, SOLVE will give you the correct hand. For SAD data, it is common for SOLVE not to get the correct hand. Run RESOLVE or the RESOLVE_BUILD script after SOLVE, and if it fails to build anything useful, rerun SOLVE with analyze_solve using the heavy-atom sites in "solve_inverse.xyz", which are just the inverse of the sites in your original SOLVE run. In very rare cases your anomalous differences might be reversed (due to incorrect data analysis or a detector hooked up backwards). In that case you can use "swap_ano" to reverse the signs of the differences.

How do I get a bigger version of SOLVE? The distribution comes with the regular-sized SOLVE plus solve_giant and solve_huge. Try these first. If you need an even bigger version, then email me at terwilliger@lanl.gov and I'll give you the source so you can compile a bigger version. You will need the CCP4 library file libccp4.a to compile SOLVE.

Do I need a new access file for a new version of SOLVE? No, the same access file is good for all versions from 2.0 through 2.99. If you are upgrading from version 1 to version 2, then yes, you do need a new access file (and the new one goes in "solve2.access").

Why will SOLVE read my solve2.access file but RESOLVE won't? This is a bug triggered by a missing carriage return after the access code on the second line of solve2.access. Just put in the carriage return and it should work for RESOLVE too. Sorry!

Why do I have to set BMIN=0 for high-resolution SAD data or other high-resolution data? You only need to do this for SOLVE versions earlier than 2.03 (20-Sept-2002). The reason is that the default minimum B value for heavy-atom sites is B=15. For high-resolution data, B values are typically 5-10, so this default is much too high. If you set BMIN=0 then SOLVE can properly refine these B values. For version 2.03 and higher, the default is 2.
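
For those older versions, adding a line like this to your input file should do it (the exact keyword syntax is an assumption; check the SOLVE manual):

bmin 0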

Where do I get f' and f" scattering factors? The best place to get f' and f" values for your MAD experiment is from the staff of the beamline where you collected your data. They will usually have made careful measurements of these for standard settings on their beamline, so if you do a Se experiment, for example, their values should be very good. You can also measure X-ray fluorescence from your own crystal and use the Kramers-Kronig transformation to estimate these values with the same programs the beamline staff used for their standard cases.

SOLVE does use the f' and f" values, and they are very important. The wavelength values are not used in any important way by SOLVE.

Where do I get scattering factors for atoms that SOLVE has not heard of? They are on pp. 500-501 of Volume C of the International Tables for Crystallography. For example, for Nb:

NEWATOMTYPE NB
AVAL 17.6142 12.0144 4.04183 3.53346
BVAL  1.18865 11.7660 0.204785 69.7957
CVAL 3.75591
FPRIMV -.248
FPRPRV 2.48

Why are the figures of merit in the solve.status file not quite the same as the final values? The final phases look better for MAD data than the ones reported in the solve.status file because SOLVE calculates phases at the very end using Bayesian correlated MAD phasing, which gives much better phases than the SIRAS-like phases used during the main part of the run (when the solve.status file is being written). The full phasing is not used all the time because it is very slow.

Should I use all my data, or just the good data? Though it would be nice to use all the data, it is far better to use just the good data. Unless your sigmas are perfect and the statistics are done perfectly, it is very hard to get rid of the interference caused by data containing noise and essentially no signal.

Will SOLVE use NCS? Regrettably, no.

Why should I use NO MERGE ORIGINAL INDEX in scalepack? You should use "no merge original index" in scalepack so that SOLVE can re-scale the data with local scaling. This flag tells scalepack to write out the place in reciprocal space where each reflection was measured, so that SOLVE can compare each measurement to its neighbors in reciprocal space.
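
In your scalepack input this is typically written as (the exact layout may vary between scalepack versions):

no merge original index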

Can I compare Z-scores for SOLVE runs in different space groups? At different resolutions?  No, Z-scores are relative and therefore cannot be compared for different space groups or resolutions.

Can I read in data in 2 different formats? Unfortunately not.

Can I convert SOLVE files like mad_fbar.scl into mtz files? Yes, you can. You will need to use "export" to export the data to a flat file, then use the CCP4 routine f2mtz to import it into an mtz file.
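
A sketch of the f2mtz step (the file names, cell, space group number, and column labels are placeholders you must match to your exported data):

f2mtz hklin solve_export.dat hklout solve.mtz << EOF
CELL 76.1 28.0 42.4 90 103.2 90
SYMMETRY 19
LABOUT H K L FP SIGFP
CTYPOUT H H H F Q
END
EOF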

Can I look at my Patterson maps? Yes, you can. SOLVE writes some of them out as ".ezd" files, which you can read right into "O" or convert to anything else with "mapman". Others you can convert to ezd with "ffttoezd".

Why do I get an execution error with no output when I try to run SOLVE? On an SGI, if you run a version of SOLVE that does not match your computer, you get an "exec error". Try a version of SOLVE built for a lower version of your machine (e.g., r5000 instead of r12000).

Why does SOLVE say CELL DIMENSION   <1 OR > 1000 FOUND? This happens if you try to use a really huge unit cell that SOLVE didn't expect.  You'll have to cut back on the resolution a bit if it happens.

Why does SOLVE say /sbin/loader: Fatal Error: set_program_attributes failed to set heap start?  This is an error that your Compaq Alpha might give you if you don't have enough memory allocated to you. The solution is to add a line to your .cshrc file that just says: "unlimit".  This tells the system to give you all available resources.
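
For example (unlimit is a csh built-in, so this applies if your login shell is csh or tcsh):

# add to the end of your ~/.cshrc
unlimit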

Why doesn't COMBINE_ALL work for me? For combine_all to work, you have to be sure to input two or more complete datasets, separated by "new_dataset", as sketched below.
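
Schematically (each dataset uses whatever data-input keywords you normally use; the keyword spellings follow this answer, and the position of combine_all is illustrative):

! dataset 1: complete input as usual
new_dataset
! dataset 2: complete input as usual
combine_all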

Why can't I use COMBINE_ALL_DATA with mtz files? SOLVE won't let you read in more than one mtz file, unfortunately. Sorry about this limitation. This means you can't use COMBINE_ALL_DATA with mtz files. You would need to dump your mtz data into flat files and read them in with "readformatted" instead.

Why can't SOLVE find 2 sites that are close together? SOLVE won't let you find sites that are closer than a specified number of grid units. The distance depends on the grid size, which is typically 1/3 of the resolution. The default ("ntol_site") is 8 grid units, or about 2-3 times the resolution. You can decrease it if you want, in which case SOLVE will have to consider more solutions and may have trouble identifying the best one.
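
For example, to allow sites as close as 4 grid units (the value is illustrative):

ntol_site 4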

What does it mean when SOLVE says "error in reading this file" when reading a scalepack file? When SOLVE encounters "********" in a data file, it gives this error message. In scalepack (.sca) files this occurs when a reflection has an intensity too large to fit in the format of the file. One solution is simply to edit the .sca file and remove these lines. Another is to re-run scalepack, specifying a scale factor to apply to all the intensities. You can do this with the keyword:

scale factor 10.0
in Scalepack.
