Implicit: Memory notes
In implicit analysis, the solution can go out-of-core if insufficient memory is allocated, but that will slow down the solution substantially. If you haven't already noticed, implicit analysis is a memory hog. ILIMIT in *CONTROL_IMPLICIT_SOLUTION affects the memory requirements, as does LSOLVR in *CONTROL_IMPLICIT_SOLVER.
.
Effect of ILIMIT:
- Implicit uses quasi-Newton nonlinear solvers, which need an additional 2*ILIMIT*neql words of storage, where neql is the number of rows in the linear algebra problem. So if neql = 100,000 and ILIMIT = 1000, you will need an additional 200,000,000 real words of memory, which is 1.6 Gbytes with AUTODOUBLE (8 bytes per word). Scale linearly with neql, which is the value printed as "number of equations" when LPRINT > 0.
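The arithmetic above can be sketched as a small helper. This is a hypothetical function, not part of LS-DYNA; it just evaluates the 2*ILIMIT*neql formula:

```python
# Hypothetical helper (not an LS-DYNA utility): estimates the extra memory
# the quasi-Newton nonlinear solver needs, 2 * ILIMIT * neql real words.
def quasi_newton_storage_bytes(ilimit, neql, bytes_per_word=8):
    """bytes_per_word = 8 for double precision (AUTODOUBLE), 4 for single."""
    words = 2 * ilimit * neql
    return words * bytes_per_word

# Example from the text: ILIMIT = 1000, neql = 100,000 equations
extra = quasi_newton_storage_bytes(1000, 100_000)
print(extra / 1e9)  # 1.6 Gbytes with AUTODOUBLE
```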
.
To allow your job to go out-of-core, set LSOLVR=6 in *CONTROL_IMPLICIT_SOLVER. While you're at it, set LPRINT=2 on the same line.
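A sketch of the card, assuming the field layout from the Keyword User's Manual in which LSOLVR and LPRINT are the first two fields of Card 1 of *CONTROL_IMPLICIT_SOLVER (check your manual version before use):

```
*CONTROL_IMPLICIT_SOLVER
$#  lsolvr    lprint     negev     order      drcm    drcprm   autospc   autotol
         6         2
```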
.
Some very approximate guidelines regarding memory requirements...
General recommendation:
- In-core Memory -- 10 Gbytes RAM per million dofs
- Out-of-core Memory -- 1 Gbyte RAM per million dofs and 10 Gbytes of free disk space.
- # of dofs = 6 * # of non-rigid elements, unless the model is mostly solid elements, in which case # of dofs = 3 * # of non-rigid elements.
- Beware that you need a double precision executable and a 64-bit operating system to access more than 2 Gbytes of RAM.
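The rules of thumb above can be combined into a rough estimator. The function name and arguments are hypothetical, not LS-DYNA terminology, and the numbers are only as good as the very approximate guidelines they encode:

```python
# Hypothetical sketch of the rule-of-thumb memory estimate:
# ~10 Gbytes RAM per million dofs in-core, ~1 Gbyte per million dofs
# out-of-core (plus free disk space when out-of-core).
def estimate_memory_gb(n_deformable_elements, mostly_solids=False, in_core=True):
    dofs_per_element = 3 if mostly_solids else 6  # 3 for mostly-solid models
    mdofs = n_deformable_elements * dofs_per_element / 1e6  # millions of dofs
    gb_per_mdof = 10 if in_core else 1
    return mdofs * gb_per_mdof

# 500,000 shell elements -> 3 million dofs -> ~30 Gbytes RAM in-core
print(estimate_memory_gb(500_000))
```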
.
Eigenvalue analysis case study:
- In double precision (8 bytes per word), 1.17 gigawords = 9.4 Gbytes of RAM is required for a model of 194k nodes and 168k solid elements. If the model is truncated to 79k nodes and 70k solids, only 0.38 gigawords (about 3 Gbytes) are required.
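The gigaword-to-gigabyte conversion in the case study is just multiplication by the word size; a quick check of the arithmetic:

```python
# Gigawords to gigabytes in double precision (8 bytes per word).
BYTES_PER_WORD = 8

full_model_gb = 1.17 * BYTES_PER_WORD   # 194k nodes, 168k solids -> ~9.4 Gbytes
trunc_model_gb = 0.38 * BYTES_PER_WORD  # 79k nodes, 70k solids -> ~3.0 Gbytes
print(full_model_gb, trunc_model_gb)
```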