BENEFITS OF FINITE ELEMENT TECHNOLOGY

The finite element method is a technique for the solution of mathematical problems governed by systems of partial differential equations. It can produce accurate and realistic solutions to problems with highly complex geometries, material behaviors and boundaries, which would otherwise result in highly complex fieldwise variations in the solution variables. The method accomplishes this by subdividing the solution space into many pieces (the finite elements) sufficiently small that the variations in the solution variables can be well approximated within each by very simple functions. Numerical implementation of the method on modern digital computers enables highly accurate solutions with extremely large numbers of small elements. All of the governing equations are then solved on all of the elements, and the elemental solutions are assembled into the solution for the whole, subject to compatibility and continuity requirements. The method is particularly well suited to problems in solid mechanics because the element formulations are most straightforward when the boundaries of the elements represent material surfaces. In this application, the governing equations are the laws of classical mechanics and the assumptions of continuum mechanics (including constitutive relations for material behavior), and complete problems are posed by adding to these the specification of material properties, loads and boundary conditions. The method is especially attractive for problems with complex geometry or for materials with highly nonlinear behavior because other available solution methods usually require more restrictive assumptions about material behavior.
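The subdivide-approximate-assemble process described above can be illustrated with a minimal one-dimensional sketch. The example below (an illustrative assumption, not drawn from any particular program) solves -u'' = f on [0, 1] with u(0) = u(1) = 0 using linear elements: each element contributes a small stiffness matrix and load vector, the contributions are assembled into global arrays subject to continuity at the shared nodes, and the boundary conditions complete the posed problem.

```python
import numpy as np

def solve_poisson_1d(n_elements, f=1.0):
    """Solve -u'' = f on [0, 1], u(0) = u(1) = 0, with linear finite
    elements on a uniform mesh.  Illustrative sketch only."""
    n_nodes = n_elements + 1
    h = 1.0 / n_elements
    K = np.zeros((n_nodes, n_nodes))   # global stiffness matrix
    F = np.zeros(n_nodes)              # global load vector
    # Element stiffness for a linear element: (1/h) * [[1, -1], [-1, 1]]
    ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    fe = f * h / 2.0 * np.ones(2)      # consistent load for constant f
    for e in range(n_elements):        # assemble element contributions
        idx = [e, e + 1]               # shared node enforces continuity
        K[np.ix_(idx, idx)] += ke
        F[idx] += fe
    # Apply the homogeneous Dirichlet boundary conditions by reduction
    free = np.arange(1, n_nodes - 1)
    u = np.zeros(n_nodes)
    u[free] = np.linalg.solve(K[np.ix_(free, free)], F[free])
    return u

u = solve_poisson_1d(8)
# The exact solution is u(x) = x(1 - x)/2, so u(0.5) = 0.125.
```

For this simple one-dimensional problem the nodal values happen to be exact; the point of the sketch is the structure (elementwise approximation, global assembly, boundary conditions) rather than the particular equation.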
With respect to the true solution to a properly posed boundary value problem in solid mechanics, taken as an ideal, one can generally obtain a finite element solution that is as close to the ideal as is desired, so long as one can work the problem within program capabilities and pay the required attention to minimizing the unavoidable errors. In modern finite element programs, all of the governing equations are usually solved exactly, with the exception of nonlinear material behavior and equilibrium. Errors associated with approximation of nonlinear material behavior are normally minimized within the program by iteration subject to very strict criteria. The analyst, then, can generally obtain a solution of any desired accuracy by ensuring proper specification of geometry, location and conditions for loads and boundary conditions, material behavior and properties and by minimizing equilibrium errors. The most fundamental source of equilibrium error in a finite element solution is the fact that equilibrium is enforced only in a weak form over each element as a whole rather than at every point in the solution space. In a linear analysis, this source of error causes the model to be too stiff; that is, loads will be too high at any particular displacement. In a nonlinear analysis, the results will typically be too strong; e.g., ultimate capacities will be too high. However, in models with disparate redundant load paths, this source of error can result in unrealistically low capacity if the stiffening effect transfers load to a path of inherent weakness. The size of this error depends on the fineness of the mesh and on the element formulation. For any particular choice of element type, the finer the mesh, the smaller this error. This error can always be reduced by making the mesh finer, particularly in areas of high gradients in the solution, but there are practical limits of cost, schedule and resources.
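The too-stiff character of a coarse mesh, and its reduction under refinement, can be demonstrated numerically. The sketch below (an illustrative model problem, not taken from any particular program) again solves -u'' = 1 on [0, 1] with linear elements, but uses an odd number of elements so that the midspan x = 0.5 falls mid-element; the interpolated midspan deflection then underestimates the exact value 0.125 and approaches it from below as the mesh is refined.

```python
import numpy as np

def fe_midspan_deflection(n):
    """Midspan value of the linear-element solution of -u'' = 1 on
    [0, 1], u(0) = u(1) = 0, with n (odd) elements so that x = 0.5
    falls inside an element.  Illustrative sketch only."""
    h = 1.0 / n
    K = np.zeros((n + 1, n + 1))
    F = np.full(n + 1, h)              # consistent load, interior nodes
    F[[0, -1]] = h / 2                 # half contribution at the ends
    for e in range(n):                 # assemble element stiffnesses
        K[e:e + 2, e:e + 2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
    u = np.zeros(n + 1)
    free = np.arange(1, n)
    u[free] = np.linalg.solve(K[np.ix_(free, free)], F[free])
    # Interpolate linearly within the element containing x = 0.5
    return np.interp(0.5, np.linspace(0.0, 1.0, n + 1), u)

for n in (3, 5, 9, 17):
    print(n, fe_midspan_deflection(n))  # increases toward 0.125 from below
```

Every value in the sequence is below the exact deflection (the model is too stiff), and each refinement moves it closer, which is the convergence behavior an analyst exploits when qualifying a mesh.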
This error can also be reduced, for some problems, by employment of element types that incorporate higher-order displacement functions. The analyst can test the relative performance of the various element types available, choose the most economical adequately performing element, and ultimately qualify the mesh fineness by demonstrating that the results are insensitive to further refinement of the mesh, either globally or in targeted regions. The type of equilibrium error discussed so far is inherent to the finite element approximation; that is, even an exact solution to the defined finite element problem has this error with respect to the exact solution for the underlying continuum problem. In a linear analysis this inherent equilibrium error is typically the only type of equilibrium error encountered because a linear finite element problem can be solved exactly. In a nonlinear analysis, the nonlinear nature of the governing equations usually necessitates an approximation in the solution of the finite element problem itself. The result of this is an equilibrium error associated with an imbalance of residual forces at the nodes. The nodes are the points in solution space at which continuity between elements is enforced, and equilibrium requires a balance of internal and external forces at each and every node. This solution equilibrium error can be minimized by minimizing the increment size in an incremental solution and/or by taking an iterative approach to the achievement of an acceptable solution for each increment. Without iteration, the analyst should inspect the errors of this type as reported by the program and either accept them or rerun the analysis with smaller increments. With an iterative solution technique, the user can specify a tolerance on solution error, and the program will iterate until the tolerance is satisfied or terminate if it is unable to achieve a satisfactory solution.
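The incremental-iterative strategy described above can be sketched for a single degree of freedom. In the example below (hypothetical stiffness values, chosen purely for illustration), the external load is applied in increments, and within each increment a Newton-Raphson loop drives the residual force imbalance at the node below a specified tolerance before the next increment is applied; if the tolerance cannot be met, the increment terminates with an error, mirroring the program behavior described in the text.

```python
def solve_increment(u, f_ext, tol=1e-8, max_iter=20):
    """One load increment of a Newton-Raphson solution for a single
    nonlinear spring with internal force f_int(u) = k*u + c*u**3.
    Stiffness values k and c are hypothetical, for illustration."""
    k, c = 100.0, 50.0
    for _ in range(max_iter):
        residual = f_ext - (k * u + c * u**3)  # nodal force imbalance
        if abs(residual) < tol:                # equilibrium tolerance met
            return u
        tangent = k + 3.0 * c * u**2           # tangent stiffness
        u += residual / tangent                # Newton-Raphson update
    raise RuntimeError("increment failed to converge within tolerance")

# Incremental loading: apply the external force in steps, iterating
# each increment to equilibrium before moving on to the next.
u = 0.0
for f_ext in (25.0, 50.0, 75.0, 100.0):
    u = solve_increment(u, f_ext)
```

Smaller increments start each Newton loop closer to the converged state, which is why reducing increment size and tightening the iteration tolerance both reduce the residual equilibrium error.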
Customary engineering analyses with their attendant simplifying assumptions often involve errors of 15% or 20% or even more. Such errors may be acceptable in a situation where there is no important engineering consequence of such levels or when sufficient conservatism can be applied to cover the uncertainty. In critical applications, on the other hand, the errors in finite element analysis results can be reduced to truly insignificant levels through sufficient attention to error minimization. Perhaps the most important role of finite element analysis in design is one of Enlightenment. By this term reference is made not to the popularly recognized Enlightenment of the 18th century (AD) but rather to that of the fourth- and fifth-century (AD) philosopher Augustine of Hippo. Augustine was a student of the philosophy of Plato. Plato identified a hierarchy of levels for the accessibility of things and inversely related levels of perfection (in terms of clarity, certainty, truth, etc.). In Plato's hierarchy, the lowest level of perfection is associated with images of (physical) objects, which exist at the highest level of accessibility. Next in the hierarchical order are the objects themselves, followed, in turn, by mathematical objects and (ideal) forms. For Plato, the pursuit of higher and higher levels of perfection in states of mind involved progression of observations through each of these levels, from imagination (images) to belief (through observation of objects) to thinking (e.g., mathematics) to knowledge (ideal forms). Augustine realized that just as Plato's observations in the visible world needed to be enabled by Illumination, so also the observations in the intelligible world required a corresponding Enlightenment. If we apply this philosophy to the case of engineering design analysis, the performance of the design object (e.g., structure or equipment) is the physical thing in question. Tests (model or field) of the object performance play the role of the images.
An idealized problem in classical solid mechanics is the ideal form, and a mathematical problem with the requisite assumptions approximating the mechanics problem completes the set. Because of its ability to handle complex details with few restrictive assumptions, the finite element method both allows the consideration of an extremely realistic idealized problem and enables the employment of mathematical problems extremely close to the idealized solid mechanics problem. Through these advantages, and through its inherent ability to produce results in forms that graphically elucidate phenomena, the finite element method can provide the Enlightenment necessary for the achievement of a very high level of certainty and knowledge in the intelligible world. Most customary engineering is based on mathematical calculations validated by checks against empirical data, both from successful experience and from laboratory and field tests. Good engineering design always involves the application of a level of conservatism consistent with the level of certainty in the methods and data being applied. This level of certainty is limited by the closeness with which both the empirical situations and the mathematical assumptions can approach the actual case. Particularly when new designs depart from relevant experience, the only way that the overall certainty in the engineering can be maintained at a satisfactory level is through the employment of idealizations and mathematical solution methods that reflect reality more closely than do those that have been used traditionally. Finite element analysis can fill this need.
