## Initial value problem

When a differential equation specifies an initial condition, the problem of solving it is called an initial value problem. An initial condition requires you to find a particular solution of the differential equation. As noted in the preceding section, we can obtain a particular solution of an nth order differential equation simply by assigning specific values to the n constants in the general solution.
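As a minimal illustration (our own example, not from the text above): the equation y' = 2x has general solution y = x² + C, and the initial condition y(0) = 3 fixes the constant, giving the particular solution y = x² + 3.

```python
# Illustrative example (not from the article): the ODE y'(x) = 2*x has
# general solution y = x**2 + C; the initial condition y(0) = 3 fixes C = 3.

def particular_solution(x):
    """The particular solution singled out by the initial condition y(0) = 3."""
    return x**2 + 3.0

# The initial condition holds, and differentiating x**2 + 3 recovers 2*x.
assert particular_solution(0.0) == 3.0
```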

## Solving an initial value problem

A boundary value problem specifies a solution of interest by imposing conditions at more than one point. Correspondingly, solving boundary value problems numerically is rather different from solving initial value problems.

Differential equations arise in the most diverse forms, so it is necessary to prepare them for solution. The usual way to write a set of equations as a first-order system is to introduce an unknown for each dependent variable in the original set of equations, plus an unknown for each derivative up to one less than the highest order that appears.
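For instance (our own illustration, not from the article), the second-order equation y'' = -y becomes a first-order system by introducing u1 = y and u2 = y', one unknown per derivative below the highest:

```python
# Rewrite the second-order ODE y'' = -y as a first-order system:
#   u1' = u2      (u1 = y)
#   u2' = -u1     (u2 = y')
def f(t, u):
    """Right-hand side of the first-order system equivalent to y'' = -y."""
    u1, u2 = u           # u1 = y, u2 = y'
    return (u2, -u1)     # (u1', u2') = (y', y'')

# Any first-order initial value solver can now be applied to f; with
# y(0) = 1, y'(0) = 0 the exact solution is y = cos(t).
```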

Differential-algebraic equations resemble ordinary differential equations, but they differ in important ways. Programs intended for non-stiff initial value problems perform very poorly when applied to a stiff system. Although most initial value problems are not stiff, many important problems are, so special methods have been developed that solve them effectively. A great many methods have been proposed, but three kinds dominate: Runge-Kutta, Adams, and BDFs (backward differentiation formulas).

These two one-step methods can be derived in several ways, with extensions that lead to the popular kinds of methods. The backward Euler formula is BDF1, the lowest order member of the family.

These formulas are the most popular for solving stiff systems. Euler's method is AB1, the lowest order member of the family. The backward Euler method is AM1. How an implicit formula is evaluated is crucial to its use. Simple iteration is the standard way of evaluating implicit formulas for non-stiff problems. If a predetermined, fixed number of iterations is done, the resulting method is called a predictor-corrector formula.
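As a sketch of evaluating an implicit formula by simple iteration (our own minimal illustration), here is one backward Euler step y_new = y + h·f(t+h, y_new), with forward Euler as predictor:

```python
def backward_euler_step(f, t, y, h, iters=10):
    """One backward Euler step, y_new = y + h*f(t+h, y_new), evaluated by
    simple (fixed-point) iteration.  Using a predetermined, fixed number of
    iterations makes this a predictor-corrector scheme."""
    y_new = y + h * f(t, y)               # predict with forward Euler
    for _ in range(iters):
        y_new = y + h * f(t + h, y_new)   # correct by simple iteration
    return y_new

# Example: y' = -2*y, y(0) = 1.  The implicit equation has the exact
# solution y_new = 1/(1 + 2*h); the iteration converges to it because
# |h * df/dy| = 0.2 < 1 at this step size.
y = backward_euler_step(lambda t, y: -2.0 * y, 0.0, 1.0, 0.1)
```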

If the predictor has a truncation error that is of the same order as the implicit formula, a single iteration produces a result with a truncation error that agrees to leading order with that of iterating to completion. If the predictor is of order one lower, the result has the same order as the corrector, but the leading terms in the truncation error are different. There are two ways that Adams formulas are implemented in popular solvers. Both predict with an Adams-Bashforth formula and correct with an Adams-Moulton formula.

One way is to iterate to completion, so that the integration is effectively done with an implicit Adams-Moulton method. The other is to do a single correction, which amounts to an explicit formula. For non-stiff problems there is little difference in practice, but the two implementations behave very differently when solving stiff problems.
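The single-correction implementation can be sketched as a PECE (predict, evaluate, correct, evaluate) loop. This is our own minimal illustration, pairing the second-order Adams-Bashforth predictor with the second-order Adams-Moulton corrector (the trapezoidal rule) and starting with one Euler step:

```python
import math

def abm2_pece(f, t0, y0, h, nsteps):
    """Second-order Adams predictor-corrector in PECE form: predict with
    Adams-Bashforth 2, correct once with Adams-Moulton 2 (the trapezoidal
    rule).  The single correction yields an explicit formula overall."""
    t, y = t0, y0
    f_old = f(t, y)
    y = y + h * f_old                                  # Euler step to start
    t = t + h
    for _ in range(nsteps - 1):
        f_cur = f(t, y)                                # E (for this step)
        y_pred = y + h * (3.0 * f_cur - f_old) / 2.0   # P: AB2
        f_pred = f(t + h, y_pred)                      # E
        y = y + h * (f_pred + f_cur) / 2.0             # C: AM2
        f_old = f_cur
        t = t + h
    return t, y

# Example: y' = y, y(0) = 1; at t = 1 the exact solution is e.
t, y = abm2_pece(lambda t, y: y, 0.0, 1.0, 0.01, 100)
```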

Euler's method, the backward Euler method, the trapezoidal rule, and Heun's method are all examples of Runge-Kutta (RK) methods. RK methods are often derived by writing down the form of the one-step method, expanding in Taylor series, and choosing coefficients to match terms in a Taylor series expansion of the local solution to as high an order as possible.
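Heun's method makes this concrete (our own illustration): it is a two-stage explicit RK formula whose coefficients are chosen so that one step matches the Taylor expansion of the local solution through terms of order h²:

```python
import math

def heun_step(f, t, y, h):
    """One step of Heun's method, a two-stage explicit Runge-Kutta formula
    of order 2: average the slope at the start of the step with the slope
    at an Euler-predicted endpoint."""
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    return y + h * (k1 + k2) / 2.0

# Example: y' = y, y(0) = 1; 100 steps of size 0.01 reach t = 1, where
# the exact solution is e, with a global error of order h**2.
y = 1.0
for i in range(100):
    y = heun_step(lambda t, y: y, i * 0.01, y, 0.01)
```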

The second term on the right is the local error, a measure of how well the method imitates the behavior of the differential equation. Reducing the step size reduces the local error, and the higher the order, the more it is reduced. The argument is repeated at the next step for a different local solution.

In this view of the error, the stability of the initial value problem is paramount. A view that is somewhat better suited to methods with memory emphasizes the stability of the numerical method.

If the numerical method is stable, convergence can be established as in the other approach. The order of accuracy can be nearly doubled by using both values and slopes, but these very accurate formulas are not usable because they are not stable. If the step sizes are sufficiently small, all the popular numerical methods are stable.

When solving non-stiff problems, it is generally the case that if the step sizes are small enough to provide the desired accuracy, the numerical method has satisfactory stability. On the other hand, if the initial value problem is stiff, the step sizes that would provide the desired accuracy must be reduced greatly to keep the computation stable when using a method and implementation appropriate for non-stiff initial value problems.

Some implicit methods have such good stability properties that they can solve stiff initial value problems with step sizes that are appropriate to the behavior of the solution if they are evaluated in a suitable way. The backward Euler method and the trapezoidal rule are examples. General-purpose initial value problem solvers estimate and control the error at each step by adjusting the step size. This approach gives a user confidence that the problem has been solved in a meaningful way.
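The contrast can be seen on a standard stiff test problem (our own illustration): y' = -100y, y(0) = 1, whose solution exp(-100t) decays almost immediately. At a step size appropriate to the smooth (near-zero) behavior of the solution, forward Euler is unstable while backward Euler is not:

```python
# Stiff test problem y' = lam*y with lam = -100; step size h = 0.1.
lam, h, n = -100.0, 0.1, 20

# Forward (explicit) Euler: y_{k+1} = (1 + h*lam) * y_k.  Here the growth
# factor is 1 + h*lam = -9, so |y_k| grows like 9**k -- unstable.
y_explicit = 1.0
for _ in range(n):
    y_explicit = (1.0 + h * lam) * y_explicit

# Backward (implicit) Euler: y_{k+1} = y_k / (1 - h*lam).  Here the factor
# is 1/11, so the numerical solution decays, matching the true behavior.
y_implicit = 1.0
for _ in range(n):
    y_implicit = y_implicit / (1.0 - h * lam)

# After 20 steps y_explicit has blown up; y_implicit is essentially zero.
```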

It is inefficient and perhaps impractical to solve with constant step size an initial value problem with a solution that exhibits regions of sharp change.

Numerical methods are stable only if the step sizes are sufficiently small, and control of the error helps bring this about. The truncation error of all the popular formulas involves derivatives of the solution or the differential equation. To estimate the truncation error, these derivatives are estimated using methods that involve little or no additional computation. This is commonly done for Runge-Kutta methods by comparing formulas of different orders.
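A minimal sketch of this comparison (our own illustration, using the simplest pair of orders): take an Euler step (order 1) and, reusing its slope, form a Heun result (order 2); their difference estimates the local error of the lower-order step at the cost of one extra function evaluation.

```python
def euler_with_error_estimate(f, t, y, h):
    """One Euler step plus a local error estimate obtained by comparing
    with a second-order (Heun) result built from the same slopes."""
    k1 = f(t, y)
    y_euler = y + h * k1                  # order 1 result
    k2 = f(t + h, y_euler)
    y_heun = y + h * (k1 + k2) / 2.0      # order 2 result, reuses k1
    err_est = abs(y_heun - y_euler)       # estimates the Euler local error
    return y_euler, err_est

# y' = y at y = 1: the true local error of an Euler step of size h is
# about h**2/2, and the estimate comes out very close to that.
y1, est = euler_with_error_estimate(lambda t, y: y, 0.0, 1.0, 0.1)
```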

That is the approach taken in some Adams codes; others use the fact that the truncation errors of Adams-Bashforth and Adams-Moulton formulas of the same order differ only by constant factors. If the estimated error of the current step is bigger than a tolerance specified by the user, the step is rejected and taken again with a step size that is predicted to succeed.

If the estimated error is much smaller than required, the next step size is increased so that the problem can be solved more efficiently. The Adams methods and BDFs are families of formulas.
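The accept/reject logic can be sketched as follows (our own illustration, again using the Euler/Heun pair; the halving and doubling rules are simplistic placeholders for the step-size prediction formulas real solvers use):

```python
import math

def integrate_adaptive(f, t, y, t_end, h, tol):
    """Adaptive integration sketch: estimate the error of each Euler step
    by comparison with Heun; reject the step and retry with a smaller one
    if the estimate exceeds tol, and grow the step when the estimate is
    much smaller than required."""
    while t < t_end:
        h = min(h, t_end - t)             # do not step past the endpoint
        k1 = f(t, y)
        y_lo = y + h * k1                 # order 1
        k2 = f(t + h, y_lo)
        y_hi = y + h * (k1 + k2) / 2.0    # order 2
        err = abs(y_hi - y_lo)            # estimated local error
        if err <= tol:
            t, y = t + h, y_hi            # accept, advancing the higher order
            if err < 0.1 * tol:
                h *= 2.0                  # much smaller than required: grow
        else:
            h *= 0.5                      # reject and try a smaller step
    return y

# Example: y' = y, y(0) = 1 integrated to t = 1, where the solution is e.
y = integrate_adaptive(lambda t, y: y, 0.0, 1.0, 1.0, 0.1, 1e-5)
```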

When taking a step with one of these formulas, it is possible to estimate what the error would have been if the step had been taken with a formula of lower order and, in the right circumstances, a formula of order one higher. Using such estimates, the popular Adams and BDF solvers try to select not only an efficient step size but also an efficient order.

Variation of order provides a simple and effective way of dealing with the practical issue of starting the integration. The lowest order formulas are one-step, so they are used to get started. These topics are treated in the text of Shampine et al. Initial value problems and differential-algebraic equations are discussed at a similar level in Ascher and Petzold and at a higher level in Hairer et al.

All these texts provide references to software.

Lawrence F. Shampine and Skip Thompson, Scholarpedia, 2(3). Curator: Skip Thompson. Contributors: Barbara Zubik-Kowal.

Sponsored by: Eugene M. Izhikevich, Editor-in-Chief of Scholarpedia, the peer-reviewed open-access encyclopedia. Reviewed by: Anonymous.
