Applied Numerical Methods Using MATLAB. Won Y. Yang

At x= 100000000000, f1(x)=0.500004449631168080, f2(x)=0.499999999998750000
At x= 1000000000000, f1(x)=0.500003807246685030, f2(x)=0.499999999999874990
At x= 10000000000000, f1(x)=0.499194546973835970, f2(x)=0.499999999999987510
At x= 100000000000000, f1(x)=0.502914190292358400, f2(x)=0.499999999999998720
At x= 1000000000000000, f1(x)=0.589020114423405180, f2(x)=0.499999999999999830
sqrt(x+1)= 100000000.0000000000000, sqrt(x)= 100000000.0000000000000
diff=0000000000000000, sum=41a7d78400000000

      The absolute/relative error of an approximate value x to the true value X of a real‐valued variable is defined as follows:

(1.2.9) ε_x = X − x (absolute error)

(1.2.10) ρ_x = (X − x)/X = ε_x/X (relative error)

If the LSD (least significant digit) is the dth digit after the decimal point, then the magnitude of the absolute error is not greater than half the place value of the LSD:

(1.2.11) |ε_x| = |X − x| ≤ (1/2) × 10^−d

Dividing both sides by |X| yields the corresponding bound on the magnitude of the relative error:

(1.2.12) |ρ_x| = |X − x|/|X| ≤ (1/2) × 10^−d/|X|
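These definitions and bounds are easy to check numerically. The following is a small sketch in Python rather than the book's MATLAB, with made‐up values for X, x, and d:

```python
# Hypothetical values, for illustration only: x approximates X,
# rounded to d = 4 digits after the decimal point.
X = 3.14159265358979   # "true" value
x = 3.1416             # approximate value with LSD in the 4th decimal place
d = 4

abs_err = abs(X - x)           # |eps_x| = |X - x|
rel_err = abs_err / abs(X)     # |rho_x| = |X - x|/|X|

print(abs_err <= 0.5 * 10**(-d))           # absolute error within half the LSD value
print(rel_err <= 0.5 * 10**(-d) / abs(X))  # relative error within the scaled bound
```

Both checks print True: rounding to the 4th decimal place leaves an absolute error of about 7.3 × 10⁻⁶, well under 0.5 × 10⁻⁴.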

In this section, we will see how the errors of two numbers, x and y, are propagated through the four arithmetic operations. Error propagation means that errors in the input numbers of a process or an operation cause errors in the output numbers.

      Let their absolute errors be εx and εy, respectively. Then the magnitudes of the absolute/relative errors in the sum and difference are

|ε_{x±y}| = |(X ± Y) − (x ± y)| = |(X − x) ± (Y − y)| = |ε_x ± ε_y|

(1.2.13) |ε_{x±y}| ≤ |ε_x| + |ε_y|

(1.2.14) |ρ_{x±y}| = |ε_{x±y}|/|X ± Y| ≤ (|ε_x| + |ε_y|)/|X ± Y|

From this, we can see why the relative error is magnified, causing the ‘loss of significance’ in the case of subtraction when the two numbers X and Y are almost equal, so that |X − Y| ≈ 0.
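This effect is easy to reproduce. The following Python sketch (not from the book) mirrors the sqrt printout above, where sqrt(x+1) and sqrt(x) agree in every stored digit:

```python
import math

x = 1.0e16
# Subtracting two nearly equal numbers wipes out all significant digits:
naive = math.sqrt(x + 1) - math.sqrt(x)         # evaluates to 0.0 in double precision
# Rationalizing removes the subtraction and keeps full precision:
safe = 1.0 / (math.sqrt(x + 1) + math.sqrt(x))  # about 5.0e-9
print(naive, safe)
```

At x = 10¹⁶ the spacing between adjacent doubles is 2, so x + 1 rounds back to x and the naive difference is exactly zero, while the rationalized form returns the correct value near 5 × 10⁻⁹.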

      The magnitudes of the absolute and relative errors in the multiplication/division are

Writing x = X − ε_x and y = Y − ε_y, the error of the product is

ε_{xy} = XY − xy = XY − (X − ε_x)(Y − ε_y) = Xε_y + Yε_x − ε_xε_y ≈ Xε_y + Yε_x

(1.2.15) |ε_{xy}| ≈ |Xε_y + Yε_x| ≤ |X||ε_y| + |Y||ε_x|

(1.2.16) |ρ_{xy}| = |ε_{xy}|/|XY| ≤ |ε_x|/|X| + |ε_y|/|Y| = |ρ_x| + |ρ_y|

Likewise, the error of the quotient is

ε_{x/y} = X/Y − x/y = X/Y − (X − ε_x)/(Y − ε_y) ≈ (Yε_x − Xε_y)/Y²

(1.2.17) |ε_{x/y}| ≈ |Yε_x − Xε_y|/Y² ≤ (|Y||ε_x| + |X||ε_y|)/Y²

(1.2.18) |ρ_{x/y}| = |ε_{x/y}|/|X/Y| ≤ |ε_x|/|X| + |ε_y|/|Y| = |ρ_x| + |ρ_y|

This implies that, in the worst case, the relative error in a multiplication/division may be as large as the sum of the relative errors of the two operands.

      First, to decrease the magnitude of round‐off errors and to lower the possibility of overflow/underflow errors, make the intermediate result as close to 1 (one) as possible in consecutive multiplication/division processes. According to this rule, when computing xy/z, we program the formula as

       (xy)/z when x and y in the multiplication are very different in magnitude,

       x(y/z) when y and z in the division are close in magnitude, and

       (x/z)y when x and z in the division are close in magnitude.

%nm125_1.m
x=36; y=1e16;
for n=[-20 -19 19 20]
   fprintf('y^%2d/e^%2dx=%25.15e\n', n,n, y^n/exp(n*x));
   fprintf('(y/e^x)^%2d=%25.15e\n', n, (y/exp(x))^n);
end

For instance, when computing y^n/e^{nx} with x > 1 and y > 1, we would program it as (y/e^x)^n rather than as y^n/e^{nx}, so that overflow/underflow can be avoided. You may verify this by running the above MATLAB script “nm125_1.m”.

>>nm125_1
y^-20/e^-20x= 0.000000000000000e+000
(y/e^x)^-20= 4.920700930263814e-008
y^-19/e^-19x= 1.141367814854768e-007
(y/e^x)^-19= 1.141367814854769e-007
y^19/e^19x= 8.761417546430845e+006
(y/e^x)^19= 8.761417546430843e+006
y^20/e^20x= NaN
(y/e^x)^20= 2.032230802424294e+007

Second, in order to prevent ‘loss of significance’, it is important to avoid a ‘bad subtraction’ (Section 1.2.2), i.e. a subtraction of one number from another of almost equal value. Let us consider the simple problem of finding the roots of a second‐degree equation ax² + bx + c = 0 by using the quadratic formula

x1,2 = (−b ± √(b² − 4ac))/(2a)

Let |4ac| ≪ b². Then, depending on the sign of b, a ‘bad subtraction’ may be encountered when we try to find x1 or x2, whichever is the smaller of the two roots in magnitude. This implies that it is safe from the ‘loss of significance’ to compute the root having the larger absolute value first and then obtain the other root by using the relation (between the roots and the coefficients) x1x2 = c/a (see Problem 1.20(b)).
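Although the book's scripts are in MATLAB, this trick can be sketched in Python as follows (the function name and test coefficients are made up; real, distinct roots and b ≠ 0 are assumed):

```python
import math

def stable_quadratic_roots(a, b, c):
    """Roots of a*x^2 + b*x + c = 0, avoiding the 'bad subtraction'.

    Assumes real roots (b*b >= 4*a*c) and b != 0."""
    disc = math.sqrt(b * b - 4 * a * c)
    # -b and -copysign(disc, b) have the same sign, so no cancellation here:
    big = (-b - math.copysign(disc, b)) / (2 * a)
    # recover the smaller-magnitude root from the product x1*x2 = c/a:
    small = c / (a * big)
    return big, small

# With |4ac| << b^2, the naive formula would lose digits in the smaller root:
big, small = stable_quadratic_roots(1.0, 1e8, 1.0)
print(big, small)   # roots near -1e8 and -1e-8
```

Computing the smaller root as c/(a·x_big) replaces the cancellation‐prone subtraction −b + √(b² − 4ac) with a well‐conditioned division.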

%nm125_2.m: roundoff error test
f1=@(x)(1-cos(x))/x/x; % Eq.(1.2.20-1)
f2=@(x)sin(x)*sin(x)/x/x/(1+cos(x)); % Eq.(1.2.20-2)
for k=0:1
   x=k*pi; tmp=1;
   for k1=1:8
      tmp=tmp*0.1; x1=x+tmp;
      fprintf('At x=%10.8f, ', x1)
      fprintf('f1(x)=%18.12e; f2(x)=%18.12e\n', f1(x1),f2(x1));
   end
end

(1.2.20) f1(x) = (1 − cos x)/x² ≡ f2(x) = sin²x/(x²(1 + cos x))

It is safe to use f1(x) for x ≈ π since the term (1 + cos x) in f2(x) involves a ‘bad subtraction’, while it is safe to use f2(x) for x ≈ 0 since the term (1 − cos x) in f1(x) involves a ‘bad subtraction’.
