Python: Approximating Ln(x) Using Taylor Series
I'm trying to build an approximation of ln(1.9) to ten digits of accuracy (so 0.641853861). I'm using a simple function I've built from ln[(1 + x)/(1 - x)]. Here is my code so
Solution 1:
The principle is:
- Look at how much each iteration adds to the result.
- Stop when that addition is smaller than 1e-10. (Because the series alternates with shrinking terms, the error of the truncated sum is at most the first omitted term, so this threshold gives the required accuracy.)
You're using the following formula, right?

    ln(1 + x) = x - x^2/2 + x^3/3 - x^4/4 + ...    (valid for -1 < x <= 1)

(Note the validity range!)
def taylor_two():
    x = 1.9 - 1
    i = 1
    taySum = 0
    while True:
        # i-th term of the series for ln(1 + x): (-1)**(i+1) * x**i / i
        addition = pow(-1, i + 1) * pow(x, i) / i
        # Stop once the next term can no longer change the first 10 decimals
        if abs(addition) < 1e-10:
            break
        taySum += addition
        # print('value: {}, addition: {}'.format(taySum, addition))
        i += 1
    return taySum
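An illustrative aside (not part of the original answer): x = 0.9 sits near the edge of the validity range, so the terms 0.9**i / i shrink slowly and the loop needs quite a few iterations before a term drops below 1e-10:

x = 0.9
i = 1
# Count the terms until one falls below the 1e-10 threshold.
while abs(pow(-1, i + 1) * pow(x, i) / i) >= 1e-10:
    i += 1
print(i)   # 170 -- it takes this many terms for x = 0.9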
Test:
In [2]: print(taylor_two())
0.6418538862240631
In [3]: print('{:.10f}'.format(taylor_two()))
0.6418538862
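Two optional checks, sketched as illustrations rather than as part of the original answer (they assume taylor_two from above is already defined, and the helper name ln_series is just illustrative): a comparison against math.log, and a version built from the ln[(1 + x)/(1 - x)] series the question mentions, which converges much faster because its argument y = (a - 1)/(a + 1) stays small.

import math

# Sanity check: the loop above agrees with math.log(1.9) to within 1e-10.
print(abs(taylor_two() - math.log(1.9)) < 1e-10)   # True

# Sketch of the series mentioned in the question:
#   ln((1 + y)/(1 - y)) = 2*(y + y**3/3 + y**5/5 + ...),  for |y| < 1
# Choosing y = (a - 1)/(a + 1) makes (1 + y)/(1 - y) equal to a.
def ln_series(a, tol=1e-10):
    y = (a - 1) / (a + 1)
    n, term, total = 1, y, 0.0
    while abs(term) >= tol:
        total += term
        n += 2
        term = pow(y, n) / n
    return 2 * total

# For ln(1.9), y is about 0.31, so only a handful of terms are needed.
print(abs(ln_series(1.9) - math.log(1.9)) < 1e-10)   # True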