#1 In calculus you learned that

log(1+x) = x - x^2/2 + x^3/3 - x^4/4 + ...

for x in the interval (-1,1] (here x^2 means "x squared", etc.).

Write a program which asks the user to type a number in the interval
[1,2] and then calculates the natural logarithm of that number using
this series to six decimal places of accuracy. (If the user's number
is y, then x = y - 1 lies in [0,1], so the series applies.) You can
use the alternating harmonic series program as a template.
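A minimal sketch of one way to do this, assuming Python (the assignment names no language). The function name series_log, the tolerance 5e-7, and the hardcoded test value are my choices, not part of the assignment; the actual program would read the number with input() as described above.

```python
import math  # used only to sanity-check the series result


def series_log(y, tol=5e-7):
    """Natural log of y in [1, 2] via log(1+x) = x - x^2/2 + x^3/3 - ...

    Here x = y - 1 lies in [0, 1]. Because the series alternates,
    stopping when the next term drops below tol bounds the error by
    tol, i.e. six correct decimal places for tol = 5e-7.
    """
    x = y - 1.0
    total = 0.0
    power = x      # holds x**n
    sign = 1.0     # holds (-1)**(n+1)
    n = 1
    while abs(power) / n >= tol:
        total += sign * power / n
        power *= x
        sign = -sign
        n += 1
    return total


y = 1.5  # in the actual program, read this with float(input(...))
print(f"series log({y}) = {series_log(y):.6f}")
print(f"math.log({y})   = {math.log(y):.6f}")
```

The alternating-series error bound is what makes the stopping rule honest: the truncation error is at most the first omitted term.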

#2 Modify the program from the first problem and define a function
called mylog which gives the natural logarithm of a number in the
interval [1,2] using the above series to six decimal places of
accuracy. Use this function in a program which prints a table with
1.0, 1.1, 1.2, 1.3, ..., 2.0 in the first column, the value of your
function mylog for these numbers in the second column, and the
machine values of log for these numbers in the third column.

Check the accuracy of your computation by comparing the values in the
second and third columns. Observe that it takes much longer to
calculate log(2.0) than any other entry in the table. Can you tell
why?
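A sketch of the table program, again assuming Python; mylog wraps the same series loop as problem 1, and the column widths and eight printed decimals are arbitrary choices of mine.

```python
import math


def mylog(y, tol=5e-7):
    # Same alternating series as in problem 1, with x = y - 1 in [0, 1].
    # For y = 2.0 we have x = 1, so the terms shrink only like 1/n and
    # roughly two million of them are needed for six-decimal accuracy,
    # which is why that row is by far the slowest.
    x = y - 1.0
    total, power, sign, n = 0.0, x, 1.0, 1
    while abs(power) / n >= tol:
        total += sign * power / n
        power *= x
        sign = -sign
        n += 1
    return total


# Three-column table: y, mylog(y), machine log(y), for y = 1.0, ..., 2.0.
for i in range(11):
    y = 1.0 + i / 10.0
    print(f"{y:.1f}   {mylog(y):.8f}   {math.log(y):.8f}")
```

Printing eight decimals (rather than six) makes the small disagreement between the two columns visible when you check the accuracy.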

#3 Modify the program from the second problem of the first assignment
so that the table of logarithms is written to a file named output.txt
instead of to standard output.
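One way this modification might look in Python, assuming the mylog function from the second problem; only the printing changes, with print replaced by writes to an open file handle.

```python
import math


def mylog(y, tol=5e-7):
    # Alternating series log(1+x) = x - x^2/2 + ..., with x = y - 1 in [0, 1].
    x = y - 1.0
    total, power, sign, n = 0.0, x, 1.0, 1
    while abs(power) / n >= tol:
        total += sign * power / n
        power *= x
        sign = -sign
        n += 1
    return total


# Write the same three-column table to output.txt instead of printing it.
with open("output.txt", "w") as f:
    for i in range(11):
        y = 1.0 + i / 10.0
        f.write(f"{y:.1f}   {mylog(y):.8f}   {math.log(y):.8f}\n")
```

Using a with block ensures the file is flushed and closed even if an error occurs partway through the table.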