Choosing an 'a' closer to the region of the function you wish to approximate will give a better approximation with fewer terms.
For example, if we wish to approximate sin(x) for small values of x, a = 0 is a good choice, as it doesn't require summing as many terms for a good approximation. Notice on the graph below how the Taylor polynomial can deviate significantly from sin(x) at points further away from a = 0:
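To make that concrete, here is a quick Python sketch (my own illustration to go with the graph, not code from the original): it evaluates a six-term Taylor polynomial of sin about a chosen centre a and compares it with math.sin both near and far from that centre.

```python
import math

def taylor_sin(x, a=0.0, terms=6):
    # Taylor polynomial of sin about x = a:
    #   sin(x) ~ sum_k f^(k)(a) * (x - a)^k / k!
    # the derivatives of sin cycle through sin, cos, -sin, -cos
    derivs = [math.sin(a), math.cos(a), -math.sin(a), -math.cos(a)]
    total = 0.0
    for k in range(terms):
        total += derivs[k % 4] * (x - a) ** k / math.factorial(k)
    return total

# near the centre a = 0 the polynomial tracks sin(x) closely...
print(math.sin(0.5), taylor_sin(0.5))         # ~0.479 for both
# ...but far from a = 0 the same six terms deviate badly...
print(math.sin(6.0), taylor_sin(6.0))         # ~-0.279 vs ~34.8
# ...whereas re-centering at a = 6 recovers the accuracy with the same number of terms
print(math.sin(6.5), taylor_sin(6.5, a=6.0))  # ~0.215 for both
```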
One obvious potential drawback of not centering the Taylor series around 0, however, is that computing f^(k)(a) may not be as simple, or each term may become harder to integrate.
For example, before we had:

e^(-x^2) = sum over k >= 0 of [(e^a) * (-x^2 - a)^k / k!]
Integrating the RHS with respect to x is not as easy when 'a' does not equal 0, since each term contains (-x^2 - a)^k rather than simply (-x^2)^k = (-1)^k * x^(2k).
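For contrast, when a = 0 every term is a plain power of x, so the series can be integrated term by term (inside its interval of convergence):

e^(-x^2) = sum over k >= 0 of [(-1)^k * x^(2k) / k!]

∫ e^(-x^2) dx = sum over k >= 0 of [(-1)^k * x^(2k+1) / ((2k+1) * k!)] + C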
If I were to write a program to calculate ∫ e^(-x^2) dx from x = 12 to 16, I'd take a = 0 and just sum more terms. If summing terms is too slow, there are also other, more advanced methods (series acceleration) for finding a different series that converges faster to the same limit.
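Here is a minimal Python sketch of that approach (my own code, not from the original): integrate the a = 0 series term by term to get an antiderivative, then evaluate it at both bounds. It is sanity-checked on a small interval against math.erf; for bounds as large as 12 and 16, plain floats cancel catastrophically, so you would need exact rationals or higher precision and many more terms.

```python
import math

def antideriv_series(x, terms=40):
    # term-by-term integral of the a = 0 series for e^(-x^2):
    #   integral e^(-x^2) dx = sum_k (-1)^k * x^(2k+1) / ((2k+1) * k!)  (+ C)
    total = 0.0
    for k in range(terms):
        total += (-1) ** k * x ** (2 * k + 1) / ((2 * k + 1) * math.factorial(k))
    return total

def integral_exp_neg_x2(lo, hi, terms=40):
    # definite integral = F(hi) - F(lo), where F is the series antiderivative
    return antideriv_series(hi, terms) - antideriv_series(lo, terms)

# sanity check on a small interval, where a few dozen terms are plenty;
# the exact value is (sqrt(pi)/2) * (erf(hi) - erf(lo))
approx = integral_exp_neg_x2(0.0, 1.0)
exact = math.sqrt(math.pi) / 2 * (math.erf(1.0) - math.erf(0.0))
print(approx, exact)  # both ~0.7468

# NOTE: for bounds like 12 and 16 the partial sums grow astronomically large
# before they cancel, so plain floats lose all accuracy -- use exact rationals
# (fractions.Fraction) or extra precision, and many more terms.
```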