If you have two functions f() and g(), then you can "combine" these (with the same argument) as
f(x)+g(x) or (f+g)(x)
f(x)-g(x) or (f-g)(x)
f(x)*g(x) or (f*g)(x)
f(x)/g(x) or (f/g)(x)
Similarly, if you want to do function "composition", you have a choice between
f(g(x)) or (f@g)(x)
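The lines above are Maple syntax, but the operator semantics are language-neutral. As a sketch only (the `Fn` wrapper class below is hypothetical, not part of any library), here is how the same "combine the functions, then apply the argument" operators could be mimicked in Python - where, conveniently, `@` can also be overloaded to mean composition:

```python
class Fn:
    """Hypothetical wrapper so that f+g, f-g, f*g, f/g and f@g
    all yield new callable functions, as in the Maple examples."""
    def __init__(self, func):
        self.func = func
    def __call__(self, x):
        return self.func(x)
    def __add__(self, other):
        return Fn(lambda x: self.func(x) + other.func(x))
    def __sub__(self, other):
        return Fn(lambda x: self.func(x) - other.func(x))
    def __mul__(self, other):
        return Fn(lambda x: self.func(x) * other.func(x))
    def __truediv__(self, other):
        return Fn(lambda x: self.func(x) / other.func(x))
    def __matmul__(self, other):
        # f @ g is composition: (f@g)(x) = f(g(x))
        return Fn(lambda x: self.func(other.func(x)))

f = Fn(lambda x: x + 1)
g = Fn(lambda x: 2 * x)

print((f + g)(3))   # f(3)+g(3) = 4+6 = 10
print((f @ g)(3))   # f(g(3)) = f(6) = 7
```

Note that `(f + g)` is itself a callable object, so the combined function can be built once and reused for many arguments.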
The obvious question: which of the alternatives is "better"? As a general(?) rule, I would say that it is safer to combine the functions before applying the argument - so (f+g)(x) is "safer" than f(x)+g(x).
The reason is best illustrated by considering the case where the argument is a floating-point number. (f+g)(x) will combine the function definitions for the symbol 'x' and then evaluate the result for a (floating-point) 'x' - so there is only one floating-point evaluation. On the other hand, f(x)+g(x) will evaluate f() as a floating-point number, evaluate g() as a floating-point number, and then do a floating-point addition - so three floating-point computations. Will the increase in the number of floating-point computations ever result in some kind of rounding "issue"? Maybe, maybe not! But minimising the number of floating-point computations is almost certainly a good way to avoid (as far as possible) any potential floating-point rounding issues.
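To make the rounding point concrete, here is a deliberately extreme (and contrived) example, sketched in Python rather than Maple: if f and g contain terms that cancel symbolically, combining the definitions first removes those terms before any floating-point arithmetic happens, whereas evaluating each function separately loses precision in the intermediate results.

```python
# Two functions whose large constant terms cancel exactly - but only
# if the cancellation happens *before* floating-point evaluation.
def f(x): return x + 1e16
def g(x): return -1e16

# "(f+g)" combined at the definition level simplifies to just x:
def f_plus_g(x): return x

print(f(1.0) + g(1.0))   # 0.0 - the 1.0 is swallowed by rounding in f(1.0)
print(f_plus_g(1.0))     # 1.0 - combining first avoids the roundoff entirely
```

This is why doing the symbolic combination first, and only then substituting the floating-point argument, is the safer order of operations.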
In the attached, I have inserted additional execution groups which illustrate the function combinations above - they all give the "same" answers as you have already obtained. So are they better? I'd say yes - but it's not something I'd be that interested in having an argument about.