My father owned a copy of it which I still have to this day. It had a lot of formulas different from the ones we were taught in school, and I liked the range of topics that it covered.
the_arun 18 hours ago [-]
I just tried a portion of the URL & it took me to a Bangladesh university - http://182.160.97.198:8080/xmlui/bitstream/handle/ . Interesting. When I go to the root of this URL, the error messages list the software this site is powered by. Generally this is not considered a secure way of running a site.
Rendello 1 day ago [-]
The typography and design inside the book is beautiful but the cover looks like an old bargain-store flyer. There's an expression about that, something about judging a book by its cover? I can't remember exactly.
yababa_y 1 day ago [-]
i've been missing this knowledge! thank you for the recommendation.
gus_massa 1 day ago [-]
Nitpicking:
> You’re never going to get error less than 10E−15 since that’s the error in the tabulated values,
If you have like 100 (or 400) values in the table, you can squeeze one more digit.
In the simple case, imagine you have the constant pi, with 15 digits, repeated 100 times. If the rounding was done in a random direction like
floor(pi * 1E15 + random() - 1/2 ) / 1E15
then you can use statistics to get an average whose error is like K/sqrt(N), where K is a constant and N is the number of samples. If you were paying attention in your statistics classes, you probably know the value of K, but that's not my case. Anyway, just pick N=100 or N=400 or something like that and you get another digit for "free".
Nobody builds a table for a constant and uses a uniform random offset for rounding. Anyway, with a smooth function that has no special values at the points where it's sampled, the exact value of the truncated decimal is quite "random" and you get a similar randomization.
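For anyone curious, here's a minimal sketch of the averaging trick in Python. Two caveats: I use 6-digit rounding instead of 15 so double-precision noise (~1e-16) can't contaminate the demonstration, and I use the unbiased dither floor(x + u) with u ~ U[0, 1) rather than the ±1/2 version above (which, if I'm reading it right, is biased low by half an ulp). For a uniform dither, the per-sample variance is 1/12 ulp², which pins down the constant K.

```python
import math
import random

random.seed(0)
TRUE = math.pi
ULP = 1e-6   # pretend the table stores only 6 digits

def stochastic_round(x):
    # Unbiased dithered rounding: E[floor(x/ULP + u) * ULP] == x
    # for u ~ Uniform[0, 1)
    return math.floor(x / ULP + random.random()) * ULP

single_err = abs(stochastic_round(TRUE) - TRUE)   # up to ~1 ulp = 1e-6
avg = sum(stochastic_round(TRUE) for _ in range(400)) / 400
avg_err = abs(avg - TRUE)   # expected to be roughly ulp / sqrt(12 * 400)

print(single_err, avg_err)
```

With N=400 the expected error of the mean is around 1.4e-8, comfortably more than one digit better than a single rounded entry.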
_0ffh 1 day ago [-]
Also, nobody in their right mind uses lookup tables where the table value is just the float approximation of the true f(x). You choose the support values to minimize an error measure (e.g. MSE) over a dense sampling of your interpolated value over x (or, in the limit, the integral of the chosen error function between the true curve and the interpolation of your supports). If you want to approximate, say, a convex function using linear interpolation, all the tabulated values would end up <= the true f(x).
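A quick numpy sketch of that idea, with exp as the convex example, 9 equally spaced supports, and MSE over a dense grid (the function, knot count, and grid density are my arbitrary choices, not anything from a real table implementation). Since piecewise-linear interpolation is linear in the support values, minimizing MSE is an ordinary least-squares problem:

```python
import numpy as np

f = np.exp                          # a convex example function
xs = np.linspace(0.0, 1.0, 1001)    # dense sampling of the interval
knots = np.linspace(0.0, 1.0, 9)    # 9 support points
h = knots[1] - knots[0]

# Hat (piecewise-linear) basis: column j is the interpolation weight
# of support value j at each dense sample point.
A = np.maximum(0.0, 1.0 - np.abs((xs[:, None] - knots[None, :]) / h))

naive = f(knots)                                  # tabulate the true values
opt, *_ = np.linalg.lstsq(A, f(xs), rcond=None)   # minimize MSE instead

def mse(v):
    return np.mean((A @ v - f(xs)) ** 2)

print(mse(naive), mse(opt))
print(opt - naive)   # interior optimized values dip below the true curve
```

The naive table's chords sit entirely above the convex curve (one-sided error); the least-squares fit pulls the interior supports below the true values and spreads the error to both sides.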
AlotOfReading 1 day ago [-]
The true value is far more useful in a lot of cases. If you're building a table indexed by the upper mantissa bits of the float, for example, it's difficult to distribute the error properly across all intervals.
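A rough Python sketch of such a table: log over [1, 2), indexed by the top 6 mantissa bits of the double, storing the true f at the left edge of each interval plus a first-order correction (the bit count and the choice of log are arbitrary, purely for illustration):

```python
import math
import struct

TABLE_BITS = 6

def mantissa_index(x):
    # Reinterpret the double's bits; take the top TABLE_BITS
    # of the 52-bit mantissa.
    bits = struct.unpack("<Q", struct.pack("<d", x))[0]
    return (bits >> (52 - TABLE_BITS)) & ((1 << TABLE_BITS) - 1)

# True f at the left edge of each mantissa interval, for x in [1, 2).
f = math.log
table = [f(1.0 + i / (1 << TABLE_BITS)) for i in range(1 << TABLE_BITS)]

def f_approx(x):
    # assumes 1.0 <= x < 2.0
    i = mantissa_index(x)
    x0 = 1.0 + i / (1 << TABLE_BITS)
    return table[i] + (x - x0) / x0   # d/dx log(x) = 1/x

err = max(abs(f_approx(1.0 + k / 997) - f(1.0 + k / 997)) for k in range(997))
print(err)   # worst-case error over a dense sweep
```

With true values at the interval edges, the residual in each interval is just the Taylor remainder (bounded here by (1/64)²/2 ≈ 1.2e-4); shifting the stored values to balance error per interval would require knowing where in each interval the inputs land.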