UmbralRaptor changed the topic of #kspacademia to: https://gist.github.com/pdn4kd/164b9b85435d87afbec0c3a7e69d3e6d | Dogs are cats. Spiders are cat interferometers. | Космизм сегодня! | Document well, for tomorrow you may get mauled by a ネコバス. | <UmbralRaptor> … one of the other grad students just compared me to nomal O_o | <ferram4> I shall beat my problems to death with an engineer.
<egg>
!u №
<Qboid>
U+2116 NUMERO SIGN (№)
oeuf has joined #kspacademia
egg has quit [Killed (NickServ (GHOST command used by oeuf))]
oeuf is now known as egg
<egg>
UmbralRaptor: cat?
<UmbralRaptor>
lasers
<egg>
UmbralRaptor: yes but cats are produced by lasers right
<UmbralRaptor>
uh
<egg>
a photon puts the cat in an excited state, then kittens are emitted
<UmbralRaptor>
hm
icefire has quit [Read error: -0x1: UNKNOWN ERROR CODE (0001)]
tinyurl_comSLASH has joined #kspacademia
tinyurl_comSLASH has left #kspacademia [#kspacademia]
<egg>
!wpn UmbralRaptor
* Qboid
gives UmbralRaptor an epsilon geodesic hydrochlorofluorocarbon which strongly resembles an integrand
egg is now known as egg|zzz|egg
<egg|zzz|egg>
the coffee at 22 may have been a mistake?
<Qboid>
oeuf: I added the explanation for this acronym.
<Qboid>
oeuf: [Vega] => Vettore Europeo di Generazione Avanzata
<oeuf>
Vega?
<oeuf>
LARES?
<oeuf>
!acr -add:LARES LAser RElativity Satellite
<Qboid>
oeuf: I added the explanation for this acronym.
<oeuf>
oh hey it's another ball of retroreflectors
oeuf is now known as egg
<egg>
!acr -add:PRISMA PRecursore IperSpettrale della Missione Applicativa
<Qboid>
egg: I added the explanation for this acronym.
<Iskierka>
someone should make a phone app that, when a screenshot is taken, edits the battery level indicator to be some tiny amount and stress people out on the internet
<kmath>
<eggleroy> ? Cayley, the new release of Principia, is out. Plotting in the tracking station, off-by-one, mac, other bugfixes. https://t.co/C7d5ruu332
<Fiora>
speaking of COMEFROM, it's not even that weird, there's DSPs with COMEFROM.
<Fiora>
i have vague memories of trying to tell our hardware people at one point that what they were suggesting was a variant on comefrom and thus a bad idea
<egg>
D:
<Fiora>
they're fun though
<Fiora>
other things i had to tell them not to do include filling the ISA with WAR hazards
<Fiora>
(i have a lot of stories) ( a lot of stories ) ( dont get me started ) (also hardware bugs)
<egg>
Fiora: also I promised numerics, but sadly principia has been lately bogged down in KSP silliness instead, here's some old but fun principia numerical stuff https://twitter.com/eggleroy/status/815719741990072321
<kmath>
<eggleroy> Double-double sum vs. ill-conditioned compensated summation in multistep integrators: hitting the floor vs. going u… https://t.co/0C7k1P7sUm
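For reference, a minimal sketch of the two techniques being compared, not Principia's actual code: compensated (Kahan–Babuška/Neumaier-style) summation carries a single correction term, while a double-double accumulator keeps the running sum as an unevaluated (hi, lo) pair. Function and type names here are illustrative.

    #include <utility>

    // Knuth's TwoSum: s = fl(a + b) and the exact rounding error e,
    // so that a + b == s + e holds exactly.
    std::pair<double, double> TwoSum(double a, double b) {
      double const s = a + b;
      double const bv = s - a;
      double const e = (a - (s - bv)) + (b - bv);
      return {s, e};
    }

    // Compensated summation: rounding errors are accumulated separately
    // and folded back in at the end.
    double CompensatedSum(double const* x, int n) {
      double sum = 0.0;
      double correction = 0.0;
      for (int i = 0; i < n; ++i) {
        auto const [s, e] = TwoSum(sum, x[i]);
        sum = s;
        correction += e;
      }
      return sum + correction;
    }

    // Double-double accumulator: the sum is kept as hi + lo with
    // |lo| <= ulp(hi)/2, i.e. with roughly twice the working precision.
    struct DoubleDouble {
      double hi = 0.0;
      double lo = 0.0;
    };

    DoubleDouble Add(DoubleDouble a, double b) {
      auto const [s, e] = TwoSum(a.hi, b);
      auto const [hi, lo] = TwoSum(s, e + a.lo);  // renormalize
      return {hi, lo};
    }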
<Fiora>
'tis quite fine
<Fiora>
speaking of numerics, we interviewed someone the other day who had on their resume that they were the primary numerics contact for their gpu compiler team
<Fiora>
and responsible for math expansions and precision and so on
<egg>
recently we implemented double * double -> double-double in the principia libs using Atlas's advice :D
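The standard way to get an exact double × double product as a double-double is a single fused multiply-add; a minimal sketch (illustrative names, not Principia's actual API):

    #include <cmath>

    struct DoubleDouble {
      double hi;
      double lo;
    };

    // hi = fl(a * b); lo is the exact rounding error, recovered with an FMA,
    // so that a * b == hi + lo exactly (barring overflow/underflow).
    DoubleDouble TwoProduct(double a, double b) {
      double const hi = a * b;
      double const lo = std::fma(a, b, -hi);
      return {hi, lo};
    }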
<Fiora>
i asked them why newton's method could be practically used to refine some complex functions, but not others, and what some examples are.
<Fiora>
they couldn't figure it out. i was very, very, very sad.
<Fiora>
(the answer is "the derivative is easy")
<egg>
Fiora: I find it rather sad/weird that numerics be taught to mathematicians, most of which care not a bit about it, rather than to computer scientists who will go on to misuse FP
<egg>
s/which/whom
<Qboid>
egg meant to say: Fiora: I find it rather sad/weird that numerics be taught to mathematicians, most of whom care not a bit about it, rather than to computer scientists who will go on to misuse FP
<Fiora>
yeah.... we have a strange situation where the people who need numerics often don't get it
<Fiora>
also scientists writing algorithms *using* that math
<bofh>
so the way scientific computing is currently "taught" is a travesty IMO
<Fiora>
even in my industry it's treated nearly as a black art, even moreso than anything else in the compiler and hardware stack
<Fiora>
despite the fact that it's so important
<UmbralRaptor>
bofh: taught?
<bofh>
UmbralRaptor: there's a reason for the quotes.
<bofh>
:P
<Fiora>
also, the answer is that you cannot refine e^x using newton's method because the derivative of e^x is e^x, which you don't know, because you're trying to find e^x.
<UmbralRaptor>
(I think every grad student is self-taught. Even now.)
<Fiora>
(this is why pow(x, y) is specced at 18 ULPs in OpenGL. true story)
<bofh>
(Really?! I would've expected the log() part of pow to be the nasty one, not the exp() part :/)
<Fiora>
Oh I mean they're both nasty
<Fiora>
I don't think you can easily refine either
<Fiora>
the point being like, you can take rsqrt() and apply newton's method all you want to it
<Fiora>
but you can't do that with, say, sin() or exp()
<egg>
Fiora: I mean pow is hard tbh
<Fiora>
I think the only two biggies you can apply newton-raphson to are rsqrt/sqrt and rcp.
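To make the "the derivative is easy" point concrete, here is the shape of those refinements (a sketch; the starting guess would come from a hardware estimate or a table, which is omitted):

    // One Newton step for y ~ 1/sqrt(a), from f(y) = 1/y^2 - a:
    // only multiplies and adds, no division and no square root needed.
    double RefineRsqrt(double y, double a) {
      return y * (1.5 - 0.5 * a * y * y);
    }

    // One Newton step for y ~ 1/a, from f(y) = 1/y - a: again just multiply-add.
    double RefineRcp(double y, double a) {
      return y * (2.0 - a * y);
    }

    // By contrast, Newton on f(y) = ln(y) - x to refine y ~ exp(x) gives
    // y_{n+1} = y_n * (1 + x - ln(y_n)): it needs ln to the full target
    // accuracy, which is the very problem being solved.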
<egg>
but just *properly documenting* your math.h (or Ada.Numerics.Elementary_Functions or whichever equivalent) is a rare skill
<bofh>
So I don't actually know how to efficiently implement log(x). For exp(x) I can at least bit twiddle it into an exponent and a mantissa, the exponent is trivial to handle, the mantissa I just feed to like a minimax (or Taylor if you're lazy) polynomial of order like 5.
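A toy version of that scheme, assuming the usual reduction x = k·ln 2 + r (a real implementation would use a two-word ln 2 and a minimax polynomial; degree 5 is roughly single-precision territory):

    #include <cmath>

    // exp(x) ~ 2^k * P(r) with x = k*ln2 + r and |r| <= ln2/2.
    double ToyExp(double x) {
      double const ln2 = 0.6931471805599453;
      int const k = static_cast<int>(std::nearbyint(x / ln2));
      double const r = x - k * ln2;
      // Degree-5 Taylor polynomial for exp(r); good to only ~6 digits here.
      double const p =
          1.0 + r * (1.0 + r * (1.0 / 2 + r * (1.0 / 6 +
                     r * (1.0 / 24 + r * (1.0 / 120)))));
      return std::ldexp(p, k);  // scale by 2^k via the exponent field
    }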
<Fiora>
hmm, I wonder if our docs say how we do it in hardware.
<egg>
e.g. gcc lately has semidecent math.h, but documented as correctly rounded instead of faithfully rounded
<egg>
where correctly is essentially impossible (tablemaker's dilemma), faithfully easy
<bofh>
Yeah, exp(x) has the obvious issue and log(x) gives you a nasty ratio with derivatives.
<egg>
and then doesn't document how often their faithfully differs from correctly
<bofh>
egg: so I thought "correctly rounded" in floating-point MEANS faithfully rounded, since correctly rounded in the mathematical sense is impossible.
<Fiora>
No, it means correctly
<egg>
bofh: only impossible for transcendental functions
<Fiora>
it's only *hard* for transcendental, and for a limited set of inputs, you can prove it.
<bofh>
Yeah, it's hard for a limited set of inputs, but it *is* hard.
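In code, the distinction is about ulp distance from the exact value: correctly rounded means the nearest representable double (error at most 0.5 ulp), faithfully rounded means one of the two doubles bracketing the exact value (error under 1 ulp). A rough way to measure it, assuming positive finite inputs and a higher-precision reference:

    #include <cmath>
    #include <cstdint>
    #include <cstring>

    // For positive finite doubles, consecutive values have consecutive bit
    // patterns, so the distance in ulps is the difference of the raw bits.
    std::int64_t UlpDistance(double a, double b) {
      std::int64_t ia, ib;
      std::memcpy(&ia, &a, sizeof ia);
      std::memcpy(&ib, &b, sizeof ib);
      return ia > ib ? ia - ib : ib - ia;
    }

    // Rough check against a long double reference (assumes long double has
    // enough extra precision; distance 0 is consistent with correct rounding,
    // distance <= 1 with faithful rounding, though a neighbour on the wrong
    // side of the exact value would also pass).
    bool LooksFaithful(double x) {
      double const reference =
          static_cast<double>(std::exp(static_cast<long double>(x)));
      return UlpDistance(std::exp(x), reference) <= 1;
    }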
* egg
pets Fiora for being right which is too rare a skill
<bofh>
(Also I thought the convo *was* about transcendental functions, I mean exp(x), log(x) and division are all transcendental functions...)
<bofh>
(I'm only half-joking about division btw)
<egg>
bofh: but sqrt is not, and you better *correctly* round sqrt, division, etc.
<Fiora>
not in opengl!
<egg>
lest the wrath of Kahan be upon yo
<egg>
s/$/u/
<Qboid>
egg meant to say: lest the wrath of Kahan be upon you
<Fiora>
division is 2.5 ulps, but only in the range of 2^-127 to 2^127 (ish)
<Fiora>
outside of that range you can be arbitrarily wrong.
<Fiora>
Really
<egg>
which, judging from his avatar, means you will get eaten by a canada wolf
<Fiora>
here's why (this is a fun one!!)
<Fiora>
suppose you do 2^127 / 2^126
<Fiora>
2^127 * (1 / 2^126)
<Fiora>
2^127 * (denormal flushed to zero)
<Fiora>
2^127 * 0
<Fiora>
0
<bofh>
yeah I was literally going to ask, does OpenGL even support non-FTZ/DAZ behaviour?
<bofh>
b/c I thought nope.
<Fiora>
my numbers (e.g. precisely how big you have to get to end up with a denormal) are probably wrong but the rough idea is there.
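In the spirit of that example (exact thresholds depend on the format: in binary32 the reciprocal goes subnormal once the divisor exceeds 2^126), here is why a / b and a * (1 / b) part ways under flush-to-zero. On real hardware the flush comes from the FTZ/DAZ mode bits; the snippet just simulates it.

    #include <cmath>
    #include <cstdio>

    // Simulated flush-to-zero: subnormal inputs are replaced by zero.
    float FlushToZero(float x) {
      return std::fpclassify(x) == FP_SUBNORMAL ? 0.0f : x;
    }

    int main() {
      float const a = std::ldexp(1.0f, 127);  // 2^127
      float const b = std::ldexp(1.0f, 127);  // 2^127
      float const direct = a / b;                        // 1
      float const via_rcp = a * FlushToZero(1.0f / b);   // 2^127 * 0 = 0
      std::printf("a/b = %g, a*(1/b) under FTZ = %g\n", direct, via_rcp);
    }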
<Fiora>
OpenGL supports it (in that you can have non-FTZ/DAZ and comply) but doesn't require it.
<Fiora>
opengl is particularly nasty because the spec pretends NaN does not exist. but NaN does exist.
<bofh>
Like arguably I can't think of a situation where denormals would be useful in computer graphics (but I could just be blanking mentally)
<Fiora>
f16 denorms are useful for HDR
<egg>
lamont: um, my upper stage gets stuck in unguided gravity turns and raises my apoapsis to above 630 km (with periapsis remaining underground) with a target circular orbit at 349 km
<Fiora>
""The instruction first goes through the TBLX->MAD stage to calculate the value of and removes the bias from the exponent. After that, a Find First One (FF1) on the output of to calculate the resulting exponent."
<bofh>
Yeah, the exponent part is the easy part. I'm more curious about the mantissa, since all the power series for log(1+x) have shitty as fuck convergence.
<bofh>
Like, it's *really* slow compared to, say, the one for exp(x) or J0(x).
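One standard answer to the slow log(1+x) series is to reduce via the mantissa and use ln(m) = 2·artanh((m−1)/(m+1)), whose series converges much faster because its argument stays small on the reduced range. A toy sketch for positive normal x (a production, fdlibm-style version would use a minimax polynomial in s^2 and more care; the truncated series below only reaches ~6 digits):

    #include <cmath>

    // ln(x) = e*ln2 + ln(m) with x = m * 2^e, m in [0.5, 1), and
    // ln(m) = 2*artanh(s), s = (m-1)/(m+1), so |s| <= 1/3 on this range.
    double ToyLog(double x) {
      int e;
      double const m = std::frexp(x, &e);
      double const ln2 = 0.6931471805599453;
      double const s = (m - 1.0) / (m + 1.0);
      double const s2 = s * s;
      // 2*(s + s^3/3 + s^5/5 + s^7/7 + s^9/9), truncated artanh series.
      double const ln_m =
          2.0 * s * (1.0 + s2 * (1.0 / 3 + s2 * (1.0 / 5 +
                     s2 * (1.0 / 7 + s2 * (1.0 / 9)))));
      return e * ln2 + ln_m;
    }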
<egg>
bofh: <bofh> Yeah, it's hard for a limited set of inputs, but it *is* hard. << iirc it ends up being hard in the undecidable sense? (that is it may require arbitrarily long intermediate precision)
<bofh>
oh god am I reading that right, 1-log2(mantissa) is approximated by the one's complement of log2(mantissa)?!?
<Fiora>
maybe?
<Fiora>
i don't know what the line means
<bofh>
like I'm not sure what else overline could be
<bofh>
well actually it's an approximation to 10*log10(x)
<Fiora>
but yeah this is basically pasted from a private hw spec, so :P
<bofh>
oh god so it's a third-order polynomial with truncated precision at each stage.
<bofh>
like I guess that's exactly the sensible thing in a h/w implementation but still.
<Fiora>
basically in HW you have the tradeoff of table size, multiplier sizes, and time taken
<Fiora>
this particular one is biased towards smaller tables and more multiplier time/effort
<Fiora>
and this trades off things like quadratic vs cubic etc
<bofh>
yeah it's a bit disorienting to me coming from software
egg is now known as eg|zzz|egg
<bofh>
where I'd use much higher-precision multipliers, tables that are like a single coefficient and an order... I'unno for log, but my sin(x)/cos(x) polynomials go up to 12th/13th order (so 6 terms)
<Fiora>
i mean don't you do the same in software?
<Fiora>
table lookup, then polynomial, etc
eg|zzz|egg is now known as egg|zzz|egg
<bofh>
like it's more the tables collapse down to 1 entry in s/w
<bofh>
in a lot of cases. since you have fp/fp multiplies that are reasonably fast.
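The "one-entry table" software style amounts to range reduction plus a single longer polynomial; a toy sin for already-reduced arguments (|x| <= pi/4), with Taylor coefficients for illustration since a real one would use minimax coefficients, and with the genuinely hard part, the pi/2 reduction, skipped:

    // Odd polynomial through x^13 for sin(x), |x| <= pi/4; coefficients are
    // 1/(2k+1)! with alternating signs.  Accurate to ~13-14 digits at the
    // edge of the range; same-degree minimax coefficients do a bit better.
    double ToySinReduced(double x) {
      double const c3 = -1.0 / 6, c5 = 1.0 / 120, c7 = -1.0 / 5040,
                   c9 = 1.0 / 362880, c11 = -1.0 / 39916800,
                   c13 = 1.0 / 6227020800.0;
      double const x2 = x * x;
      return x + x * x2 * (c3 + x2 * (c5 + x2 * (c7 + x2 * (c9 +
                  x2 * (c11 + x2 * c13)))));
    }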
<egg|zzz|egg>
Fiora: at least this was how it was done back when my father wrote his Ada.Numerics implementation
<bofh>
like the way above is how I see it being done in fixed-point emulation of floating-point
<bofh>
say, the x87 emulator in the linux kernel
<egg|zzz|egg>
(had to work on weird archs, so he had to implement Sqrt, which is really annoying)
<kmath>
<whitequark> "… I have no idea whether the guy is called Chebyshev, Tchebyshoff, Tschebyscheff or whatever" -- @eggleroy https://t.co/BzvVv8otZP
<Fiora>
like lemme get a good example of this shenanigans