UmbralRaptor changed the topic of #kspacademia to: https://gist.github.com/pdn4kd/164b9b85435d87afbec0c3a7e69d3e6d | Dogs are cats. Spiders are cat interferometers. | Космизм сегодня! | Document well, for tomorrow you may get mauled by a ネコバス. | <UmbralRaptor> egg|nomz|egg: generally if your eyes are dewing over, that's not the weather. | <ferram4> I shall beat my problems to death with an engineer. | We can haz pdf
<egg> !wpn
* galois gives egg a rhodium stannic knife with an eggscuse attachment
<whitequark> !wpn -add:adj satanic
<galois> Added adj 'satanic'
<whitequark> !wpn egg
* galois gives egg a fake convergent doom
<egg|laptop|egg> !wpn -add:adj stannous
<galois> Added adj 'stannous'
<egg|laptop|egg> !wpn -add:adj ferrous
<galois> Added adj 'ferrous'
<egg|laptop|egg> !wpn -add:adj ferric
<galois> Added adj 'ferric'
e_14159 has joined #kspacademia
<egg|laptop|egg> what are the corresponding words for copper in English
<egg|laptop|egg> ah cuprous
<egg|laptop|egg> !acr -add:adj cuprous
<galois> Acronym already defined! Try !acr -redef:WTF When Taylor Fails
<egg|laptop|egg> !wpn -add:adj cuprous
<galois> Added adj 'cuprous'
<egg|laptop|egg> !wpn -add:adj cupric
<galois> Added adj 'cupric'
e_14159_ has quit [Ping timeout: 378 seconds]
<egg|laptop|egg> !wpn -add:wpn oxide
<galois> Added wpn 'oxide'
<egg|laptop|egg> !wpn -add:wpn chloride
<galois> Added wpn 'chloride'
<egg|laptop|egg> !wpn -add:wpn fluoride
<galois> Added wpn 'fluoride'
<egg|laptop|egg> !wpn -add:adj chlorous
<galois> Added adj 'chlorous'
<egg|laptop|egg> !wpn -add:adj chloric
<galois> Added adj 'chloric'
<egg|laptop|egg> !wpn -add:adj hypochlorous
<galois> Added adj 'hypochlorous'
<egg|laptop|egg> !wpn -add:adj chloryl
<galois> Added adj 'chloryl'
<egg|laptop|egg> !wpn whitequark
* galois gives whitequark a californium thrombolysis
<egg|laptop|egg> !wpn UmbralRaptor
* galois gives UmbralRaptor an osmium 🗡
<egg|laptop|egg> would chlorous fluoride be ClF3
<egg> !wpn
* galois gives egg a strontium heptapodial theory
<egg> mofh: UmbralRaptor: are you able to have pdf of Tafeln Höherer Funktionen
<mofh> "Tables of Higher Functions"?
<egg> yeah
<egg> mofh: are elliptic integrals B and D due to Jahnke and Emde or do they predate that
<egg> Funktionentafeln mit Formeln und Kurven has D but not B
<egg> (fourth edition, 1945; the first is from 1933, dunno if it had D already)
<egg> wait there's a Funktionentafeln from 1909!?
<mofh> I know Emde had D but I have no clue where B came from, I first saw it in yours+phl's Fukushima code. I can head by MIR in the morning where I recall there were like Eighteen books on elliptic integrals and flip thru them.
<mofh> (and no I can't seem to find a PDF of that currently; still looking tho).
<egg> mofh: B exists at least in Bulirsch 65, and is cited from Tafeln Höherer Funktionen
<egg> Bulirsch 1965 also cites a C
<mofh> Hrm.
<egg> EF, so they added a D, then C, then B, and then B and D are the more convenient so Bulirsch el2 computes B and D
<egg> and then Fukushima invented J for the third kind because I don't know
<egg> I guess it's the letter before K
<egg> and K is Π
<egg> elliptic integrals!?
<egg> elliptic integrals, where the delimiter tells you which convention you're using
<mofh> I hate that convention so, so much.
<egg> tfw E(φ, sin α) = E(φ | sin² α) = E(sin φ ; sin α) = E(φ \ α)
<egg> whitequark: cursed notation ^
<egg> mofh: I mean yes, but at least it's better than having the same notation for multiple conventions
<egg> K vs. Π is utterly weird though
<egg> mofh: wait no K is F right?
<mofh> Yeah, I thought P is Π, or E is Π, K is *definitely* F.
<egg> mofh: nono Π is Π and E is E
<egg> Π is third kind and is not like the others
<mofh> right, E is E, but I'm positive that the third kind also has a latin letter associated to it.
<mofh> I just can't recall which.
<egg> R_J in Carlson?
<mofh> YES that's it.
<egg> and J in Fukushima
<egg> not sure why R_J
<mofh> I'm not sure why either, tbh.
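The four cursed spellings egg lists above — E(φ, k) with modulus k, E(φ | m) with parameter m = k², and E(φ \ α) with modular angle α where k = sin α — all name the same incomplete integral of the second kind; only the encoding of the second argument differs. A stdlib-only sketch (the function names here are mine, not any real library's API) making the encodings explicit:

```python
# Three encodings of the same incomplete elliptic integral of the second kind,
# E = ∫₀^φ sqrt(1 − m sin²θ) dθ. Names are hypothetical, not from the chat.
import math

def _E(phi, m, n=2000):
    """Composite Simpson quadrature of ∫₀^φ sqrt(1 − m sin²θ) dθ (parameter form)."""
    h = phi / n
    f = lambda t: math.sqrt(1.0 - m * math.sin(t) ** 2)
    s = f(0.0) + f(phi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(i * h)
    return s * h / 3

def E_parameter(phi, m):       # E(φ | m)
    return _E(phi, m)

def E_modulus(phi, k):         # E(φ, k), with m = k²
    return E_parameter(phi, k * k)

def E_modular_angle(phi, a):   # E(φ \ α), with k = sin α
    return E_modulus(phi, math.sin(a))

phi, alpha = 0.7, 0.4
k = math.sin(alpha)
# Same integral under every spelling:
assert E_modulus(phi, k) == E_parameter(phi, k * k) == E_modular_angle(phi, alpha)
```

The design point is exactly egg's complaint: the delimiter (comma, bar, backslash) is the only thing telling you which encoding is in play.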
<egg> mofh: hm Jahnke-Emde has B C and D for the complete integrals
<egg> but only D for the incomplete?
<egg> p. 73
<egg> also uses J to denote "any complete integral" yay
<mofh> *blink* *blink*
<egg> i mean it's tables, it makes sense in context
<egg> mofh: Jahnke-Emde p. 73
<egg|laptop|egg> Let J = ∫ Φ/Δ dφ, with Δ = sqrt(1 - k² sin² φ),
<egg|laptop|egg> and for Φ = 1, Δ², sin² φ, cos² φ, (sin φ cos φ / Δ)²
<egg|laptop|egg> J = K, E, D, B, C respectively
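The table egg is transcribing can be spot-checked numerically; this is my stdlib-only sketch (not from the chat), using Simpson quadrature for the complete integrals and checking the identities the discussion keeps circling around:

```python
# Numerical check of the Jahnke-Emde table: with Δ = sqrt(1 − k² sin²φ), the
# complete integral J = ∫₀^{π/2} Φ/Δ dφ yields K, E, D, B as Φ runs through
# 1, Δ², sin²φ, cos²φ. (My sketch, not code from the chat.)
import math

def complete_J(phi_of, k, n=4000):
    """Composite Simpson quadrature of ∫₀^{π/2} Φ(φ)/Δ(φ) dφ."""
    m = k * k
    def f(p):
        d = math.sqrt(1.0 - m * math.sin(p) ** 2)
        return phi_of(p, d) / d
    h = (math.pi / 2) / n
    s = f(0.0) + f(math.pi / 2)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(i * h)
    return s * h / 3

k = 0.6
K = complete_J(lambda p, d: 1.0, k)               # Φ = 1
E = complete_J(lambda p, d: d * d, k)             # Φ = Δ²
D = complete_J(lambda p, d: math.sin(p) ** 2, k)  # Φ = sin²φ
B = complete_J(lambda p, d: math.cos(p) ** 2, k)  # Φ = cos²φ

assert abs(D - (K - E) / k**2) < 1e-10  # D = (K − E)/k², the JE33 expression
assert abs(B - (K - D)) < 1e-10         # B = K − D, i.e. the "(K − D)" of JE33
```

Since sin²φ + cos²φ = 1, B + D = K falls out immediately, which is why "(K − D)" and "B" are the same object.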
<egg|laptop|egg> mofh: Fukushima calls B and D associate elliptic integrals of the 2nd kind, what makes them 2ndish as opposed to 1stish
<egg|laptop|egg> oh lol there's even more notation hell for Π depending on which argument is n https://dlmf.nist.gov/19.1
<UmbralRaptor> egg: not for probably ~38 hours
<egg|laptop|egg> mofh: aaaa Legendre called the elliptic integrals "elliptic functions"
<egg|laptop|egg> mofh: Legendre even has integrals Z, P, and Q
<egg|laptop|egg> also T
egg|laptop|egg has quit [Remote host closed the connection]
UmbralRaptor has quit [Quit: Bye]
UmbralRaptop has joined #kspacademia
<UmbralRaptop> kitter-katter is a good cat name, right? https://photos.app.goo.gl/8NNRFLTgnUKSLaGW7
<kmath> <LeaksPh> Dear reader of our paper source, Greetings!
<_whitenotifier-cd19> [Principia] pleroy opened pull request #2353: Fix gross errors found by comparison with Mathematica integration - https://git.io/JeC1B
<e_14159> egg: When I saw the "Fix gross error" thing related to principia, I thought it'd be fixing an actual gross error, like fixing the spelling of maneuver
<UmbralRaptop> e_14159: Aluminum Aluminum Aluminum
<e_14159> UmbralRaptop: s/num/nium?
<galois> e_14159 thinks UmbralRaptop meant to say: e_14159: Aluminium? Aluminum Aluminum
<UmbralRaptop> 🦅🦅🦅
egg|cell|egg has joined #kspacademia
<e_14159> Damn, I should fix the unicode rendering here. I have no idea what you posted :-)
<egg|cell|egg> !u 🦅
<galois> No info for U+1f985 (I only know about Unicode up to 9.0)
<UmbralRaptop> There's a bald eagle emoji
<egg|cell|egg> UmbralRaptop: cat
<UmbralRaptop> friendly cat
<egg|cell|egg> e_14159: what's wrong with our spelling of manœuvre
<e_14159> egg|cell|egg: Nothing. Maneuver is the wrong one.
<mofh> egg: aaaaaaaaaaaaaaaaaaa what even is notation
<egg|cell|egg> Nah but he just calls every integral Z when he works on it
<egg|cell|egg> Have you found who came up with B and D though
<mofh> nope, just got to the library, grabbed like 8 elliptic integral texts, checking now.
egg|laptop|egg has joined #kspacademia
<egg|laptop|egg> whitequark: re. Amblypygi [CW: Amblypygi] https://thesmallermajority.com/2012/10/06/the-scariest-animal-that-will-never-hurt-you/
<whitequark> lol
<egg|laptop|egg> that blog is fun
<whitequark> they don't actually bother me
<egg|laptop|egg> whitequark: from the same blog: https://sixlegsphoto.files.wordpress.com/2014/11/banana.jpg
<egg|laptop|egg> whitequark: also, tailless whipscorpions as opposed to the ones with a tail https://thesmallermajority.com/2012/10/14/the-other-whipscorpions/
<egg|laptop|egg> Thelyphonida
<egg|laptop|egg> or Uropygi apparently
<_whitenotifier-cd19> [Principia] azumafuji starred Principia - https://git.io/JeC9x
<egg|laptop|egg> mofh: D is 30s of earlier, and traces of B are from there
egg|laptop|egg has quit [Remote host closed the connection]
egg|laptop|egg has joined #kspacademia
egg|laptop|egg has quit [Remote host closed the connection]
<mofh> egg: yeah, D (or at least its expression) shows up in the integrals bit of Appell & Lacour, which is from 1922
<egg|cell|egg> What about the letter D though
<mofh> also whee mathematical French: "polynome bicarré" ("biquadratic polynomial") everywhere
<mofh> interestingly they're really hesitant to use letters, they just use "Type I, II & III" integrals in Legendre / Weierstrass Normal Forms.
<mofh> (actually they go up to "Type V")
egg|laptop|egg has joined #kspacademia
<mofh> It's definitely D that I'm seeing b/c they eggsplicitly define it in terms of a difference of F and E at the very end
<egg|laptop|egg> mofh: old mathematical french even
<mofh> I mean, 1922.
<kmath> <✔ladyhaja> Registering at a new doctors & on the staff profile page under special interests there are things like diabetes & w… https://t.co/xD4sTeEjGp
<egg|laptop|egg> mofh: yeah, but I'm wondering who called it D
<mofh> okay, let me hop to the next book then
<mofh> which is from... 1933
<mofh> still no traces of B but iirc Legendre proved that all you need is F, E, D & Π to represent any integral of a certain form and B came about as a computational convenience
<egg|laptop|egg> mofh: Legendre called them F, E, and Π, but did he call D D
<mofh> I can't seem to tell, still reading.
* egg|laptop|egg stares at a treatise on elliptic "functions" and eulerian integrals
<mofh> "case when the polynomial under the radical is of the third degree"
<mofh> I'm now staring at a treatise on Eulerian Integrals (R. Campbell, translated into French, 1966)
<egg|laptop|egg> but now you're going to more recent texts, when the notation D already exists in the thirties
<mofh> Point, and this isn't relevant anyway, but it was at least amusing that they began the chapter with "οὔτε λέγει οὔτε κρύπτει ἀλλὰ σημαίνει" ("neither speaks nor conceals, but gives a sign") from Heraclitus fragment 93
<egg|laptop|egg> mofh: Legendre (T. 1, p. 43, in chap. IX, "comparaison des fonctions elliptiques de la seconde espèce", i.e. comparison of elliptic functions of the second kind) has G(φ) = E(φ) + k F(φ)
<egg|laptop|egg> k being an arbitrary coefficient
<mofh> except E + F isn't quite (E - F)/k^2
<egg|laptop|egg> I don't think that's the same k
<egg|laptop|egg> if that k is negative you might get D but I don't think the name D appears
<mofh> the name D does not seem to appear in that Legendre text, no.
<mofh> which is interesting since DLMF calls D the "D(ϕ,k): incomplete elliptic integral of Legendre's type"
<mofh> (whereas, say, F gets called "F(ϕ,k): Legendre's incomplete elliptic integral of the first kind")
<egg|laptop|egg> yeah, but while being a very good work about current conventions and methods it's not a history of mathematics work
<mofh> fair, I'm just trying to find leads here :P
<egg|laptop|egg> have you found the Jahnke books in your library?
<mofh> just a second, let me check.
<egg|laptop|egg> mofh: what's your institutional library
<mofh> nope; Hancock says to see "Enneper, Elliptische Funktionen, where the historical notes and list of authors cited on pp. 500 - 598 are valuable."
<mofh> so I'm at MIR Jussieu b/c the libraries both here and at École des Ponts are very lacking in older mathematical texts
<mofh> here being Paris-Est.
<mofh> (Enneper does not seem to eggsist here, but it's in German and like from 1876)
<egg|laptop|egg> mofh: https://www.worldcat.org/title/tables-of-higher-functions/oclc/527717 eggists in paris (incl. the observatoire) fwiw
<mofh> oh, it's in the Generale, not Mathematique-Recherche.
<mofh> that's like 2 buildings over, I can be there in 3 minutes
<egg|laptop|egg> Elliptische Functionen. Theorie und Geschichte. does sound like a promising title
<egg|laptop|egg> also cc e
<egg|laptop|egg> s/$/_14159 for weird German spelling/
<galois> egg|laptop|egg meant to say: also cc e_14159 for weird German spelling
<e_14159> egg|laptop|egg: The "Functionen" part?
<egg|laptop|egg> yeah
<mofh> Oh no, it's here, just it's a Tables not a Theory so it's in another section. Nevermind, back in like one minute.
<e_14159> I also like "Das Recht der Uebersetzung in fremde Sprachen bleibt vorbehalten." ("The right of translation into foreign languages remains reserved.") - some things just stay the same.
<egg|laptop|egg> lol
<egg|laptop|egg> Achter Abschnitt. § 25. Die elliptischen Integrale und ihre Classification. ("Eighth section. § 25. The elliptic integrals and their classification.")
<egg|laptop|egg> there are integrals called A, B, and C here
<egg|laptop|egg> but I'm not sure they're the right thing
<mofh> okay, I have 1933, 1948, 1960 here, which one do you want me to check? (I grabbed 1933 to start).
<mofh> 1933 already has D in it using eggsplicitly the letter D, and defining it in terms of (E-F)/k^2
<egg|laptop|egg> mofh: so that's tables of higher functions, not tables, formulae, and curves?
<mofh> 1933 is just "Tables of Functions", the other two are "Tables of Higher Functions".
<egg|laptop|egg> s/tables,.*/tables of functions with formulae and curves/
<galois> egg|laptop|egg meant to say: mofh: so that's tables of higher functions, not tables of functions with formulae and curves
<egg|laptop|egg> ?
<mofh> No mention of "formulae and curves" anywhere.
<egg|laptop|egg> okay good
<egg|laptop|egg> and it defines D
<egg|laptop|egg> does it define B?
<egg|laptop|egg> it should, given that Bulirsch cites it as a source for B
<mofh> which year? 1933 does not seem to have it.
<egg|laptop|egg> 1960
<mofh> rofl the later editions are bilingual DE/EN with the texts side-by-side on each page in columns
<egg|laptop|egg> yeah, same for tables of functions with formulae and curves
<mofh> wtf, integral from 0 to 1 of K(k) (the complete integral) is 2*(Catalan's constant)?!
<mofh> YEP
<egg|laptop|egg> wtf is Catalan's constant
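The identity mofh spotted is real: ∫₀¹ K(k) dk = 2G, where G ≈ 0.9159655941… is Catalan's constant. A stdlib-only spot check (my sketch; K is computed via the arithmetic-geometric mean, and the tolerance and grid size are my choices):

```python
# Spot-check ∫₀¹ K(k) dk = 2G (G = Catalan's constant), with
# K(k) = π / (2 · agm(1, sqrt(1 − k²))). Sketch, not code from the chat.
import math

def K(k):
    """Complete elliptic integral of the first kind via the AGM."""
    a, b = 1.0, math.sqrt(1.0 - k * k)
    while abs(a - b) > 1e-16:
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return math.pi / (2.0 * a)

CATALAN = 0.9159655941772190  # Catalan's constant G

# Midpoint rule: K(k) has only an integrable log singularity at k = 1,
# and the midpoint rule never evaluates at the endpoint.
n = 50_000
integral = sum(K((i + 0.5) / n) for i in range(n)) / n
assert abs(integral - 2.0 * CATALAN) < 1e-4
```
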
<egg|laptop|egg> no D in Enneper that I can see
<mofh> "For calculations sometimes it is advantageous (especially in order to avoid loss of accuracy thru differences of nearly equal numbers) to use besides K and E also D and the further integrals: B(k) = (blarg) and C(k) = (blarg)"
<mofh> page 67 of 1960
<mofh> let me see if it's in 1948 as well.
<egg|laptop|egg> mofh: ah so in the *complete* integrals
<egg|laptop|egg> mofh: and the 1933 edition did not have it among the complete integrals?
<mofh> YEP, where it defines B to be D but using cos^2 where D uses sin^2
<mofh> so the 1933 edition seems to be actually "tables of functions with formulae and curves" b/c it reads completely unlike the other two.
<mofh> and it is missing the word "higher"
<mofh> let me check anyway
<mofh> NOPE, page 145 of that one HAS THE EGGSPRESSION (K(k) - D(K)), but does *NOT* assign to it a letter at all.
<mofh> whereas 1948 of Tables of Higher Functions already calls it B (pg. 74-75).
<egg|laptop|egg> aha
<mofh> like 1933 is hilarious b/c it actually uses K-D quite a bit, like it actually enumerates the 4 integrals K, E, D and (K-D)
<mofh> presumably someone got tired of calling it (K-D) all the time in the time interval between 1933 and 1948
<mofh> (K(k) - D(k))*
<egg|laptop|egg> interestingly the 1945 ed. of "tables of functions with formulae and curves" (Jahnke-Emde, no Loesch) has B in the complete integrals
<egg|laptop|egg> p. 73
<egg|laptop|egg> and D in the incompletes, but not B by name
<mofh> I mean all of them have D in the incompletes as well, from what I can tell. But I can't see B in the incompletes even in 1960.
<mofh> But let me double-check.
<egg|laptop|egg> okay so Bulirsch might have extrapolated that name in 1965 then
<mofh> yep, incompletes list F, E and D only in both 1948 and 1960.
<mofh> Yeah, that would make sense.
<egg|laptop|egg> mofh: but B only appears in the completes between 1933 and 1945 (in Jahnke-Emde), 1948 (in Jahnke-Emde-Loesch)?
<egg|laptop|egg> so then I guess the question is when does the name D appear
<egg|laptop|egg> is it in Jahnke-Emde 1909
<mofh> 1948 in Jahnke-Emde-Loesch, yes.
<mofh> I don't know if there's a copy of 1909 here, let me check.
<egg|laptop|egg> that would be a different title
<egg|laptop|egg> Funktionentafeln mit Formeln und Kurven
<mofh> kk, one moment.
<egg|laptop|egg> (being Jahnke-Emde, not Jahnke-Emde-Loesch)
<mofh> https://www.worldcat.org/title/funktionentafeln-mit-formeln-und-kurven/oclc/5894409174&referer=brief_results I mean worldcat seems to lack results for it, and I can't see it here, nor the 1928 edition: https://www.worldcat.org/title/funktionentafeln-mit-formeln-und-kurven/oclc/28712593&referer=brief_results
<egg|laptop|egg> oh there's a 1938 edition of the same too
<egg|laptop|egg> JE, not JEL
<egg|laptop|egg> I wonder whether that one has B
<egg|laptop|egg> JE45 has B
<mofh> 1938 edition of JE?
<egg|laptop|egg> yeah
<mofh> no copies in Paris, it seems: https://www.worldcat.org/title/funktionentafeln-mit-formeln-und-kurven-von-eugen-jahnke-und-fritz-emde/oclc/976916048 -- amusingly there's one in Toronto, so you really should've asked me this back in May,,,
<egg|laptop|egg> if it's in JE38 then the notation is probably Emde's (Jahnke is very dead); the 45 edition only has a preface from the publisher citing a ton of contributors so attribution is muddy
<mofh> actually I know someone who's grad math at UToronto; let me see if I can poke them in a few hours and get them to check out JE38.
<egg|laptop|egg> haha
<mofh> (they're prolly still asleep right now).
<egg|laptop|egg> mofh: actually which JE do you have at hand
<egg|laptop|egg> because we only know that it's not in JEL33, nothing about JE33
<mofh> so it's not JEL33, it only lists Jahnke and Emde as authors.
<egg|laptop|egg> what's the title
<mofh> OH BLOODY HELL AND THE "with formulae and curves" is there, just only in smaller print on the INSERT title, but not on the spine or the front cover.
<egg|laptop|egg> ah okay
<mofh> "Published by B.G. Teubner / Leipzig and Berlin / 1933"
<mofh> so this is JE33 with formulae and curves "Second (Revised) Edition, with 171 Figures"
<egg|laptop|egg> right
<egg|laptop|egg> the first edition being 09, which, good luck
<egg|laptop|egg> and the 1948 one was JEL
<mofh> has (K-D) but not calling it B, so I could totally believe Emde getting fed up and defining it as B
<mofh> yes
<egg|laptop|egg> that makes more sense, because then they're not concurrent
<egg|laptop|egg> JE09, JE33, JE45, and then JEL48, JEL60
<mofh> I'll try to get someone to check JE38 in a few hours, since I think that would get a pretty tight bound.
<egg|laptop|egg> p. 73 would be the place I think?
<mofh> of JE38?
<egg|laptop|egg> yeah
<egg|laptop|egg> well it's p. 73 in JE45, not sure what that tells us
<egg|laptop|egg> s/JE33/JE33, JE38/
<galois> egg|laptop|egg meant to say: JE09, JE33, JE38, JE45, and then JEL48, JEL60
<mofh> it's pg. 145 in JE33, interestingly.
<galois> [WIKIPEDIA] Friedrich Lösch | "Friedrich Lösch (10 December 1903 in Geislingen an der Steige – 9 January 1982 in Stuttgart; full name: Friedrich Moritz Lösch) was a German mathematician who worked on analysis.…"
<egg|laptop|egg> mofh: OK so no page number stability, understandable
<mofh> much further ahead than in either of the JELs, where it's 74 in JEL48 and 67 in JEL60.
<egg|laptop|egg> mofh: is it at the beginning of section V.B in JE33?
<mofh> nope, it's at the beginning of section XV.B
<egg|laptop|egg> huh
<mofh> like there seem to be extra chapters at the start in JE33 on uh
* egg|laptop|egg stares at the article on Lösch: "in 1938 he was a Privatdozent at the University of Rostock, where he became a professor in 1939 and was Prorektor in the last years of the war, but was afterwards not re-employed because of his NSDAP membership"
<mofh> "table of powers", "auxiliary tables for computation with complex numbers", "cubic equations", "elementary transcendental equations", "eggsponential function", "Planck's Radiation Function", "Source Functions of Heat Conduction", "The Hyperbolic Functions", "Circular and Hyperbolic Functions of a Complex Variable: Index of Tables of the Elementary Transcendentals" and THEN it goes to Gamma Function /
<mofh> Error Integral etc
<mofh> whereas JEL48 already goes Gamma/Factorial Function, Error Integral
<mofh> er add Sine/Cosine Interals just before Gamma
<mofh> so basically JE33 has 10 extra chapters on very elementary functions and tables.
<mofh> Indeed, 145-76 = Nice, so there's approximate page number stability (it seems to lie in [67,77]), just not chapter eggsistence stability.
<egg|laptop|egg> Preface to 1938 edition: "the elementary functions which occupied the first 75 pages of the second edition have been omitted."
<mofh> Ahh, that'd do it. So something like around Pg. 73 would make sense for JE38 (also did you find a copy?)
<egg|laptop|egg> ibidem: "The principal points in which this third edition of the Tables of Functions differs from the second edition of 1933 are as follows: In the complete elliptic integrals of the first and second kind, formulae and numerical tables have been added for other than the Legendre standard forms. The numerical calculation is thereby improved in many cases."
<egg|laptop|egg> nah, reading off the 38 preface from the 45 edition
<egg|laptop|egg> so my guess is that B in from 38
<egg|laptop|egg> s/in/is/ etc.
<galois> egg|laptop|egg meant to say: so my guess is that B is from 38
<egg|laptop|egg> mofh: OK but the far less clear question is that of D
<mofh> Ahh.
<egg|laptop|egg> is D from 1909? does it predate the Funktionentafeln altogether?
<mofh> Yeah, D seems to go back pre-1900 I suspect. OTOH I didn't see it in Appell & Lacour, but let me give that another readthrough.
<mofh> (1922 and in French)
<egg|laptop|egg> it doesn't seem to appear in Ennerich or in Jacobi though
<mofh> I suspect parallel innovation, possibly? Like I imagine notation wasn't very synchronized this far back.
<mofh> I can't see the *letter* used in Appell & Lacour but this volume is not an easy read (honestly, more due to the mathematical notation than the French, they define the integrals using \int R(x^2)/y, then define various versions of y = sqrt(stuff) instead of just defining the integrals all at once, and it's rapidly overflowing my working memory when I have to mentally compute products of the ys then shove
<mofh> them in the integrals).
<mofh> (like this is godawful notation)
<egg|laptop|egg> mofh: my question is rather, is D an invention of JE09 (or possibly JE33), if we find no earlier record
<egg|laptop|egg> mofh: did JE33 have D in the completes, and did it have C?
<egg|laptop|egg> you mentioned it had D in the incompletes
<mofh> JE33 had D in the completes *and* incompletes, it did *not* have C or the eggspression for C.
<mofh> it had the eggspression for B in the completes, but *not* the incompletes, and it did not call it B.
<egg|laptop|egg> interesting
<egg|laptop|egg> mofh: libgen has a copy of JE45 if you want to compare
<egg|laptop|egg> oh there's a 1928 edition too?
<mofh> sure, let me libgen it.
<egg|laptop|egg> mofh: so there's a copy of the 1909 edition in Italy https://opac.museogalileo.it/imss/resource?uri=000000304309&l=en
<egg|laptop|egg> mofh: also, any idea why Fukushima calls B and D "associate of the 2nd kind"? is there something more 2nd than 1st about them?
<mofh> I don't know, and that's really weird, since I tend to treat B and D as more of the 1st kind.
<mofh> Like they're a difference of the first and second kind (divided by k^2), but they read to me more 1st-kind than anything.
<mofh> I guess you'd have to ask Fukushima.
<mofh> Getting a 504 gateway timeout on libgen, grr.
<mofh> Another mirror worked, nm.
<SilverFox> I present to yall, the fourth dimension, visualized:
<mofh> okay, grabbing JE45
<mofh> JE45 again defines D in the incompletes (pg. 56), and both D and B in the completes (pg. 73).
<mofh> also defines C in the completes, interestingly (what the hell is that one useful for, anyhow?)
<mofh> OHH LOOKING AT JE45 pg.73 I THINK I KNOW WHY FUKUSHIMA CALLED IT "second kind"
<egg|laptop|egg> having a name I guess
<mofh> b/c there's an eggspression of the form f(x)^2 for f non-constant in the numerator of E, D and B, whereas for F it's constant
<mofh> and the denominator's the same when you write them this way.
<egg|laptop|egg> by F you mean K?
<mofh> (E is defined as denominator^2 over denominator)
<mofh> yeah
<egg|laptop|egg> right
<egg|laptop|egg> yeah I guess that makes sense
<egg|laptop|egg> and the third kind is another animal entirely iirc?
<mofh> yep
<mofh> like I'm not even seeing third kind here anywhere, and it's kind of nasty computationally.
<mofh> especially given the time period involved (like the preface mentions slide rules, lmao).
<egg|laptop|egg> mofh: Fukushima seems to have acted upon only one (and the most boring one) of the points that phl made in that email, so you should probably use our implementation if you want to go that way
<egg|laptop|egg> it would be interesting to rigorously compare it with Carlson's
<egg|laptop|egg> mofh: completely unrelated question that I had been thinking about lately
<egg|laptop|egg> mofh: when does one ever care about good argument reduction
<mofh> egg|laptop|egg: yeah, I definitely will. Plus I can convert good C++ into C or Fortran90 much more quickly than I can convert bad Fortran into good Fortran90.
<mofh> also hm. whenever you get fed *large* values of \theta?
<egg|laptop|egg> mofh: consider
<egg|laptop|egg> cos ωt, t large
<mofh> since just doing double-double-double-double gets you to \theta ~ 1023*\pi easily.
<egg|laptop|egg> ωt is a multiplication
<egg|laptop|egg> you have an ULP on your argument
<egg|laptop|egg> so your argument reduction is a waste of time
<mofh> I mean if you have that you want to do Payne-Hanek as an argument of ω *and* t *if you can*, namely if you can get one of them to higher-than-double precision.
<mofh> i.e. compute only the part of the multiplication that you need for the reduction.
<egg|laptop|egg> right, you can reduce t directly
<mofh> (like Payne-Hanek is how you can do cos(ωt) for t large and ω = π, and if ω is an integer then arg reduction is trivial).
<egg|laptop|egg> (you don't even have to have them to higher precision, you can take their given values as truth and extend with 0s)
<mofh> ω is an integer and you're computing cos(πωt)*)
<mofh> +(*
<mofh> OH. yeah, that would work.
<egg|laptop|egg> mofh: but I really don't see a point to argument reduction baked into the trig function
<egg|laptop|egg> (and in practice your ω will have ULPs of its own so it's often pointless anyway, but that's another discussion)
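egg's backward-error point can be made concrete in two lines of stdlib Python (a sketch; 1e10 is an arbitrary stand-in for a large ωt): one ULP of uncertainty in the argument already moves the cosine by roughly ulp(x)·|sin x|, far more than any refinement of the reduction could recover.

```python
# One ULP at a large argument swamps any gain from heroic argument reduction.
# (My sketch, not code from the chat.)
import math

x = 1.0e10                             # a large ωt, carrying at least 1 ULP of error
x_next = math.nextafter(x, math.inf)   # the neighbouring representable argument
# ulp(1e10) ≈ 1.9e-6 radians of phase, so the cosines differ at the ~1e-6 level:
assert abs(math.cos(x) - math.cos(x_next)) > 1e-7
```
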
<mofh> Oh, yeah. I don't necessarily either. Like it's necessary for computation since your Taylor polynomials rapidly become unusable outside like [0,π] (or even [0,π/2]), but I can't think of any situations where you feed cos(x) large arguments that are precise nevertheless.
<mofh> ACTUALLY NEVERMIND I HAVE BEEN IN PRECISELY THAT SITUATION
<mofh> asymptotic eggspansions of Ai(-x)
<egg|laptop|egg> mofh: e.g., what would you lose if your trig functions took angles in turns, thereby having trivial argument reduction, and multiply by a π of the working precision where appropriate
<egg|laptop|egg> mofh: hm, what happens with these Airy functions?
<mofh> you *must* do the arg reduction of AiryPhaseAsymptotic(x)*x*sqrt(x) to high precision before feeding it to cos(x) or you get bad values near zeroes.
<mofh> Because asymptotically the zeroes of Ai(-x) are the zeroes of cos(AiryPhaseAsymptotic(x)*x*sqrt(x))
<mofh> and so smol perturbations in the argument mean you get nonsense close to zeroes.
<mofh> so basically, behaviour near zeroes of cos(x) is highly sensitive to the value of x and that's one case when reducing your argument is important.
<mofh> so basically, when*
<mofh> 15:02:28 < egg|laptop|egg> mofh: e.g., what would you lose if your trig functions took angles in turns, thereby having trivial argument reduction, and multiply by a π of the working precision where appropriate
<mofh> nothing, really. if you *can* take it in turns that saves everything.
<mofh> like the problem is I had to deal with x, not πx.
<mofh> if you get to deal with πx you can just do the same thing as you did in your FastSinCos2π and I think that's sufficient.
<mofh> so I guess the problem isn't argument reduction specifically, it's argument reduction when you have a π in what you need to reduce by
<mofh> and that would eggsplain why my hell was near zeroes of cos(x), b/c those are the points where x/π − 1/2 is near an integer.
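The turns-based scheme egg describes (and mofh's FastSinCos2π remark) can be sketched in a few lines; cos2pi here is a hypothetical helper, not an API from the chat. With the angle in turns, the reduction is an exact IEEE remainder by 1, and π enters only once, at working precision.

```python
# Trig in turns: argument reduction becomes exact. (Sketch; cos2pi is my name.)
import math

def cos2pi(x_turns):
    """cos(2π · x_turns) with trivial, exact argument reduction."""
    r = math.remainder(x_turns, 1.0)    # exact for any finite double
    return math.cos(2.0 * math.pi * r)  # one rounded multiply by fl(2π)

# A whole number of turns reduces exactly to zero, however large:
assert cos2pi(1.0e12) == 1.0
# and a quarter-turn offset survives the reduction unharmed:
assert cos2pi(1.0e12 + 0.25) == math.cos(math.pi / 2)
```

The reduction `math.remainder(x_turns, 1.0)` incurs no rounding at all, which is exactly why "you lose nothing" in this representation: all the error is concentrated in the single multiply by the working-precision 2π.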
egg|laptop|egg has quit [Remote host closed the connection]
egg|laptop|egg has joined #kspacademia
<egg|laptop|egg> mofh: okay but I'm confused
<egg|laptop|egg> what were you feeding into your Ai(x) that did have that precision
<egg|laptop|egg> or is it a consistency question?
<egg|laptop|egg> idge
<egg|laptop|egg> s/.$/i/
<galois> egg|laptop|egg meant to say: idgi
<egg|laptop|egg> !wpn
* galois gives egg|laptop|egg a gold ket with a word attachment
* egg|laptop|egg pokes mofh with the ket
<mofh> egg|laptop|egg: -x > ~4096
<egg|laptop|egg> !wpn whitequark
* galois gives whitequark an expired stabber
<mofh> and yes, consistency.
<SilverFox> would an expired stabber be more or less stabby?
<egg|laptop|egg> mofh: right, so e.g. when combining sinusoids with exactly-related args you would also want to make sure that you have consistent reductions
<mofh> yes, eggsactly
<egg|laptop|egg> though tbh few things are exactly related so that's a bad example
<egg|laptop|egg> mofh: e.g. sinπ(x/π)+sin((3x)/π) has an ulp in the 3x whatever you do so you're screwed
<egg|laptop|egg> and with 2x it's consistent no matter whether you reduce consistently upfront or not
<egg|laptop|egg> > mofh so I guess the problem isn't argument reduction specifically, it's argument reduction when you have a π in what you need to reduce by
<mofh> hrm, fair.
<egg|laptop|egg> not all πs are equal
<mofh> I mean π *in the reduction*, not in the argument.
<mofh> i.e. mod kπ, for some rational k.
<mofh> or at least dyadic rationals k.
<egg|laptop|egg> yes, but that's the point; it's rare that you care about reducing by π as opposed to something vaguely like π
<egg|laptop|egg> and the latter is just a division
<egg|laptop|egg> mofh: I'm still not sure I understand the Ai thingy
<egg|laptop|egg> mofh: what were you computing
<mofh> cos(x*sqrt(x)*f(x)), where f(x) is some C^2 function, and I need the values near the zeroes of cos(x) to be accurate.
<mofh> and so I needed to reduce the eggspression *inside* the cosine by 2π to high precision *of 2π*
<mofh> before feeding it to the cosine.
<egg|laptop|egg> hmm
<egg|laptop|egg> the important thing in this discussion is the backward error
<egg|laptop|egg> mofh: how does f behave
<egg|laptop|egg> does it grow eggstremely fast?
<mofh> https://dlmf.nist.gov/9.8 f(x) is the eggspression in the brackets
<mofh> so actually it grows very slowly.
<egg|laptop|egg> mofh: which brackets
<egg|laptop|egg> there are 23 equations on that page
<egg|laptop|egg> "the brackets" is a bit ambiguous
<mofh> ..
<mofh> thank you X11
<mofh> https://dlmf.nist.gov/9.8.E22 I meant to link this
<egg|laptop|egg> mofh: and we're dealing with large or smol values of x
<mofh> large. smol values of x are not relevant since it's an asymptotic eggspansion, so it's not valid for x < ~8 at all.
<egg|laptop|egg> mofh: and what are you doing with this cosine? because in and of itself its argument has a ton of ULPs, so the backward error due to bad reduction won't be much compared to that
<mofh> I want it to be zero at its zeroes, and care about the value being as close to correct in smol neighbourhoods *of* its zeroes.
<mofh> so basically I need to as accurately clear away 2π from that eggspression
<mofh> as I can
<mofh> It's a very weird thing I stumbled across in my Airy function hell and I need to properly formally analyse why the hell it behaves as it does.
<egg|laptop|egg> mofh: that makes no sense
<mofh> egg|laptop|egg: why not?
<egg|laptop|egg> mofh: you can't make that function be 0 at its 0s by having a better argument reduction, because what you feed into it has a bunch of ULPs from the operations that go into the argument of the cosine
<mofh> no, I want the COSINE to be zero at its zeroes.
<egg|laptop|egg> but why
<mofh> because asymptotically the zeroes of Ai(-x) are the zeroes of cos(that eggspression).
<egg|laptop|egg> ...
<egg|laptop|egg> yes but
<egg|laptop|egg> by "that function" I mean "cos(that stuff)"
<egg|laptop|egg> and
<egg|laptop|egg> > you can't make that function be 0 at its 0s by having a better argument reduction, because what you feed into it has a bunch of ULPs from the operations that go into the argument of the cosine
<mofh> yes you can, because nextafter(M_PI) > π > M_PI
<mofh> the issue is that M_PI != π. Neither is M_PI to higher precision, but it's *closer*.
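mofh's point that the π *in the reduction* is only fl(2π) can be seen directly (a sketch; it assumes a libm whose cos does correct large-argument reduction, as glibc's does): math.remainder reduces exactly by the double 2π, while the library cosine reduces by, in effect, the true 2π, and at x ≈ 10¹⁰ the two phases already disagree by about n·(2π − fl(2π)) ≈ 4·10⁻⁷.

```python
# The error of reducing by fl(2π) instead of 2π, made visible.
# (My sketch, not mofh's code; assumes a correctly-reducing libm cos.)
import math

x = 1.0e10
# IEEE remainder is *exact* with respect to its float operands, so r is
# precisely x reduced by the double fl(2π) — carrying the full n·(2π − fl(2π)) drift.
r = math.remainder(x, 2.0 * math.pi)
assert abs(math.cos(r) - math.cos(x)) > 1e-8
```

This is the narrow sense in which "M_PI != π" matters: the remainder operation itself is exact, so every last bit of the discrepancy comes from the constant you reduced by.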
<egg|laptop|egg> ...
<egg|laptop|egg> mofh: if you feed blah(x) (1+δ) into your argument reduction for blah(x) large, you're not going to get anything that resembles blah(x) mod 2π out of your argument reduction
<mofh> the issue is the ULPs in the reduction matter more than the ULPs in the product? Am I completely misunderstanding things here?
<egg|laptop|egg> well clearly one of us is
<mofh> blah(x) ~ 4096
<mofh> anyhow, let me go thru that code again, very carefully, try to see if the error was actually fixed *incidental* to more careful argument reduction, and try to understand why the hell better.
<mofh> because you raise a good point, and this doesn't make sense.
<egg|laptop|egg> mofh: is that single precision?
<mofh> Yes, since I haven't written double-precision Payne-Hanek yet. Which I now realize complicates matters a lot b/c there might be double promotion somewhere, *fuck*.
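(Editorial aside: a minimal stdlib-Python sketch of the point being argued — for a binary32 argument of this size, reduction with a plain double-precision 2π is already far more accurate than half a binary32 ULP, so the fancy reduction buys little. All names here are illustrative, not from mofh's code; the 50-digit π literal is hardcoded only to build a reference.)

```python
import math
import struct
from decimal import Decimal, getcontext, ROUND_FLOOR

def f32(x):
    """Round a Python float (binary64) to the nearest binary32."""
    return struct.unpack('f', struct.pack('f', x))[0]

# 50 digits of pi, used only for the high-precision reference reduction.
getcontext().prec = 50
PI = Decimal('3.1415926535897932384626433832795028841971693993751')

def reduce_reference(x):
    """x mod 2*pi, computed in 50-digit decimal arithmetic."""
    two_pi = 2 * PI
    xd = Decimal(x)  # exact: Decimal(float) does not round
    n = (xd / two_pi).to_integral_value(rounding=ROUND_FLOOR)
    return float(xd - n * two_pi)

x = f32(4096.1)                      # a binary32 argument of the size discussed
naive = math.fmod(x, 2.0 * math.pi)  # reduction against double-precision 2*pi

# fmod itself is exact in IEEE arithmetic; the only error comes from
# 2*math.pi != 2*pi, scaled by the quotient x/(2*pi) ~ 652 -- which lands
# around 1e-13, several orders of magnitude below a binary32 ULP.
err = abs(naive - reduce_reference(x))
```

The takeaway matches egg's later question: for binary32 inputs near 4096, double-precision reduction is effectively free of backward error at the working precision.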
<_whitenotifier-cd19> [Principia] eggrobin labeled pull request #2353: Fix gross errors found by comparison with Mathematica integration - https://git.io/JeC1B
<mofh> Grr. Also I need to respond to some emails, then I'll poke both the code and the assembly with a stick.
<egg|laptop|egg> mofh: the code and the assembly are irrelevant, it is the maths that need to be poked
<egg|laptop|egg> less computer more paper
<mofh> egg|laptop|egg: not true, b/c if it turns out that the eggspression x*sqrt(x)*f(x) got promoted into double precision for the multiplication, that means I'm now doing a different analysis.
<egg|laptop|egg> mofh: what kind of errors were you getting near those 0s
<egg|laptop|egg> before you magically unfucked
<mofh> much much greater ULP error when compared to Ai(x) computed using MPFR to quadruple-precision and then cast to float, or when compared to values from WolframAlpha and rounded to float precision.
<mofh> like going from 35291 ULP error to 2 ULP error.
<egg|laptop|egg> wait how are you counting in ULPs at a 0
<mofh> values in neighbourhoods of zero
<mofh> so very smol.
<mofh> (it's single precision, so I just checked agreement for every single possible value, promoting it to an MPFR quad for the reference).
<mofh> anyhow, need to fire off an email and then this library closes in half an hour so I should prolly put away my copies of JE/JEL.
egg|cell|egg has quit [Ping timeout: 206 seconds]
<egg|laptop|egg> mofh: how are you getting numbers anywhere near as low as 35291
UmbralRaptor has joined #kspacademia
UmbralRaptor has quit [Remote host closed the connection]
UmbralRaptor has joined #kspacademia
UmbralRaptop has quit [Ping timeout: 190 seconds]
<egg|laptop|egg> let α be the binary32 nearest the first number congruent to π/2 mod 2π after 4096, then the correctly-rounded cosines of α and the following binary32 differ by 1922609207 ULPs
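(Editorial aside: that claim is easy to check in stdlib Python — near a zero of cos at an argument of magnitude ~4096, two adjacent binary32 arguments produce cosines on opposite sides of zero, roughly a billion binary32 ULPs apart. The helper names below are mine; double-precision `math.cos` serves as the correctly-reduced reference.)

```python
import math
import struct

def f32(x):
    return struct.unpack('f', struct.pack('f', x))[0]

def f32_bits(x):
    """Map a binary32 to an ordered integer, so ULP distance is a subtraction."""
    n = struct.unpack('I', struct.pack('f', x))[0]
    return n if n < 0x80000000 else 0x80000000 - n

def ulps_apart(a, b):
    return abs(f32_bits(a) - f32_bits(b))

def next32(x):
    """The binary32 immediately above x (finite, positive x assumed)."""
    n = struct.unpack('I', struct.pack('f', x))[0]
    return struct.unpack('f', struct.pack('I', n + 1))[0]

def prev32(x):
    n = struct.unpack('I', struct.pack('f', x))[0]
    return struct.unpack('f', struct.pack('I', n - 1))[0]

# First point congruent to pi/2 mod 2*pi after 4096; the double computation
# locates it to ~1e-13, far below the binary32 spacing (~4.9e-4) there.
k = math.ceil((4096.0 - math.pi / 2) / (2 * math.pi))
zero = math.pi / 2 + 2 * math.pi * k
alpha = f32(zero)  # the binary32 nearest that zero of cos

# cos at alpha's two binary32 neighbours, each rounded back to binary32:
lo = f32(math.cos(prev32(alpha)))
hi = f32(math.cos(next32(alpha)))
```

Since the true zero lies within half a binary32 ULP of `alpha`, the two neighbours straddle it, and the correctly-rounded cosines differ by over 10⁹ ordered-integer ULPs — which is why "ULP error at the zero" is such a brutal metric.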
<mofh> okay, so first backing up to theory, and not my eggsample: when you're talking about argument reduction before cos(ωt), are you treating cos to be the mathematical formula or the IMPLEMENTATION OF IT WITHOUT ANY REDUCTION (i.e. just a polynomial)?
<mofh> because I've been implicitly mapping it to the latter at the start of the conversation (but not in the eggsample, where cos(x) there *is* libm cos which presumably has a correct argument reduction)
<egg|laptop|egg> I did not mean at any point to suggest using an approximation polynomial outside its domain of validity, if that's your question
<mofh> I was treating argument reduction as necessary in the initial conversation so that you *would* lie in its domain of validity.
<egg|laptop|egg> my question is to figure out when there is any point in doing a reduction smarter than a division by π in the working precision
<mofh> so after a lot of thought I think the answer is "not particularly". I'm still not sure what the hell is happening in my case, tho.
<mofh> (I'll check the code later, need to head out now).
<egg|laptop|egg> mofh: because it is quite costly to do a Payne-Hanek, and I can't see a case where there's a point to it
<mofh> egg|laptop|egg: yeah, the division in your case doesn't matter, it by all accounts *should* be dwarfed by any other source of perturbation, and if it's not, then any sane impl. of sin/cos *already* would handle that correctly for you.
<mofh> so that just leaves the question of what the hell is my code doing, which I will resolve in a few hours.
<egg|laptop|egg> > any sane impl. of sin/cos *already* would handle that correctly for you.
<egg|laptop|egg> yeah but that's the point
<egg|laptop|egg> if for almost all purposes you don't care about good argument reduction, you can go faster by dropping it
egg|laptop|egg has quit [Remote host closed the connection]
egg|cell|egg has joined #kspacademia
egg|laptop|egg has joined #kspacademia
egg|laptop|egg has quit [Ping timeout: 198 seconds]
<_whitenotifier-cd19> [Principia] Pending. Build queued… - 
<_whitenotifier-cd19> [Principia] Pending. Building… - http://casanova.westeurope.cloudapp.azure.com:8080/job/Principia/3908/
egg|laptop|egg has joined #kspacademia
<_whitenotifier-cd19> [Principia] Success. Build finished. - http://casanova.westeurope.cloudapp.azure.com:8080/job/Principia/3908/
<egg|laptop|egg> mofh: is the backward error from argument reduction by division by π at the working precision ever worse than 1 ULP?
UmbralRaptop has joined #kspacademia
UmbralRaptor has quit [Ping timeout: 198 seconds]
<egg|laptop|egg> OK 2
<_whitenotifier-cd19> [Principia] pleroy closed pull request #2353: Fix gross errors found by comparison with Mathematica integration - https://git.io/JeC1B
<_whitenotifier-cd19> [Principia] pleroy pushed 2 commits to master [+0/-0/±4] https://git.io/JeCxX
<_whitenotifier-cd19> [Principia] pleroy 46f1667 - Fix gross errors by comparison with Mathematica integration.
<_whitenotifier-cd19> [Principia] pleroy 800e892 - Merge pull request #2353 from pleroy/ArnoldTests Fix gross errors found by comparison with Mathematica integration
UmbralRaptop has quit [Quit: Bye]
UmbralRaptop has joined #kspacademia
<egg|laptop|egg> !wpn
* galois gives egg|laptop|egg a heptapodial railgun
* egg|laptop|egg pokes mofh with the railgun
<egg|laptop|egg> !wpn whitequark
* galois gives whitequark a sidereal thaumiel tank
UmbralRaptor has joined #kspacademia
UmbralRaptop has quit [Ping timeout: 190 seconds]
<mofh> BLOODY HELL
<mofh> so yesterday I fed the cats a bit too much and stripey vomited it up after turbo-wolfing it down, it was cleaned up and dumped in the trash
<mofh> this morning both me and fib forgot to feed the cats and stripey decided to raid the trash for ""food""
<mofh> let's just say that cat vomit looks decidedly more vile on the second runthru.
<UmbralRaptor> ick
<kmath> <WAStateArchives> No, Ralph. ⏎ #MountStHelens https://t.co/AfNawnZjAF
UmbralRaptor has quit [Ping timeout: 202 seconds]
<egg> mofh: maybe you should have a cat feeding apparatus so you don't forget or overfeed?
egg|laptop|egg has quit [Remote host closed the connection]
egg|laptop|egg has joined #kspacademia
<egg> mofh: also, why argument reduction
<mofh> egg: staring at it right now, moment
<mofh> BLOODY HELL. I typo'd, so I was computing x*sqrt(x)*p instead of x*sqrtf(x)*p. So what was being computed was (float)(((double)x)*sqrt((double)x)*((double)p)).
<mofh> ...why the hell did I not have -Wdouble-promotion in my CFLAGS
egg|laptop|egg has quit [Remote host closed the connection]
egg|laptop|egg has joined #kspacademia
<egg|laptop|egg> mofh: aha, so you were indeed in the rare case where the argument of the trig function is basically exact
<egg|laptop|egg> and in that case argument reduction matters
<egg|laptop|egg> had the argument of the trig function had even one ULP (of binary32) in it, you'd lose all bits at the 0 nearest 4096
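(Editorial aside: a sketch of why the `sqrt`-vs-`sqrtf` typo mattered, emulating binary32 in stdlib Python. The all-binary32 chain rounds after every operation; the accidentally promoted chain does everything in double and rounds once at the end, so it delivers a nearly exact argument. Input ranges are arbitrary.)

```python
import math
import random
import struct

def f32(x):
    """Round a binary64 to the nearest binary32."""
    return struct.unpack('f', struct.pack('f', x))[0]

random.seed(0)
mismatches = 0
for _ in range(10_000):
    x = f32(random.uniform(1.0, 4096.0))
    p = f32(random.uniform(0.5, 1.0))
    # intended all-binary32 chain, x * sqrtf(x) * p: round after each op
    # (double sqrt rounded to binary32 matches a correctly rounded binary32
    # sqrt, since 53 >= 2*24 + 2 rules out double-rounding trouble)
    chained = f32(f32(x * f32(math.sqrt(x))) * p)
    # accidental chain, x * sqrt(x) * p: promoted to double throughout,
    # one rounding back to binary32 at the very end
    promoted = f32(x * math.sqrt(x) * p)
    mismatches += chained != promoted
```

The two versions disagree on a noticeable fraction of inputs, by a final ULP or so each time — invisible almost everywhere, but exactly the perturbation that destroys all bits of cos near a zero at an argument of size 4096.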
<mofh> ahhhhhhhhh.
<mofh> okay, everything makes sense now.
<egg|laptop|egg> mofh: though you just end up pushing the question a bit further beyond, is your x really precise enough that you care about low forward error given the really bad condition
<egg|laptop|egg> (i.e. now you're paying the cost of fancy reduction + computing the argument of the cosine in double precision, to do something that's ill-conditioned to start with)
<egg|laptop|egg> !wpn whitequark
* galois gives whitequark a kitten
<egg|laptop|egg> please pet
UmbralRaptop has joined #kspacademia
<kmath> <Otter_News> Otters chasing a butterfly https://t.co/qhlgEVn6z0
<mofh> egg|laptop|egg: yeah, now let me see what happens if I take out the fancy reduction and see if that matters
<egg|laptop|egg> mofh: it will matter on the forward error of that function obviously
<egg|laptop|egg> but the question is where that function is used
<mofh> where *which* function is used?
<mofh> also blackcat is currently being pet,
<egg|laptop|egg> mofh: cos(blaargh(x))
<mofh> I mean the function is basically multiplied by 1/(M_PI*sqrt(sqrt(x))) and that's your Airy function right there.
<mofh> like it matters b/c Ai(-x) is supposed to be general-purpose.
<egg|laptop|egg> mofh: how general
<egg|laptop|egg> mofh: what actually gets fed into it that's single precision but exact
<mofh> I don't know, because the point is it's library code, so I have no clue what users are going to feed it.
<egg|laptop|egg> !u ˛
<galois> ˛: U+02db OGONEK
<egg|laptop|egg> mofh: well yes, but that's the same question that we have about the trig functions to start with
<egg|laptop|egg> is there a situation where the forward error matters
<mofh> I don't know, maybe? I can't think of any but if I'm writing a library function maybe my users might think of one so I feel like I should make sure I do things correctly out of an abundance of caution.
<egg|laptop|egg> mofh: yeah sure, and it's cheap for a binary32 because you have a higher precision at hand
<egg|laptop|egg> but imagine writing a binary64 Ai, you're going to pay a lot for that forward error
<mofh> Yeah, at that point you're going to need to write an extended-precision multiply, and that's a lot of why writing that was such hell.
<mofh> and it is *very* hard to make performant; I still haven't figured it out.
<egg|laptop|egg> mofh: an extended precision square root too, etc.; you can't afford an ULP on the argument of the cosine
<egg|laptop|egg> mofh: so the question of "what is the library for to start with" is relevant, because maybe you really don't care about the forward error, and you're just slowing everybody down
<mofh> egg|laptop|egg: that's doable with a double precision sqrt and a single step of Newton; computing x*(x/sqrt(x)) helps since the Newton iterate for 1/sqrt(x) avoids a divide and I *REALLY* did not want to impl. eggstended precision divide
<mofh> tho I guess Newton works for computing 1/x in eggstended precision
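(Editorial aside: the division-free Newton step being described, sketched in plain double precision rather than the extended precision mofh actually needs; the names and test values are illustrative only.)

```python
import math

def rsqrt_step(x, y):
    """One division-free Newton step for y ~ 1/sqrt(x):
    y' = y * (3 - x*y*y) / 2, quadratically convergent."""
    return y * (1.5 - 0.5 * x * y * y)

x = 2.0
seed = (1.0 / math.sqrt(x)) * (1.0 + 1e-6)  # deliberately perturbed seed
refined = rsqrt_step(x, seed)               # relative error ~1.5e-12

# x*sqrt(x) with no division anywhere: x**1.5 = x * x * (1/sqrt(x))
x_sqrt_x = x * x * refined
```

One step squares the seed's relative error (up to a small constant), which is why a double-precision sqrt plus a single iterate is enough to seed an extended-precision result — and the `x * x * rsqrt(x)` form keeps the whole x^1.5 computation divide-free.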
<mofh> egg|laptop|egg: the question is I don't know; but that's a valid point.
<egg|laptop|egg> !u ǀ
<galois> ǀ: U+01c0 LATIN LETTER DENTAL CLICK
<mofh> egg|laptop|egg: as-is the double-precision Ai(-x) is good, just not correctly rounded with minimal forward error.
<mofh> which is the main reason I haven't thrown it up on github
<mofh> b/c aargl.
<egg|laptop|egg> > thrown it up on github
<egg|laptop|egg> please do not vomit higher functions
UmbralRaptop has quit [Quit: Bye]
<egg|laptop|egg> on github or otherwise
<SnoopJeDi> When the curry is too spicy
<egg|laptop|egg> mofh: > correctly rounded with minimal forward error
<egg|laptop|egg> correctly rounded you won't get without summoning multiprecision on edge cases though
<egg|laptop|egg> also you'd need a correctly rounded trig function, etc.
<egg|laptop|egg> so I'm not sure what you're trying to say here
<mofh> Yes there's a reason dev stopped; I didn't feel like writing all that multiprecision and I got bogged down *finding* the edge cases.
<mofh> SnoopJeDi: look I've successfully kept down ghost peppers, too spicy is not a thing.
<egg|laptop|egg> wait why would you even want correctly rounded Ai
<mofh> egg|laptop|egg: b/c in retrospect I was silly and thought it mattered
<egg|laptop|egg> no I mean regardless of caring about bounding forward error (which may make sense), correct rounding is insane
<kmath> <stephentyrone> @jckarter @volatile_void Correct rounding only really has value for portability and analyzability. For accuracy, it… https://t.co/9Nm58rSlTW
<mofh> NOW SEE THAT'S WHAT I THOUGHT
<mofh> but I somehow internalized that it was a necessity I don't even know why
<mofh> in retrospect, I am an idiot and the answer to why is "I am an insecure-as-hell perfectionist sometimes"
<SnoopJeDi> mofh, I just wanted to make a curry joke D:
<mofh> SnoopJeDi: I will now remind you that I once spritzed some bear spray onto a slice of bread and decided to try eating that. It was honestly A BIT TOO SPICY but still edible.
<SnoopJeDi> LOL
<SnoopJeDi> I'm not sure you had told me that before but that is an absolute power play
<mofh> (bear spray is about ~2x as strong as commercial-grade antipersonnel pepper spray)
<egg|laptop|egg> mofh: but anyway, bounding forward error does have an obvious use, namely inverting-by-root-finding
<egg|laptop|egg> where suddenly on the inverted function it becomes the backward error
<mofh> yeah but I can't imagine any situation where one cares about inverting Ai(x) (or even a Bessel Function, tho I suspect those cases *do* eggsist).
<egg|laptop|egg> yeah, and if you do, you want to properly compute the inverse function
<egg|laptop|egg> if you compute arccos by root finding on cos, even if your cos is good, you're not going to have a good time
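(Editorial aside: a quick stdlib-Python illustration of the bad time. Near t = 0, cos is flat, so the computed cos plateaus on the double grid near 1 and the root's location is fuzzy by ~1e-10 — a large relative error when the root itself is ~1e-6. `math.acos`, which is well-conditioned there, serves as the reference; all names are mine.)

```python
import math

def acos_by_bisection(c, lo=1e-9, hi=1.0):
    """Solve cos(t) = c by bisection on the computed cos -- deliberately the
    wrong way to get arccos, to expose the conditioning problem near t = 0."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if math.cos(mid) > c:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

worst = 0.0
for c in [1.0 - k * 1e-12 for k in (1, 2, 3, 5, 7)]:
    t_direct = acosd = math.acos(c)      # direct, accurate evaluation
    t_bisect = acos_by_bisection(c)      # root-finding on a good cos
    worst = max(worst, abs(t_bisect - t_direct) / t_direct)
```

Even with a high-quality cos, root-finding loses several digits relative to the direct arccos — the forward error of cos is tiny in absolute terms, but division by the near-zero derivative turns it into a large relative error on the root.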
<mofh> ROFL YEP.
<egg|laptop|egg> okay but what about things involving a trig function
<egg|laptop|egg> hm, Kepler's equation works on a reduced angle to start with so that's not a good example
<mofh> Precisely
<egg|laptop|egg> mofh: okay but you can lose all bits in forward error within [0, 2π] with the sinπ(x/π) reduction so that *is* a good example
<egg|laptop|egg> ... except you lose those bits where the value is 0, so the garbage is added to π, wherein it is within the ULP of π.
<egg|laptop|egg> (E - e sin E, sin E near 0)
<egg|laptop|egg> hm.
<egg|laptop|egg> mofh: should I just ask the cat
<egg|laptop|egg> alternatively, does stripey have an example where the forward error of a trig function matters
<mofh> egg|laptop|egg: yeah I would ask the cat, since I'm at a loss for eggsamples
<egg|laptop|egg> mofh: > Do you have a practical example (other than writing low-forward-error special functions) where one cares about the forward error of the trigonometric functions rather than their backward error (and thus that they do the argument reduction correctly)?
<egg|laptop|egg> does this ask the right question
<mofh> egg|laptop|egg: yes, it seems clear and unambiguous.
<mofh> egg|laptop|egg: also stripey's response was *quizzical head tilt*
<egg|laptop|egg> should I name an identifier that holds B(φ|m) B_φǀm or B_φ_m
<mofh> the latter would be what I'd pick, the former might read more nicely to others, not sure.
<egg|laptop|egg> mofh: yes that's a dental click,
<egg|laptop|egg> mofh: actually that's clearly how you should distinguish those conventions
<mofh> egg|laptop|egg: I mean the dental click looks nearly identical to a |, just B_φ|m looks weird without the brackets to my eyes.
<mofh> whereas B_φ_m I just parse as a double subscript naturally b/c years of LaTeX + real analysis
<egg|laptop|egg> mofh: pronounce it [befi|ɛm] to make it clear the second argument is m and not k,
<mofh> OTOH it is very clearly NOT a double subscript, so make of that if you will.
<mofh> also why are you using m for an argument and not k?
<egg|laptop|egg> because that's what Fukushima does
<mofh> except now you have to compute m from k first, since I can't imagine a situation where you have m to start
<egg|laptop|egg> mofh: Fukushima takes mc anyway
<egg|laptop|egg> m(subscript c) is 1-m
<kmath> <eggleroy> @stephentyrone Do you have a practical example (other than writing low-forward-error special functions) where one c… https://t.co/FRNcWv8J3T
<mofh> I presume that's analogous to when you're dealing with k_c (1-k^2)?
<egg|laptop|egg> O_o m = k^2 so your k_c is mc?
<egg|laptop|egg> mofh: anyway, k would have to be computed just as much as m, so there isn't much of a difference https://github.com/mockingbirdnest/Principia/blob/master/physics/euler_solver_body.hpp#L133
<mofh> OHH, I have no clue what I was thinking of earlier then b/c I totally did not map m to k^2
<mofh> neverind
<mofh> nevermind*
<egg|laptop|egg> mofh: > just B_φ|m looks weird without the brackets to my eyes.
<egg|laptop|egg> well
<egg|laptop|egg> I didn't find any good brackets that were allowed in identifiers,
<egg|laptop|egg> B₍φǀm₎ looks silly
<mofh> I mean upon reflection B_φǀm makes more sense than B_φ_m so hm.
<egg|laptop|egg> mofh: also if B(φ|m) is pronounced [befi|ɛm], how is B(φ, k) pronounced
<egg|laptop|egg> also the ; and \ versions,
<egg|laptop|egg> sadly the IPA has no comma, semicolon, or backslash that I know of
<kmath> <stephentyrone> @eggleroy It is quite hard to construct such cases.
<mofh> Welp.
<egg|laptop|egg> thanks, cat. that.
<kmath> <sciencepolicia> @ObservatoryCats https://t.co/StEY9yJ5yR