egg|nomz|egg changed the topic of #kspacademia to: https://gist.github.com/pdn4kd/164b9b85435d87afbec0c3a7e69d3e6d | Dogs are cats. Spiders are cat interferometers. | Космизм сегодня! | Document well, for tomorrow you may get mauled by a ネコバス. | <UmbralRaptor> egg|nomz|egg: generally if your eyes are dewing over, that's not the weather. | <ferram4> I shall beat my problems to death with an engineer.
<bofh>
egg|zzz|egg: took a look at the graph and uh that's weirdly sinc(x)-like behaviour
<egg|zzz|egg>
:D
<bofh>
not *quite* since the minima aren't zeroes, but still.
<bofh>
yuck, the fourth root starting point is nasty.
<bofh>
I'm weirded out, again, by how approximately linear that relative error is.
<egg|zzz|egg>
bofh: uh?
<egg|zzz|egg>
bofh: it's the weird constant that's approximately linear
<bofh>
oh, right, it's γ(n)
<egg|zzz|egg>
bofh: the relative error is the oscillating graph that ranges between 3 % and 3,45 % or so
<bofh>
sorry, I was misreading which graph title corresponded to which graph and was confused.
<egg|zzz|egg>
(and yeah, the 4th root is emphatically non-nice, but it's still not *that big* of a relative error)
<bofh>
uh, mind fitting a line to the γ(n) graph? I'm curious about both the equation you get and the r^2.
<egg|zzz|egg>
bofh: currently recomputing it with n=2 for completeness, will do so when that finishes
<egg|zzz|egg>
(doing it in 100 sig. dec. arithmetic is slightly sluggish and overkill but meh :-p)
<bofh>
(IMO, there's no kill like overkill)
<egg|zzz|egg>
n=2 is worse than n=4 fwiw
<Ellied>
my RPi 3B+ came! :D
<Ellied>
dual-band wifi at last
<egg|zzz|egg>
bofh: slightly below 3,48 % relative error with the square root
<egg|zzz|egg>
I mean that's still a decent fast approximate square root if you need one
<bofh>
egg|zzz|egg: not bad.
<egg|zzz|egg>
bofh: so if I fit a line to this I get 0.0436864 n - 0.0294222
<bofh>
Interesting.
<egg|zzz|egg>
bofh: r² is 0.9999409
<bofh>
That's indeed frighteningly good.
<egg|zzz|egg>
bofh: I'll try to take a look at the n < 0 case; ignoring an unproved assumption about which extrema fall into which intervals, I think I have all the calculations done, so that should work out
<bofh>
Perhaps the thing I'm curious about is what's causing the slight oscillations in the value, too.
<bofh>
The n<0 case being the inverse n'th root functions?
<egg|zzz|egg>
yeah
<bofh>
s/Perhaps/Also/
<Qboid>
bofh meant to say: Also the thing I'm curious about is what's causing the slight oscillations in the value, too.
<egg|zzz|egg>
bofh: wait a bit, stared at the n > 23 graphs some more and now I'm not sure I didn't screw up in a new way
<egg|zzz|egg>
that error doesn't look optimized
<egg|zzz|egg>
ah nevermind I confused myself, it is working as expected
<bofh>
heh.
e_14159 has quit [Ping timeout: 186 seconds]
e_14159 has joined #kspacademia
<iximeow>
bofh: i'll have you know i'm going to try taking a picture of uranus/venus through my telescope now :(
<egg|zzz|egg>
bofh: fwiw you can fit the two together with r² = 0.999976
<SnoopJeDi>
egg|zzz|egg, so...does that get added back to the original estimate in a final algorithm as a fine-tweaking optimization in that region?
<bofh>
egg|zzz|egg: IIIIIIIIIINTERESTING. granted, also exactly what I'd expect, for some reason.
<bofh>
hm.
<bofh>
time to stare at papers again
<egg|zzz|egg>
bofh: when is it not time to stare at papers
<SnoopJeDi>
when it's time to duel
<bofh>
egg|zzz|egg: well, time to stare at numerics papers instead of condensed matter ones :P
<egg|phone|egg>
Snoopjedi: what gets added where?
<bofh>
SnoopJeDi: I play Maximum Modulus Principle in defense mode
<SnoopJeDi>
silly egg|phone|egg, not time to dual :P
<SnoopJeDi>
I was just talking in another channel about math appreciation. Someone had a question that involved modulus arithmetic, and from there we got off into talking about rings and abstract algebra :)
<bofh>
egg|zzz|egg: okay, that's the r^2, but what's the actual linear function? :P
<bofh>
SnoopJeDi: nice!
<egg|phone|egg>
Snoopjedi: I mean your question
<SnoopJeDi>
egg|phone|egg, oh, a linear correction to away-from-the-root calculations
Technicalfool_ has quit [Ping timeout: 186 seconds]
<SnoopJeDi>
I don't know the context of this, but is the strong correlation there something you can use to correct the underlying error since it's well-behaved?
<egg|zzz|egg>
bofh: -0.0505897 + 0.0439341 n
<egg|zzz|egg>
SnoopJeDi: the context is that we're using an affine map of the integer representation of a binary floating-point number to estimate its nth root or reciprocal nth root
<SnoopJeDi>
i.e. a close-enough algorithm that makes good sense with some magic finishing dust to increase its usefulness? I don't really know this side of numerics
<egg|zzz|egg>
SnoopJeDi: and you have leeway on the affine term, and optimizing it gives a magic constant which is ~linear in n, which seems odd
<SnoopJeDi>
interesting
<egg|zzz|egg>
this generalizes the famous fast inverse square root trick and the standard first approximation for cbrt
<egg|zzz|egg>
for values of n other than -2 and 3
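For reference, a minimal sketch of the trick being described: an affine map applied to the integer representation of a binary32 float, which estimates the nth (or reciprocal nth) root because the bit pattern is roughly linear in log2 of the value. The function name is made up for illustration, and the optimized additive constant γ(n) discussed above is deliberately left out, so this is the zero-adjustment baseline rather than the tuned version.

    #include <cstdint>
    #include <cstring>

    // Rough estimate of x^(1/n) for x > 0 and n != 0; negative n gives
    // reciprocal roots (n = -2 has the shape of the classic fast inverse
    // square root, n = 3 that of the usual cbrt seed).
    float RoughNthRoot(float x, int n) {
      std::uint32_t i;
      std::memcpy(&i, &x, sizeof i);                 // bit pattern of x
      std::int64_t constexpr kOneBits = 0x3F800000;  // bit pattern of 1.0f
      // The bit pattern is approximately kOneBits + K log2(x) for a fixed
      // scale K, so dividing its offset from kOneBits by n approximately
      // divides the logarithm by n, i.e. takes the nth root.
      std::int64_t const root_bits =
          kOneBits + (static_cast<std::int64_t>(i) - kOneBits) / n;
      std::uint32_t const bits = static_cast<std::uint32_t>(root_bits);
      float y;
      std::memcpy(&y, &bits, sizeof y);
      return y;
    }

The classic 0x5f3759df rsqrt constant and the standard cbrt first guess are the n = -2 and n = 3 instances of this, with a tuned additive tweak folded into the constant term.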
<bofh>
Yeah, I was not expecting this; like, there appears to be a periodic perturbation in the linear fit, but it seems to be of *extremely* tiny magnitude.
<SnoopJeDi>
:O
<egg|phone|egg>
bofh: but the perturbation is real though, which is fun
<egg|phone|egg>
Also its period is probably the slope or something?
<SnoopJeDi>
egg|phone|egg, "Newton's method with magic?"
<kmath>
<BenneHolwerda> On a day like this it is perhaps best to remember Bob O'Dell's plot he showed at 400 years of the telescope and jus… https://t.co/gmxF3YP3IA
ferram4_ has quit [Read error: -0x1: UNKNOWN ERROR CODE (0001)]
ferram4_ has joined #kspacademia
UmbralRaptop has joined #kspacademia
dx has joined #kspacademia
UmbralRaptop has quit [Ping timeout: 186 seconds]
UmbralRaptop has joined #kspacademia
Snoozee has joined #kspacademia
Snoozee is now known as Majiir
<egg|phone|egg>
!Wpn bofh
* Qboid
gives bofh a flying loop space
<rqou>
random question: does unicode have symbols for and/or logic gates?
<rqou>
egg|phone|egg?
<egg|zzz|egg>
hm, dunno, ask @fakeunicode on twitter?
<egg|zzz|egg>
don't think there is anything though
<rqou>
disappoint
<rqou>
anyone want to write a proposal for adding them? :P
<egg|zzz|egg>
like, they're mostly not really used in text per se?
<rqou>
i think i've occasionally seen them inline
<rqou>
but usually in the format of "the XXX gate symbol (<shape goes here>) blah blah blah"
<rqou>
so it's not really "text" but effectively just an inline picture
<egg|zzz|egg>
yeah
ferram4_ has quit [Read error: Connection reset by peer]
ferram4_ has joined #kspacademia
<egg|zzz|egg>
!wpn rqou
* Qboid
gives rqou a hacked proof-like submersion
<egg|zzz|egg>
rqou: also, considering changing some identifiers that say "Yoshida" and "Suzuki" in Principia to 吉田 and 鈴木
<rqou>
are identifiers like that accepted by all compilers?
<rqou>
afaik it's actually non-standard
<rqou>
you're supposed to use \uXXXX inside the identifiers
<egg|zzz|egg>
rqou: the standard doesn't specify encoding
<egg|zzz|egg>
it specifies ranges of UCS characters that are allowed in identifiers
<rqou>
right, but i thought stuff like CJK isn't in that list
<egg|zzz|egg>
it is, why wouldn't it be?
<rqou>
i thought you had to use a "universal character name" instead?
<egg|zzz|egg>
they're equivalent
<egg|zzz|egg>
if you can use the escape, you can use the actual character (assuming the compiler knows how to decode that from the source file)
<whitequark>
rqou: i vaguely recall a proposal
<rqou>
hmm
<rqou>
"Any source file character not in the basic source character set (2.3) is replaced by the universal-character-name that designates that character. "
<rqou>
so maybe you're right
<whitequark>
of course you could just use like
<egg|zzz|egg>
rqou: GCC is silly and doesn't support non-ASCII UCS characters in identifiers unless they are so encoded, but that's GCC
<whitequark>
⊼ and ⊽
<egg|zzz|egg>
yeah, those are operators so fair game
<rqou>
wait, you're not targeting gcc at all?
<egg|zzz|egg>
no
<egg|zzz|egg>
clang & msvc
<egg|zzz|egg>
we could add ICC maybe
<rqou>
that's "interesting"
<rqou>
i think i've personally never seen a project before that outright doesn't bother with gcc
<egg|zzz|egg>
well, GCC doesn't like our standard-compliant identifiers, so we don't like GCC :D
egg|cell|egg has joined #kspacademia
egg|phone|egg has quit [Ping timeout: 198 seconds]
egg|phone|egg has joined #kspacademia
egg|cell|egg has quit [Read error: -0x1: UNKNOWN ERROR CODE (0001)]
egg|cell|egg has joined #kspacademia
egg|phone|egg has quit [Ping timeout: 190 seconds]
egg|cell|egg has quit [Read error: Connection reset by peer]
egg|phone|egg has joined #kspacademia
<egg|work|egg>
rqou: see, it works fine https://godbolt.org/g/SdcRRi (MSVC is yellow but that's just because it outputs something even when things compile fine)
<egg|work|egg>
rqou: note that GCC actually supports the encoding (the string literal works); it's just that it has opinions on how identifiers specifically are encoded, which the standard doesn't allow, as you saw, since the first phase of translation should deal with that uniformly
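The godbolt snippet itself isn't reproduced here, but a minimal sketch of the equivalence being described, assuming a UTF-8 source file, would be: an identifier spelled with the raw extended characters and one spelled with the corresponding universal-character-names name the same entity, since phase 1 of translation replaces extended characters by their UCNs (Clang and MSVC accept both spellings; GCC, as noted above, objects to the raw spelling in identifiers).

    #include <cstdio>

    // 鈴木 (raw UTF-8) and \u9234\u6728 (universal-character-names)
    // spell the same identifier.
    double 鈴木 = 0.5;

    int main() {
      std::printf("%s = %f\n", "鈴木", \u9234\u6728);
      return 0;
    }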
tawny has quit [Ping timeout: 383 seconds]
awang has quit [Ping timeout: 186 seconds]
awang has joined #kspacademia
<egg|work|egg>
!wpn whitequark
* Qboid
gives whitequark a promethium apple-space
<egg|work|egg>
bofh: seems there's a recurrence for the relative error out of Newton for sqrt (http://www.math.harvard.edu/library/sternberg/slides/lec1.pdf), so it's probably possible to prove something about the maxima of the relative error not moving (and probably also possible to extend it to higher-order Householder methods)
* egg|work|egg
too lazy to work that bit out now
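Written out (a standard derivation for the Newton iterate for √a, with the relative error measured against √a; the slides' notation may differ), the recurrence is:

    % Newton for sqrt(a): x_{k+1} = (x_k + a / x_k) / 2.
    % With x_k = \sqrt{a}\,(1 + \varepsilon_k):
    \[
      x_{k+1}
        = \frac{\sqrt{a}}{2}\left((1+\varepsilon_k) + \frac{1}{1+\varepsilon_k}\right)
        = \sqrt{a}\,\frac{2 + 2\varepsilon_k + \varepsilon_k^2}{2\,(1+\varepsilon_k)},
      \qquad\text{so}\qquad
      \varepsilon_{k+1} = \frac{\varepsilon_k^2}{2\,(1+\varepsilon_k)}.
    \]

Since ε_{k+1} is a fixed increasing function of ε_k for ε_k ≥ 0 (and is nonnegative for any ε_k > -1), this is the kind of monotonicity that would keep the locations of the relative-error maxima from moving across iterates.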
<awang>
!wpn egg|work|egg
* Qboid
gives egg|work|egg a Green's traversal
<awang>
!wpn UmbralRaptop
* Qboid
gives UmbralRaptop a depleted loop space
<awang>
!wpn bofh
* Qboid
gives bofh a gravitomagnetic antebellum
egg|cell|egg has joined #kspacademia
egg|mobile|egg has joined #kspacademia
egg|cell|egg has quit [Read error: Connection reset by peer]
egg|phone|egg has quit [Ping timeout: 190 seconds]
egg|phone|egg has joined #kspacademia
egg|mobile|egg has quit [Ping timeout: 190 seconds]
egg|cell|egg has joined #kspacademia
egg|phone|egg has quit [Read error: Connection reset by peer]
<kmath>
<jatowler> .@elakdawalla @BenneHolwerda @FrancescaCivano I put the JWST and HST schedules on the same graph. Sadly, mine isn't… https://t.co/n1WZufURxr
<egg|zzz|egg>
!wpn UmbralRaptop
* Qboid
gives UmbralRaptop a precessing hexahexaflexagon
<egg|zzz|egg>
!wpn bofh
* Qboid
gives bofh a pressurized quaternion
<egg|zzz|egg>
!wpn whitequark
* Qboid
gives whitequark a code-switching Халатников bustard
<UmbralRaptop>
!wpn egg|zzz|egg
* Qboid
gives egg|zzz|egg a gastrointestinal lavatory
<egg|zzz|egg>
um
<UmbralRaptop>
… a what?
<bofh>
egg|work|egg: I mean that's the error term for each Newton iterate, not for the initial magic. And yeah, you can derive an analogous recurrence for higher-order Householder methods too; I did it for Halley's method ages ago.
<bofh>
(also I'm not sure if the maxima of relative error don't move across iterates -- I think that might depend on Ляпунов eggsponent here)
<egg|zzz|egg>
bofh: aaaaa
<egg|zzz|egg>
bofh: yeah that's the error term for the Newton iterate, but it means that if you do the analysis cleanly for the magic, then you might be able to have most of the analysis done post-iterate too
<bofh>
egg|zzz|egg: not bad, that's actually better than I was eggspecting for that, since the reciprocal obviously will be particularly poorly approximated linearly.
<bofh>
Like I'm not a fan of what it uses for "new cbrt to 23 bits"; 3x Newton, or Newton followed by Halley, is faster iirc.
<egg|zzz|egg>
bofh: yeah but that qbrt paper points out that using a high-order method for the last iterate is good, because fewer ULPs
<egg|zzz|egg>
still not correctly-rounded, but iirc Kahan says 0,59 ULPs?
<bofh>
iirc, yeah. at least he claims <1ULP.
<bofh>
I *think* 3x Newton got me at worst 1ULP error over all normal floats when I did that?
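For concreteness, a sketch of the iterates being compared (the standard Newton and Halley steps for the cube root, not the paper's "new cbrt to 23 bits" scheme; names are illustrative), refining some seed such as the bit-pattern estimate discussed earlier:

    // One Newton step for y ≈ cbrt(a): quadratic convergence, one division.
    inline double CbrtNewtonStep(double y, double a) {
      return (2.0 * y + a / (y * y)) / 3.0;
    }

    // One Halley step for y ≈ cbrt(a): cubic convergence, one division,
    // which is why a high-order method is attractive as the *last* iterate.
    inline double CbrtHalleyStep(double y, double a) {
      double const y3 = y * y * y;
      return y * (y3 + 2.0 * a) / (2.0 * y3 + a);
    }

"Newton followed by Halley" above would then be CbrtHalleyStep(CbrtNewtonStep(y0, a), a) for a seed y0.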
<egg|zzz|egg>
bofh: also bear in mind that those libs adapted from Sun etc. are often optimized for machines long gone...
* egg|zzz|egg
stares at the Kahan qbrt thing and its mention of the four floating point formats of the apoc^H^H^H^H^H VAX
<kmath>
<mhoye> I just tried to Rot13 something that turned out to be Dutch.
<bofh>
so vax_f and vax_g aren't bad floats; they're essentially IEEE 754 with strange byte order and no subnormals.
<bofh>
all the other vax floats can go away.
<egg|zzz|egg>
bofh: I mean they're all fairly sane, just with a strange number of bits, aiui
<egg|zzz|egg>
bofh: tbh a 128 bit format is nice
<egg|zzz|egg>
having two 64 bit formats is silly tho
<egg|zzz|egg>
bofh: at least they have guard bits *stares at cray*
<egg|zzz|egg>
(that's why Ada could only say "the result is in the right interval with the number of digits of the type"; with Cray crap being a thing, you really couldn't say much else)
<egg|zzz|egg>
!wpn bofh
* Qboid
gives bofh an infinite wyvern
<bofh>
okay, Cray's lack of guard bits was... special. iirc, everyone hated it.
<bofh>
for good reason.
<egg|zzz|egg>
bofh: but like, all four VAX formats are binary FP with the implicit bit omitted, so that's pretty much usable whether or not it's an IEEE format
<egg|zzz|egg>
bofh: apparently not all operations were correctly rounded though, which is an issue
<bofh>
oh man fuck IBM floats
<bofh>
base-16, b/c I sure love having leading zeroes in my mantissa.
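Concretely (the standard "wobbling precision" illustration, not a quote from anyone above): the IBM hexadecimal format is normalized only to a nonzero leading hex digit, so

    % IBM hex float: value = \pm\, 0.f_1 f_2 f_3 \ldots_{16} \times 16^{e},
    % normalized so that the leading hex digit satisfies f_1 \neq 0.
    \[
      f_1 = 1_{16} = 0001_2
      \quad\Longrightarrow\quad
      \text{up to three leading zero bits in the significand,}
    \]

and the effective binary precision therefore wobbles by up to 3 bits depending on the value.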
<egg|zzz|egg>
:D
<egg|zzz|egg>
bofh: like at least decimal floating point is nice since you often display things in decimal, but hex, um,
<bofh>
decimal has *some* uses but sucks a *lot* in terms of perf and honestly isn't that useful outside of e.g. finance.
<bofh>
(tho usually you actually want decimal fixed-point in finance)
<egg|zzz|egg>
bofh: yeah, but at least it conceptually has some use
<kmath>
<✔tanyaofmars> It's quite heartbreaking that all of the budget woes discussions for NASA and the NSF at this meeting are dollar am… https://t.co/1dPGczuXdJ
<SnoopJeDi>
tl;dr compress visual observations into a low-dimensional latent vector, then hallucinate/dream new world models based on assigning probability to temporal transitions in that vector. Train controller model against simulated realities, then transfer that controller model to the full reality
<kmath>
<0xabad1dea> When you have to long-press the power button on a hung computer it feels disturbingly like you’re choking it to death
<SnoopJeDi>
e_14159, the results are very cool, but I also admire the quality of the writing and presentation.
<SnoopJeDi>
I don't grok the actual "here's what the network is like!" portion of it that well, but even as a non-ML person I understood almost all of it
UmbralRaptop has quit [Quit: Bye]
UmbralRaptop has joined #kspacademia
<bofh>
egg|zzz|egg: so I think doing a full table lookup is *massively* excessive; I think it only pays off in cases where adding more Householder iterates is really slow (iirc, that libm has parts specifically for Cortex-A8, with a very *strange* floating-point pipeline).
<bofh>
but I will ask anyway out of curiosity.
UmbralRaptor has joined #kspacademia
UmbralRaptor has quit [Remote host closed the connection]
UmbralRaptor has joined #kspacademia
<e_14159>
SnoopJeDi: From what my tired mind understood, interesting paper. I also liked the presentation.
<kmath>
<IAmSciArt> I make paper art using data from spacecrafts! This is the first piece I made. It's a tradition in my department to… https://t.co/WIsuFX0bTs
UmbralRaptor has joined #kspacademia
UmbralRaptop has quit [Ping timeout: 186 seconds]
<egg|zzz|egg>
bofh: with the table lookup approach you don't get the FP divisions that you have with the Householder iterates
<egg|zzz|egg>
bofh: Kahan says 2 divides for binary64 with 0,59 ULPs
<egg|zzz|egg>
bofh: hm, atlas's comments don't document the error in binary64 (it says correctly rounded in single though)
awang has quit [Ping timeout: 198 seconds]
awang has joined #kspacademia
UmbralRaptop has joined #kspacademia
UmbralRaptop has quit [Remote host closed the connection]
UmbralRaptop has joined #kspacademia
UmbralRaptor has quit [Ping timeout: 383 seconds]
<UmbralRaptop>
So, one of the other students has a ネコノミコン.