egg|nomz|egg changed the topic of #kspacademia to: https://gist.github.com/pdn4kd/164b9b85435d87afbec0c3a7e69d3e6d | Dogs are cats. Spiders are cat interferometers. | Cosmism today! | Document well, for tomorrow you may get mauled by a ネコバス. | <UmbralRaptor> egg|nomz|egg: generally if your eyes are dewing over, that's not the weather. | <ferram4> I shall beat my problems to death with an engineer.
<UmbralRaptor>
Danger: HEP
e_14159 has quit [Ping timeout: 198 seconds]
e_14159 has joined #kspacademia
Majiir is now known as Snoozee
<SnoopJeDi>
bofh, if anybody was on the fence about the 280 adjustment, that tweet should stand as evidence of "huh turns out it *was* a good idea"
awang has quit [Ping timeout: 186 seconds]
<SnoopJeDi>
turns out it wasn't really about English or any other tongue, it was a sneaky way to better serve the incompressible language of math :P
<SnoopJeDi>
hmm...from my brief reading in Schneier, since math is incompressible, I guess that's a decent hint that it might be encrypted. But then, I already knew that.
<bofh>
I mean, I feel like we all knew that. :P
<SnoopJeDi>
It was a really neat lightbulb moment to read it and realize (for the first time) that encryption and compression are both about evening out the entropy distribution in memory
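That entropy-flattening point is easy to check empirically. A minimal Python sketch (the sample text and all names below are made up for illustration, not anything from the channel):

```python
import math
import zlib

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of the byte histogram, in bits/byte (8.0 = uniform)."""
    n = len(data)
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    return -sum(c / n * math.log2(c / n) for c in counts if c)

# Redundant English-like bytes have low entropy; after compression (or
# encryption) the byte distribution flattens toward the 8-bit maximum.
text = b"the quick brown fox jumps over the lazy dog " * 200
plain = byte_entropy(text)
packed = byte_entropy(zlib.compress(text, 9))
```

This is also the flip side of the "math is incompressible" remark: once the redundancy is evened out, a second pass of compression has nothing left to squeeze.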
<bofh>
And like while I'm not sure if 280 necessarily is optimal, I so often had tweets that would hit 160 - 180 & I'd have to work hard to trim them down, often at *massive* legibility cost (& sometimes I had to resort to Unicode abuse̊̊̊̊̊̊̊̊̊̊̊̊̊̊̊̊̊ on top of that, even), so I'm eternally glad it got increased from 140.
<SnoopJeDi>
Yea doubling it was a complete UX whiff
<SnoopJeDi>
I reckon double + just a bit would have gone over better because it wouldn't be so obviously arbitrary (not that arbitrary is bad, but engineered conciseness feels antithetical to "winging it")
<bofh>
But yeah it makes me far happier than you'd think that I managed to fit a full proof of the Fundamental Thm. of Algebra in a single tweet.
<bofh>
And yeah, I agree too.
<UmbralRaptor>
Also, penalizing characters above an arbitrary part of the BMP and replacing the count with an orb were annoying.
<bofh>
yes, fuck both of those.
<SnoopJeDi>
hmm, is there a way to count distinct rendered characters?
<UmbralRaptor>
I want to say "no, and you'll be eaten by flying polyps if you dig too deep", but could be wrong.
<SnoopJeDi>
UmbralRaptor, yea I don't really know how you get away with not "penalizing" them somehow without making *some* kind of assumption about characters...If their PR is to be believed, English really was anomalously verbose, in which case it's still an increase for other languages, which were maybe doing okay and/or are doing as well now?
<SnoopJeDi>
one assumes they used the same counting rule they use for sending new tweets to analyze the old ones
<SnoopJeDi>
...uh, unless the rule doesn't come into effect before 140 in which case it's just gross?
<whitequark>
wait what?
<whitequark>
they penalize characters above BMP?
<SnoopJeDi>
lmao the official docs still say that the description of how counting works "will be updated shortly"
<SnoopJeDi>
I think the tl;dr is that they did some work to address what a character is, but in the end it's a still-arbitrary weighting for higher codepoints (?) and I guess maybe that broke someone's use profile (except I don't understand how an increase would break anything and it might just be bellyaching?)
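For reference, the counting rule being described can be sketched from the public twitter-text (v3) config. The ranges below are copied from that config and should be treated as an assumption here; this toy version skips URL shortening, emoji ZWJ sequences, and NFC normalization:

```python
# Codepoint ranges that count "light" (weight 100) in the twitter-text v3
# config; everything else -- e.g. CJK or fullwidth Latin -- weighs 200.
LIGHT_RANGES = [(0, 4351), (8192, 8205), (8208, 8223), (8242, 8247)]
SCALE = 100  # weights are divided by this; the cap is 280 after scaling

def weighted_length(text: str) -> float:
    total = 0
    for ch in text:
        cp = ord(ch)
        total += 100 if any(lo <= cp <= hi for lo, hi in LIGHT_RANGES) else 200
    return total / SCALE
```

So plain ASCII gets the full 280, while kana or fullwidth Latin (UmbralRaptor's example below) still tops out at an effective 140.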
<tawny>
wait which tweet was this
<tawny>
re "if anybody was on the fence about the 280 adjustment, that tweet should stand as evidence of "huh turns out it *was* a good idea""
<UmbralRaptor>
whitequark: they penalize characters *within* the BMP also. (eg: fullwidth Latin)
<UmbralRaptor>
SnoopJeDi: admittedly, characters that use ZWJs, combining characters, etc. would get de-facto penalized unless you have a time machine and the addresses of Unicode Consortium members.
<kmath>
<katemath> I forgot my previous rule to myself, that on any calculus 2 exam involving an indefinite integral, I need to add "T… https://t.co/EFar3E6rn1
* UmbralRaptor
plays a sad trombone for JWST.
<bofh>
egg|work|egg: no clue, does it even matter tbh? it's just standard Householder iterations of some form I think
<kmath>
<JPMajor> "Simply put we have ONE shot to get this right before putting Webb into space...you've heard this before, but failu… https://t.co/GcDK0SPc0N
<egg|work|egg>
bofh: dunno, it seems to have a lot of magic numbers in that libm?
<egg|work|egg>
bofh: Kahan's stuff is householder but he points out that you can run into over/underflows hence scaling
<kmath>
<touhoudottxt> Marisa chuckled. "You mean the Chaos Emeralds?"
<UmbralRaptor>
bofh: More optimistically: HPF, Neid, LSST,…
<egg|zzz|egg>
bofh: hm, yeah this stuff is going to nicely generalize to approximating rootn for fixed n indeed, finally got around to scribbling some stuff on slices of dead tree which clarifies a lot
<egg|zzz|egg>
bofh: if you have a guess function g(x) for the nth root and do a Newton iterate (or higher Householder), do the maxima of relative error stay in the same place?
<egg|zzz|egg>
bofh: also apparently that person who wrote that invsqrt paper that's on the uwaterloo site now works at google
<SnoopJeDi>
bofh, huh, neat
<egg|zzz|egg>
!wpn UmbralRaptor
* Qboid
gives UmbralRaptor a terminal page table
<UmbralRaptor>
… is Jackson saying that the Taylor expansions of 1/x and cos(x) are the same?
StCypher has joined #kspacademia
<egg|zzz|egg>
!wpn bofh
* Qboid
gives bofh an ovoid 𓌪
<bofh>
UmbralRaptor: uh I really hope he's not b/c that's incorrect.
egg|cell|egg has joined #kspacademia
egg|phone|egg has quit [Ping timeout: 182 seconds]
<egg|zzz|egg>
bofh: okay so my magic constants were indeed wrong starting from 23, but 23 is still magic: https://imgur.com/a/GLSbo
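For anyone following along, the kind of magic constant under discussion is the exponent-field trick made famous by the Quake III inverse square root. A sketch with the classic published constant 0x5F3759DF (egg's constants for other roots are his own and may differ):

```python
import struct

def f32_bits(x: float) -> int:
    return struct.unpack("<I", struct.pack("<f", x))[0]

def bits_f32(i: int) -> float:
    return struct.unpack("<f", struct.pack("<I", i & 0xFFFFFFFF))[0]

def fast_invsqrt(x: float) -> float:
    # Halving the exponent field via the bit shift and fixing it up with the
    # magic constant gives a guess within a few percent; one Newton step on
    # f(y) = 1/y**2 - x tightens that to roughly 0.2% relative error.
    y = bits_f32(0x5F3759DF - (f32_bits(x) >> 1))
    return y * (1.5 - 0.5 * x * y * y)
```

The same integer-arithmetic seed generalizes to other roots by scaling the shifted exponent differently, which is where per-root magic constants come from.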
<egg|zzz|egg>
bofh: it's not the only magic root though :D
<egg|zzz|egg>
*bouncy relative error*
<awang>
That's an interesting error message
<egg|zzz|egg>
awang: well that's the static_assert message :-p
<awang>
egg|zzz|egg: It's still an interesting error message
<awang>
But yeah, I was wondering if it was some inside joke from a compiler writer
<awang>
Then I opened up godbolt and was quite disappointed
<egg|zzz|egg>
bofh: https://i.imgur.com/btSrJg6.png so cbrt is nice to compute, 4th root not so much, it oscillates a bit, 23rd root is *really nice*, 46 too
<egg|zzz|egg>
I mean it's all > 3 % and < 3.45 %, so it's not that big of a variation
<egg|zzz|egg>
bofh: and yes, perhaps predictably, this means that the 69th root is Nice.
<egg|zzz|egg>
(cc Fiora, whitequark)
<bofh>
egg|zzz|egg: neat. I'll actually finally be able to take a look at all this in an hour once my office hours are over, but I really want to see a similar plot done for the inverse n-th roots
<bofh>
(since I often actually want to compute those, because their Newton iterates lack divisions in them)
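The division-free iterate bofh means is Newton on f(y) = y**(-n) - x. A sketch (function name and seed are made up; a real implementation would pick the seed with a magic constant as above):

```python
def inv_nth_root(x: float, n: int, y0: float, iters: int = 30) -> float:
    """Newton on f(y) = y**-n - x:  y <- y * ((n + 1) - x * y**n) / n.
    No division by x or by the running iterate; the lone /n is a constant,
    i.e. a compile-time reciprocal multiply in practice."""
    y = y0
    for _ in range(iters):
        y = y * ((n + 1) - x * y ** n) / n
    return y

# The plain n-th root then also falls out without a runtime division:
#   x ** (1/n) == x * (x ** (-1/n)) ** (n - 1)
```

(Note the basin of attraction is finite: a seed with x * y0**n > n + 1 sends the iterate negative, which is another reason the initial guess matters.)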
<egg|zzz|egg>
bofh: it's a bit more tedious because then the error is maximal at local maxima rather than interval bounds
<egg|zzz|egg>
bofh: but I have done some of the calculations
<bofh>
Huh, that strikes me as strange, why the immense difference?
<bofh>
(also bbl, undergrads)
<egg|zzz|egg>
bofh: it would be nice to prove that householder iterates preserve the positions of the error extrema, so that the analysis can be done after k iterates too (we know the result may differ)
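Pending a proof, the extrema-preservation question is at least easy to probe numerically. A crude grid-scan sketch (the linear seed for cbrt is a made-up example, not egg's guess function):

```python
def rel_err_argmax(f, g, lo, hi, steps=20000):
    """Grid-scan for where the relative error of approximation g against f
    peaks on [lo, hi] -- crude, but enough to eyeball whether a Newton
    iterate moves the error extrema around."""
    best_x, best_e = lo, -1.0
    for i in range(steps + 1):
        x = lo + (hi - lo) * i / steps
        e = abs(g(x) / f(x) - 1.0)
        if e > best_e:
            best_x, best_e = x, e
    return best_x, best_e

# Hypothetical linear seed for cbrt on [1, 8], and its Newton refinement.
cbrt = lambda x: x ** (1.0 / 3.0)
seed = lambda x: 0.55 + 0.18 * x
newton = lambda x: (2.0 * seed(x) + x / seed(x) ** 2) / 3.0

x0, e0 = rel_err_argmax(cbrt, seed, 1.0, 8.0)
x1, e1 = rel_err_argmax(cbrt, newton, 1.0, 8.0)
```

For this Newton step the new relative error is roughly the square of the old one, a monotone function of it, which is the heuristic reason to expect the peak not to move much; whether that survives for general Householder orders is exactly the open question.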
<egg|zzz|egg>
bofh: anyway, we clearly need to write this up, if only to be able to formally write that the 69th root is nice