egg|nomz|egg changed the topic of #kspacademia to: https://gist.github.com/pdn4kd/164b9b85435d87afbec0c3a7e69d3e6d | Dogs are cats. Spiders are cat interferometers. | Космизм сегодня! | Document well, for tomorrow you may get mauled by a ネコバス. | <UmbralRaptor> egg|nomz|egg: generally if your eyes are dewing over, that's not the weather. | <ferram4> I shall beat my problems to death with an engineer.
<kmath>
<treelobsters> We seem to have acquired one of those 4-dimensional shadows of a 13-dimensional being. ⏎ ⏎ Or possibly vice versa.… https://t.co/8k4ueMxJm2
rqou has quit [Quit: ZNC 1.7.x-git-709-1bb0199 - http://znc.in]
<egg|zzz|egg>
bofh: hm it seems Halley would result in a slightly better error than two newtons on the inverse
<egg|zzz|egg>
(very slightly as far as worst case is concerned, unsure how the fractions of the input space where each is better compare)
<bofh>
egg|zzz|egg: is it ultimately meaningful given you truncate to 17 bits anyway?
<egg|zzz|egg>
bofh: it is, it's one of the two meaningful factors in the error
<egg|zzz|egg>
bofh: the error (for nearest-ties-to-even) is basically half an ULP + (error of the clobbered approximation) * (rounding error in the final Householder)
<egg|zzz|egg>
(assuming the error in the exact householder is eggstremely smol which it is for 6th order)
<bofh>
I thought you found fifth order acceptable?
<egg|zzz|egg>
bofh: yeah but it's no slower than 5th, and the rounding errors look the same iirc
tawny has quit [Remote host closed the connection]
<bofh>
huh, I guess it'd be entirely either throughput limited or limited by the fdiv at the end, go figure.
<egg|zzz|egg>
bofh: well the dependency chains have the same length in 5th and 6th order I think
<bofh>
Yeah, and a few more fmuls are essentially free.
<egg|zzz|egg>
bofh: and it's basically the same for rounding, since the bound ends up being the maximum of the bound on the terms
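[Editor's sketch: the log doesn't say which function egg is refining, so as an assumed illustration here is a Newton step vs. a Halley step (Householder orders 1 and 2) for refining a cube root. The constants and starting guess are invented; this only shows the shape of the comparison, not egg's actual implementation.]

```python
import math

def newton_cbrt_step(x, y):
    # One Newton step on f(y) = y^3 - x: y' = y - (y^3 - x) / (3 y^2).
    return y - (y**3 - x) / (3 * y**2)

def halley_cbrt_step(x, y):
    # One Halley step for cbrt(x): y' = y * (y^3 + 2x) / (2 y^3 + x).
    # Halley converges cubically per step, Newton quadratically.
    return y * (y**3 + 2 * x) / (2 * y**3 + x)

x = 2.0
y0 = 1.2  # crude initial approximation of cbrt(2) ~ 1.2599
two_newton = newton_cbrt_step(x, newton_cbrt_step(x, y0))
one_halley = halley_cbrt_step(x, y0)
exact = x ** (1 / 3)
print(abs(two_newton - exact), abs(one_halley - exact))
```

Which variant wins in worst-case error depends on the approximation being clobbered and on the rounding of each step, which is exactly the comparison being discussed above.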
<awang>
egg|zzz|egg: C++ compilers sound painful to get right
<awang>
I feel sorry for the compiler devs who have to deal with it
<kmath>
<✔dduane> The best pull quote: "I would hope that an actual Mars mission would be, frankly put, less stabby than this one." -… https://t.co/L7YI1HoPdR
<kmath>
<✔HeatherAntos> Sir Mixalot likes big butts and cannot lie. His brother, Sir Mixalittle does not like big butts and cannot tell the… https://t.co/cXtGHMxbFZ
<egg|zzz|egg>
!wpn bofh, whitequark, котя, and the котяchrome kitten
* Qboid
gives bofh, whitequark, котя, and the котяchrome kitten a deterministic lagoon
<APlayer>
So, I have a bunch of data to calibrate an accelerometer. I found a mathematical model which defines e_i = sqrt((x_i / c_x1 - c_x2)^2 + (y_i / c_y1 - c_y2)^2 + (z_i / c_z1 - c_z2)^2) - 1, which is basically the error of the total acceleration vector's magnitude relative to a vector of 1 G, where each axis' measurement is adjusted with an offset and a scale.
<APlayer>
The mathematical model calls for selecting c_x1, c_x2, c_y1, ... such that the sum of all the e_i's squared is minimal.
<APlayer>
How would I solve this?
tawny has joined #kspacademia
<SnoopJeDi>
APlayer, when in doubt, least squares ¯\_(ツ)_/¯
<APlayer>
Least squares what?
<SnoopJeDi>
regression
<SnoopJeDi>
i.e. Σe_i² should be minimized, so take its gradient and search for a zero
<APlayer>
So a plain gradient descent?
<APlayer>
Wouldn't that be prone to local minima in my case as well?
<SnoopJeDi>
no that's not gradient descent at all
<SnoopJeDi>
APlayer, gradient descent is "take the gradient and go the other way"
<SnoopJeDi>
least-squares is "state the error as a function [i.e. Σe_i²] and solve the resulting system of equations"
<APlayer>
"Solve the system of equations" sounds more trivial than it is
<SnoopJeDi>
it's possible to have degeneracy though, of course
<APlayer>
Because I have 27 equations which are all of the form shown above
<SnoopJeDi>
because grad(Σe_i²)=0 only tells you it's a local minimum yep
<SnoopJeDi>
APlayer, difficulty != complexity though
<APlayer>
Honestly, I was thinking something in the direction of Simulated Annealing for this problem
<SnoopJeDi>
the simplicity of LSQR is exactly why it's nice, even if your matrices get large
<SnoopJeDi>
but it's not a panacea either
<APlayer>
However, I'd need to define how I adjust my values (which I need for either algorithm, really)
<APlayer>
Also, in an ideal world, the algorithm I choose should be able to run on an Arduino, that is, in my case, an ATmega328
<SnoopJeDi>
APlayer, you said you have 27 of those equations, so 27 errors? Do they all have 6 free parameters c_xi?
<APlayer>
The c_x1 and c_x2 should be the same for all equations; they are the calibration for the sensor. That is, I am trying to find those parameters such that the error to a known force (Gravity) is minimized, and so is hopefully the error to an unknown force
<APlayer>
s/force/acceleration/
<Qboid>
APlayer meant to say: The c_x1 and c_x2 should be the same for all equations; they are the calibration for the sensor. That is, I am trying to find those parameters such that the error to a known acceleration (Gravity) is minimized, and so is hopefully the error to an unknown force
<SnoopJeDi>
oh okay
<APlayer>
Whyever it only edited one of the "forces"
<SnoopJeDi>
that's not a really big system then, shouldn't be a problem for the Arduino I think
<UmbralRaptop>
<warfreak2> little-known fact: you can't do a linear search on a hard drive because the data is in a circle, not a line
<APlayer>
SnoopJeDi: Okay, I'll have a more proper look at that tomorrow... Thank you!
<SnoopJeDi>
UmbralRaptop, I thought it was binary search because it's just 0s and 1s
<UmbralRaptop>
eggsactly!
<SnoopJeDi>
APlayer, if you've never done LSQ before, I recommend fitting a line first (more or less what Excel et al would do if you asked for a linear fit)
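[Editor's sketch of the least-squares calibration discussed above: Gauss-Newton with a finite-difference Jacobian on APlayer's model e_i = |raw_i / scale - offset| - 1. The scales, offsets, and 27 synthetic readings are invented for illustration; numpy is assumed, though the same iteration could be hand-rolled for an ATmega328.]

```python
import numpy as np

# Synthetic readings: a sensor with per-axis scale and offset observing
# unit-gravity vectors in 27 orientations (all values invented).
rng = np.random.default_rng(0)
true_scale = np.array([1.02, 0.98, 1.01])
true_offset = np.array([0.05, -0.03, 0.02])
g = rng.normal(size=(27, 3))
g /= np.linalg.norm(g, axis=1, keepdims=True)  # unit 1 G vectors
raw = (g + true_offset) * true_scale           # what the sensor reports

def residuals(c):
    # c = [c_x1, c_y1, c_z1, c_x2, c_y2, c_z2]
    scale, offset = c[:3], c[3:]
    return np.linalg.norm(raw / scale - offset, axis=1) - 1.0

def jacobian(c, h=1e-7):
    # Forward-difference Jacobian; fine for a 6-parameter problem.
    J = np.empty((len(raw), len(c)))
    r0 = residuals(c)
    for j in range(len(c)):
        cj = c.copy()
        cj[j] += h
        J[:, j] = (residuals(cj) - r0) / h
    return J

c = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])  # start from "no calibration"
for _ in range(20):                            # Gauss-Newton iterations
    J, r = jacobian(c), residuals(c)
    c -= np.linalg.lstsq(J, r, rcond=None)[0]
print(c)  # ~ [1.02, 0.98, 1.01, 0.05, -0.03, 0.02]
```

Because the residual is exactly zero at the true parameters, the finite-difference Jacobian only affects the convergence rate, not the answer.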
<kmath>
<drilscretelog> (accidentally fires differential cryptanalysis into my custom ternary hash fucntion with 100% accuracy rate) alrigh… https://t.co/0l7ICkM1jj
<SnoopJeDi>
hah
<rqou>
bofh, egg|zzz|egg: ok, i'm actually looking at doing sin/cos for float32 right now, and i noticed (in musl's code) that sin/cos require __rem_pio2f which then calls __rem_pio2_large
<rqou>
but __rem_pio2_large seems to assume that the inputs have at least 24 bits of precision
<rqou>
but float32 only has 23
<rqou>
is this a problem? bofh told me to "just change everything to float"
<egg|zzz|egg>
rqou: 23 + the implicit one, 24
<rqou>
ok
<bofh>
yup, there's an implicit bit in IEEE754 floats, so that's correct.
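[Editor's note: the 23-stored-bits-plus-implicit-one point can be demonstrated by round-tripping through binary32, e.g.:]

```python
import struct

# binary32 stores 23 fraction bits, but normal numbers carry an implicit
# leading 1, giving 24 bits of significand precision: integers are exact
# up to 2^24, and 2^24 + 1 is the first one that is not representable.
def f32(x):
    return struct.unpack('f', struct.pack('f', x))[0]

print(f32(2.0**24))      # 16777216.0 -- exactly representable
print(f32(2.0**24 + 1))  # 16777216.0 -- halfway case, rounds to even
print(f32(2.0**24 + 2))  # 16777218.0 -- representable again
```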
<egg|zzz|egg>
also in DEC VAX floats
<egg|zzz|egg>
!wpn -add:adj DEC
<Qboid>
egg|zzz|egg: Adjective added!
<bofh>
VAX floats are just slightly weird IEEE754 ones with weird endianness (the d & f kinds, at least. g is erm, ""special"").
<bofh>
:P
<egg|zzz|egg>
bofh: why is G special?
<egg|zzz|egg>
bofh: also you forgot H
<egg|zzz|egg>
but G seems to have the same widths as binary64?
<bofh>
right, I think I'm mixing up G and D.
<bofh>
H, short for "hell" :P
<rqou>
bofh: do you know if the iq/f/fq/q arrays with 20 elements can be shrunken if i only care about binary32?
<egg|zzz|egg>
bofh: but 113 bits of mantissa!
<egg|zzz|egg>
it's like binary128!
<egg|zzz|egg>
except it existed
awang has quit [Ping timeout: 186 seconds]
<bofh>
rqou: I don't think so offhand, but I'll take a look once I get back to my office.
<bofh>
egg|zzz|egg: hey, quad-precision softfloat is a thing
<egg|phone|egg>
Yes but why can i haz no binary128 instruction set
<egg|phone|egg>
Also autocorrect tried turning haz into gazebo
<egg|phone|egg>
!Wpn bofh
* Qboid
gives bofh a Mandelbrot hatpin
<egg|phone|egg>
!Wpn whitequark
* Qboid
gives whitequark a canonic nabla
SilverFox has quit [Ping timeout: 198 seconds]
SilverFox has joined #kspacademia
UmbralRaptop has quit [Quit: Bye]
UmbralRaptop has joined #kspacademia
<egg|phone|egg>
Whitequark: any news from the kittens?