VITAS changed the topic of #spacedock to: Problems?: https://github.com/KSP-SpaceDock/SpaceDock/issues | Matrix/Riot Chat: https://im.52k.de +spacedock:52k.de Feel free to ask for help, we only bite a little bit! | If you want to help, please check https://github.com/KSP-SpaceDock/SpaceDock-Backend/issues/5 :) | <VITAS> inet users have the attentionspan of a squirrel...
Darklight[m] has quit [Quit: Idle timeout reached: 10800s]
RockyTV[m] has quit [Quit: Idle timeout reached: 10800s]
Astro[m] has quit [Quit: Idle timeout reached: 10800s]
VITAS[m] has quit [Quit: Idle timeout reached: 10800s]
egg has quit [Read error: Connection reset by peer]
egg has joined #spacedock
HebaruSan[m] has quit [Quit: Idle timeout reached: 10800s]
cptjames32[m] has joined #spacedock
<cptjames32[m]> anyone here?
Webchat996 has joined #spacedock
<Webchat996> anyone here?
Webchat996 has quit [Client Quit]
VITAS[m] has joined #spacedock
<VITAS[m]> yes but you were too impatient
Darklight[m] has joined #spacedock
<Darklight[m]> I'm surprised you even have an IRC, especially if nobody is using it 😛
<VITAS[m]> nobody but the sane
<DasSkelett> Nobody's using it, huh?
<VITAS[m]> i should stop supporting discord because im not using it :>
<VITAS[m]> i think communication amongst site users should play a more prominent role
<VITAS[m]> they could help each other
cptjames32[m] has quit [Quit: Idle timeout reached: 10800s]
<VITAS[m]> i wonder how i can share the same dir mountpoint on smb with two containers?
<VITAS[m]> so i can have both access the mods
<VITAS[m]> the size=0 trick generates the folders where you want them
<VITAS[m]> For example, to make the directory /mnt/bindmounts/shared accessible in the container with ID 100 under the path /shared, use a configuration line like mp0: /mnt/bindmounts/shared,mp=/shared in /etc/pve/lxc/100.conf. Alternatively, use pct set 100 -mp0 /mnt/bindmounts/shared,mp=/shared to achieve the same result. <-lets see if that works
<VITAS[m]> working
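A minimal sketch of the shared bind mount, assuming two hypothetical container IDs (100 and 101) and the example host path from the Proxmox docs quoted above; the real IDs and paths will differ:
```bash
# Hypothetical container IDs and host path, for illustration only.
# Give both containers the same host directory as /storage/sdmods:
pct set 100 -mp0 /mnt/bindmounts/shared,mp=/storage/sdmods
pct set 101 -mp0 /mnt/bindmounts/shared,mp=/storage/sdmods

# Equivalent lines in /etc/pve/lxc/100.conf and /etc/pve/lxc/101.conf:
# mp0: /mnt/bindmounts/shared,mp=/storage/sdmods
```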
<VITAS[m]> next question: whats the easiest way to switch between containers?
<VITAS[m]> now i would have to edit the rev proxy config and reload it
<VITAS[m]> a simple command would be easier
<VITAS[m]> lets see if ansible and ssh can help me
<VITAS[m]> < thinking loud :D
DasSkelett[m] has joined #spacedock
<DasSkelett[m]> Bash script with sed ?
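A rough sketch of that sed-based switch, assuming the backend hostname sits on one line of an ATS remap config; the file path, hostnames, and port are placeholders, not the actual setup:
```bash
#!/bin/bash
# Hypothetical paths/hostnames for illustration only.
CONF=/etc/trafficserver/remap.config
TARGET="$1"   # e.g. sd1a or sd1b

# Point spacedock.info at the chosen backend container
sed -i "s/sd1[ab]\.internal/${TARGET}.internal/g" "$CONF"

# Reload Apache Traffic Server so the new mapping takes effect
traffic_ctl config reload
```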
<DasSkelett[m]> (Also the bridge is broken again if you are wondering why everyone ignores you :P )
<VITAS[m]> thx
<VITAS[m]> Darklight would now say: lets get rid of discord im not using it
<DasSkelett[m]> Haha
Darklight[m] has quit [Quit: Idle timeout reached: 10800s]
RockyTV[m] has joined #spacedock
<RockyTV[m]> is it possible to upload a file in cURL from another URL?
<VITAS[m]> if RockyTV can read this: you would have to first download it and then upload it; you might be able to do it with piping between them
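A minimal sketch of that download-then-upload piping idea; the URLs, the PUT endpoint, and the form field name are placeholders:
```bash
# Stream the download straight into an upload without saving it locally.
# "-" tells curl to read the upload body from stdin.
curl -s https://source.example.com/drawing.dwg \
  | curl --upload-file - https://target.example.com/uploads/drawing.dwg

# For a multipart form upload the equivalent is roughly:
#   curl -s https://source.example.com/drawing.dwg \
#     | curl -F "file=@-;filename=drawing.dwg" https://target.example.com/upload
```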
<RockyTV> oh discord is broken
<VITAS[m]> theres something for ftp: fxp
<VITAS[m]> yes
<RockyTV> ah so it's not possible then :/
<VITAS[m]> you cant simply tell one webserver to transfer stuff to another from some random client
<VITAS[m]> unless that server has some code to do it
<VITAS[m]> else everything has to go through your client
<VITAS[m]> what are you trying to do exactly?
<RockyTV> boss asked if we could upload the dwg files stored in our Azure cloud directly to Autodesk without needing to turn them into blobs and then upload them to Autodesk
<RockyTV> good news is, autodesk hasn't charged anything to convert from DWG to SVF (the file format used by their viewer)
<VITAS[m]> im confused. cant the source server send it when a client requests it ?
<RockyTV> this is the current workflow to view a DWG file: authorization (request OAuth tokens and use required scopes), create a bucket (storage), upload DWG file to bucket (will return a URN), request a translation job (will translate from DWG to SVF)
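A hedged curl sketch of that workflow; the `$FORGE_*` endpoint URLs, bucket name, scopes, and JSON payloads below are illustrative placeholders, not verified Autodesk API details:
```bash
# Illustrative only — endpoints, names, and payloads are assumptions.

# 1. authorization: request an OAuth token with the required scopes
TOKEN=$(curl -s -X POST "$FORGE_AUTH_URL" \
  -d "client_id=$CLIENT_ID&client_secret=$CLIENT_SECRET" \
  -d "grant_type=client_credentials&scope=data:read data:write bucket:create" \
  | jq -r .access_token)

# 2. create a bucket (storage)
curl -s -X POST "$FORGE_BUCKETS_URL" \
  -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -d '{"bucketKey":"my-dwg-bucket","policyKey":"transient"}'

# 3. upload the DWG to the bucket (the response contains a URN)
curl -s -X PUT "$FORGE_BUCKETS_URL/my-dwg-bucket/objects/drawing.dwg" \
  -H "Authorization: Bearer $TOKEN" --data-binary @drawing.dwg

# 4. request a translation job (DWG -> SVF), using the base64-encoded URN
curl -s -X POST "$FORGE_DERIVATIVE_JOB_URL" \
  -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -d '{"input":{"urn":"<base64 urn>"},"output":{"formats":[{"type":"svf","views":["2d","3d"]}]}}'
```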
<RockyTV> uhm I'm not sure
<VITAS[m]> what a mess
<RockyTV> yeah
<RockyTV> I'll suggest that we upload the DWGs to autodesk directly when a user uploads the file so it's transformed automatically and we don't need to do it again every time someone wants to view the DWG file
<VITAS[m]> welcome to the world of corporate closed source software
<VITAS[m]> depends on the amount of time that takes
<VITAS[m]> but yes, if theres no limit on the amount of req to autodesk i would always do that and save both files
<VITAS[m]> if they have loads of req autodesk might stop those
<RockyTV> thanks, that might be a good alternative to not use autodesk's api
<RockyTV> I just can't find the location of dwg2svg
<VITAS[m]> location ? on your hd? to download?
<RockyTV> no, to run the scripts. I installed it but `dwg2svg` doesn't work
<VITAS[m]> find / -name dwg2svg
<VITAS[m]> i should ask your company for a raise
<VITAS[m]> :D
Darklight[m] has joined #spacedock
<Darklight[m]> I believe you're going to have to download it first...
<VITAS[m]> he said he did
<Darklight[m]> Err to be clear that is a new creation... not an update upload..
<RockyTV> VITAS[m], I think Darklight[m] said that to the curl question I asked before
<VITAS[m]> ah ok
<VITAS[m]> version conflict :D
<VITAS[m]> but Darklight your answer isnt matching the question :)
DasSkelett[m]1 has joined #spacedock
<DasSkelett[m]> Test other way
<DasSkelett[m]1> Test one way
<VITAS[m]> test noway!
<DasSkelett[m]1> Funny, Discord -> Matrix works, Matrix -> Discord doesn't.
<VITAS[m]> thats the usual problem if theres a problem
<VITAS[m]> thats why i feel so ignored all the time :)
<RockyTV[m]> testing
<RockyTV> discord-> irc worked
<RockyTV> and irc->discord too
<Darklight[m]> It works again
<DasSkelett[m]> Let's see
<Darklight[m]> Well... It did...
<Darklight[m]> Oh no it's matrix
<Darklight[m]> So discord isn't sending...
<RockyTV[m]> I can see your messages on irc
<Darklight[m]> I can't see them atm
<Darklight[m]> Only vitas's from matrix
<Darklight[m]> The bridge bot vitas is using is crap tier 😦
<Darklight[m]> WRONG CHANNEL
<Darklight[m]> Fuck me >.<
<DasSkelett[m]1> Ahahaha
<RockyTV> there are 3 DasSkelett's, who's the real one? :P
<DasSkelett[m]1> Me
<DasSkelett[m]> Me
<DasSkelett> Me
<RockyTV> 🤔
<VITAS[m]> Darklight: we could set up our own bridge and not use the ready made one anymore
<DasSkelett[m]1> > What if the elements of a mod's score that reflect how well it's populated (long description, source code link, recently updated) added a percentage of its base score rather than a static number? That way those pieces could remain relevant even at the high end of the distribution.
<DasSkelett[m]1> HebaruSan
<DasSkelett[m]1> Sounds reasonable
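A tiny illustration of the proportional-bonus idea quoted above; the field names and percentages are made up, not SpaceDock's actual scoring code:
```python
# Hypothetical numbers/field names to illustrate proportional vs. static bonuses.
def mod_score(base_score: float, has_long_description: bool,
              has_source_link: bool, recently_updated: bool) -> float:
    score = base_score
    # proportional bonuses stay relevant even for high-scoring mods
    if has_long_description:
        score += 0.05 * base_score
    if has_source_link:
        score += 0.05 * base_score
    if recently_updated:
        score += 0.10 * base_score
    # a static bonus (e.g. score += 5) becomes negligible once base_score is large
    return score
```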
<VITAS[m]> why did ubuntu stop setting mysql root passwords
<VITAS[m]> it screws up my stuff all the time
<Darklight[m]> I am pretty sure vitas does exactly that already?
<DasSkelett[m]1> Hmm, then I misunderstood the question.
<VITAS[m]> yes i do and i dont want to loadbalance in that case
<VITAS[m]> i want to switch between sd1a and sd1b in one go and flush the rev proxy's cache
<VITAS[m]> what i've got now:
<VITAS[m]> i set up 2 containers sd1a and sd1b both have a mount called /storage/sdmods pointing to the same folder on my storage backend
<VITAS[m]> i want both of them to be setup exactly the same (as production servers)
<VITAS[m]> one of them will be pointed to in ATS (central rev. proxy cache) to answer spacedock.info req.
<VITAS[m]> the other will be used to deploy new prod versions to.
<Darklight[m]> Errr, I am confused where this is going
<VITAS[m]> now once we want to deploy a new software version we can do that without time constraints
<Darklight[m]> Did you see my thing about ^has_journal?
<VITAS[m]> we just switch the host ATS points to once we are done
<VITAS[m]> i solved the storage thingy (except the quota)
<Darklight[m]> I mentioned if you tune2fs -O ^has_journal (it disables the journal), the journal doesn't abort because there isn't any, and it prevents the mounted loop file from going into read-only mode
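A short sketch of the journal-disabling trick Darklight describes, assuming the storage lives in a loop-mounted image at a made-up path:
```bash
# Hypothetical image path; run while the filesystem is unmounted.
umount /mnt/sdmods
tune2fs -O ^has_journal /srv/storage/sdmods.img   # remove the ext4 journal
e2fsck -f /srv/storage/sdmods.img                 # check the fs after the change
mount -o loop /srv/storage/sdmods.img /mnt/sdmods
```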
<VITAS[m]> i want to enable us to take our time setting up changes to prod systems
<VITAS[m]> and not rush through it in 1h and then discover that we missed stuff
<Darklight[m]> Although passing through the directory is probably way safer, but yeah no quota
<VITAS[m]> im using direct mount without the pve storage subsys now
<Darklight[m]> Ah ok
<VITAS[m]> Bind Mount Points
<VITAS[m]> Bind mounts allow you to access arbitrary directories from your Proxmox VE host inside a container. Some potential use cases are:
<VITAS[m]> i have to mount the storage on the pve host via fuse
<DasSkelett[m]1> You wouldn't have to use it as load balancer, if you keep one of the containers disabled all the time. Then it's basically just a rev proxy / traffic server without balancing.
<DasSkelett[m]1> As in: have both sd1a and sd1b as balancer members, but sd1b manually disabled. Now when it's time to switch over for new code, enable sd1b and disable sd1a.
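A minimal sketch of DasSkelett's "one member disabled" idea using Apache httpd's mod_proxy_balancer; hostnames and ports are placeholders, and the setup discussed here actually uses ATS, so this is only an illustration:
```apache
<Proxy "balancer://spacedock">
    # sd1a serves traffic; sd1b stays disabled (+D) until switch-over
    BalancerMember "http://sd1a.internal:5000"
    BalancerMember "http://sd1b.internal:5000" status=+D
</Proxy>
ProxyPass        "/" "balancer://spacedock/"
ProxyPassReverse "/" "balancer://spacedock/"
```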
<VITAS[m]> i can monitor its usage by also mounting it in proxmox as storage
<Darklight[m]> Is this about no-downtime updates?
<VITAS[m]> DasSkelett: yes but what benefit would i have? what i need to do now is to change a line in the rev proxy's config and reload it
<DasSkelett[m]1> No no, just planning HebaruSan
DasSkelett[m]1 has quit [Network ban]
Darklight[m] has quit [Network ban]
VITAS[m] has quit [Network ban]
DasSkelett[m] has quit [Network ban]
DasSkelett[m]1 has joined #spacedock
DasSkelett[m] has joined #spacedock
HebaruSan[m] has joined #spacedock
<DasSkelett[m]1> But I thought you wanted a solution without editing the config VITAS
<DasSkelett[m]1> Because the Balancer Manager has a web interface, and maybe even an HTTP API. Look at the second link of my comment above.
<DasSkelett[m]1> You mean on alpha and beta?
<DasSkelett[m]1> `source bin/activate; alembic upgrade head`
<DasSkelett[m]1> I already asked that a week ago, the collective answer was "Let's keep it manually"
<DasSkelett[m]1> oh yeah, there's also `./spacedock database migrate`, which does the same.
<HebaruSan[m]> prepare.sh ?
<DasSkelett[m]1> Backup db before pulling new code, since the new code could already mess the db up.
<DasSkelett[m]1> Regarding 4) yes, alembic will just report "already on newest revision"
<DasSkelett[m]1> prepare.sh would be executed by the systemd scripts. And I understand 5) as starting the systemd services again
<HebaruSan[m]> DasSkelett Do we need `sudo systemctl daemon-reload`?
<DasSkelett[m]1> Oh, good point. Also swap out `/etc/systemd/system/spacedock.target`, since it isn't symlinked
<DasSkelett[m]1> small correction: probably better to edit it manually instead of copying the one from the repo since there might be differences between them.
<DasSkelett[m]1> This couldn't be done in a script though.
<DasSkelett[m]1> Hmm, not much that can be different, basically only the gunicorn ports. If they are the same on prod, we can put the copying in the script.
godarklight[m] has joined #spacedock
RockyTV[m] has quit [Quit: Idle timeout reached: 10800s]
<HebaruSan[m]> Could it be symlinked? Then we would be able to maintain it in git and upgrades could have one fewer step
<DasSkelett[m]1> From my side, prod could be. Alpha and Beta can't due to the many gunicorn instances.
<HebaruSan[m]> To be clear, I'm NOT saying that we do not need either one of them. Just that in practice, they both are used the same way by the same people for the same purposes.
<DasSkelett[m]1> I don't think just adding another branch is a (good) replacement for unit tests.
<HebaruSan[m]> If we actually had a group of non-dev users who wanted to beta test things, that would be great, but I do not see that happening
<DasSkelett[m]1> And even then they could test on alpha IMHO.
<HebaruSan[m]> There is just not that much to do on SpaceDock Beta. You're not going to upload or download actual mods there, and the actual changes are pretty small.
<HebaruSan[m]> If that's the idea then beta should not exist at all 🙂
<HebaruSan[m]> That doesn't really make sense. New features need to be possible to add, you can't have an intermediate stage of the development process that blocks features.
<HebaruSan[m]> Speaking for myself and probably DasSkelett, the "I'm still working on that" stage is addressed by feature branches and code review of pull requests.
<HebaruSan[m]> We can probably all agree that tests would help a lot
<HebaruSan[m]> I personally would not necessarily know how to start that, though, given how many of SD's operations require a web server or a database
<HebaruSan[m]> I can link you to the tests for NetKAN-Infra, which are also in Python
<HebaruSan[m]> Calling beta a "freeze" is just weird, because we would never actually say, that's a feature, it can't migrate till next month, like you would in an enterprise "feature freeze" situation
<DasSkelett[m]1> https://flask.palletsprojects.com/en/1.1.x/testing/ likely helps for writing tests
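A minimal pytest sketch along the lines of that Flask testing guide; the import path is an assumption, not the actual SpaceDock module layout, while the `/version` route comes up later in this conversation:
```python
# Hypothetical module path — adjust to wherever the Flask app object lives.
import pytest
from KerbalStuff.app import app

@pytest.fixture
def client():
    app.config['TESTING'] = True
    with app.test_client() as client:
        yield client

def test_version_returns_200(client):
    response = client.get('/version')
    assert response.status_code == 200
```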
<HebaruSan[m]> Don't worry, pytest and flask are not going anywhere
<DasSkelett[m]1> Never did anything with it myself, so I'd have to read into it too.
<HebaruSan[m]> The API returns HTML if a 500 error occurs 😬
<DasSkelett[m]1> Yes, can confirm.
<HebaruSan[m]> Well I've got enough of a test harness to check whether `/version` works now
<HebaruSan[m]> Will push a branch for discussion as soon as I figure out how to add it to a GitHub action
<DasSkelett[m]1> Wasn't entirely sure how to fix this. But I think it should be possible to register custom error handlers for a route, so that 500s on all API routes return JSON.
<HebaruSan[m]> Yeah almost certainly
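A sketch of what such a handler could look like; the template name and the JSON shape are assumptions, and `app` is assumed to be the SpaceDock Flask application object:
```python
from flask import jsonify, render_template, request

@app.errorhandler(500)
def internal_error(exc):
    if request.path.startswith('/api/'):
        # API callers get JSON instead of the HTML error page
        return jsonify(error=True, reason='Internal server error'), 500
    return render_template('error_5XX.html'), 500  # placeholder template name
```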
<DasSkelett[m]1> If you need help creating a workflow file, I can help (or do it too)
godarklight[m] has quit [Quit: Idle timeout reached: 10800s]
<HebaruSan[m]> GitHub runs the workflow even before there's a PR
<HebaruSan[m]> DasSkelett https://github.com/KSP-SpaceDock/SpaceDock/pull/280
<HebaruSan[m]> I would like to add tests for the various `/api/` routes, but my attempts resulted in errors about the tables not existing. Maybe you can figure that out.
VITAS[m] has joined #spacedock
Darklight[m] has joined #spacedock
<Darklight[m]> I don't think that is possible because of db upgrades?
<Darklight[m]> Alembic is supposed to do it I think, but I would be backing up and verifying
<Darklight[m]> I wonder if that is worth sneaking into spacedock-prepare.sh
<Darklight[m]> For the discord users, I struggled to make a code block with ```; in discord you can do it inline, matrix needs it on its own line...
<Darklight[m]> It could be, but personally I consider the systemd a deployment detail specific to that instance, with the github being an example. But this isn't my server and it's 100% vitas's say 😉
<Darklight[m]> Also I saw the messages about alpha and beta being redundant, I definitely agree, but we also lack tests afaik so shrug
<VITAS[m]> Darklight: the two containers are for no-downtime updates; the storage thing is to solve our problem and also allow two containers to access the same data
<Darklight[m]> I have a use case to suggest to beta but then it needs to be locked to vitas - copying prod db data over and syncing it every so often, as it couldn't actually work on the live db, but I know vitas won't go down that road
<VITAS[m]> HebaruSan: no im setting this up and we will deploy it on the next prod update we do
<Darklight[m]> Freeze works in a big setting, but I think skellets/hebaru's PRs make a bit more sense; I suspect tests would be better because we didn't catch the path in the db change
<VITAS[m]> (this also gives us instant ubuntu 20 upgrade)
<Darklight[m]> I'd have to write them though and I've never done stuff like that before 😛
<VITAS[m]> DasSkelett: exactly and i dont see how that does it
<Darklight[m]> Probably the equivalent of curl tests, it's mainly to make sure all the http facing stuff works
<VITAS[m]> DasSkelett: (and i need a way to also flush ats cache because script updates and such)
<Darklight[m]> I probably shouldn't have mentioned anything if I get shoved into yet another framework 😛
<VITAS[m]> Darklight: this time it wont but we have to get that in place anyways for it to work in the future
<Darklight[m]> Hey vitas: remember what happened to slim? 😄
<VITAS[m]> i dont think we will change the db schema all the time
<Darklight[m]> I left the css and js alone!
<VITAS[m]> DasSkelett: uh then im interested :) that leaves the problem of flushing the ATS cache. i was thinking of simply having a command on the ats host that does all the things we come up with
<Darklight[m]> But after I had a framework die and didn't understand its point, it got the yeeting
<VITAS[m]> (im using that for updating certs atm already)
<Darklight[m]> It didn't die die, it died because of PHP incompatibility iirc
<VITAS[m]> something like "switchsd sd1a"
<Darklight[m]> And slim2 to slim3 was NOT a drop in upgrade
<VITAS[m]> it could even deploy db shema updates while its at it if we can write a mechanism that does that part.
<Darklight[m]> And then after realising what slim was, I slimmed it right down by yeeting it
<VITAS[m]> schema
<Darklight[m]> Hahaha
<VITAS[m]> DasSkelett: how do we deploy db updates atm?
<Darklight[m]> If you have problems with performance I could cache stuff in redis probably, python has an instance where it can save stuff, I think PHP lacks that
<VITAS[m]> on any sd deployment
<Darklight[m]> But I suspect d-mp's code is... well it doesn't do anything
<VITAS[m]> we have a new version that required db changes how do we deploy them?
<Darklight[m]> Apart from spit out the db
<VITAS[m]> aka is there a command we have to run?
<Darklight[m]> ``ab -n 10000 -c 50 http://d-mp.org/serverlist`` Requests per second: 5203.69 [#/sec] (mean) yeah I think d-mp.org is fine
<VITAS[m]> ah ok thx
<Darklight[m]> That's the hardest page too
<VITAS[m]> if thats useful
<VITAS[m]> im making a checklist atm
<VITAS[m]> i can then turn that into ansible tasks
<VITAS[m]> DasSkelett: forgive me. im usually defaulting to being cautious
<VITAS[m]> if i can make sure we have a fresh backup im ok with automating stuff, if the process itself (the deployment) is kicked off manually
<VITAS[m]> so, in what order would we have to do things?
<VITAS[m]> update script:
<VITAS[m]> 1. stop processes
<VITAS[m]> 2. checkout new code from git?
<VITAS[m]> 3. backup db
<VITAS[m]> 4. update db (running the command when theres no need wont hurt, right?)
<VITAS[m]> 5. start flask, celery, ...
<VITAS[m]> 6. automated test if everything is up and running
<VITAS[m]> manual tests
<VITAS[m]> switch server script (switches between prod and prod-spare we just updated):
<VITAS[m]> 1. swap ats pointer (points to hot spare)
<VITAS[m]> 2. reload ats config
<VITAS[m]> 3. flush ats cache
<VITAS[m]> New version is live
<VITAS[m]> repeat update script for the server that just became the new prod-spare (former prod) to keep their versions in sync
<VITAS[m]> am i missing something?
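A rough shell sketch of the update-script half of that checklist; the paths, service names, database name, and port are placeholders, the migration command comes from the alembic discussion above, and DasSkelett's earlier note suggests doing the backup before pulling new code:
```bash
#!/bin/bash
# Hypothetical paths/unit names — adjust to the real deployment.
set -e
cd /opt/spacedock

# 1. stop processes
sudo systemctl stop spacedock.target

# 2. check out the new code (a release tag or branch passed as $1)
#    note: the backup in step 3 was suggested to happen *before* pulling new code
git fetch origin && git checkout "$1"

# 3. back up the database
pg_dump spacedock > "/storage/backups/spacedock-$(date +%F).sql"

# 4. update db (a no-op if the schema is already current)
source bin/activate
alembic upgrade head

# 5. start flask, celery, ...
sudo systemctl daemon-reload
sudo systemctl start spacedock.target

# 6. automated check that everything is up and running
curl -fsS http://localhost:5000/version
```
The switch-server half (swap the ATS pointer, reload, flush the cache) matches the sed/traffic_ctl sketch earlier in the log.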
<VITAS[m]> ok
<VITAS[m]> please edit and add stuff :)
<VITAS[m]> HebaruSan i could try to find people who are willing to test on beta
<VITAS[m]> beta exists because i see alpha as a way for you guys to have one definitive instance that reflects the current state
<VITAS[m]> beta should be something that isnt in flux but has a feature freeze
<VITAS[m]> something you can let others look at and not be worried about having to tell them "yes im still working on that"
<VITAS[m]> so when do you want to deploy the code on prod then?
<VITAS[m]> you have to freeze it at some point
<VITAS[m]> first features, then fix bugs till you decide its time for a release
<VITAS[m]> i need code that you "as in everyone here" are confident sending into production
<VITAS[m]> so what do you want and need to output code that can be deployed on prod safely?