I run it and MariaDB in Docker and they run perfectly when left alone, but everything breaks horribly if I try to do an update. I recently figured out that you need to do NC updates in steps, and Docker (unRAID’s, specifically) defaults to jumping straight to the latest version. I think I’ve figured out how to pin the version now, so fingers crossed I won’t destroy it the next time I update.
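For anyone else on unRAID: the trick seems to be pinning the repository to an explicit tag instead of riding latest, something like this (the tag is just an example, use whichever version you’re actually on):

```
# pull a specific release instead of :latest
docker pull nextcloud:28.0.4
```

IIRC on unRAID that just means putting nextcloud:28.0.4 in the container’s Repository field, then bumping it one release at a time.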
This is probably what I’m doing wrong. I’m using linuxserver’s Docker image, which should be okay to auto-update, but it just continuously degrades over time with updates until it becomes non-functional. Random login failures, logs failing to load, file thumbnails disappearing, the goddamn Collabora office container that absolutely refuses to work for more than a week, etc.
I just nuke the NC docker and database and start from scratch every year or so.
You absolutely need to move from version to version and cannot just do a multi-version jump safely. You also need to validate the configs between versions, especially on major release updates, or you risk breakage. New features and optimizations land, and you may also need to change or update your reverse proxy configuration on update, or modify the DB table configuration (just pulling this from memory, as I’ve had to do it before). I don’t know that there’s automation for each one of those steps.
Because of that, I run Nextcloud in a VM and install it from the release archive. I wrote a shell script that handles downloading, moving the files, updating permissions, copying the old config forward, symlinking, and doing the upgrade. Then all I have to do is log in as administrator, check out the admin dashboard, and make sure there aren’t new things I have to address on the status page. It’s a pain, but my Nextcloud uses an external DB, Redis, and PHP caching, so it’s not an easy out-of-the-box setup. But it’s been solid for a long time once I adopted this script.
Would love to take a look at that bash script (or at least a template of it) if you wouldn’t mind
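Sure, here’s roughly the shape of it as a template (the paths, web user, and version are placeholders; my real script has more error checking around each step):

```
#!/usr/bin/env bash
set -euo pipefail

VER="28.0.4"        # target release: move one step at a time!
ROOT="/var/www"     # parent dir holding the versioned nextcloud trees
WEBUSER="www-data"  # whatever user your web server runs PHP as

# download and unpack the release next to the current one
cd /tmp
wget "https://download.nextcloud.com/server/releases/nextcloud-${VER}.tar.bz2"
tar -xjf "nextcloud-${VER}.tar.bz2"
mv nextcloud "${ROOT}/nextcloud-${VER}"

# copy the old config forward and fix ownership
cp "${ROOT}/nextcloud/config/config.php" "${ROOT}/nextcloud-${VER}/config/"
chown -R "${WEBUSER}:${WEBUSER}" "${ROOT}/nextcloud-${VER}"

# repoint the symlink the web server serves from, then run the upgrade
ln -sfn "${ROOT}/nextcloud-${VER}" "${ROOT}/nextcloud"
sudo -u "${WEBUSER}" php "${ROOT}/nextcloud/occ" upgrade
sudo -u "${WEBUSER}" php "${ROOT}/nextcloud/occ" maintenance:mode --off
```

After that I log in and go through the admin overview as described above.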
No. If I have to keep fixing it, it is not worth my time.
I installed ownCloud years ago, came to the same conclusion, and just got rid of it. I use Syncthing nowadays, though it’s not the same thing.
I’m absolutely at that point with Nextcloud. I kind of didn’t want to go the Syncthing route, but I’ll probably give it a shot anyway, since none of the NC alternatives seem any better.
I tried NC for a while; it would have taken me till the end of days to import all of my files.
I suspect I could keep it running by doing lockstep backups and updates. But it was just so incredibly slow.
I just want something that gives me remote access to my files, with metadata about them and a good search index.
Any guidance on this? I looked into Syncthing at one time to back up Android phones and got overwhelmed very quickly. I’d love to use it in a similar fashion to Nextcloud for syncing between various computers too.
Well, it works in a different way than Nextcloud. You don’t have a server; instead, you just make a share between your computers, and they are all peers.
It takes some getting used to the idea, but it’s actually much simpler than NextCloud.
I was very intimidated as well. I’ll try to simplify it, but as always, check the documentation ;)
This is the process I used to sync RetroArch saves between my Windows PC and Android phone (works well, would recommend, Pokemon is awesome). I’ve never done it on Linux, though I assume it’s not too different:
https://docs.syncthing.net/intro/getting-started.html
I downloaded the SyncTrayzor program so that it would run in the tray; again, I’m not sure what the equivalent is, or whether it’s even necessary, on Linux.
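From what I gather, on Linux you wouldn’t need a tray app at all; on a Debian/Ubuntu-type system it’s supposedly just this (untested by me, so treat it as a sketch):

```
# install from the distro repo
sudo apt install syncthing

# run it as your user and have it start automatically at login
systemctl --user enable --now syncthing.service

# then open the web UI at http://127.0.0.1:8384 to add devices and folders
```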
No shade to the writers, but the documentation isn’t super noob-friendly, as I found out. I’d recommend cutting out all the fluff and boiling it down to the bare essentials: download the program (whichever one seems right for your device; there’s an app for Android) and follow the process for syncing (I believe I used a video guide, but it’s not actually as complicated as it seems).
If you need specific help I’d be happy to answer questions, though I only understand a certain amount myself XD
Not using Nextcloud. I found it more difficult to deploy and maintain than ownCloud. Since then, I haven’t had any problems with ownCloud.
Take that as you want, but the vast majority of the complaints I hear about Nextcloud are from people running it through Docker.
Does that make it not a substantive complaint about nextcloud, if it can’t run well in docker?
I have a dozen apps all running perfectly happily in Docker; I don’t see why Nextcloud should get a pass on this.
Things should not care, or mostly even know, whether they’re being run in Docker.
Well, that is boldly assuming:

- that endlessly duplicating services across containers causes no overhead: you probably already have a SQL server, a Redis server, a PHP daemon, a web server, … but a Docker image doesn’t know, and indeed doesn’t care, about redundancy and wasted storage and memory;
- that the sum of those individual components works as well and as efficiently as a single (highly optimized) pooled instance: every service/database in its own container duplicates tight event loops, socket communications, JITs, caches, … instead of pooling them and optimizing globally for the whole server, wasting threads, causing CPU cache misses, missing optimization paths, and increasing CPU load in the process;
- that those images are configured according to your actual end users’ needs, and not to some packager’s conception of a “typical user”: do you do mailing? A/V calling? collaborative document editing? … Your container probably includes (and runs) those things, and more, whether you want them or not;
- that those images are properly tuned for your hardware, by somehow betting on the packager knowing in advance (and for every deployment) your usable memory, storage layout, available cores/threads, baseline load, and service prioritization.

And this is even before assuming that Docker’s abstractions are free (which they are not).
Most containers don’t package DB servers, precisely so you don’t have to run ten different database servers. You can have one Postgres container, or whatever. And if it’s a shitty container that DOES package the DB, you can always build your own image.
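As a sketch of the idea (container names, the network, and the passwords are made up; the env vars are the ones the official nextcloud and postgres images document):

```
# one Postgres server on a shared network (this creates the nextcloud
# db at first start; more apps just get their own databases added later)
docker network create mynet
docker run -d --name shared-postgres --network mynet \
  -e POSTGRES_DB=nextcloud \
  -e POSTGRES_USER=nextcloud \
  -e POSTGRES_PASSWORD=change-me \
  postgres:16

# ...and point Nextcloud at it instead of bundling its own DB
docker run -d --name nextcloud --network mynet -p 8080:80 \
  -e POSTGRES_HOST=shared-postgres \
  -e POSTGRES_DB=nextcloud \
  -e POSTGRES_USER=nextcloud \
  -e POSTGRES_PASSWORD=change-me \
  nextcloud
```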
> that those images are configured according to your actual end users’ needs, and not to some packager’s conception of a “typical user”: do you do mailing? A/V calling? collaborative document editing? … Your container probably includes (and runs) those things, and more, whether you want them or not

> that those images are properly tuned for your hardware, by somehow betting on the packager knowing in advance (and for every deployment) your usable memory, storage layout, available cores/threads, baseline load, and service prioritization
You can typically configure the software in a Docker container just as much as you could if you had installed it on your host OS… what are you on about? They’re not locked-up little boxes. You can edit the config files, environment variables, whatever you want.
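Taking the official nextcloud image as an example (paths and variables per its docs, as I remember them):

```
# configure through environment variables the image exposes...
docker run -d --name nextcloud \
  -e NEXTCLOUD_TRUSTED_DOMAINS=cloud.example.com \
  nextcloud

# ...or bind-mount the config dir so your edits live on the host
docker run -d --name nextcloud \
  -v /srv/nextcloud/config:/var/www/html/config \
  nextcloud

# ...or just shell in and poke around like on any other box
docker exec -it nextcloud bash
```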
and why would that be? More abstraction thrown in for the sake of sysadmin convenience doesn’t magically make things more efficient…
Nothing to do with efficiency; it’s more that containers come with all dependencies at exactly the right versions, tested together, in an environment configured by the container creator. It provides reproducibility. As long as you have the Docker daemon running fine on the host OS, you shouldn’t have any issues running the container. (You’ll still have to configure some things, of course.)
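And if you really care, you can take the reproducibility further: tags can be re-pushed upstream, but a digest pins the exact image you tested (the digest below is a placeholder):

```
# see the digest of the image you're currently running
docker images --digests nextcloud

# run by digest so an upstream re-tag can't change anything under you
docker run -d nextcloud@sha256:<digest-you-tested>
```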