And it failed spectacularly.

We only needed a simple form, but we wanted to be fancy, so we used “nextcloud forms”.

The Docker image automatically updated the install to Nextcloud 30, but the Forms app requires Nextcloud 29 or lower. No warning whatsoever. It’s an official app; couldn’t they have waited until it was ready for NC 30 before launching the new release? The newsletter boasts that “NC Hub 9 is the best thing since sliced bread”, yet I don’t see any difference in visuals or performance compared to NC Hub 2.

Conclusion: we made our business rely on Nextcloud Forms for a signup form, but the one feature we were actually using it for was disabled who knows how many weeks ago.

  • matzler@lemmy.ml · 6 hours ago

    Specify a version tag in docker compose and update Nextcloud deliberately through the web app; that way it doesn’t update automatically on a pull.
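
    For example, something like this in the compose file (service and volume names here are just placeholders):

    services:
      nextcloud:
        image: nextcloud:29    # pin the major version; it only changes when you edit this tag
        volumes:
          - nextcloud_data:/var/www/html

    volumes:
      nextcloud_data: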

    • Moonrise2473@feddit.itOP · 6 hours ago

      I have daily Borg backups kept for at least one year, but the problem is that the issue appeared at least two weeks ago and nobody noticed. It’s better to have nothing (the customer gets an error page when viewing a useless survey that nobody is watching) than to restore such an old backup (everyone loses 2-4 weeks of data).

  • ShortN0te@lemmy.ml · 15 hours ago

    The Docker image automatically updated the install to Nextcloud 30, but the Forms app requires Nextcloud 29 or lower.

    Lol. Do not blame others for your incompetence. If you have automatic updates enabled, then it is your fault when things break. Just pin the major version with a tag like nextcloud:29 or something. Upgrading major versions automatically in production is a terrible decision.
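
    A deliberate update then looks something like this (the service name is illustrative):

    # bump the tag in docker-compose.yml (e.g. nextcloud:29 -> nextcloud:30) when you're ready, then:
    docker compose pull nextcloud
    docker compose up -d nextcloud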

    • Scrubbles@poptalk.scrubbles.tech · 14 hours ago

      Docker images should never self-update - that’s an anti-pattern. They should be static code. The only time I would expect a docker image to “auto update” is if I were using the “latest” or “stable” tag and Compose/Kubernetes/I repull the image - but the image should never update itself.

      Yes, OP bit off more than they could chew. Nextcloud, however, is breaking the entire purpose of Docker images by having an auto-updater at all.

      • GBU_28@lemm.ee · 13 hours ago

        If you say

        Thing:latest
        

        and then redeploy your compose file or what not,

        well, you’re getting the latest!

      • ShortN0te@lemmy.ml · 14 hours ago

        What are you talking about? If you do not manually (or via something like Watchtower) pull the newest image, it will not update by itself.

        I have never seen an auto-update feature in Nextcloud itself; can you please link to it?

        • Scrubbles@poptalk.scrubbles.tech · 13 hours ago

          I don’t have the link here, but essentially yes, Nextcloud can update its own app code in its image because you have to mount the code to your own filesystem. This means that between docker images you can have a mismatch between the code that you have stored and the code that the image is expecting, which frequently causes problems for me. This is an antipattern. The code should be stored in the image, not as a volume mount. There should never be a mismatch of code in a docker image - that’s the whole point. The configuration could be out of date, sure, or if there’s a data file that’s needed, that’s expected. The actual running code though, that should never be on a mountable volume.
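
          Roughly what I mean, as a sketch of the usual compose setup (names are illustrative):

          services:
            app:
              image: nextcloud:30
              volumes:
                - nextcloud:/var/www/html    # the app code itself lives on this volume,
                                             # so it can drift from what the image ships

          volumes:
            nextcloud: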

          Next time you update the image you will probably be greeted with a “Nextcloud needs to update”. That should not exist. You already pulled the image; that should be everything you need to do. The caveat is extensions, kind of a grey area in my book, but I know it’s not a clean pattern with those either. (The best one I’ve seen lets you pin the extension version with environment variables or a config file, and then once again you are in control of when they update, and no running code is stored outside of the image.)

          • ShortN0te@lemmy.ml · 13 hours ago

            You can disable the web updater in the config, which is the default when deploying via docker. The only time I had a mismatch is when I migrated from a native Debian installation to a docker one and fucked up some permissions. And that was during tinkering while migrating it. It’s been solid for me ever since.
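
            For reference, it’s this switch in config.php if I remember right (double-check against config.sample.php):

            // config/config.php - hedged example
            'upgrade.disable-web' => true,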

            Again, there is no official Nextcloud auto-updater; OP chose to use an auto-updater, which bricked OP’s setup (a plugin was disabled).

            • Scrubbles@poptalk.scrubbles.tech · 13 hours ago

              Thanks, I’ll disable that. I’m extra salty right now because I had to roll back a bad version and rebuild some of my config over the last week. I got into version hell because the code on the volume (which is why I’m pissed at it) said that I couldn’t run the image that I had set. So I pinned an earlier version, but then there were extensions that were pinned to a later one, said that I couldn’t roll back, and didn’t start. I ended up redoing the whole drive manually, forcing specific versions in version.php and config.php to finally make it work (why is it in two places?). Then after all that I had to run the upgrade command. Extremely annoying, and a waste of time for me. Other docker containers, if I need to pin a version? I just… pin the version. Nextcloud is the only one I’ve seen that stores code on my volume and then pins specific versions to it.

    • Moonrise2473@feddit.itOP · 15 hours ago

      They’re releasing a new version every two months or so and dropping old ones from support rapidly; pinning it with a tag means that in 12 months the install would be exploitable.

      Now, I did go directly to production because this is low-priority stuff, but it would have happened even with a testing stage. I would never have noticed that the Forms app was disabled; the system disabled it without any notification.

      You would expect that an official app supports the latest release, no?

      This wasn’t an app released by a nobody in their free time; this is a main feature heavily advertised on their blog. Look for yourself:

      https://nextcloud.com/blog/nextcloud-forms-to-keep-your-surveys-private/

      It’s not unreasonable to get pissed when 6 months after that blog post it doesn’t support the latest release anymore.

        • Moonrise2473@feddit.itOP · 6 hours ago

          Exactly, they have a release schedule, so why isn’t their own plugin, which they’re heavily promoting as a feature, following it? If for some reason the Forms app isn’t ready for that date, why not postpone the launch instead of having it broken for who knows how many months?

          It’s not a plugin made by someone else in their free time. They knew that updating to NC 30 would disable a feature that was marketed just 6 months ago, so at least have the decency to write it in the release notes. I subscribe to the newsletter and the RSS feed for what, just to enjoy the marketing buzzwords?

          It’s as if Microsoft released an operating system with a buggy, broken taskbar because of a rushed, self-imposed deadline and fixed it a year later.

          • Maalus@lemmy.world · 2 hours ago

            Okay, let’s be angry at the company and frown a lot at what happened. Grrr, bad company, evil.

            And now think of what you’d rather have - a working system, or a reason to be angry? If you have something that integrates with something else, lock it down at a specific version so you control the upgrade and know those versions work together 100% of the time. “Latest” is just asking for trouble - be it in a docker image, in dependencies or elsewhere. It’s absolutely not a “best practice”; if anything, it’s a code smell or an outright bug. You could’ve had a slightly outdated version, which won’t be “exploitable” - nobody would have enough time to exploit anything in that window, especially with smaller companies and obscure exploits.

            Instead of putting out the fire, you could now be looking into the upgrade, seeing on UAT or Test or whatever that Forms isn’t supported, and chilling until it is supported, or complaining that it isn’t.

            Upgrades breaking shit is like programming/devops 101, and a huge reason for technical debt in very old projects. Leaving all that to chance is just irresponsible.

            • Moonrise2473@feddit.itOP · 1 hour ago

              The upgrade command is sent manually; it’s not automated and it’s not unattended. It would have made ZERO difference if I had tagged 29 and then tagged 30 in test. The upgrade would not have failed, it would have given ZERO warnings, I would have seen that everything worked as expected (because who tests the useless survey that is filled in once a semester?), and I would have pushed the update to production.

              • Maalus@lemmy.world · 40 minutes ago

                Who tests the useless survey? Everyone with regression tests. Like dude, everything you talk about has been written “in blood” over years of hosting production systems. If the useless survey is needed, then write a test for it, or a test case to manually try it. Don’t just upgrade, see that the app is up, and push to prod - that’s not testing, that’s asking for trouble.
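
                Even a dumb curl in a script counts; something like this (the URL is a stand-in for the form’s real public link):

                # hypothetical smoke test - fails loudly if the signup form stops answering
                curl -fsS -o /dev/null https://cloud.example.com/apps/forms/s/your-form-token \
                  || echo "signup form check FAILED"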

  • JASN_DE@lemmy.world · 18 hours ago

    Wait, you update production systems without running a staging environment? Or even checking the update notes and your installed apps? Also no backups? What kind of business are you running over there?

    • Scrubbles@poptalk.scrubbles.tech · 18 hours ago

      Oh, Nextcloud docker is a joke. They follow no standards or best practices when it comes to docker. They keep the entire app directory mounted as a volume, which means it can upgrade itself without you “needing” to upgrade the docker image. They have volumes within volumes they need to mount. Their configs can (and do) override environment variables. Most actions that need to be taken require running an occ command, which can only be done by exec’ing into the container.
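
      For anyone who hasn’t had the pleasure, the occ dance looks something like this (the container name is whatever yours is called):

      docker exec -u www-data nextcloud php occ upgrade
      docker exec -u www-data nextcloud php occ app:list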

      Nextcloud docker is honestly just such a joke. They should have rethought their application from a Docker perspective, and they didn’t. God, just number one: Docker images should never update. It’s a freaking pinned version for a reason. If I want to update, it should be as simple as upping the version tag and having it do any upgrades in place when I do that.

      I honestly steer people away from Nextcloud now because of how mismanaged their images are.

      • Max-P@lemmy.max-p.me · 15 hours ago

        Yep, and I’d guess there’s probably a huge component of “it must be as easy as possible” because the primary target is selfhosters that don’t really even want to learn how to set up Docker containers properly.

        The AIO Docker image is an abomination. The other ones are slightly more sane but they still fundamentally mix code and data in the same folder so it’s not trivial to just replace the app.

        In Docker, the auto updater should be completely neutered, it’s the wrong way to update the app.

        The packages in the Arch repo are legit saner than the Docker version.

        • Scrubbles@poptalk.scrubbles.tech · 14 hours ago

          I had to learn how to mount subpaths for their terrible container, and god, just the updater is mind-boggling. And I have to store their code in a volume, because of course I have to; why would code and configuration ever need to be… configurable? I actually tried to put their config.php into a ConfigMap just to see, and of course PHP doesn’t allow that - not that I blame PHP for it - but ffs it’s been years, it’s time to allow config to also come from a YAML file or something.
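
          The subpath dance looks roughly like this, as a hedged Kubernetes sketch (names are placeholders):

          volumeMounts:
            - name: nextcloud-data
              mountPath: /var/www/html/config
              subPath: config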

        • Scrubbles@poptalk.scrubbles.tech · 16 hours ago

          I do it in docker at home, for myself, in an environment I am okay with accidentally destroying - and even then I have nightly backups of the volumes.

          In a professional system, as mentioned in my other comment, I would simply do it in a VM with the disk also scheduled for nightly backups. Nextcloud just hardcoded too many things on the assumption that the underlying system is mutable. Unfortunately, that’s just the easiest way to handle it.

          However, also as mentioned, if I were in a professional environment, I’d have to really look at the cost of all that infrastructure and my time to run it - and decide whether I really thought I could run it myself with all of that overhead, and whether it would still make sense compared to just using Google Docs or something. Remember, it’d be my ass on the line, as OP is learning.

      • Moonrise2473@feddit.itOP · 17 hours ago

        I wiped a whole drive (luckily it only held a redundant backup) with the docker image, as the behavior was (or still is, I don’t know if it was fixed) to rm -rf . and replace it with fresh files if occ isn’t found. In the docker compose I accidentally typed the wrong volume, /mnt/disk2 instead of /mnt/disk3, and it erased it.

        • Scrubbles@poptalk.scrubbles.tech · 17 hours ago

          Oh yeah, if you’re in a professional environment, I’m sorry but that’s just not great. The only way I’d consider running Nextcloud professionally would be on a VM of its own with nightly disk backups, with blob storage as the backing - and even then, with the cloud costs, how close are you really to just paying for an enterprise license from Google or Microsoft? Plus you’d skip the headache of having to worry about it yourself.

    • yggstyle@lemmy.world · 11 hours ago

      To be fair, a certain security company was in the global news for exactly that same “send it” behavior. Why waste precious resources on multiple instances? Investors hate waste. 😅

    • Moonrise2473@feddit.itOP · 17 hours ago

      Yes, no staging, because it’s something used by at most 2 concurrent users; we were OK with 95% reliability (we discovered it was disabled after at least two weeks, lol).

      Otherwise we would have just signed up for one of the many cloud forms sites at $100/year.

      Backups are daily, but it’s unthinkable to revert something like Nextcloud to a months-old one.

      Subscribed to both the newsletter and the RSS feed to know about issues (the command to update the docker images isn’t automated; it’s issued manually). The maintainer of the Forms app is Nextcloud itself, so any incompatibility should have been written in bold red characters in the blog posts and newsletter.

    • ilmagico@lemmy.world · 18 hours ago

      If I understand correctly, nextcloud automatically updated … which I didn’t think it would, normally. Maybe it’s a “feature” of the AIO docker image?

  • Zak@lemmy.world · 18 hours ago

    There was a recent related discussion on Hacker News and the top comment discusses why this sort of solution is not likely to be the best fit for smaller organizations. In short, doing it well requires time and effort from someone technically sophisticated, who must do more than the bare minimum for good results, as you just learned.

    Even then, it’s likely to be less reliable than solutions hosted by big corporations and when there’s a problem, it’s your problem. I don’t want to discourage you, but understand what you’re committing to and make sure you have adequate buy-in in your organization.

    • interurbain1er@sh.itjust.works · 2 hours ago

      That reminds me of work. I’m old; young me went through the mistakes and the pain of wanting to control and self-host everything.

      Now I manage a team of young idealists who have not yet been burned sufficiently hard by reality and I feel like I spend half of my time denying them permission to add new self-hosted services to our stack.

      Just last month a young padawan was pissed at the spend on an external auth service and had been pushing hard for a self-hosted OSS solution, which he was convinced he could handle by himself (most likely true, from a purely technical standpoint).

      Since he wouldn’t let it go, I “punished” him by having him spend one day in Excel and PowerPoint preparing a cost-benefit analysis to present to the architecture review board, including server costs, backups, redundancy, security, monitoring, pen-testing, auditing, his time, and all the bells and whistles we need to be compliant with all the ISO-x standards we’re subject to (we’re in a banking-related field).

      Our estimated internal cost ended up being about 6x that of the SaaS solution, and it still wasn’t as reliable.

      Most people don’t understand the amount of effort it takes to run a secure and reliable system, and if I had a dollar for every time I heard it’s as simple as “docker run”, I could retire early.

  • ilmagico@lemmy.world · 18 hours ago

    Never upgrade to the latest and greatest of … anything really, especially in production. Let others test it first, or as suggested already, have a staging environment where you test the upgrade first. I guess you can still downgrade nextcloud though, especially if you have a backup.

    Are you using the AIO image? I don’t know how well that works, but yeah, I absolutely hate automatic updates like that. I tried it once and decided to use the plain “official but not supported” docker image instead, where I manage things myself. Never had an issue, and I can control which version I’m running, I can back up to wherever I want using whichever system I want, etc.

    • Akatsuki Levi@lemmy.world · 9 hours ago

      AIO has an updater, but it is manual by default. You need to enable automatic updates yourself, which… is done through a bash script you have to add to the system crontab yourself.

      And not only that, the instructions do say things could break, and they even suggest setting up backups for that case.

  • Saiwal@hub.utsukta.org · 18 hours ago

    You can still choose to install the old version in NC 30 and it will do so. I upgraded to NC 30 and my Forms app continues to be functional. You can still give it a try.

  • Lucidlethargy@sh.itjust.works · 18 hours ago

    Docker is kind of a giant mess in my experience. The trick to it is creating backup plans to recover your data when it fails. As such, I don’t really recommend it to anyone at all.

    • ShortN0te@lemmy.ml · 13 hours ago

      Docker is kind of a giant mess in my experience. The trick to it is creating backup plans to recover your data when it fails.

      That’s the trick for any production service. Especially when you do an update.

    • anyhow2503@lemmy.world · 14 hours ago

      I wouldn’t recommend Docker for a production environment either, but there are plenty of container-based solutions that use OCI-compatible images just fine, and they are very widely used in production. Having said that, plenty of people run docker images in a homelab setting and they work fine. I don’t like running rootful containers under a system daemon, but calling it a giant mess doesn’t seem fair in my experience.

  • Meldrik@lemmy.wtf · 18 hours ago

    No offence, but is Docker really the best way of running NC in a professional environment? Also, if you don’t want Docker to upgrade to the latest image, don’t use the “latest” tag in your configuration.

    • schizo@forum.uncomfortable.business · 18 hours ago

      Docker is probably the simplest way to get a working deployment, since there’s a lot of moving pieces in a Nextcloud install.

      Though, it’s not going to automatically update itself unless you’ve made a poor choice for a production environment configuration, which sounds like what happened here.

      (Even using a latest tag isn’t really a problem until/unless you re-pull the image to do the upgrade. And/or have configured something to automatically update your shit, but again, don’t do that in production.)

      Nextcloud is also annoying in that updating the base won’t pull all the apps to a current version, so you have to know what’s going to break before you update the base so you can then update the apps as needed. Which, again, can’t just be left up to automatic updates.
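
      Something along these lines updates the apps after the base bump (the container name is illustrative; check occ’s help for your version):

      docker exec -u www-data nextcloud php occ app:update --all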

      • timbuck2themoon@sh.itjust.works · 15 hours ago

        Exactly. I don’t know if the AIO image was used and how that all works (I stay away from that and the snap, which is just an abomination), but no one should try to self-host anything for prod unless they know exactly how it works. That, and have a staging env. If you’re not up to the task, then just pay for some commercial hosting (even if it’s just Nextcloud hosted elsewhere).

        I’ve run the nextcloud image (just docker.io/nextcloud IIRC) pinned for years with k8s and it’s durable and fine. It stays put and I just take the time to update my testing instance, make sure it all works with some cheap smoke tests, then upgrade prod.