Exit the cloud


  • ♿ (Parody)

    A bunch of stuff that seems obvious to me but obviously not to a lot of people out there:


  • BINNED

    @boomzilla said in Exit the cloud:

    A bunch of stuff that seems obvious to me

    I'll just reference my completely uninformed shit-posting:

    @topspin said in The Official Status Thread:

    @Arantor said in The Official Status Thread:

    migrated his entire business off cloud, off serverless and Tech Twitter lost its goddamn mind.

    I'm so far ahead, my stuff has never been on the cloud to begin with. 🎺

    Never mind that DHH wasn’t evangelising this for everyone, but simply saying “for our needs, at our scale, I can repatriate the workload to our own hardware for cheaper, including having dedicated folks to manage it”. Even when this point was brought up, he was still derided with “well, you should still build the app to be flexible just in case going to the cloud is a useful idea”

    You mean "somebody else's computer" isn't cheaper than running your own computer, once you reach a scale that you actually need the whole computer and not just a fraction of it? :surprised-pikachu:

    Also paging @Arantor for maybe less-uninformed shit-posting.


  • ♿ (Parody)

    @topspin said in Exit the cloud:

    The Official Status Thread:




  • @topspin yup, this is the FAQ redux of the story that's been kicking around Tech Twitter.

    It's almost like we have a guy here who is running a business, looking at his costs and thinking 'maybe if I didn't outsource that part of it, I could do it for cheaper with my own hardware and my own people' without preaching that this is how everyone has to do it.

    In 2023 this is apparently radical thinking.


  • Notification Spam Recipient


    :fu:


  • Discourse touched me in a no-no place



  • @loopback0 said in Exit the cloud:

    @DogsB :yell-at-cloud:

    no, this is :yell-at-not-cloud:


  • Discourse touched me in a no-no place

    @Arantor said in Exit the cloud:

    @loopback0 said in Exit the cloud:

    @DogsB :yell-at-cloud:

    no, this is :yell-at-not-cloud:

    Same difference. Yelling at something in someone else's datacentre.



  • @loopback0 yes but the whole point of the article is that it's not the cloud :rofl:


  • Discourse touched me in a no-no place

    @Arantor I know, I read it but :faxbarrierjoker:


  • ♿ (Parody)

    @Arantor the important thing is to yell.



  • @topspin said in Exit the cloud:

    You mean "somebody else's computer" isn't cheaper than running your own computer, once you reach a scale that you actually need the whole computer and not just a fraction of it?

    For anything above the free plans I don't think the cloud is cost effective. I use a minipc for my silly experiments


  • Notification Spam Recipient

    @sockpuppet7 said in Exit the cloud:

    @topspin said in Exit the cloud:

    You mean "somebody else's computer" isn't cheaper than running your own computer, once you reach a scale that you actually need the whole computer and not just a fraction of it?

    For anything above the free plans I don't think the cloud is cost effective. I use a ~~minipc~~ VM on my server for my silly experiments

    🔧 for me.

    I decided running one or two somewhat-rather-powerful servers was more economical than a dozen or so hand-me-down boxes.

    I'd like to say I saved money, but I still have literal stacks of computers I can't seem to sell to anyone...

    By the way, anyone want to buy a Classic or almost-Vintage computer, working*?



    @Tsaukpaetra if you have a big server and lots of stuff that's idle a lot of the time, I think something that supports scaling to zero could be interesting

    for me it wasn't worth it because the scale-to-zero things I found (Knative was one of them) had a baseline memory consumption higher than my apps themselves

    I was thinking of writing something simpler that used a reverse proxy and started/stopped a container automatically, but :kneeling_warthog:
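    For what it's worth, the core of that idea fits on one page. This is only a sketch of the start-on-demand/stop-when-idle logic — the container name, the timeout, and the `docker start`/`stop` wiring are all made up for illustration, and nothing touches Docker unless you actually launch it:

```python
# Sketch of a scale-to-zero front door: the first request starts the
# container, and a periodic check stops it again after an idle period.
# CONTAINER and the timeout are hypothetical, not a real deployment.
import subprocess
import time


class IdleTracker:
    """Remembers the last request time and decides when to scale to zero."""

    def __init__(self, idle_timeout):
        self.idle_timeout = idle_timeout
        self.last_seen = None  # None means the backend is already stopped

    def touch(self, now):
        # Call on every proxied request.
        self.last_seen = now

    def should_stop(self, now):
        # True once the backend has been idle for idle_timeout seconds.
        return (self.last_seen is not None
                and now - self.last_seen >= self.idle_timeout)

    def mark_stopped(self):
        self.last_seen = None


def docker(*args):
    # e.g. docker("start", "myapp") / docker("stop", "myapp")
    subprocess.run(["docker", *args], check=True)


if __name__ == "__main__":
    # Wire the tracker into any reverse-proxy loop: on each request run
    # docker("start", CONTAINER) (a no-op if already running) and
    # tracker.touch(time.time()); in a timer, stop when should_stop().
    CONTAINER = "myapp"  # hypothetical container name
    tracker = IdleTracker(idle_timeout=300)
    docker("start", CONTAINER)
    tracker.touch(time.time())
```

    The actual proxying part is whatever you already have (nginx, Caddy, a few lines of `http.server`); the only new moving part is this bookkeeping.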


  • ♿ (Parody)

    @sockpuppet7 said in Exit the cloud:

    @topspin said in Exit the cloud:

    You mean "somebody else's computer" isn't cheaper than running your own computer, once you reach a scale that you actually need the whole computer and not just a fraction of it?

    For anything above the free plans I don't think the cloud is cost effective. I use a minipc for my silly experiments

    A point he makes is that the cloud can be useful if you have highly variable demand, but probably only if you're starting from something very small to begin with.



  • @boomzilla and if you're in the SaaS startup arena that's absolutely a place you're likely to be where this does legitimately make sense.

    The problem is that the Twitter Tech Bros don't remember that actually DHH is running a multi-million dollar business where the cloud savings in year 1 are more than their annual revenue.


  • ♿ (Parody)

    @Arantor math is hard. Let's go serverless!



  • @boomzilla no thanks


  • Considered Harmful

    There's just one fly in the ointment:

    Still in February, we announced the new tool I had bootstrapped in a few weeks to take us out of the cloud

    So nice to have someone like that to single-handedly whip up a new Kubernetes over the span of one quiet month 🏆


  • Notification Spam Recipient

    @boomzilla said in Exit the cloud:

    @Arantor math is hard. Let's go serverless!

    A lot of the batch stuff we do could probably be done more reliably and cheaply with Lambda. Probably more to do with admins killing servers that look idle for long periods but forgetting the "consolidate services" bit of the equation.



  • @Applied-Mediocrity said in Exit the cloud:

    There's just one fly in the ointment:

    Still in February, we announced the new tool I had bootstrapped in a few weeks to take us out of the cloud

    So nice to have someone like that to single-handedly whip up a new Kubernetes over the span of one quiet month 🏆

    With Kubernetes, you can handle Docker containers.
    But when you also need to deploy (and clean up) storage, networks, functions (lambdas), secrets, ..., the route via ARM Templates (that's Azure's; I don't know Amazon's or Google's equivalent) is more complicated.



  • This begs the question:
    Should I feel glad that somebody still doesn't know what Kubernetes actually does?

    quoth reddit: "..and at this point I'm afraid to ask.."

    Edit: I can't even run a Docker-container more complex than the "Hello, World" one, a.k.a. "don't want to, 'cos why should I?"
    Our ..Enterprise..-Vendor supplies two or three Dockerfiles (sp?), but after questioning them why these don't work, and after insisting they show me a live demo of how "it must be done / installed" ('cos obviously I'm too dumb to click next->next->next->), and them failing gloriously with the same errors in their own environment -
    ..well: [there's your problem.jpg]

    MOTD: it's WOMMZFG all the way! 🥂



  • In theory K8s is “on the one hand I have a bunch of compute resources, on the other I have a bunch of containers to run on them, and my job is to start them, restart them if they fail and generally manage this mess”.

    But K8s is so fucked up you end up needing tools to configure K8s such as Helm. Manage the manager.
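    In fairness to the "in theory" half: the "list of things to run" really is just a declarative manifest. A minimal Deployment, built here as a plain Python dict (`kubectl apply -f -` accepts JSON as well as YAML) — the app name and image tag are placeholders, not a real service:

```python
# Minimal Kubernetes Deployment manifest as a plain dict.
# "myapp" and the image tag are placeholders for illustration.
import json


def deployment(name, image, replicas=1):
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            # The selector ties the Deployment to the pods it manages.
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [{"name": name, "image": image}],
                },
            },
        },
    }


if __name__ == "__main__":
    # Pipe this into `kubectl apply -f -` and Kubernetes keeps two
    # copies of the container running, restarting them on failure.
    print(json.dumps(deployment("myapp", "myapp:1.0", replicas=2)))
```

    The pain starts when you have twenty of these that differ per environment, which is where the Helm-shaped tooling comes in.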



  • @Arantor It's managers all the way down! 🐢


  • ♿ (Parody)

    @iKnowItsLame said in Exit the cloud:

    This begs the question:

    I'm sure it doesn't.

    Should I feel glad that somebody still doesn't know what Kubernetes actually does?

    It, like...runs containers or something?



  • @boomzilla
    you mean a conga-line of managers managing something (:yell-at-cloud: IN THE CLOUD!)

    I'm quite sure that my current job-profile doesn't include/require this.
    .. and I'm -being an old fart- quite happy with that.

    Make it so!


  • Discourse touched me in a no-no place

    @iKnowItsLame said in Exit the cloud:

    Should I feel glad that somebody still doesn't know what Kubernetes actually does?

    I know enough about it to know that I don't want to know any more.



  • I have built and deployed shit on Kubernetes.

    It put me off the whole affair and reinforced #monolithforever



  • @iKnowItsLame said in Exit the cloud:

    This begs the question:
    Should I feel glad that somebody still doesn't know what Kubernetes actually does?

    Fail a lot!

    Edit: I can't even run a Docker-container more complex than the "Hello, World" one, a.k.a. "don't want to, 'cos why should I?"
    Our ..Enterprise..-Vendor supplies two or three Dockerfiles (sp?), but after questioning them why these don't work, and after insisting they show me a live demo of how "it must be done / installed" ('cos obviously I'm too dumb to click next->next->next->), and them failing gloriously with the same errors in their own environment -
    ..well: [there's your problem.jpg]

    MOTD: it's WOMMZFG all the way! 🥂

    All these tools only solve a quarter of the problem and replace the remaining three quarters with a different version of it that you have to learn again.

    E.g., you build a docker container with a Dockerfile, but if you want to build several related containers from a common base (to save space, since shared layers are stored only once on disk), you have to wrap it in your own build scripts.

    And then you need to remember the bambillion of options needed to mount the appropriate volumes and whatnot when starting the containers, because the containers don't carry much useful metadata.

    And then you want to deploy them into kubernetes, which you do by specifying a bunch of configuration objects, but some of the configuration is deployment-specific, so you need some templating tool to generate them.

    And so on and so forth and you end up with hyperinstrumentemia.


    @Arantor said in Exit the cloud:

    I have built and deployed shit on Kubernetes.

    I think it does make things … a bit … simpler. Or at least a bit more uniform.

    But you soon find out that the automagic does not actually work. If you don't tune the resource requests and limits just right, it will happily keep restarting the pod in place while there is another node with more resources, for example.

    It put me off the whole affair and reinforced #monolithforever

    You certainly shouldn't be overdoing the splitting into microservices, but it's a bit orthogonal.

    Splitting your own application into dozens of components provides no practical benefit, it just wastes resources. But having a standard-ish reverse proxy, certificate management, logging and such still makes it easier when I want to spawn a dozen test instances, which I'm totally taking advantage of on both projects. Otherwise I'd either have to spawn lots of tiny VMs or fiddle with port assignments.
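    The "wrap docker build with your own scripts" point above tends to look something like this in practice. A sketch only: the image names, Dockerfile paths, and dependency map are invented, and `plan_builds()` just computes the commands (parents before children, so derived images reuse the base layers) without running anything:

```python
# Sketch: build a family of related images, base first, so derived
# images share the base layers on disk. IMAGES is hypothetical.
import subprocess

# image -> (Dockerfile directory, parent image or None)
IMAGES = {
    "base":   ("docker/base",   None),
    "web":    ("docker/web",    "base"),
    "worker": ("docker/worker", "base"),
}


def plan_builds(images):
    """Return `docker build` commands, parents before children."""
    ordered, seen = [], set()

    def visit(name):
        if name in seen:
            return
        parent = images[name][1]
        if parent is not None:
            visit(parent)          # build the base before its children
        seen.add(name)
        ordered.append(name)

    for name in images:
        visit(name)
    return [["docker", "build", "-t", name, images[name][0]]
            for name in ordered]


if __name__ == "__main__":
    for cmd in plan_builds(IMAGES):
        subprocess.run(cmd, check=True)
```

    Which is exactly the problem: this ordering logic is trivial, yet every shop ends up writing its own copy because the stock tooling doesn't carry it.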



  • @Arantor said in Exit the cloud:

    But K8s is so fucked up you end up needing tools to configure K8s such as Helm. Manage the manager.

    This is my favorite part of modern software development. When something goes wrong you must first find out which gun it was you actually used to shoot yourself in the foot.



  • @Arantor said in Exit the cloud:

    In theory K8s is “on the one hand I have a bunch of compute resources, on the other I have a bunch of containers to run on them, and my job is to start them, restart them if they fail and generally manage this mess”.

    But K8s is so fucked up you end up needing tools to configure K8s such as Helm. Manage the manager.

    I don't think Kubernetes is that fucked. I think it is a relatively sensibly defined layer. You give it a list of things to run and it tries to keep them running. The controllers are not always stable and give really crappy diagnostics, but it is not illogical.

    Helm, on the other hand, is thoroughly fucked. It generates YAML—with 2D syntax—using text templates, then parses the YAML and re-serializes it as JSON for feeding to the K8s API. Which, besides being really fragile, means that when an error is detected in the later steps, they don't have the context to actually tell you what. And it's stringly typed, so cross-references are not checked, you will only find out you have a typo in one when you see it isn't starting.

    As I've learned Terraform, I concluded that the better tools for deploying things into Kubernetes (which, unlike Helm, also support other platforms) are Pulumi and Terraform, or, if you don't like the fact that HashiCorp changed Terraform's license starting with 1.6, OpenTofu.

    I've started to think Pulumi is the better of the two, since it's scripted with a normal programming language rather than a custom configuration language that doesn't support functions, but I haven't gotten to try it yet.
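    The "text templates vs. structured objects" complaint is easy to demonstrate. Using stdlib `json` as a stand-in for Helm's YAML round-trip (the names here are made up): string templating happily produces garbage that only blows up at a later stage, while building the object directly either works or fails right where the mistake is:

```python
# Why text-templating structured config is fragile: the template
# doesn't know it's producing JSON/YAML, so a value containing a
# quote silently corrupts the document. Serializing a real object
# cannot produce malformed output.
import json

TEMPLATE = '{"metadata": {"name": "%s"}}'   # stringly-typed, Helm-style


def render_by_template(name):
    return TEMPLATE % name                  # no escaping, no validation


def render_structured(name):
    return json.dumps({"metadata": {"name": name}})


if __name__ == "__main__":
    # A perfectly ordinary value breaks the template version:
    bad = 'app"prod'
    try:
        json.loads(render_by_template(bad))
    except json.JSONDecodeError:
        print("template output is not even parseable")
    json.loads(render_structured(bad))      # parses fine
```

    And because the error surfaces only when the downstream parser chokes, you get a line number in the generated blob, not in your template — which is most of the Helm debugging experience in one paragraph.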



  • @homoBalkanus said in Exit the cloud:

    This is my favorite part of modern software development. When something goes wrong you must first find out which gun it was you actually used to shoot yourself in the foot

    Software development has involved TLAs, with way too many places featuring footguns, for ages.



  • @Bulb everyone I’ve spoken to, including the people teaching K8s workshops, and people running large applications in the cloud, utterly insist that Helm is necessary to “effectively” manage K8s. “Best practices” or some such.

    I noped the hell out of it a while ago. Haven’t touched it myself in 3 years, have no intention of ever touching it again myself if I can help it.

    Even supposedly veteran practitioners have trouble - previous workplace was getting an “expert” they’d worked with, at £500/day to stand up three small PHP apps on a K8s cluster. Despite me doing most of the legwork around the container specs, environment vars handling etc, it still took him 3 weeks.

    Pretty sure it wouldn’t have taken me 3 weeks, and I definitely wouldn’t have allocated 0.003 vCPUs to each of the instances so they were dog slow, and then blamed the developer for it (despite the developer in question being given instructions from the lead as to what frameworks etc. to use)



  • @Arantor said in Exit the cloud:

    @Bulb everyone I’ve spoken to, including the people teaching K8s workshops, and people running large applications in the cloud, utterly insist that Helm is necessary to “effectively” manage K8s. “Best practices” or some such.

    You definitely need something. It might be something you have written yourselves (like shell scripts running kubectl), or it might be someone (running kubectl) 😱, but "just use k8s" is not even a thing.

    Unfortunately, Helm is the only widely accepted tool, despite being 💩. We can only hope that the "Godot" phase will eventually end.



  • @Arantor said in Exit the cloud:

    @Bulb everyone I’ve spoken to, including the people teaching K8s workshops, and people running large applications in the cloud, utterly insist that Helm is necessary to “effectively” manage K8s. “Best practices” or some such.

    Helm is best current practice for publishing Kubernetes manifests for non-trivial components, which means you can't avoid dealing with it altogether. B.U.T:

    Helm does not really fit the declarative, “infrastructure-as-code” approach of Kubernetes and the other cloud tools. Helm works imperatively, and needs S.I.X (:eek:) parameters to run: repository, chart, version, release name, values, and the implicit kubernetes context to deploy to. And it does not (or until recently did not) even remember all the parameters in case you want to re-run it, e.g. for an upgrade.

    So what you really want is some tool that can call helm internally but is declarative, so you can follow the “IaC”, a.k.a. “GitOps”, approach: you edit the description of what you want running, and that is the sole input to some kind of CD job that makes it happen.

    You can use Pulumi or Terraform, or if you only care about Kubernetes, perhaps ArgoCD. And if you use these tools, you can probably write your own definitions in them directly and only use Helm for the third-party stuff, which is published as Helm charts.

    I noped the hell out of it a while ago. Haven’t touched it myself in 3 years, have no intention of ever touching it again myself if I can help it.

    I've been working with it regularly over the last 5 years. It's crap, but so are the alternatives, and the consistency does help some.

    Even supposedly veteran practitioners have trouble - previous workplace was getting an “expert” they’d worked with, at £500/day to stand up three small PHP apps on a K8s cluster. Despite me doing most of the legwork around the container specs, environment vars handling etc, it still took him 3 weeks.

    That is, unfortunately, normal. Everything to do with build and deployment servers is an insane time sink, mostly because the tools take their time: you make a three-character fix and wait another 10 minutes for the build server to start the job and check out the repo again, or for terraform to refresh the state, and then it spits out the next error. In between you hunt through the crappy documentation for the right options. And then you make another stupid typo, rinse and repeat.

    Pretty sure it wouldn’t have taken me 3 weeks and I definitely wouldn’t have allocated 0.003 vCPUs to each of the instances so they were dog slow, and then blamed the developer for it (despite the developer in question being given instructions from the lead as to what frameworks etc. to use)

    I always think this shouldn't take more than 3 days, and then it takes 3 weeks. But yeah, allocating 0.003 vCPUs to each instance is a Dumb™ mistake.

    And so is starting 3 instances with 0.003 vCPU each instead of 1 instance with 0.009 vCPU, because the latter will eat less memory. And memory, not CPU, is usually the limiting factor on servers. Kubernetes is already taking care of restarting the service if it poops itself and moving it to another node if the node dies, so unless you care about never dropping even a single request (and if you are not handling millions of them a second, you almost certainly shouldn't) you don't need to keep multiple instances running.
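    Those six parameters are exactly what the declarative wrappers pin down. A toy version of the "edit the description, let a CD job make it happen" loop — one record per release, turned into an idempotent `helm upgrade --install` invocation. The release data below is invented for illustration (real tools like Terraform's helm provider also track state, which this doesn't):

```python
# Toy "GitOps" step: one declarative record per release, turned into
# a `helm upgrade --install` command line. The DESIRED entry is a
# made-up example, not a recommendation.
import subprocess

DESIRED = [
    {
        "release": "ingress",
        "repo": "https://kubernetes.github.io/ingress-nginx",
        "chart": "ingress-nginx",
        "version": "4.10.0",
        "values": "values/ingress.yaml",
        "context": "prod",
    },
]


def helm_command(rel):
    """All six parameters from above, captured in one record."""
    return [
        "helm", "upgrade", "--install", rel["release"], rel["chart"],
        "--repo", rel["repo"],
        "--version", rel["version"],
        "--values", rel["values"],
        "--kube-context", rel["context"],
    ]


if __name__ == "__main__":
    # `upgrade --install` installs on first run and upgrades after,
    # so re-running the whole list converges on the desired state.
    for rel in DESIRED:
        subprocess.run(helm_command(rel), check=True)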


  • Banned

    @Mason_Wheeler said in Exit the cloud:

    @Arantor It's managers all the way down! 🐢

    That's what I call enterprise!


Log in to reply