[Docker] Again. Deploying docker containers?


  • Discourse touched me in a no-no place

    I'm on another steep learning curve and need pointers on what I should be searching Google for, since I don't know enough of the shibboleths to get the answers I need in this problem domain.


    In-house custom Linux.

    Deployed on (effectively) disparate computers, but they can communicate with a central system when required. We have total control over what runs on those computers.

    It has been declared that Docker shall be the container system of choice.

    This bit works (well enough...) on a single unit - I can manually create images, push them to a remote registry (quay.io, if it matters), pull them back down, and persist containers over reboots.

    All good stuff.

    Now I need to find a method whereby I can automatically deploy the same setup to multiple units and have the containers start up correctly. And update the containers where necessary.

    Unfortunately, the documentation/tutorials I've looked at so far (the Kubernetes basic tutorial, though I suspect Swarm will be the same) seem to be predicated on the problem being the deployment of (pardon the terminology) one service, using one or more computers to provide that single service.

    What I want to do is to deploy the same service, duplicated, to multiple single computers.

    I don't have a concrete example of how it will be used yet (another reason I'm sorta flying blind with this), but one probable use will be to deploy (and update) a portal landing page to each unit.

    Search terms? Examples? Pointers?



  • If you want each system to run the container with exactly the same arguments, doing it with Swarm is what you want.

    Also, you may want to run a local registry if you're uploading an image and then downloading it a lot of times.
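
    Rough sketch of what I mean - a stack file with deploy mode "global" runs one copy of the service on every node in the swarm (the service and image names here are made up):

    version: "3"
    services:
      landingPage:
        # hypothetical image - substitute whatever you push to quay.io
        image: quay.io/yourOrg/landing-page
        ports:
          - "80:8080"
        deploy:
          # one instance per node, instead of N replicas spread across nodes
          mode: global
          restart_policy:
            condition: any

    deployed from a manager node with docker stack deploy -c docker-compose.yaml landing.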


  • Garbage Person

    @pjh In Kubernetes I think you’re looking for either a “deployment” with “anti-affinity” or a “daemonset”.
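
    A daemonset runs one copy of a pod on every node, which sounds like what you're after. A minimal sketch (the names and image are made up):

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: landing-page
    spec:
      selector:
        matchLabels:
          app: landing-page
      template:
        metadata:
          labels:
            app: landing-page
        spec:
          containers:
            - name: landing-page
              # hypothetical image name - substitute whatever you push to quay.io
              image: quay.io/yourOrg/landing-page
              ports:
                - containerPort: 8080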


  • :belt_onion:

    @pjh One of the projects I used to be on had a Docker swarm (this was when it wasn't built into Docker Engine, but was a combination of consul/swarm containers running on each node) that was provisioned by Puppet; then we used Jenkins with the Docker plugin to deploy to tcp://<swarm master>:2375. That would bring up individual instances on each node (if you specified a fixed port number it obviously wouldn't deploy two on the same node), and then we load balanced, but you wouldn't have to.

    I'm not sure if that use case works anymore, though, as you said. :( The paradigm changes too rapidly.


  • Discourse touched me in a no-no place

    @ben_lubar said in [Docker] Again. Deploying docker containers?:

    Also, you may want to run a local registry if you're uploading an image and then downloading it a lot of times.

    All the systems concerned are geographically separate. Most of them move, remember.



  • @pjh said in [Docker] Again. Deploying docker containers?:

    Now I need to find a method whereby I can automatically deploy the same setup to multiple units and have the containers start up correctly. And update the containers where necessary.

    Ansible + docker-compose

    e.g.:

    playbook.yaml

    ---
    - hosts: dockerHosts
      remote_user: user
      become: yes
      become_method: sudo

      vars:
        yourVars: "goHere"

      tasks:
        - name: Start Docker Containers
          docker_service:
            state: present
            # project_src is the directory that holds docker-compose.yaml
            project_src: "location/of"
    

    docker-compose.yaml

    version: "3"
    services:
      nameOfService:
        image: imageLocation/imageName
        ports:
          - "12345:8080"
        networks:
          - sharedApplicationNetwork
        depends_on:
          - anyDependencies
        environment:
          - ENVIRONMENT_VARIABLES=yes
    
    networks:
      sharedApplicationNetwork:
    

    ansible-playbook playbook.yaml
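
    The dockerHosts group comes from whatever inventory you point the playbook at, e.g. (hostnames and addresses are made up):

    hosts.yaml

    all:
      children:
        dockerHosts:
          hosts:
            unit-01:
              ansible_host: 192.0.2.10
            unit-02:
              ansible_host: 192.0.2.11

    ansible-playbook -i hosts.yaml playbook.yaml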


  • Discourse touched me in a no-no place

    @jazzyjosh, I'm still missing the bit where a new unit reboots, comes online, and can somehow figure out that it should be running a particular container/service/stack by itself.

    All the stuff I've come across so far seems to presume that it is a node joining a group of other nodes...

    Unless the unit itself is the thing that should be controlling all 1 nodes present, in which case /it/ needs to figure out what it should be running.



  • @pjh said in [Docker] Again. Deploying docker containers?:

    @jazzyjosh, I'm still missing the bit where a new unit reboots, comes online, and can somehow figure out that it should be running a particular container/service/stack by itself.

    All the stuff I've come across so far seems to presume that it is a node joining a group of other nodes...

    Unless the unit itself is the thing that should be controlling all 1 nodes present, in which case /it/ needs to figure out what it should be running.

    Hmm... I just checked our playbooks, and we just start everything up in Ansible with the docker_container module and specify restart_policy: always. That way, if the machine is rebooted, dockerd is configured to start the container.
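
    i.e. something like this task (container and image names are made up):

    - name: Start landing page container
      docker_container:
        name: landing-page
        # hypothetical image - substitute your own
        image: quay.io/yourOrg/landing-page
        ports:
          - "80:8080"
        state: started
        # dockerd restarts the container by itself after a reboot or crash
        restart_policy: always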

    Granted, you seem to be deploying the containers wrapped in Kubernetes, so I'm not sure how that is impacted.

    To make sure I'm understanding you, when the machine comes up, it should come up as a node and connect to a controller, unless it can't find it, in which case it should become the controller.


  • Discourse touched me in a no-no place

    @jazzyjosh said in [Docker] Again. Deploying docker containers?:

    Granted, you seem to be deploying the containers wrapped in Kubernetes, so I'm not sure how that is impacted.

    They're not being deployed by anything at the moment; I have - more or less - a clean slate.

    To make sure I'm understanding you, when the machine comes up, it should come up as a node and connect to a controller, unless it can't find it, in which case it should become the controller.

    No - at least not how I imagine it (I could be coming at this wrong of course.)

    I'll try and provide more detail without getting too doxy about it...


    Each machine is in a vehicle. It provides services to things on that vehicle, but not to things outside of that vehicle.

    Previously, to provide those services (DHCP, NTP, landing pages, internet connectivity among others) it started things directly on the host, such as dhcpd, ntpd, lighttpd - the configuration of which was rather hard-coded - or at least not easily changed.

    Some of these things are now to be run in containers. As are possibly any other things our customers (the vehicle owners) may require (they provide the docker images, we simply provide somewhere to run the containers from those images.)

    Each vehicle has its own machine, and machines (generally) don't/can't talk to each other, but can 'phone home' to a central set of servers to acquire configuration, send telemetry, and provide remote access when necessary.

    Groups of vehicles can be placed into fleets, with customers having one or more fleets.

    What I need is a solution whereby I can centrally configure a fleet so that each machine will run, independently of other machines, a set of services/containers (e.g. to provide NTP and a landing page), and to update those containers (e.g. by updating the landing page).
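
    To make that slightly more concrete, I imagine each machine in a fleet ending up with something like this compose file, with the real service list and image names coming from whatever the fleet configuration says (everything below is made up):

    version: "3"
    services:
      ntp:
        image: quay.io/yourOrg/ntpd
        # host networking so it can serve NTP (UDP 123) directly on the vehicle LAN
        network_mode: host
        restart: unless-stopped
      landingPage:
        image: quay.io/yourOrg/landing-page
        ports:
          - "80:8080"
        restart: unless-stopped

    Updating the landing page would then just be pushing a new image and having each machine pull and restart it the next time it phones home.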


    All I have working so far is docker itself where I can manually pull images, start containers, and upload images.

    I'm currently trying to get docker-compose into the build (different set of problems, due to the toolchain we're using now, but it's keeping me occupied at the moment.)


  • Discourse touched me in a no-no place

    @pjh said in [Docker] Again. Deploying docker containers?:

    I'm currently trying to get docker-compose into the build (different set of problems, due to the toolchain we're using now, but it's keeping me occupied at the moment.)

    Got that fixed, if it matters to anyone other than me and my boss..

