DevOps - deploying multiple instances of a microservice on one box.
-
I work for a large corporate (with all the restrictions that implies). We work on Ubuntu with microservices written in Java (Spring Boot) with RabbitMQ and MongoDB*. I am not an architect and I don't get much say in the "big picture" planning.
My division has an internal "product" (for want of a better word) that is broken into several inter-related microservices. These obviously speak to each other (mostly via RabbitMQ) but, being "products", should also be deployable independently. As a team we like a (modified) version of Git Flow, so we have several stories (= different versions of a microservice) continuously deploying to our DEV server via Jenkins. I want the deployment on the DEV server to avoid the "it works on my machine" problem, and also so we can run automated tests on the branch.
This is where I need advice.
Let's say the microservice is a REST API that takes in data, processes it, writes to a MongoDB collection, and places something on a queue.
The obvious way to deploy and test this would be to use a container, like Docker, and deploy the service, plus a seeded MongoDB instance and a RabbitMQ instance. But we do not have Docker and that is not going to happen.
So: I have a single DEV server which is running MongoDB and RabbitMQ.
I have solved this by doing the following, which my boss thinks is hilariously bad:
We use Jira, so each story has a tag like `ABC-123`. We use that tag to name our branches, e.g., `ABC-123_fix_for_some_issue`.
When I check in my branch, Jenkins is triggered to build it, run all the unit tests and some functional tests.
If the tests pass, Jenkins (via a bash script) extracts the `ABC-123` part of the branch name. The executable JAR file is deployed to the correct place on the DEV server using the SSH plugin, renamed to `servicename-ABC-123.jar`, and then a `servicename-ABC-123.conf` entry is created in the /etc/init folder. This .conf file is written by bash so we can pass config vars to the JAR: `exec java -jar /etc/scripts/servicename-ABC-123.jar --spring.profiles.active=dev --server.port=10123 --branch.name=ABC-123`
I then issue `sudo start servicename-ABC-123`, and the server port and branch name override the Spring properties. Specifically, the RabbitMQ queue name in the properties file has the suffix `-ABC-123` added so that it is unique, a MongoDB collection is created with its name plus the suffix `-ABC-123`, and the port to which the service binds is `10123`. (The port is what my boss finds most hilarious: in order to have several instances of the service running, they need different ports. My solution is to extract the "123" part of the story name and add 10,000. This is pretty hacky, but guarantees that each feature has a unique port.)
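This is not my exact script, but the branch-name-to-port-to-upstart-conf plumbing can be sketched in bash roughly like this (file paths and `servicename` are illustrative, not the real Jenkins script):

```shell
#!/bin/bash
# Sketch only: derive the Jira story and port from a branch name, then
# write the upstart .conf that passes the overrides to the jar.

# Jira key = everything before the first underscore in the branch name.
derive_story() { printf '%s\n' "${1%%_*}"; }   # ABC-123_fix_for_some_issue -> ABC-123

# Unique port = numeric part of the story + 10000 (10# forces base 10,
# so a story like ABC-089 does not get parsed as octal).
derive_port() { local n="${1##*-}"; echo $(( 10#$n + 10000 )); }   # ABC-123 -> 10123

# Write the per-branch upstart stanza.  write_conf <story> <port> <dir>
write_conf() {
  local story=$1 port=$2 dir=$3
  cat > "${dir}/servicename-${story}.conf" <<EOF
exec java -jar /etc/scripts/servicename-${story}.jar --spring.profiles.active=dev --server.port=${port} --branch.name=${story}
EOF
}

STORY=$(derive_story "ABC-123_fix_for_some_issue")   # -> ABC-123
PORT=$(derive_port "$STORY")                         # -> 10123
```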
In this way I can safely point our automated test suite (SoapUI) to `http://dev.server.url:10123`, and monitor `collection-name-ABC-123` and `queue-name-ABC-123`. Another feature branch could be on `http://dev.server.url:10125`, `collection-name-ABC-125` and `queue-name-ABC-125` without interfering... The system does work quite well, despite its apparent shakiness. But I'd like some criticism and suggestions on how to do this better, given the constraint of having only one server on which to play.
Note: the Jenkins job that gets run when the feature is merged into the "develop" branch tidies up: stops the service, deletes the queue and collection, removes the .jar and deletes the .conf file - I am not leaving a mess of random unused stuff.
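For what it's worth, the tidy-up can be a near-mirror of the deploy step. A hypothetical sketch (everything here is illustrative; `run` only echoes by default, so the sequence can be eyeballed without actually stopping or deleting anything):

```shell
# Illustrative merge-time cleanup.  With DRY_RUN=1 (the default) each
# command is echoed instead of executed.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "$*"; else "$@"; fi; }

STORY="ABC-123"
run sudo stop "servicename-${STORY}"                        # upstart job
run sudo rm -f "/etc/scripts/servicename-${STORY}.jar" \
               "/etc/init/servicename-${STORY}.conf"        # jar + .conf
run rabbitmqadmin delete queue name="queue-name-${STORY}"   # per-branch queue
run mongo mydb --eval "db.getCollection('collection-name-${STORY}').drop()"
```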
* As you can see, that sentence includes several "buzz-word" technologies that were cool in 2014 or so.
-
@Anthony-Buckland Around here people use GlassFish to host the services and do everything manually without any scripts, AFAIK
-
Don't you have a `develop` branch? If you're using Git Flow, it's probably that branch that should be on this kind of rapid testing schedule, with feature branches being work-in-progress ones that don't really need an entire suite of tests until the developer says they do. Otherwise it seems okay, as long as you can properly isolate the services from each other and, if you're deploying from feature branches, guarantee that an automated test suite won't have a service replaced halfway through.
-
The theory is that I'll roll out a similar system to the QA box; our QAs like to test the feature on the `feature` branch, then do an integration test after merging (the `feature` --> `develop` merge is their responsibility), so ideally I'd like to know that the automated tests are run on the instances on QA.
-
@Anthony-Buckland said in DevOps - deploying multiple instances of a microservice on one box.:
But we do not have Docker and that is not going to happen.
Can you expand on this? I lend some operations time to a development team with a product and architecture that sound very similar to yours, and Docker suits the use case very well: we'd publish a service image with the tag `ABC-123`, push it to the registry, pull it to an integrations host, and start it with `-P`, which maps an ephemeral host port to each of the exposed ports, so you don't have to set environment variables or JVM parameters to change `server.port`. (We do need to set `spring.profiles.active`, though, because the confounded developers always want their local profile to be the default :).)
It also means we don't need to give our Jenkins agents SSH or sudo(!) access, just a client certificate to hit Docker and the registry.
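Assuming that flow, the tag-per-story lifecycle looks roughly like this (the registry and image names are invented; the commands are echoed rather than executed so the sketch runs anywhere):

```shell
# Sketch of the tag-per-story Docker flow described above; names are
# illustrative, and each command is echoed instead of executed.
story="ABC-123"
image="registry.example.com/servicename:${story}"

for cmd in \
  "docker build -t ${image} ." \
  "docker push ${image}" \
  "docker pull ${image}" \
  "docker run -d -P -e SPRING_PROFILES_ACTIVE=dev --name servicename-${story} ${image}" \
  "docker port servicename-${story}"
do
  echo "$cmd"
done
```

`docker port` is what lets the test suite discover which ephemeral host port `-P` picked for the container.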
-
@heterodox said in DevOps - deploying multiple instances of a microservice on one box.:
@Anthony-Buckland said in DevOps - deploying multiple instances of a microservice on one box.:
But we do not have Docker and that is not going to happen.
Can you expand on this?
Probably the answer is simple: corporate policy. Software needs to be deprecated for at least 3 years before you can adopt it.
-
@Anthony-Buckland Your boss is right, it's bad, but the badness starts with the "everything on one box" policy. Put each deployment on its own box/VM, use a gateway to direct traffic to port 80 on the correct box, and move on with life.
-
@pydsigner said in DevOps - deploying multiple instances of a microservice on one box.:
@Anthony-Buckland Your boss is right, it's bad, but the badness starts with the "everything on one box" policy. Put each deployment on its own box/VM, use a gateway to direct traffic to port 80 on the correct box, and move on with life.
Many boxes is great until you stop living in FOSS land and start having dependencies with per machine pricing.
-
@Weng said in DevOps - deploying multiple instances of a microservice on one box.:
Many boxes is great until you stop living in FOSS land and start having dependencies with per machine pricing.
Or per processor core and machine, with the exact calculation method changing every year.
Did someone say Oracle?
-
@Anthony-Buckland What I do for this sort of thing is have a Tomcat that acts as a host. I then push WARs in from the Jenkins build (running on another host) using a Cargo deployment step. Each WAR goes in with a different name, none of which are `ROOT`, so I can have as many as I want without much screwing around. It also lets me avoid having to fiddle around much with SSL configs (since my services typically have to be rather security-aware), as that's all set up at the Tomcat container level rather than on an individual-service level. The downside from your perspective is that you have to stop thinking of a deployment as meaning lots of hosts. If that bothers you a lot, slap an nginx proxy in there. (That's probably a good idea anyway, since it lets you do all sorts of other cunning things.)
That's my $0.02 anyway…
-
@Weng said in DevOps - deploying multiple instances of a microservice on one box.:
@pydsigner said in DevOps - deploying multiple instances of a microservice on one box.:
@Anthony-Buckland Your boss is right, it's bad, but the badness starts with the "everything on one box" policy. Put each deployment on its own box/VM, use a gateway to direct traffic to port 80 on the correct box, and move on with life.
Many boxes is great until you stop living in FOSS land and start having dependencies with per machine pricing.
Is something in the mentioned tech stack priced per-machine?
-
@heterodox said in DevOps - deploying multiple instances of a microservice on one box.:
we don't need to give our Jenkins agents SSH or sudo(!) access
Nothing wrong with using sudo for that. If your Jenkins agent has its own user ID, you can write a sudoers rule that gives it password-free permission to use only those specific commands that you need it to, which is exactly the kind of thing sudo was designed to let you do (and in fact the main thing it used to be used for before Canonical ham-fisted it into a kind of synonym for "please").
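A rule along these lines would do it (the file path, user and job names are illustrative, not a drop-in config):

```
# /etc/sudoers.d/jenkins -- illustrative only.
# Lets the jenkins user start/stop just the per-branch upstart jobs,
# with no password prompt, and nothing else.
jenkins ALL=(root) NOPASSWD: /sbin/start servicename-*, /sbin/stop servicename-*
```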
-
@pydsigner said in DevOps - deploying multiple instances of a microservice on one box.:
Put each deployment on its own box/VM, use a gateway to direct traffic to port 80 on the correct box, and move on with life.
Personally I can't see anything hilarious about choosing a port per service instance that becomes less hilarious just because you're choosing an IP address instead.
-
@Anthony-Buckland said in DevOps - deploying multiple instances of a microservice on one box.:
the following, which my boss thinks is hilariously bad
Your boss is a complicator and you're using gloves.
Gloves are good. Gloves work.
-
@kt_ said in DevOps - deploying multiple instances of a microservice on one box.:
Probably the answer is simple: corporate policy. Software needs to be deprecated for at least 3 years before you can adopt it.
30 years for banking and payment industries
-
@dkf said in DevOps - deploying multiple instances of a microservice on one box.:
I then push WARs in from the Jenkins build
You don't need to update WARs, because war... war never changes
-
@Anthony-Buckland I don't want to be a dick, but it sounds horrendously complicated.
Everywhere I have worked with Git:
1. Dev
2. PR
3. Test
4. Live
Or something like that. It works well.
-
@lucas1 said in DevOps - deploying multiple instances of a microservice on one box.:
@Anthony-Buckland I don't want to be a dick, but it sounds horrendously complicated.
Everywhere I have worked with Git:
1. Dev
2. PR
3. Test
4. Live
Or something like that. It works well.

For me it's generally:
1. Dev
2. Test
3. PR
and that's it.
-
@pydsigner said in DevOps - deploying multiple instances of a microservice on one box.:
@Weng said in DevOps - deploying multiple instances of a microservice on one box.:
@pydsigner said in DevOps - deploying multiple instances of a microservice on one box.:
@Anthony-Buckland Your boss is right, it's bad, but the badness starts with the "everything on one box" policy. Put each deployment on its own box/VM, use a gateway to direct traffic to port 80 on the correct box, and move on with life.
Many boxes is great until you stop living in FOSS land and start having dependencies with per machine pricing.
Is something in the mentioned tech stack priced per-machine?
No, but I have dependencies that carry 7-figure pricetags that I don't name because it would basically pinpoint me as "The one guy in the entire world who does that". OP could well be in the same sort of boat.
-
@Weng said in DevOps - deploying multiple instances of a microservice on one box.:
No, but I have dependencies that carry 7-figure pricetags
-
@masonwheeler said in DevOps - deploying multiple instances of a microservice on one box.:
@Weng said in DevOps - deploying multiple instances of a microservice on one box.:
No, but I have dependencies that carry 7-figure pricetags
Super-powerful, industry-specific shit. My one and only regret is that we were unable to convince management that buying the fucking company was a good play.
The company changed hands for a mere 10x our license fee. Fortunately, to another software vendor in our space, rather than to one of our competitors. And not one of their competitors, either (their competitors suck shit) but the super-rare synergistic acquisition. The only meddling in the 3 years since the deal closed has been adding some optional integration for SMB use-cases that doesn't apply at all if you're operating at my scale.
-
@fbmac said in DevOps - deploying multiple instances of a microservice on one box.:
@kt_ said in DevOps - deploying multiple instances of a microservice on one box.:
Probably the answer is simple: corporate policy. Software needs to be deprecated for at least 3 years before you can adopt it.
30 years for banking and payment industries
Check that, 31.
Filed under: did someone say 300?
-
@heterodox - exactly as @kt_ suggests - while some teams are just starting to investigate Docker, corporate policies are far behind. Note, for example, the use of MongoDB, which was "cool" about 5-6 years ago and got adopted as a corporate standard about 2 years ago. This is not ideal for my team, as our data is very much suited to a traditional RDBMS... which we cannot have.
@dkf - my original plan was to use Apache as a reverse proxy, which means I can insulate the weird port stuff behind a customised URL - the point being to keep automation that targets the URL simple. We are using embedded Jetty, so we have executable JAR files rather than WARs deployed to Tomcat (which to me makes more sense, but again, corporate policy...)
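Something like this hypothetical fragment (hostname, context path and port all invented) is what I have in mind - a stable per-branch URL in front of the derived port:

```apache
# Illustrative only: map a clean per-branch URL onto the derived port.
# Requires mod_proxy and mod_proxy_http to be enabled.
<VirtualHost *:80>
    ServerName dev.server.url
    ProxyPass        "/servicename/ABC-123/" "http://localhost:10123/"
    ProxyPassReverse "/servicename/ABC-123/" "http://localhost:10123/"
</VirtualHost>
```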
@lucas1 , @ben_lubar - yes. Yes it is complicated but, experience has shown that DEV-->PR-->Test-->Live does not work in the corporate environment. We are heavily dependent on other teams and other teams on us; so we need the full Dev-->PR-->QA-->Staging-->Production pipeline, and (for reasons which I am not at all happy about) some other teams use the Staging environment as the "test" environment. In order to isolate our team from being the one getting blamed when the inevitable happens, I want to ensure that by the time our QA has signed off on something it is utterly correct in every conceivable way.
I'm willing to do some complicated automation setup in order to do this. I'd prefer simplicity and speed of deployment (after all, as a corporation, one of our "standards" is also continuous deployment) - but I have to work with what I am given.
Thanks everyone for your input.
-
@Anthony-Buckland said in DevOps - deploying multiple instances of a microservice on one box.:
@lucas1 , @ben_lubar - yes. Yes it is complicated but, experience has shown that DEV-->PR-->Test-->Live does not work in the corporate environment.
Ah, sorry, you don't have access to the top secret orange forum, so you don't know that I've been complaining about my changes to a project I'm working on never being put into the live (closed beta) version.
Inside joke.
-
@Anthony-Buckland said in DevOps - deploying multiple instances of a microservice on one box.:
my original plan was to use Apache as a reverse proxy
That can work, but I find nginx easier to comprehend. YMMV. The only reason for not doing this would be if you needed complicated access to the SSL session, perhaps because you're doing something with client-authentication and SSL extensions, but if you are you're a braver man than I am.
embedded Jetty
I've seen that a lot, but it usually just means that configuring the things is an even worse headache since almost literally any squirrelly craziness can be made to function somewhat with that. Stuff that's forced to be a proper WAR tends to be more disciplined.
-
@Anthony-Buckland said in DevOps - deploying multiple instances of a microservice on one box.:
@heterodox - exactly as @kt_ suggests - while some teams are just starting with investigating Docker, corporate policies are far behind.
Interesting. I guess I'm just not used to the idea of corporate policies constraining what technologies you can use, as I've never been subject to such policies. The idea is "Whatever you want to use as long as it'll get the job done and won't blow the budget." Hell, it's built into Windows Server 2016 so you could use it on the DL if your platform supports Windows.
Now, if it was a client environment and something like FIPS requirements prevented you from moving forward (since Go's crypto library isn't FIPS-validated and AFAIK never will be), I'd get that.
-
@dkf said in DevOps - deploying multiple instances of a microservice on one box.:
The only reason for not doing this would be if you needed complicated access to the SSL session, perhaps because you're doing something with client-authentication and SSL extensions, but if you are you're a braver man than I am.
Funnily enough I'm having a cunt of an issue with client authentication and SSL session caching on Windows right now. Not feeling brave, though; just pissed. :P