Build process design


  • Discourse touched me in a no-no place

    Not strictly 'Coding', but related. I do need help, though. At present, my main product's build setup is structured like this:

    • One enormous megasolution with ~300 projects (roughly 70 of them produce actual useful output; the rest are test mocks and such)
    • Each check-in triggers a TeamCity build of the entire megasolution (this takes ~15 minutes). Each Useful Output project gets OctoPacked into a NuGet package and tossed onto TeamCity's integrated NuGet server.
    • Each successful TeamCity build triggers a deploy of the entire megasolution to the dev servers via Octopus Deploy. Octopus pulls the packages by version from the NuGet repository and installs them on the dev environment. This takes about 20 minutes.

    I'd like to get to the point where each Useful Output has its own individual solution (dependencies handled via NuGet) and the build output lands in a non-TeamCity NuGet server (TC's NuGet server never seems to hold on to older versions as long as I need it to). The deployment process isn't too bad as-is, though it is slow because of the sheer number of packages that have to be deployed. Perhaps a metapackage could be built for the purposes of deployment?
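
    A deployment metapackage along those lines is just a NuGet package with no content of its own, only exact-version dependencies, so the deploy step has a single thing to grab per release. A sketch of what the .nuspec might look like (the IDs, versions, and author are invented for illustration):

```xml
<?xml version="1.0"?>
<package>
  <metadata>
    <id>MyProduct.Deploy</id>
    <version>1.4.2</version>
    <authors>BuildTeam</authors>
    <description>Metapackage pinning the exact Useful Output versions for one deployment.</description>
    <dependencies>
      <!-- [x.y.z] is NuGet's exact-version pin syntax -->
      <dependency id="MyProduct.ServiceA" version="[1.4.2]" />
      <dependency id="MyProduct.ServiceB" version="[2.0.7]" />
    </dependencies>
  </metadata>
</package>
```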

    However:

    1. Adding projects happens often. As it is, we have to add an Octopus step every time, which sucks. Having to add a new step to TeamCity to build a new SLN might be unbearably nasty.
    2. Related, I'd like the TeamCity builds to be less time-consuming - by building only the necessary project(s). But I can't figure out how to do that without having dozens of build configurations in TC (which is even worse than adding a new build step for every project).
    3. Infrastructure (particularly disk space) ain't cheap in this enterprise. Think "six figures for 5 TB". So if there's some way to tag particular package versions in the NuGet server as "this went to prod once!" and thereby mark them for infinite preservation while automagically culling older, non-important versions, it'd be cool.
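
    That preservation/culling policy is easy to state as code. A sketch of the decision logic only - the package names are invented, and the actual deletion would go through `nuget delete` or your NuGet server's own API; this assumes you can list the feed's versions and that you track which ones shipped:

```python
from collections import defaultdict

def plan_cull(all_versions, prod_versions, keep_latest=5):
    """Decide which package versions to delete.

    all_versions: list of (package_id, version) on the feed, newest first.
    prod_versions: set of (package_id, version) that ever went to prod.
    Keeps every prod version forever, plus the newest `keep_latest`
    versions of each package; everything else is a deletion candidate.
    """
    by_package = defaultdict(list)
    for pkg, ver in all_versions:
        by_package[pkg].append(ver)

    to_delete = []
    for pkg, versions in by_package.items():
        for i, ver in enumerate(versions):  # newest first
            if (pkg, ver) in prod_versions:
                continue  # "this went to prod once!" -> keep forever
            if i < keep_latest:
                continue  # still inside the recent-versions window
            to_delete.append((pkg, ver))
    return to_delete
```

    Anything `plan_cull` returns would then be fed to the server's delete/unlist operation; since the prod set only ever grows, "went to prod once" really does mean kept forever.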

    So... Halp!?

    Edit: I am by no means married to these tools. I didn't pick them. We're using TeamCity Pro (which is the free one) and Octopus Enterprise (which is by no means cheap). If it's genuinely better for what I want to do, I could very much make the case for moving to the Inedo ProGet/BuildMaster toolchain, for instance.



  • I have worked extensively with TeamCity and big projects/solutions (50-100 projects per solution, multiple-hour build times per solution - mostly due to obfuscation and "unit" tests which used databases that couldn't be removed because reasons), so hopefully I can help.

    What is it about multiple build configurations that concerns you? I don't know what your steps look like, but I would have thought a build configuration template would let you do most of the heavy lifting. We set up our scripts to search the VCS checkout folder for *.sln and build all of them, then pick up the outputs from a specific relative location and put them where they needed to be (I guess in your case this would be passing them up the chain to be packaged).
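
    That search-and-build-everything scheme can be sketched in a few lines. This is an illustration of the idea, not the commenter's actual script, and it assumes `msbuild` is on the PATH:

```python
import pathlib
import subprocess

def find_solutions(checkout_dir):
    """Find every .sln anywhere under the VCS checkout folder."""
    return sorted(pathlib.Path(checkout_dir).rglob("*.sln"))

def build_all(checkout_dir, msbuild="msbuild"):
    """Build each discovered solution; a later step would pick the outputs
    up from a known relative location and hand them off for packaging."""
    for sln in find_solutions(checkout_dir):
        subprocess.run([msbuild, str(sln), "/p:Configuration=Release"], check=True)
```

    Because discovery is by convention, adding a new solution to the repository requires no build-server reconfiguration at all.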

    I don't know Octopus Deploy, so I couldn't comment on how hard/easy that is to automate configuration for, but certainly on the TeamCity side I was able to churn out a new build configuration, or reconfigure an existing VCS folder (putting build scripts in the right places, setting up appropriate tag and branch folders, etc.), in a few minutes, and it would then happily churn away ad infinitum.

    I recognize the free version of TeamCity may have a build configuration limit, which is a legitimate reason to want to avoid this, but having used the paid version we found setting up the chaining correctly (prerequisites for each build configuration, etc.) quite easy.

    Is there anything I'm missing about this that means this route wouldn't work for you?



  • What I was thinking of doing was splitting apart a solution into smaller solutions that formed a single library that was used in the mega solution, and then have builds deployed to a nuget server. The not so mega solution would pull these dependencies in on a build.

    That was the idea; I never got to implement it, though.


  • Discourse touched me in a no-no place

    That's basically what I want. I was utterly unaware that the template feature existed.

    I suppose there's no getting around the TeamCity enterprise license, but all that'll do is annoy my boss.

    Unless anyone has practical knowledge of buildmaster in this type of environment.



  • Looking at BuildMaster - would you not have exactly the same problem there? The concept of applications seems slightly broader, but you might find you still hit that limit with what you are trying to do.


  • Discourse touched me in a no-no place

    BuildMaster is $2k annually and does both build and deploy.
    TeamCity is $2k annually and does build.
    Octopus is $2k annually and does deploy.

    Saving $2k in licenses isn't a big deal compared to having to retool entirely, but if it's above and beyond superior, it's worth a thought.



  • I suppose @apapadimoulis could answer buildmaster questions, assuming he's summonable?



  • Here's my advice.

    Re-envision your entire problemspace: what is the actual business case for deployment? How is the application structured? Is this an SOA-type system, where you can deploy a single one of your 70 useful outputs... and have it work fine with no system interruptions? Or are you just taking these 70 things and deploying them all simultaneously? Because that's just taking one problem and replacing it with 70.

    70 actual, individually and separately deployable components of a single system is quite rare. You need a team of teams to manage it, and even then, it's a total clusterfuck that very few organizations pull off successfully. My guess -- based on your short description, and the fact that 99.99% of developers do this -- is that the solution is way overcomplicated.

    Anyway, once you've rethought your problem space, think about how to get there from here. This is one of the top selling points of BuildMaster -- you can very easily replicate whatever process you currently have (even a fully manual one), and easily move towards something better over time. You can import TeamCity build artifacts, and then deploy them easily.

    If you find your problem actually is that complex, then BuildMaster has the features to accommodate it. And I don't mean a box where you can type in a PowerShell script to do whatever you need. Of course it has that, but they've been adding some seriously advanced stuff WRT the deployment plans, server groups, pools, all those things.

    Honestly I'd leave NuGet out of it, because it's a total mess of a platform. So much so that I was seriously considering writing a feature article on it. NuGet is "ok" for one thing, and one thing only -- managing dependencies in .NET. Don't use it for applications (relevant rant here), or it's going to force you to think in terms of "how can I put my application into little NuGet packages."

    If you post more details of the problemspace (aside from the technical implementation you have now), I think we can all give some advice on how to slice it. I've seen dozens of common patterns, and chances are you fit nicely into one of them.



  • @apapadimoulis said:

    So much so that I was seriously considering writing a feature article on it.

    At minimum, could be some good sidebar material there. I don't recall hearing the acronym JSON-LD before (I'm only just this week actually using JSON for anything, 'cause that's just not normally the area I work in), so that sounds like a good sidebar post.

    I can understand why the NuGet article never happened, but I'd be interested to hear more about it in the sidebar or even in General. I don't (currently) do .NET, so I have no use for NuGet, but I'd still be interested in comparing it with, e.g., apt-get.

    @apapadimoulis said:

    Don't use it for applications (relevant rant here),

    Have you looked at what MS is doing with OneGet? It's based on Chocolatey, supposedly, which is on top of NuGet (or so I gathered from one of your linked Google Groups posts).



  • @boomzilla said:

    I'd be interested to hear more about it in the sidebar or even in General

    Or, perhaps as a derailment here? TL;DR: it's a poorly-built solution to a misunderstood problem.

    I wrote an article that describes the "misunderstood problem" parts. Microsoft did a great job of building C# off of Java, yet for whatever reason the NuGet team did almost no research into the space. Well, that's not entirely true -- they looked at Ruby and said, "zomg let's be open source like them, then the devs will love us", and then they tried to copy RubyGems. Ruby is as close to .NET as COBOL is to JavaScript, so that worked out very poorly.

    From the "poorly built solution" side, their implementation choices were mind boggling. It's BzzDD (Buzzword-Driven Development) mixed with "let's enterprise the fuck out of this code." Few things.

    • The problem domain, again, is dependency management. The primary business entity is a package, which has multiple versions, and each version has a number. Etc. Not a complex model. Aaaand to model this domain they chose... wait for it... RS-fucking-S. Yup. That web-syndication format designed for blogs and articles. But the good news is, you can point your feed reader at NuGet.org/api and see the latest packages people published without even needing the NuGet client tools
    • But hey, RSS is based on XML, and you can put ANYTHING in XML, right? And so they did... with comma-separated, pipe- and colon-separated, and all sorts of things inspired by our very own CodeSOD posts
    • Of course, since RSS is actually designed for articles, and not really anything else, there's no default searching capability; this is a problem, because that's a fundamental requirement of a package manager. Not to worry, they chose ODATA! The most obtuse and unintuitive "open" platform Microsoft has built since the "OpenXML" or whateverthefuck Office documents use
    • Speaking of Office documents... guess which format the packages are in? Microsoft's very own OPC format -- the "Open Packaging Conventions" that Office files use. It's basically a zip file, except it has a bunch of other hidden folders and zip stream entries that are used for metadata, and are quite complicated to use
    • Fortunately, they don't actually use any of OPC's built-in metadata facilities; instead, they just keep a ".nuspec" file in the root of the zip file to contain the package metadata; so all the OPC format offers is assurance to the OPC team that someone is actually using their format
    • Although it has an XSD, the nuspec file is undocumented; it has a URI that leads to a 404, but you can dig through the obfuscated-by-design codebase to find the multiple versions of the same schema file, and then try to make one that validates against that
    • Back to ODATA... it's like SQL, but as a querystring. As you might imagine, that still doesn't solve the technical problems, so they had to implement a number of other undocumented API endpoints (called ODATA functions) that do things like, "/api/GetPackageByID?Id=X" instead of using the "native" ODATA calls like "/api/Packages(Id='X')/$value"
    • And this just covers the server-side; the client tools are... much worse

    @boomzilla said:

    Have you looked at what MS is doing with OneGet?

    Don't even get me started... oh fuck. you did.

    OK I'll be brief this time.

    • "Hey, guys, so we're losing a lot of market share to Linux?"
    • "Is it the price? Hmm... we can't make Windows free"
    • "No I don't think it's that, I think, people just don't like... Windows."
    • "What if we made it... Linux?"
    • "I think you're on to something."
    • "It's brilliant! I'll get the Visual Studio team on it. They built TFS, but no one likes it, so let's just put Git in there, and everyone will."
    • "Brilliant! I heard people like Docker too?"
    • "Well that' fundamentally doesn't make sense on our operating system architecture... but I'll get the Kernel team on it!"
    • "What about, command line? Can we just make everything command line?"
    • "Oh shit, someone already built apt-get for Windows. Fuck. We need that."
    • "How about we sorta copy it, but sorta base it on it. Then we'll call it OneGet."
    • "Brilliant, the community will love it!"


  • @apapadimoulis said:

    I wrote an article that describes "misunderstood problem" parts.

    Thanks, very interesting.


  • Discourse touched me in a no-no place

    @apapadimoulis said:

    If you post more details of the problemspace (aside from the technical implementation you have now), I think we can all give some advice on how to slice it. I've seen dozens of common patterns, and chances are you fit nicely into one of them.

    You just want to hear TRWTF :smile:

    Architectural Summary: SOA on Crack as envisioned by someone who read a book once about state machines.

    There are n "services", each of which does a specific task in sequence according to a script. The "script" isn't executed by any sort of central controller, each service just reads the first item off the script and uses that as its input. It then knocks that off the list and passes it, along with a data structure containing current state data directly to the next service.

    Each service is implemented as an individual library project. The primary class in the service inherits from and implements a bunch of interfaces contained in a set of "core" libraries that implement all the stuff involved in adjusting the state and script, doing logging, etc.

    Each service also includes a reference to a "Generic service exe". This project's build output is a Windows service that bootstraps all the actual communications, connects to the cluster, etc. and then loads a "service" DLL to run code from when it receives requests.

    Some particularly complicated service libraries reference each other directly (to avoid having to build certain event sequences into scripts, and reuse functionality that's already been built), so the dependency graph has a bunch of sideways shit going on, too, so in order to facilitate this the "parameter" data structures for each service do not live in the individual service library, but in a single "parameter" library.

    If any of the core libraries, the service EXE, or the parameter library changes, a deployment must be of the "everything all at once" variety.

    If an individual service changes, the deploy needs to follow the dependency graph. A service is usually a leaf node, except in the aforementioned edge cases, so that's usually a single service.

    In reality, the "build and deploy everything all the time" thing worked for awhile because we were on the initial push to get the new version of the platform built, and that involved a HUGE change velocity and reengineering the entire world. Now change velocity is only high in small pockets, and it hurts.

    And that's just our modern environment. Our legacy environments are similar, but even more of a clusterfuck (and lack automated build/deploy anyway. And if you go more than 1 major iteration back, they may lack current source code)

    @apapadimoulis said:

    You need a team of teams to manage it, and even then, it's a total clusterfuck that very few organizations pull off successfully.

    Or 3 developers and a team lead.



  • @Weng said:

    Architectural Summary: SOA on Crack as envisioned by someone who read a book once about state machines.

    Ah, yes! The "SOA... brilliant!" pattern as I've come to refer to it. It does sound a bit like a clusterfuck, but not the worst I've seen.

    Everything's in a megasolution, so NuGet (as a dependency manager) really isn't getting you much value. You could re-engineer your solution to use NuGet packages instead of project references, but I think that'll just give you more headaches. NuGet does not support the concept of SNAPSHOT builds, so aside from doing some unholy hacks that will break every time you decide to make the smallest change, the NuGet workflow will be "check in library code on Lib.sln. Wait for build to complete. Update packages on App.sln. Try it then."

    In BuildMaster, check out this document on the specifics of how you can make these chained builds/deployments. It's a little outdated from how we do it now, but the ideas are the same. I think what you want is the ability to say "OK, let's build/deploy a new version of ServiceX: create release 3.2, select 3.0 from the Core dropdown, and press build," and then have BuildMaster build that and the necessary children?

    You can very likely just keep your megasolution, and then use build-/deployment-time dependency management within BuildMaster (imported deployables) to make sure your SOA stuff is chain-built/deployed with minimal changes to your dev workflow.


  • Discourse touched me in a no-no place

    @apapadimoulis said:

    Ah, yes! The "SOA... brillant!" pattern as I've come to refer to it

    FTFY.

    @apapadimoulis said:

    so aside from doing some unholy hacks that will break every time you decide to make the smallest change, the NuGet workflow will be "check in library code on Lib.sln. Wait for build to complete. Update packages on App.sln. Try it then."

    Oh god, I hadn't even considered that. WTF-y atrocity averted.

    Sounds like Buildmaster is in fact capable of the sort of workflow that I'd like, but the question is this:
    How much time and effort goes into the actual mechanics of getting the chained builds set up?

    You see, when I first took over my team (and hired everyone from scratch because it was down to a team of exactly 0 as of the day I started - the remainder of the team quit because they didn't want to work for my boss), I wanted a build/release manager. Someone to actually take care of all this stuff.

    Turns out, you can't hire those on the east coast. Hell, the agency hadn't even heard of the term (Or in fact the entire concept that such a thing could be time consuming enough to merit hiring someone to do it). Couldn't find me a single candidate at any price (and directly recruiting? THAT'S AGAINST POLICY YOU CAN'T DO THAT!)

    So, one of my many hats on my understaffed team is release manager. When I last reconfigured TeamCity/Octopus (to get it from 'this is crazy and ass-backwards' to 'this is okay'), it swallowed a month of my weekends filling out web forms. That's not even counting the time spent figuring out what I actually needed to do.

    I guess there's nothing stopping me from doing a suck-it-and-see POC over Thanksgiving or whatever (Family? FEH!)



  • @Weng said:

    How much time and effort goes into the actual mechanics of getting the chained builds set up?

    Obviously impossible to say. Compared to your existing stack (based on feedback from customers who do a simultaneous POC of both), BuildMaster is significantly easier to set up. We invest a lot in UX and whatnot to ensure that's the case. But there's also a little "Chat" button right in the software that lets you speak directly to an engineer (sometimes me!), so when our UX fails to be intuitive (as it often does), just ask for help.

    I think the hard part is done. That is, figuring out how you want things, and actually understanding how to do automation.

    I'd give it a few hours. Understand the concepts, and focus on setting those up. The technical challenges ("why can't it establish an SSL connection from server A to B on my network") can suck up days - or, for folks like this (a "low-information user", as we call them), probably their entire life. Just bypass those with a dummy step. You can always fill in the gaps later.

    @Weng said:

    and directly recruiting

    You don't have the expertise of HR to not be racist, sexist, or ageist. Only half-kidding.



  • I had no idea what a clusterfuck NuGet is underneath the surface. Thanks!

    Although I disagree about OneGet. Windows needs a real, open package manager. I don't mind the command line; GUIs will come if they get the architecture right. Which they won't, judging by the NuGet story. Sigh.



  • @Weng said:

    You just want to hear TRWTF

    Architectural Summary: SOA on Crack as envisioned by someone who read a book once about state machines.

    There are n "services", each of which does a specific task in sequence according to a script. The "script" isn't executed by any sort of central controller, each service just reads the first item off the script and uses that as its input. It then knocks that off the list and passes it, along with a data structure containing current state data directly to the next service.

    Is it bad that I sort of thought "Oh, neat!" when I read this part? :-)


  • Discourse touched me in a no-no place

    The script is an XML file. Very enterprisey.



  • Was this dragon invented before or after the release of Windows Workflow? If I understand the "real" problem correctly, this would be case #2 I have heard of where Windows Workflow is usable/actually-an-option and very much a good idea.


  • Discourse touched me in a no-no place

    What the hell is Windows Workflow and why have I literally never seen it on a .net stack diagram until now? I swear you made it up and it spontaneously popped into existence.

    This hellbeast has been cooking since 2003. The only substantial changes came in 2009 when MSMQ was replaced with WCF.



  • WF follows Fight Club Rules. One does not speak of WF.

    But seriously the only reason I know it exists is a project I had to interface with used it.



  • I don't quite understand why Windows needs a package manager. Most of what you need is built-in, like IIS or Active Directory? And for the rest, like Office or SQL Server, I dunno, do you just run the installer?

    I also don't understand the Windows App Store. Well, other than to get the shitty version of something that runs full screen.

    That said, I don't think there's anything wrong with it per se. There is something wrong with building something on top of NuGet, however. It's like using feces as foundation footers instead of concrete.


  • Discourse touched me in a no-no place

    @MathNerdCNU said:

    WF follows Fight Club Rules. One does not speak of WF.

    Way back when WCF was first publicly mentioned, along with several other packages, Windows Workflow was mentioned as well. I think I have a Wrox book on it somewhere.



  • @apapadimoulis said:

    I don't quite understand why Windows needs a package manager. Most of what you need is built-in, like IIS or Active Directory? And for the rest, like Office or SQL Server, I dunno, do you just run the installer?

    I also don't understand the Windows App Store. Well, other than to get the shitty version of something that runs full screen.

    Quicker installation, uninstallation? Scripted installs? Singular update mechanism for all software (only MS stuff is allowed into Windows Update, as far as I know)? Safety?

    Or how about discoverability? Let's say you need a FLAC-to-MP3 converter for Windows. The only way to find one these days is to type that into Google and then browse through dozens of SEO-ed-to-hell little sites and outdated archive sites, with no clear info on the shareware limitations, etc. Having some kind of uniform repository or store system would help a lot.



  • In the past I've used Chocolatey to set up machines quickly. When we have a new developer, I run the PowerShell script and he has everything installed on Windows to get started developing. I can also use a similar script to set up a VM quickly, or even have the script run as part of the build process.
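
    A sketch of that provisioning pattern (the package list is purely illustrative, and this assumes the Chocolatey `choco` client is already on the machine):

```python
import subprocess

# Hypothetical starter set; a real script would carry the team's actual tools.
DEV_TOOLS = ["git", "7zip", "notepadplusplus"]

def install_commands(packages):
    """Build one `choco install` command line per package."""
    return [["choco", "install", pkg, "-y"] for pkg in packages]

def provision(packages=DEV_TOOLS):
    """Run each install; -y suppresses confirmation prompts so the whole
    thing can run unattended (e.g. on a fresh VM or as a build step)."""
    for cmd in install_commands(packages):
        subprocess.run(cmd, check=True)
```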

    If I can have the same thing but with all the packages official, I don't have to maintain my own local repo.



  • @apapadimoulis said:

    I also don't understand the Windows App Store. Well, other than to get the shitty version of something that runs full screen.

    I just hope MS doesn't start doubling-down on the desktop OS app store concept after the Mac App Store has proved to be a "mostly-failure". So far Steam is the only app store on a full desktop OS that's been reasonably successful, and MS can't compete with Steam. They already tried and failed.

    But hey, I worked on Intel AppUp so what do I know, haha.


  • BINNED

    @blakeyrat said:

    the Mac App Store has proved to be a "mostly-failure"

    I haven't had a lot to do with Macs in several years, so I have no knowledge of the App Store. What's wrong with it? Don't new versions of OS X block installation from non-App Store sources by default?


  • Discourse touched me in a no-no place

    @Jaloopa said:

    Don't new versions of OSX block installation from non app store sources by default?

    People leave that turned on?


  • BINNED

    I imagine the less tech savvy people do unless they get a friendly geek to set it up for them



  • @blakeyrat said:

    I just hope MS doesn't start doubling-down on the desktop OS app store concept after the Mac App Store has proved to be a "mostly-failure"

    What makes you think Mac App Store has failed? All the app developers are still being pushed into the store, like it or not. Otherwise, they are left out in the cold, with their apps requiring hidden option switches and similar crap to be installable. Nothing has changed on that front, AFAIK.


  • Discourse touched me in a no-no place

    @Jaloopa said:

    I imagine the less tech savvy people do unless they get a friendly geek to set it up for them

    Fair enough. Until you want to install something, you shouldn't be doing so. But when you want to, it's a setting in just about the most obvious place. (Main Preferences Panel → Security & Privacy → General tab.)



  • @Jaloopa said:

    What's wrong with it?

    It's too dictatorial. It doesn't allow any flexibility in pricing at all-- you want to offer a competitive discount if the user has a competing product installed? Can't do it. Cheaper upgrades than full purchases? You're fucked. Apple demands it be so.

    There have been a few articles recently about high profile applications either putting up big warnings that say App Store customers might be getting ripped-off through no fault of their own, or simply departing the App Store altogether. I didn't save any links though.


  • BINNED

    Sounds like it was designed around the consumer market rather than enterprise then?
    I can't say I'm surprised to hear about Apple being dictatorial, that's pretty much their raison detre



  • @Jaloopa said:

    raison d'être

    FrenchedTFY



  • @blakeyrat said:

    It's too dictatorial. It doesn't allow any flexibility in pricing at all-- you want to offer a competitive discount if the user has a competing product installed? Can't do it. Cheaper upgrades than full purchases? You're fucked. Apple demands it be so.

    Shouldn't you be thanking your Apple Overlords for graciously informing you that you were almost Doing It Wrong™?



  • @dkf said:

    People leave that turned on?

    I leave it on on the default setting which allows MAS apps and other signed apps.
    Since

    • Most apps I download are signed
    • There is an escape hatch to start unsigned apps; using it once is sufficient, the application will then be allowed to start without any nagging (until it’s modified I suppose)
    • Unsigned CLI apps are allowed as long as they are started from a terminal
    • Unsigned applications can be started from Xcode during development

    it doesn’t bother me at all.

