Continuous (smoke) delivery?



  • One of our "company goals" (for the engineering team as a whole anyway) is to move my particular team's product (which is the much simpler, much newer one) over to Continuous Delivery/Continuous Deployment.

    Ok, fine. I don't have too much embedded bias here in either direction.

    So the architect wanted us to watch this:

    https://www.youtube.com/watch?v=skLJuksCRTw

    I'm about half-way through, and getting warning flags. Lots of assertions and nice phrases, very few arguments and very little evidence. And lots of what feels like snake oil. Sure, I don't have a problem with short release cycles. But the idea that the best way to figure out if something is wanted is to release it goes against my experience so far--there are lots of people who don't want their cheese moved constantly. And who don't want to pay for "beta, unstable" software. That was feedback we got during our beta phase: "Come back when you have something that isn't constantly changing in big ways."

    I think there's a balance to be struck between optimizing for MTBF and MTRS. Especially for things that aren't idle entertainment, where going down for even a few hours has bad consequences beyond just losing money or breaching SLAs.
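
    To put made-up numbers on that tradeoff (purely illustrative, nothing from our actual systems): steady-state availability works out to MTBF / (MTBF + MTRS), so you can buy the same uptime from either side.

    ```python
    # Back-of-the-envelope sketch with invented numbers, just to show the tradeoff.
    def availability(mtbf_hours: float, mtrs_hours: float) -> float:
        # Fraction of time the service is actually up.
        return mtbf_hours / (mtbf_hours + mtrs_hours)

    print(availability(1000, 4))   # rarely fails, slow to restore           -> ~0.996
    print(availability(100, 0.4))  # fails 10x as often, restores 10x faster -> ~0.996
    ```

    Same availability number either way--the real question is which failure mode your users can actually live with.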

    Thoughts? Experiences with CD?



  • @Benjamin-Hall said in Continuous (smoke) delivery?:

    One of our "company goals" (for the engineering team as a whole anyway) is to move my particular team's product (which is the much simpler, much newer one) over to Continuous Delivery/Continuous Deployment.

    Ok, fine. I don't have too much embedded bias here in either direction.

    So the architect wanted us to watch this:

    https://www.youtube.com/watch?v=skLJuksCRTw

    I'm about half-way through, and getting warning flags. Lots of assertions and nice phrases, very few arguments and very little evidence. And lots of what feels like snake oil. Sure, I don't have a problem with short release cycles. But the idea that the best way to figure out if something is wanted is to release it goes against my experience so far--there are lots of people who don't want their cheese moved constantly. And who don't want to pay for "beta, unstable" software. That was feedback we got during our beta phase: "Come back when you have something that isn't constantly changing in big ways."

    I think there's a balance to be struck between optimizing for MTBF and MTRS. Especially for things that aren't idle entertainment, where going down for even a few hours has bad consequences beyond just losing money or breaching SLAs.

    Thoughts? Experiences with CD?

    You need really good automated tests on every level to pull it off successfully.
    And if you are doing something where it's really important that it doesn't break, the CD part is probably a pretty bad idea. Especially if there are legal requirements for correctness. For web pages and unimportant stuff like facebook or youtube, the CD part is perfectly fine though. :mlp_shrug:


  • ♿ (Parody)

    @Benjamin-Hall said in Continuous (smoke) delivery?:

    But the idea that the best way to figure out if something is wanted is to release it goes against my experience so far

    Sometimes this really is the case. I often deal with users who simply cannot imagine doing anything other than what they've always done. Which maybe used to be some horrible manual thing in spreadsheets and notebooks, and then became some semi-kludge that was partly automated in our system in concert with their external spreadsheets, etc. It's not so much moving the cheese as it is getting them to realize that there is some cheese in the first place.

    That said, no one's life relies on my software.



  • @Carnage Our automated testing is...well...not comprehensive--not the unit tests, the integration tests, or the actual functional tests. Especially since a chunk of the functionality we'd be CD'ing relies on interop with apps, which have to (by nature) have completely different, separate release cycles.



  • @boomzilla said in Continuous (smoke) delivery?:

    @Benjamin-Hall said in Continuous (smoke) delivery?:

    But the idea that the best way to figure out if something is wanted is to release it goes against my experience so far

    Sometimes this really is the case. I often deal with users who simply cannot imagine doing anything other than what they've always done. Which maybe used to be some horrible manual thing in spreadsheets and notebooks, and then became some semi-kludge that was partly automated in our system in concert with their external spreadsheets, etc. It's not so much moving the cheese as it is getting them to realize that there is some cheese in the first place.

    That said, no one's life relies on my software.

    My gut (and it's nothing more than gut, really) says that there are a couple of different regimes. In an area without good competitors, you can move fast and break things. In an area with competition, you have to have a much bigger initial product and have good solid designs and concepts all along the way. Or else no one will want it even if it's an improvement, because they won't be able to see that it's an improvement.

    And it's a chicken-and-egg situation--we've now spent about a year developing this product (not counting the pre-release period). Our feedback has been minimal, despite having some users. I'm not sure how releasing faster would change that, personally.



  • @Benjamin-Hall said in Continuous (smoke) delivery?:

    Especially for things that aren't idle entertainment, where going down for even a few hours has bad consequences beyond just losing money or breaching SLAs.

    IIRC, you're working on computer-aided dispatch software, or something of the sort, where people are depending on the software to get emergency services where they're needed in possibly life-threatening emergencies. "Move fast and break things" is a really, really bad development model for that sort of thing.



  • @HardwareGeek said in Continuous (smoke) delivery?:

    @Benjamin-Hall said in Continuous (smoke) delivery?:

    Especially for things that aren't idle entertainment, where going down for even a few hours has bad consequences beyond just losing money or breaching SLAs.

    IIRC, you're working on computer-aided dispatch software, or something of the sort, where people are depending on the software to get emergency services where they're needed in possibly life-threatening emergencies. "Move fast and break things" is a really, really bad development model for that sort of thing.

    The actual life-threatening emergencies part is the other product, thankfully. Ours is just the scheduling side. But that's also (as we've heard from users) somewhat time/mission critical, especially with COVID. And telling customers "just maintain a paper backup, because we can be down for an hour or so randomly" doesn't, to me, make the cut.


  • Considered Harmful

    @Carnage said in Continuous (smoke) delivery?:

    really good automated tests

    Note, this just means "actually good", but don't think that this lowers the bar.


  • Considered Harmful

    @HardwareGeek said in Continuous (smoke) delivery?:

    @Benjamin-Hall said in Continuous (smoke) delivery?:

    Especially for things that aren't idle entertainment, where going down for even a few hours has bad consequences beyond just losing money or breaching SLAs.

    IIRC, you're working on computer-aided dispatch software, or something of the sort, where people are depending on the software to get emergency services where they're needed in possibly life-threatening emergencies. "Move fast and break things" is a really, really bad development model for that sort of thing.

    Haha, you'd think so... but that's quite likely something someone surprisingly nearby literally does for a living, although most probably they strive for all the breakage to happen in test. Stable software tends to have merely had all its errors suppressed.


  • Considered Harmful

    @Benjamin-Hall said in Continuous (smoke) delivery?:

    In an area with competition, you have to have a much bigger initial product and have good solid designs and concepts all along the way.

    You never established "bigger"; I could grant you perhaps "more compelling".



  • @Benjamin-Hall said in Continuous (smoke) delivery?:

    In an area with competition, you have to have a much bigger initial product and have good solid designs and concepts all along the way.

    Depends on your marketing team.



  • @Benjamin-Hall said in Continuous (smoke) delivery?:

    And telling customers "just maintain a paper backup, because we can be down for an hour or so randomly" doesn't, to me, make the cut.

    Maybe I'm not looking at it The Right Way™, but I always saw CD not as "we must deliver all the changes all the time", but more as "we can deliver changes automatically whenever we need to". The important thing is that there's a completely automated way of delivering to production and no manual setup required. Ideally, you shouldn't have to be down at all.



  • @homoBalkanus said in Continuous (smoke) delivery?:

    @Benjamin-Hall said in Continuous (smoke) delivery?:

    And telling customers "just maintain a paper backup, because we can be down for an hour or so randomly" doesn't, to me, make the cut.

    Maybe I'm not looking at it The Right Way™, but I always saw CD not as "we must deliver all the changes all the time", but more as "we can deliver changes automatically whenever we need to". The important thing is that there's a completely automated way of delivering to production and no manual setup required. Ideally, you shouldn't have to be down at all.

    I was thinking more of the "Mean Time to Restore Service"--you shouldn't need to restore service. And any fix is going to take ~1 hour minimum just to do the tests to make sure your fix is the right one.

    And worse, there are things where you're not "down", but the data being displayed isn't right either. "Down" is easy to twig to, but "it looks wrong for those customers" isn't.

    I don't have a problem with having an automated process to deploy, though.



  • @Benjamin-Hall so we're talking about bugs that slipped into prod here? How would CD make it worse? I would see it as having a faster/more consistent way to actually deliver your fix.



  • @homoBalkanus said in Continuous (smoke) delivery?:

    @Benjamin-Hall so we're talking about bugs that slipped into prod here? How would CD make it worse? I would see it as having a faster/more consistent way to actually deliver your fix.

    I'm not sure. I have a thought that "CD" means a bunch of different things.

    If it's just purely "you can hit a button and it deploys without manual intervention", that's one thing that I don't really object to.

    If it's "every merge or push deploys to production", then I've got objections. But I'm not sure they're well-founded. It's more like flags going up.

    And in this case, the dichotomy presented by the speaker was that you can either optimize your process to reduce the number of times things break or optimize to make it easy to fix them when they do break. Which seems to suggest (barring magical free lunches) that doing the second (which he says you should do unless you're building "space hardware or embedded medical devices") means that you'll be breaking things more often. He even approvingly makes a comparison between a BMW (which supposedly doesn't break, but when it does, it's expensive) and an (old) Jeep (which breaks all the time but is fast to fix). Personally, I think there's got to be a middle ground here.

    And there are a lot of things that bug me about this whole MVP thing--every time we (as in our team) have done that, we've pivoted to something else...and ended up with this half-done MVP that we'd intended to come back to once we got feedback...but we haven't gotten feedback. And in the meantime, our architecture and designs are warped by these half-baked ideas. Design is important, in my opinion. And CD and Agile and thoughtful design don't really go together well, from what I can see.

    So in my mind, CD (in the actual "move fast and break things" mode) is solving a problem we don't have, in a way that makes problems we do have worse. But I could be wrong. I'm a notable pessimist, so I may be being overly pessimistic.


  • Considered Harmful

    @Benjamin-Hall said in Continuous (smoke) delivery?:

    I'm a notable pessimist, so I may be being overly pessimistic.

    Not pessimistic enough. The tenets around CD assume a basic level of competence that is a matter only of myth to most people in IT. Most teams can barely read their own code.


  • And then the murders began.

    @Benjamin-Hall said in Continuous (smoke) delivery?:

    I'm not sure. I have a thought that "CD" means a bunch of different things.

    If it's just purely "you can hit a button and it deploys without manual intervention", that's one thing that I don't really object to.

    If it's "every merge or push deploys to production", then I've got objections. But I'm not sure they're well-founded. It's more like flags going up.

    In my head, a CI/CD environment looks something like this:

    • New changes occur in a feature branch
    • New changes are tested in isolation
    • Validated changes are merged to the main branch
    • Main branch gets deployed on a regular basis

    If that main branch deploys to production: you're doing CD.

    If that main branch deploys to a staging environment: you're just doing CI.
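
    Concretely, something like this minimal sketch is what I have in mind (the script names are placeholders I made up, not anyone's real pipeline):

    ```python
    # Minimal sketch of the flow above: build once, run the tests, then ship the
    # same artifact. All the script names are invented placeholders.
    import subprocess

    def run(cmd: list[str]) -> None:
        subprocess.run(cmd, check=True)  # abort the pipeline on the first failure

    def build_and_test() -> None:
        run(["./build.sh"])      # produce the deployable artifact from the main branch
        run(["./run_tests.sh"])  # the automated suite is the only gate

    def deploy(target: str) -> None:
        run(["./deploy.sh", target])

    if __name__ == "__main__":
        build_and_test()
        deploy("staging")        # stopping here is "just CI"
        # deploy("production")   # doing this automatically on every merge is CD
    ```

    The staging and production paths are identical except for that last step, which is what makes going from CI to CD mostly a policy decision rather than a technical one.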

    And in this case, the dichotomy presented by the speaker was that you can either optimize your process to reduce the number of times things break or optimize to make it easy to fix them when they do break. ... Personally, I think there's got to be a middle ground here.

    I'm not sure there is a middle ground, though. Or at least, I'm having a real hard time picturing what it could be.

    And there are a lot of things that bug me about this whole MVP thing--every time we (as in our team) have done that, we've pivoted to something else...and ended up with this half-done MVP that we'd intended to come back to once we got feedback...but we haven't gotten feedback.

    That's a problem specific to your environment. We've been taking that MVP approach on some stuff I've worked on, and the feedback has helped shape where we go from there.

    Our audience has also been internal users instead of external, which probably helps.


  • Banned

    Continuous Delivery as in making a full package build and running the full test suite (if any) automatically for every commit in the repo (or at least the master branch)? That's the bare minimum that should be done for every project.

    Continuous Delivery as in making a new release every day, possibly several times a day, and maybe even automatically deploying it everywhere? That's much more questionable.



  • @Unperverted-Vixen said in Continuous (smoke) delivery?:

    @Benjamin-Hall said in Continuous (smoke) delivery?:

    I'm not sure. I have a thought that "CD" means a bunch of different things.

    If it's just purely "you can hit a button and it deploys without manual intervention", that's one thing that I don't really object to.

    If it's "every merge or push deploys to production", then I've got objections. But I'm not sure they're well-founded. It's more like flags going up.

    In my head, a CI/CD environment looks something like this:

    • New changes occur in a feature branch
    • New changes are tested in isolation
    • Validated changes are merged to the main branch
    • Main branch gets deployed on a regular basis

    If that main branch deploys to production: you're doing CD.

    If that main branch deploys to a staging environment: you're just doing CI.

    Currently, our process goes like this:

    • New changes in a feature branch (or bugfix branch if fixing a bug) off of develop, tested in a sandbox environment for a regression/functional run.
    • Validated changes are merged into develop.
    • Every week, develop gets merged into a release candidate branch (called release) and that gets deployed to a staging environment for full integration tests and a full regression sweep.
    • Assuming those pass (which they usually do), release gets merged into master, which gets deployed to production, and then master gets back-merged into develop (usually a no-op unless there were hotfixes or changes in staging).
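
    In git terms, the weekly cut looks roughly like this (simplified; the staging deploys and test runs happen in between the steps):

    ```python
    # Rough sketch of the weekly release cut described above, using the branch
    # names from our flow. Deploys and test runs are omitted for brevity.
    import subprocess

    def git(*args: str) -> None:
        subprocess.run(["git", *args], check=True)

    # Weekly: cut a release candidate from develop and hand it to staging.
    git("checkout", "release")
    git("merge", "develop")
    # ...staging runs the full integration tests and regression sweep here...

    # If staging passes: promote to master, which gets deployed to production.
    git("checkout", "master")
    git("merge", "release")

    # Back-merge master into develop (usually a no-op unless there were hotfixes).
    git("checkout", "develop")
    git("merge", "master")
    ```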

    But that release for our server-side environments (i.e. not the apps) happens for both products together (because one particular interface is currently entangled). So either Product 1 and MyProduct get released together, or neither does.

    I think the goal is to disentangle them so we could do MyProduct releases on a different cadence. But they're talking "Continuous Deployment" (is that different from Continuous Delivery?) as the term, and the speaker we were told to listen to seems to be talking about deploying every darn time you make a change. Talking about things like "if your change set between deployments is just those 50 lines or that one configuration setting...". Which seems to me to be a deployment at the ticket level or smaller. Which concerns me.

    And in this case, the dichotomy presented by the speaker was that you can either optimize your process to reduce the number of times things break or optimize to make it easy to fix them when they do break. ... Personally, I think there's got to be a middle ground here.

    I'm not sure there is a middle ground, though. Or at least, I'm having a real hard time picturing what it could be.

    Surely there's a middle ground between "release once, ever, everything must be perfect" and "release a dozen times a day" (not an exaggeration, he actually talks about that latter case and his "space hardware" crack points to the former). Some time to test and make sure you're doing something that works, while not waiting for perfection either.

    And there are a lot of things that bug me about this whole MVP thing--every time we (as in our team) have done that, we've pivoted to something else...and ended up with this half-done MVP that we'd intended to come back to once we got feedback...but we haven't gotten feedback.

    That's a problem specific to your environment. We've been taking that MVP approach on some stuff I've worked on, and the feedback has helped shape where we go from there.

    Our audience has also been internal users instead of external, which probably helps.

    Having a purely internal audience helps. Ours is a mostly not-tech-savvy, very busy group of people with schedules that have frequent interrupts. And often the people actually using it aren't the people we hear directly from. Often our users are volunteers. Oh, and the "happy path" is that they're mostly using it in one big push a month, with ad hoc use in the middle (when things change). So I think our natural feedback cycle is slower.



  • @Gąska said in Continuous (smoke) delivery?:

    Continuous Delivery as in making a full package build and running the full test suite (if any) automatically for every commit in the repo (or at least the master branch)? That's the bare minimum that should be done for every project.

    I'm not sure about every commit. Maybe (if it's automated) every push, but that happens as a matter of course (because our functional testing is in remote sandboxes, so you have to push, build, and deploy to the sandbox to even see if it works for anything but trivial stuff). Although those builds are manually triggered, rather than automatic. But you can't really avoid doing one.

    Our automated tests, however, are mostly trash. And aren't being seriously run. I can definitely agree that there's room to improve there.

    Continuous Delivery as in making a new release every day, possibly several times a day, and maybe even automatically deploying it everywhere? That's much more questionable.

    And that's what I'm afraid of. Because that's what would require lots of decoupling, and they're talking about moving more to having the developers do the production releases (at least for our product), which means some kind of actual production release. And we have 3 production environments (US, LATAM, and EU), each with different config (to some degree).



  • One of the things that's coming (based on the architect's documentation so far) is feature flags. And the more I read about those, the less I want to have to deal with them. Combinatorial explosion anyone? Plus, having to go back and clean it all up later seems like a disaster waiting to happen.
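
    To make the worry concrete, here's a toy sketch (the flag and function names are invented, nothing from our codebase) of what every flagged path ends up looking like:

    ```python
    # Toy feature-flag sketch; flag and function names are made up for illustration.
    FLAGS = {
        "new_schedule_view": False,
        "bulk_reminder_emails": True,
    }

    def is_enabled(name: str) -> bool:
        # Unknown flags default to off, so a typo silently disables a feature.
        return FLAGS.get(name, False)

    def render_legacy_schedule(user: str) -> str:
        return f"legacy schedule for {user}"

    def render_new_schedule(user: str) -> str:
        return f"new schedule for {user}"

    def render_schedule(user: str) -> str:
        # Both branches ship to production; the flag decides which one runs.
        if is_enabled("new_schedule_view"):
            return render_new_schedule(user)
        return render_legacy_schedule(user)

    print(render_schedule("alice"))  # stays on the legacy path until the flag flips
    ```

    Two independent flags means four combinations to test, three means eight, and every dead branch has to be hunted down and deleted once the feature sticks--which is exactly the cleanup nobody gets around to.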


  • And then the murders began.

    @Benjamin-Hall said in Continuous (smoke) delivery?:

    • Assuming those pass (which they usually do), release gets merged into master, which gets deployed to production, and then master gets back-merged into develop (usually a no-op unless there were hotfixes or changes in staging).

    The merge from release into master sounds like a bad idea to me. The binaries deployed to production should be the same as the binaries deployed to staging. But I think that's getting off-track...

    I think the goal is to disentangle them so we could do MyProduct releases on a different cadence. But they're talking "Continuous Deployment" (is that different from Continuous Delivery?) as the term, and the speaker we were told to listen to seems to be talking about deploying every darn time you make a change. Talking about things like "if your change set between deployments is just those 50 lines or that one configuration setting...". Which seems to me to be a deployment at the ticket level or smaller. Which concerns me.

    In your case, where you have two entangled products, I would be concerned.

    For more straightforward webapps, it's not a big deal. The app I worked on a decade ago had ticket-level deployment, and I loved it. I'd like to be able to get back to that; right now the limiting factor is our inability to spin up feature branch environments on-premises, but our shift to Azure app services will hopefully let me introduce that.

    (And then the limiting factor becomes our auditors. sigh)

    Surely there's a middle ground between "release once, ever, everything must be perfect" and "release a dozen times a day" (not an exaggeration, he actually talks about that latter case and his "space hardware" crack points to the former). Some time to test and make sure you're doing something that works, while not waiting for perfection either.

    I'm not saying that having a manual testing step is a bad way to operate; but it makes it harder to fix things that are broken. A given release is unlikely to have just the one item in it, meaning that a fix has to wait on any other changes in the release to be validated too. (And if they fail validation, for them to be fixed and revalidated.) That's why I don't see a middle ground in the speaker's dichotomy.



  • @Unperverted-Vixen said in Continuous (smoke) delivery?:

    @Benjamin-Hall said in Continuous (smoke) delivery?:

    • Assuming those pass (which they usually do), release gets merged into master, which gets deployed to production, and then master gets back-merged into develop (usually a no-op unless there were hotfixes or changes in staging).

    The merge from release into master sounds like a bad idea to me. The binaries deployed to production should be the same as the binaries deployed to staging. But I think that's getting off-track...

    Generally we're (pretty) careful to make sure that master and release (before we build a new RC) are the same. So master is more of a "this is what was deployed" branch. And it's all containers, so it has to be rebuilt anyway. Different configurations and all.

    I think the goal is to disentangle them so we could do MyProduct releases on a different cadence. But they're talking "Continuous Deployment" (is that different from Continuous Delivery?) as the term, and the speaker we were told to listen to seems to be talking about deploying every darn time you make a change. Talking about things like "if your change set between deployments is just those 50 lines or that one configuration setting...". Which seems to me to be a deployment at the ticket level or smaller. Which concerns me.

    In your case, where you have two entangled products, I would be concerned.

    For more straightforward webapps, it's not a big deal. The app I worked on a decade ago had ticket-level deployment, and I loved it. I'd like to be able to get back to that; right now the limiting factor is our inability to spin up feature branch environments on-premises, but our shift to Azure app services will hopefully let me introduce that.

    (And then the limiting factor becomes our auditors. sigh)

    Yeah. If it was a pure web app + database, that'd be one thing. But we have

    • two "core" products, each with independent clients. There's one shared, entangled client that we're trying to decouple (a move I totally support), the web interface.
    • My product only has 1 main supporting service and a couple "product-agnostic" services that it uses (one to handle outbound emailing via a 3rd party, one to handle all of the account stuff that's shared), plus that entangled web "front end" (ok, PHP container that does direct DB access, plus Vue.JS layer that's kinda sorta not really an SPA, but it's also not really a proper paged website....yuck).
    • The other product has...a lot. Ranging from SMTP daemons to pieces to send to all the push services + actually make phone calls (via a 3rd party) + all sorts of other stuff. Some of which is quite...crufty.
    • Plus the shared account stuff, which has some of the most icky code I've ever seen and has lots of 3rd-party dependencies including payment services, etc.

    Currently, all of that (all the non-mobile-client stuff) is our Cloud release, which happens weekly. And we've gotten pretty stable about weekly releases. Apps go out a bit less frequently, about 2 every 3 weeks if there are changes.

    We have several "test" environments, including the ability to spin up a mini-(almost)-replica of production on the fly, which we use for functional testing for each feature ticket. Then they all get bundled together and deployed to our QA (really staging) environment, which ironically is the least like production (for legacy reasons).

    Surely there's a middle ground between "release once, ever, everything must be perfect" and "release a dozen times a day" (not an exaggeration, he actually talks about that latter case and his "space hardware" crack points to the former). Some time to test and make sure you're doing something that works, while not waiting for perfection either.

    I'm not saying that having a manual testing step is a bad way to operate; but it makes it harder to fix things that are broken. A given release is unlikely to have just the one item in it, meaning that a fix has to wait on any other changes in the release to be validated too. (And if they fail validation, for them to be fixed and revalidated.) That's why I don't see a middle ground in the speaker's dichotomy.

    We have two categories of fixes:

    • Regular defects. Those get prioritized and go in the normal flow along with new features, and we try to keep our defect count low (< 15 that aren't merged into develop).
    • Hotfixes. These are for things that are actively making customers' lives harder (i.e., we broke something with something we released). A good week has none of these; a bad one may have 2. These are branched off of master, merged into master and deployed (after being tested in a master-branch-built sandbox).
    • Bonus: Release-blocking defects, which are things caught in the staging environment that aren't in production. Those branch off of release, and (sometimes) cause a complete redo of the testing round. We try to avoid those where possible, lest QA get angry (and more importantly, lest they spend all their time there, not doing functional testing).

    Honestly, it's all fairly sane. There are gaps (our automated testing in all forms is lacking). But I'm not sure that moving to "deploy multiple times a day" or "have a nest of feature switches, controlled by other code that can be buggy as well" is going to improve much of that.

    Our feedback loop isn't dominated by dev time--it's limited by not getting feedback from users. And this won't help that at all, although our increased use of analytics (sorry) will help some...if people know what questions to ask.

    I'll spare everyone the rant about how being "data driven" and "scientific" and "using metrics" isn't actually a panacea, and often makes life worse. It turns out having been an educator taught me the incredible difficulty of coming up with good metrics, and the need to evaluate the metrics...which requires more metrics.



  • @Benjamin-Hall said in Continuous (smoke) delivery?:

    One of the things that's coming (based on the architect's documentation so far) is feature flags. And the more I read about those, the less I want to have to deal with them. Combinatorial explosion anyone? Plus, having to go back and clean it all up later seems like a disaster waiting to happen.

    I have the same feeling about them. And it is just a feeling, I have no data to back it up. I just hate it when code starts resembling the UN courtyard.

    They are meant to be temporary--a way to turn off a new (possibly buggy) feature--and then to be removed as the feature stabilises. They tend to accumulate because no one takes the time to actually clean them up. One of the upper managers actually had to make it a company goal in order to get them under control.


  • BINNED

    @Benjamin-Hall said in Continuous (smoke) delivery?:

    In an area without good competitors,

    competition is irrelevant if the users can't switch :thinking-ahead:



  • @Benjamin-Hall said in Continuous (smoke) delivery?:

    One of the things that's coming (based on the architect's documentation so far) is feature flags. And the more I read about those, the less I want to have to deal with them. Combinatorial explosion anyone? Plus, having to go back and clean it all up later seems like a disaster waiting to happen.

    I've worked on several projects that tried to use them. They are of limited use: sometimes using them requires a huge amount of code duplication, and sometimes a small part of the code is simply forgotten about and ends up active in prod. Feature flags are a poor man's feature branch. But they are useful sometimes.


  • 🚽 Regular

    @homoBalkanus said in Continuous (smoke) delivery?:

    They are meant to be temporary--a way to turn off a new (possibly buggy) feature--and then to be removed as the feature stabilises. They tend to accumulate because no one takes the time to actually clean them up.

    weeps



  • @Benjamin-Hall said in Continuous (smoke) delivery?:

    Our feedback loop isn't dominated by dev time--it's limited by not getting feedback from users. And this won't help that at all, although our increased use of analytics (sorry) will help some...if people know what questions to ask.

    QFT. If your main bottleneck is getting affirmative non-feedback from users, then a way to more rapidly get affirmative non-feedback from users is not going to solve your problem. The solution is getting not-non-feedback. Or, a business analyst putting on their big-boy pants and acting as the users' agent.


  • ♿ (Parody)

    @Unperverted-Vixen said in Continuous (smoke) delivery?:

    @Benjamin-Hall said in Continuous (smoke) delivery?:

    And there are a lot of things that bug me about this whole MVP thing--every time we (as in our team) have done that, we've pivoted to something else...and ended up with this half-done MVP that we'd intended to come back to once we got feedback...but we haven't gotten feedback.

    That's a problem specific to your environment. We've been taking that MVP approach on some stuff I've worked on, and the feedback has helped shape where we go from there.

    Our audience has also been internal users instead of external, which probably helps.

    #MeToo. Often it's nearly impossible to get people to tell you what they want without having something tangible in front of them. The MVP thing works well when you really do deploy something to where users can get their hands on it and give you course corrections.



  • @boomzilla said in Continuous (smoke) delivery?:

    @Unperverted-Vixen said in Continuous (smoke) delivery?:

    @Benjamin-Hall said in Continuous (smoke) delivery?:

    And there are a lot of things that bug me about this whole MVP thing--every time we (as in our team) have done that, we've pivoted to something else...and ended up with this half-done MVP that we'd intended to come back to once we got feedback...but we haven't gotten feedback.

    That's a problem specific to your environment. We've been taking that MVP approach on some stuff I've worked on, and the feedback has helped shape where we go from there.

    Our audience has also been internal users instead of external, which probably helps.

    #MeToo. Often it's nearly impossible to get people to tell you what they want without having something tangible in front of them. The MVP thing works well when you really do deploy something to where users can get their hands on it and give you course corrections.

    Sure. But we've done that (gotten it in front of people) and gotten little or no feedback. Which says (to me) that getting things in front of people isn't the bottleneck. It's the getting feedback part.


  • ♿ (Parody)

    @Benjamin-Hall said in Continuous (smoke) delivery?:

    @boomzilla said in Continuous (smoke) delivery?:

    @Unperverted-Vixen said in Continuous (smoke) delivery?:

    @Benjamin-Hall said in Continuous (smoke) delivery?:

    And there are a lot of things that bug me about this whole MVP thing--every time we (as in our team) have done that, we've pivoted to something else...and ended up with this half-done MVP that we'd intended to come back to once we got feedback...but we haven't gotten feedback.

    That's a problem specific to your environment. We've been taking that MVP approach on some stuff I've worked on, and the feedback has helped shape where we go from there.

    Our audience has also been internal users instead of external, which probably helps.

    #MeToo. Often it's nearly impossible to get people to tell you what they want without having something tangible in front of them. The MVP thing works well when you really do deploy something to where users can get their hands on it and give you course corrections.

    Sure. But we've done that (gotten it in front of people) and gotten little or no feedback. Which says (to me) that getting things in front of people isn't the bottleneck. It's the getting feedback part.

    Oh, I get that, too. We had a customer project manager who was a control freak and was really careful about giving people permission to use certain new features. Probably a result of politics, but the bottom line was that we'd have stuff sitting around collecting dust for years and then they'd see us use it incidentally when we'd do a demo of something else.



  • @boomzilla said in Continuous (smoke) delivery?:

    @Benjamin-Hall said in Continuous (smoke) delivery?:

    @boomzilla said in Continuous (smoke) delivery?:

    @Unperverted-Vixen said in Continuous (smoke) delivery?:

    @Benjamin-Hall said in Continuous (smoke) delivery?:

    And there are a lot of things that bug me about this whole MVP thing--every time we (as in our team) have done that, we've pivoted to something else...and ended up with this half-done MVP that we'd intended to come back to once we got feedback...but we haven't gotten feedback.

    That's a problem specific to your environment. We've been taking that MVP approach on some stuff I've worked on, and the feedback has helped shape where we go from there.

    Our audience has also been internal users instead of external, which probably helps.

    #MeToo. Often it's nearly impossible to get people to tell you what they want without having something tangible in front of them. The MVP thing works well when you really do deploy something to where users can get their hands on it and give you course corrections.

    Sure. But we've done that (gotten it in front of people) and gotten little or no feedback. Which says (to me) that getting things in front of people isn't the bottleneck. It's the getting feedback part.

    Oh, I get that, too. We had a customer project manager who was a control freak and was really careful about giving people permission to use certain new features. Probably a result of politics, but the bottom line was that we'd have stuff sitting around collecting dust for years and then they'd see us use it incidentally when we'd do a demo of something else.

    We still hear "oh, you've got an app?" or "Oh, you can do <thing>?" pretty frequently. From CSRs and Marketing even!

