Third party integration... like pulling teeth



  • Third party integration... just... why... what?

    I've been involved in a lot of integration projects, my most recent with a large supermarket chain.

    Some common themes I've come across in 3rd party integration:
    Around 9 times out of 10, the 3rd party doesn't know how their own system works.
    The documentation they ask you to implement to is a lie.
    They will not admit when a bug is a bug, and they will not fix issues.
    Their support is full of non-technical idiots.
    They can't get meaningful or even basic error reporting from their own system.
    They will make changes, breaking your fragile implementation to their brittle spec, and then deny anything has changed.

    I could go on.

    My current dilemma
    The current integration is a simple flat file XML exchange based on an XML standard, yet despite producing files to spec, the third party cannot tell me why the files are failing.

    First of all - I wrote the integration to spec. The spec mentioned a test mode and I made sure to build this facet into the integration so that we could easily switch into production mode.

    I put failsafes in using common sense - no test file would be accepted into a production system, and vice-versa.
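A guard like that is only a few lines. Here's a minimal sketch of the idea — the `mode` attribute and element names are invented, since the actual spec isn't quoted in the post:

```python
# Hypothetical sketch of the test/production failsafe described above.
# The "mode" attribute and element names are invented for illustration.
import xml.etree.ElementTree as ET

def accept_file(xml_text: str, system_mode: str) -> bool:
    """Reject any file whose declared mode doesn't match the receiving system."""
    root = ET.fromstring(xml_text)
    file_mode = root.get("mode", "production")
    return file_mode == system_mode

# A test file is refused by a production system, and vice versa.
test_doc = '<exchange mode="test"><order id="1"/></exchange>'
prod_doc = '<exchange mode="production"><order id="1"/></exchange>'
```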

    Upon completion, I spoke to my contact in order to get the ball rolling.

    "So, what steps do we need to take to start testing this, I assume you will send files in test mode until we are happy it's working?"

    "oh... we don't have a test mode"

    I knew at that point that it was going to be a bumpy road.

    Next, my contact took several weeks to come back with an answer on whether the files were even getting into their system, despite me screenshotting and even downloading a copy of the files I'd uploaded to their FTP to prove that they should be able to see them.

    My contact ignored questions in the emails I sent, answering mostly with pointless questions like "can you send some more test files?" (shouldn't we be looking at why the previous files went AWOL??)

    After weeks and weeks of messing about, project way over budget due to constant communication, I've still not found an issue with the files I'm sending - the most recent email was from my contact telling me that they had provided a corrected file based on one of my original files.

    I ran a text compare - both files were identical (apart from spacing).

    I give up.

    Has anyone else suffered from this nightmare?


  • Impossible Mission Players - A

    @charles-pockert said in Third party integration... like pulling teeth:

    Has anyone else suffered from this nightmare?

    Ask me when I'm not sober how well Oculus plays with Steam in the UE4 engine. It's still not fixed after a year (maybe more?) and best we can figure is that it's because Oculus is commandeering things from Steam or some such. Nobody is talking between the three. And no answers or fixes.


  • Discourse touched me in a no-no place

    @charles-pockert said in Third party integration... like pulling teeth:

    I ran a text compare - both files were identical (apart from spacing).

    Their regexps probably expect specific spacing.

    Of course they're parsing stuff with regexps and producing things with string substitution. That's the most idiotic option by far so that's what will be deployed (possibly as a hack done in Perl long ago that nobody dares touch now) and that's what you're stuck with.

    (Integration programming is horribly hard, especially as it is a BITCH to test as nothing actually behaves according to spec.)
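The spacing theory is easy to demonstrate: a hand-rolled pattern that hard-codes one exact layout rejects a document any real parser treats as identical. (The regex below is a made-up illustration, not the third party's actual code.)

```python
import re
import xml.etree.ElementTree as ET

# Two documents that are identical apart from spacing.
a = '<order id="42"><qty>1</qty></order>'
b = '<order  id="42" >\n  <qty>1</qty>\n</order>'

# A brittle "parser" that expects one exact layout (made up, for illustration).
brittle = re.compile(r'^<order id="(\d+)"><qty>(\d+)</qty></order>$')

# A real parser sees the same structure in both.
def parse(doc):
    root = ET.fromstring(doc)
    return (root.get("id"), root.findtext("qty").strip())
```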


  • Impossible Mission Players - A

    @dkf said in Third party integration... like pulling teeth:

    nothing actually behaves according to spec

    Spec? Where we're programming we don't need a spec!


  • Discourse touched me in a no-no place

    @tsaukpaetra said in Third party integration... like pulling teeth:

    Where we're programming we don't need a spec!

    That's good, because you've not got one. You might have something that says it is a spec, but that's pure LIES. Match the bytes that the thing expects or GTFO.

    Also it isn't usually any better when the other side uses a system that generates documents correctly, as it probably still has some complicated internal state model that it expects its client (likely some horrible javascript designed for IE6 and then hacked around with since then) to follow to the letter, yet you're wanting to use a sensible language that doesn't force you to deal with pulling bits of HTML pages and splatting them into iframes. And their app probably breaks horribly in other ways once you start to really get the integration working too.

    Had that once when I enabled some integration code I'd been working on and it worked correctly, rapidly dumping millions of files into a system that had been designed to work with at most a couple of hundred, and where the user-facing GUI had never had a proper paging system implemented. I actually had a senior developer shout at me over that, despite the fact that not one interaction I'd pushed in was technically wrong. Apparently they didn't realise that some scientific data acquisition systems generate a truly enormous amount of data, and that this has non-trivial consequences for other systems… 😆

    So yes, I truly hate integration coding. It provides all sorts of ways to show people that their assumptions are wrong, and that can cause a lot of friction.


  • Impossible Mission - B

    At my first programming job, we had a 3-tier architecture with a handful of middle-tier servers. I ended up being responsible for two of them, named IntegrationServer and AutomationIntegrationServer. (I never did find out what the difference was, or what it was that caused any given integration to be implemented on the one rather than the other.)

    So... yeah. I was The Integrations Guy for a system that, among other things, managed the majority of broadcast TV throughout the United States. No pressure. Most of it worked surprisingly well, though we did frequently have to deal with issues from one specific third-party vendor whose data was always a mess and tended to unexpectedly change in new and interesting ways.

    Probably the most memorable thing I did was help set up a massive, multi-party integration between several vendors, almost all of which you've probably heard of, as a part of bringing online support for DirecTV. Our side of things involved supporting two integrations, which I'll simply call Protocol 1 and Protocol 2.

    Somehow, despite there being most of the same people involved, it was like night and day. For Protocol 1, we got a detailed spec, a weekly conference call with managers and developers from each of the involved parties to keep everyone on the same page, a staging server to test everything and ensure it worked, and so on. We had that thing running smoothly more than a month before go-live, and when they turned it on in production, I didn't get a single bug report sent my way.

    Protocol 2, I got a basic spec... and that was pretty much it. I did my best to code to it, but my repeated requests for more (and more in-depth) information ended up going nowhere, with predictable consequences once we went live. I have never understood how it was possible for that to happen when it was the same people involved with both projects!


  • Discourse touched me in a no-no place

    @masonwheeler When you have a real spec — by that I mean that if people find a variance between the code and the spec, they change the code to try to match the spec — then treasure it because you'll be able to make it work by adhering to something that is openly readable. It's all too uncommon. The reverse (“spec” is a zero-effort attempt to describe what the code is doing) is the much more common case, if you've got one at all that is. Lots of places (including the huge majority of hipster startups) don't seem to value even that much.



  • Ugh... One of our COTS components with which we have a pretty tight integration is about to go end of life, and upgrading to the next version wasn't feasible since they were going all cloud and we really need it to be local. So I've been de-integrating it and just putting the functionality in our code. ZOMG it's so much simpler!

    It's not too bad, since we'd already made interfaces for doing all the things and just talked to that system in the back end. So now it's mainly adding some tables and just saving shit locally instead of making a bunch of remote calls and dealing with their harebrained way of setting up the data.

    And actually, some of the stuff that system does is being taken over by a new COTS system, though it's actually a lot less than the old one was so the overall setup is probably going to be saner. Except that it runs on MS SQLServer, of course (we're on Oracle). Ugh.



  • The problem is when your integration partner places lower value on integration than you do (because they are bigger and/or a monopolist). So they don't allocate their best people to handle it on their end. Instead, you get half-competent life-timers or juniors/contractors who have no idea how the system works.


  • Impossible Mission - B

    @boomzilla said in Third party integration... like pulling teeth:

    the overall setup is probably going to be saner. Except that it runs on MS SQLServer, of course (we're on Oracle).

    Yes, that definitely sounds saner! 🚎



  • @charles-pockert said in Third party integration... like pulling teeth:

    XML
    I ran a text compare - both files were identical (apart from spacing).

    Reminds me of something I ran into while doing a co-op term with a government department. They had some process that used XML files, and their normal procedure to generate the files was to use Notepad to open some master "template" file, copy and paste several fields from some table in an internal webapp into the file, and send the result. So I wrote a little util which took as input one row from that table (still copy-and-pasted, but all at once rather than individual fields), and spit out the XML which could be saved into a file. We tried sending that file, and after a while (at the other end, I think someone had to manually upload the file to whatever system consumed it) we got the response that it had failed. And, just like in the OP, I compared a successful file with a failed one, and they only differed in spacing.



  • @charles-pockert My favorite was integrating with a healthcare insurance billing product that only allowed (via their API) plans to start at the start of a month and end at the end of a month.

    A lot of employers, when an employee leaves suddenly, will fudge the insurance to keep going through the end of the month, but we didn't (for reasons too long to get into).

    That's not any of your problems (although they also had a lot of your problems); the problem was that the API creator made assumptions that had no factual grounding.



  • @boomzilla said in Third party integration... like pulling teeth:

    And actually, some of the stuff that system does is being taken over by a new COTS system, though it's actually a lot less than the old one was so the overall setup is probably going to be saner. Except that it runs on MS SQLServer, of course (we're on Oracle). Ugh.

    Heh...just heard status from one of the guys working on the new integration. We have to import a bunch of data. Their documentation apparently lists a variety of formats that you can use for the import. However, apparently that's all for an old version and almost all of it doesn't apply. Of course.


  • mod

    @charles-pockert said in Third party integration... like pulling teeth:

    Has anyone else suffered from this nightmare?

    Yeah. I've had nightmare third party integrations with multiple third parties. One of my worst starts here (lounge access required). The rest of that thread is peppered with various business WTFs and integration nightmares.

    ETA: Here's another one that isn't in the lounge.



  • @charles-pockert A few years ago, we started building an integration with a flight search/booking API. We asked how we could test their API without making a real booking. The reply was: "make a regular booking and email me the ID, I'll cancel it manually."

    I was willing to make our CI shoot an email to him after every test run, but the project stalled before we could make a single booking.


  • Discourse touched me in a no-no place

    @dkf said in Third party integration... like pulling teeth:

    Of course they're parsing stuff with regexps and producing things with string substitution. That's the most idiotic option by far so that's what will be deployed (possibly as a hack done in Perl long ago that nobody dares touch now) and that's what you're stuck with.

    Building XML with string substitution is one of my favorite ways to build XML.

    Because it's about 3000 times less expensive than fucking using any XML library ever invented.
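For what it's worth, the cheap approach really is fine for plain integers, and really is wrong the day a value contains markup characters; escaping is the part the library was quietly doing for you. A minimal sketch:

```python
# Sketch of why string-built XML is fine until it isn't: escaping.
from xml.sax.saxutils import escape

def naive(value):
    # String substitution: fine for integers, wrong the day a value
    # contains &, < or >.
    return "<name>%s</name>" % value

def escaped(value):
    # Same one-liner, but with the escaping a proper library applies.
    return "<name>%s</name>" % escape(str(value))
```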



  • @weng said in Third party integration... like pulling teeth:

    @dkf said in Third party integration... like pulling teeth:

    Of course they're parsing stuff with regexps and producing things with string substitution. That's the most idiotic option by far so that's what will be deployed (possibly as a hack done in Perl long ago that nobody dares touch now) and that's what you're stuck with.

    Building XML with string substitution is one of my favorite ways to build XML.

    Because it's about 3000 times less expensive than fucking using any XML library ever invented.

    Unless you have one that automatically converts objects into XML, like JAXB (that's obviously Java, but I'm sure C# has equivalent stuff).


  • Discourse touched me in a no-no place

    @boomzilla Yes, we have that. But I generally don't feel like having 30 classes around to generate XML that amounts to "put an integer here" which is most of the XML I generate.


  • Discourse touched me in a no-no place

    @boomzilla said in Third party integration... like pulling teeth:

    Unless you have one that automatically converts objects into XML, like JAXB (that's obviously Java, but I'm sure C# has equivalent stuff).

    The good thing about JAXB is that you can run it in streaming mode. If you can arrange to deliver the collections of POJOs in little bits, you can ship a vast constructed document out with only a (comparatively; it is Java after all) tiny amount of memory in use in the middle. Or (at a slightly higher level) you can set up your own streaming handler and really knock the memory usage down to zippo.

    All that assumes you've got a decent JAXB implementation, but the default ones are capable of this sort of thing so that's actually fairly likely.
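The streaming idea isn't JAXB-specific. The same shape sketched with Python's stdlib SAX writer (element names invented for the example) shows why memory stays flat: each record is emitted and then forgotten.

```python
# Streaming XML output, as described for JAXB above, sketched with the
# stdlib SAX writer; the element names are invented for the example.
import io
from xml.sax.saxutils import XMLGenerator

def write_records(out, records):
    gen = XMLGenerator(out, encoding="utf-8")
    gen.startDocument()
    gen.startElement("records", {})
    for rec in records:                      # could be a generator of millions
        gen.startElement("record", {"id": str(rec)})
        gen.endElement("record")
    gen.endElement("records")
    gen.endDocument()

buf = io.StringIO()
write_records(buf, range(3))
```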



  • @weng said in Third party integration... like pulling teeth:

    @dkf said in Third party integration... like pulling teeth:

    Of course they're parsing stuff with regexps and producing things with string substitution. That's the most idiotic option by far so that's what will be deployed (possibly as a hack done in Perl long ago that nobody dares touch now) and that's what you're stuck with.

    Building XML with string substitution is one of my favorite ways to build XML.

    Because it's about 3000 times less expensive than fucking using any XML library ever invented.

    That's like saying that my favorite way of lighting my hair on fire is with a match because lighters cost too much.

    Either way, you end up getting burned, with smoke coming off of your scalp.

    Filed Under: Or, going back to the topic of setting your hair on fire...


  • Notification Spam Recipient

    @weng said in Third party integration... like pulling teeth:

    @dkf said in Third party integration... like pulling teeth:

    Of course they're parsing stuff with regexps and producing things with string substitution. That's the most idiotic option by far so that's what will be deployed (possibly as a hack done in Perl long ago that nobody dares touch now) and that's what you're stuck with.

    Building XML with string substitution is one of my favorite ways to build XML.

    Because it's about 3000 times less expensive than fucking using any XML library ever invented.

    I'd bet you the Rust ones are pretty good.


  • Discourse touched me in a no-no place

    @pie_flavor said in Third party integration... like pulling teeth:

    I'd bet you the Rust ones are pretty good.

    I'd accept that only if by “pretty good” you mean “provably never accesses memory wrongly or generates leaks”.

    There are plenty of other ways to behave poorly that Rust can't protect against in any way more systematically than any other programming language. There's a whole hierarchy of bugs from very low level ones up to very high level ones, and Rust definitely doesn't stop any of the higher ones. It can't; Rust doesn't have a sufficiently complex type theory in order to do that (and if it did, it'd be a ridiculously complicated theorem prover, as it is entirely possible to tie program correctness to higher mathematics).

    People write correct code in many programming languages. Rust's… peculiarities… only help with a very small part of the problem.


  • Notification Spam Recipient

    @dkf said in Third party integration... like pulling teeth:

    @pie_flavor said in Third party integration... like pulling teeth:

    I'd bet you the Rust ones are pretty good.

    I'd accept that only if by “pretty good” you mean “provably never accesses memory wrongly or generates leaks”.

    There are plenty of other ways to behave poorly that Rust can't protect against in any way more systematically than any other programming language. There's a whole hierarchy of bugs from very low level ones up to very high level ones, and Rust definitely doesn't stop any of the higher ones. It can't; Rust doesn't have a sufficiently complex type theory in order to do that (and if it did, it'd be a ridiculously complicated theorem prover, as it is entirely possible to tie program correctness to higher mathematics).

    People write correct code in many programming languages. Rust's… peculiarities… only help with a very small part of the problem.

    I just mean that it's probably extremely memory efficient.


  • Discourse touched me in a no-no place

    @pie_flavor said in Third party integration... like pulling teeth:

    I just mean that it's probably extremely memory efficient.

    That's not necessarily a good thing, FWIW. Being extremely memory efficient probably means you're doing more allocations (which is very time-costly) and when dealing with large amounts of XML, the real efficiency is in not having it all in memory at once at all. OTOH, the steps you take to deal with large XML are ones that actually slow down processing of small documents quite a bit; large data chunks really do need different approaches precisely because there's a threshold beyond which keeping everything in memory is utterly terrible. Where that threshold is… depends mostly on the deployment as it depends on what else is going on in the application.

    As I said earlier, there's a hierarchy of (possible) bugs; it's almost certainly an infinite hierarchy too as I see no reason for there to be an upper limit. Higher level languages make lower level bugs less of a problem, but there's usually a cost to doing so in terms of expressibility (and some of us really do need weird low-level stuff).


  • SockDev

    Third party integrations are always like pulling teeth. Always.


  • And then the murders began.

    @arantor said in Third party integration... like pulling teeth:

    Third party integrations are always like pulling teeth. Always.

    Not always. Sometimes they’re like pulling fingernails instead.



  • @dkf said in Third party integration... like pulling teeth:

    Being extremely memory efficient probably means you're doing more allocations (which is very time-costly)

    Indeed, memory allocations may be costly, but there are techniques which can address that to a very large degree with specialized allocators, block caches, etc.



  • At least there's an API.

    Back when I was a trainee at my company, I was working on an internal application for onboarding new people. Aside from the usual operations like adding them to Active Directory and creating an e-mail, the IT also wanted to add the users to the office printer.

    The printer didn't have an API for that, but it did have a web interface, kind of like the ones on routers. So after a few hours of playing with Fiddler and figuring out how to extract values from the incoming HTML of the intermediate pages, the solution was in place - fragile as it could be, but working.

    As far as I can tell, both the application and the code I wrote are still in place.
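That kind of integration usually boils down to regexing values out of the intermediate pages, something like this hypothetical sketch (the real printer's pages and field names were of course different):

```python
# Hypothetical sketch of scraping a value out of an intermediate page,
# the way an API-less web UI forces you to. Field names are invented.
import re

TOKEN_RE = re.compile(r'name="csrf_token"\s+value="([^"]+)"')

def extract_token(html: str) -> str:
    m = TOKEN_RE.search(html)
    if not m:
        # The fragility in one line: any layout change breaks the scrape.
        raise ValueError("page layout changed - the scrape just broke")
    return m.group(1)

page = '<form><input type="hidden" name="csrf_token" value="abc123"></form>'
```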


  • Discourse touched me in a no-no place

    @arantor said in Third party integration... like pulling teeth:

    Third party integrations are always like pulling teeth. Always.

    We have one in flight that's literally only hard because....

    Uh. Their web service only exposes about half the functions, so we had to write our own web service that wraps the command-line version of those functions.

    Nevermind. It's like pulling teeth.


  • Discourse touched me in a no-no place

    @thecpuwizard said in Third party integration... like pulling teeth:

    @dkf said in Third party integration... like pulling teeth:

    Being extremely memory efficient probably means you're doing more allocations (which is very time-costly)

    Indeed memory allocations may be costly, but there are techniques which can address that to a very large degree with specialize allocators, block caches, etc.

    Here's the thing: there are algorithms to manage the cost of some of the most common operations, provided you aren't too keen on being 100% memory efficient. As a case in point, most efficient buffer systems manage a notional "size" and "capacity" as different values, with size ≤ capacity as a guaranteed safety property (and you're only allowed to address elements up to the size, of course), so the cost of appending lots of items can be amortised constant time; it turns out that the number of allocations and the number of copies of the existing elements are the real expensive underlying operations (the amortisation relies on increasing the capacity by a multiplicative factor, so the density of reallocations relative to the size goes down). Using precise allocation avoids the cost, but only if you can know the capacity ahead of time, and that's really annoying for things like strings; getting it wrong and only adding one item at a time incurs a whopper of a cost once you've got capacities in the range of hundreds of millions.

    There are many levels of correctness. Insisting on some kinds (exact minimal allocations) can cause trouble elsewhere (performance, which is a really important metric in reality). This sort of thing is absolutely a real concern.
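The size/capacity scheme described above fits in a few lines. This is a sketch of the standard doubling strategy, not any particular library's implementation:

```python
class GrowableBuffer:
    """Size vs. capacity with multiplicative growth, as described above:
    appends are amortised O(1) because reallocations get rarer as we grow."""
    def __init__(self):
        self.capacity = 4
        self.size = 0
        self.data = [None] * self.capacity
        self.reallocations = 0

    def append(self, item):
        if self.size == self.capacity:
            self.capacity *= 2            # the multiplicative growth factor
            new = [None] * self.capacity
            new[:self.size] = self.data   # the expensive copy
            self.data = new
            self.reallocations += 1
        self.data[self.size] = item
        self.size += 1

buf = GrowableBuffer()
for i in range(1000):
    buf.append(i)
```

A thousand appends cost only a handful of reallocations; appending one at a time into an exactly-sized buffer would have cost a thousand.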



  • @boomzilla this morning:
    [Flail.gif]

    So the migration I'd mentioned... We have lots of views that pull data from the other system into nice formats. Of course, that system has a habit of using key/value tables, so it definitely is handy. But of course, it's doing some complicated stuff. So when we had a big report or something where we only needed a bit of data that wasn't part of that mess, we'd just go directly to the source tables and ignore the view.

    Well, as part of our migration process, we've been updating (against my advice) reports to use the views, and then when we flip the switch on the migration we can update the views to look at our sane table structure.

    But this morning I got a ticket for a report that's timing out now due to this.



  • Impossible Mission Players - A

    @boomzilla
    Fire up the indexing cannon, capitan!



  • @izzion Negative. Not an index problem.


  • SockDev

    @boomzilla said in Third party integration... like pulling teeth:

    @izzion Negative. Not an index problem.

    EAV table with

    [image: billions]

    of rows?



  • @accalia Hmm...good question....let's see...

    Nope, currently only 2,075,390 rows.


  • Fake News

    @boomzilla OK, so 0.02 of the billions.


  • Impossible Mission Players - A

    @boomzilla
    Preposterous! All SQL problems can be fixed with indexes or rebuilding the tables or shrinking the log files! Requests from my users on my Kanban board tell me so!


  • kills Dumbledore



  • @jaloopa Well, not quite that bad, but enough to cause a problem when running a large enough report.



  • @boomzilla The proper way nowadays is to use NoSQL.

