Azure bites



  • So for the last couple of days I've been trying to deploy an Azure function app⁰. It had been haphazardly cobbled together by pasting C# scripts (.csx) straight into the portal, and the time has come to move it over to a proper build process with .dll deployment and a staging slot. In the process we also wanted to upgrade it to use managed identity instead of connection strings.

    • :wtf: 1: The way to configure that is inconsistent.
      • The SQL client gained support for using managed identity by simply replacing the username and password in the connection string with a different parameter¹. You adjust the connection string and it just works.
      • The Event Hubs² client also got (a more limited variant of) that feature [1], but the event trigger³ does not accept a connection string without username and password; instead you have to define an environment variable with the suffix __fullyQualifiedNamespace [2] and set it to just the fully qualified name of the ‘namespace’ (~ server). Of course you have to reimplement this illogic yourself if you also need to construct the client yourself, perhaps in some other function.
      • The storage clients⁴ don't get that feature at all, but for the bindings you can define environment variables with the suffixes __{blob,queue,table}ServiceUri [3] set to the appropriate base URLs. So you are now defining several variables instead of one (a connection string is the same for all the services), and you again have to implement the logic yourself if you also want to construct the client yourself. And yes, it's URLs with protocol this time and just domain names in the previous one, why do you ask? (A sketch of all three variants, including the do-it-yourself client construction, follows after the footnotes.)
    • :wtf: 2: If you get it wrong, it does not log any error, it just does not work. Or at least we couldn't find any error in either the appinsights⁵ or the log service in the development interface.
    • :wtf: 3: So when it worked for us with events, but not storage, we were left blind, trying various combinations of the settings. Until I thought to check the version of the extension, and … well, the app is linked against an older version of that package. And that does not … apparently … support it.
      • :wtf: 3.1: The documentation [3] does actually say it requires version 5 (and we are still using version 4), but it's kinda drowned in the wall of text. That's my general gripe with this Azure documentation—the useful information is always drowned in walls of quintuplicated text.
      • :wtf: 3.2: I didn't find any way to view the documentation for the older version of the library. Even though there are incompatible changes between them. At least I … now … noted the upgrading guide, but I wouldn't call that exactly easy to find either.
    • :wtf: 4: The newer version of the library that implements the bindings uses a different library for the actual connections. It's the third incarnation of the connectors package (WindowsAzure.Storage, Microsoft.Azure.Storage and now Azure.Storage). They are just creating more work for the poor sod of a colleague who writes the code of the functions and has to upgrade it over and over. And it's not new, I remember reading about it from Joel almost 20 years ago, though I'm not sure which article it was (this one comes close, but I think there was a better one).
    • bonus :wtf:: Apropos permissions. There is a “Contributor” role that can do anything to a resource like a storage account except a) grant permissions and b) manipulate the data inside. But it can read the API key (and the connection string that includes it), and that allows full access to the data. So someone with that role—and it's the traditionally recommended role for managing a resource—can't e.g. connect another resource via managed identity, where each such connection would be visible in the resource configuration, but they can take the connection string and wire it up that way with no checking at all.

    ⁰ The Azure incarnation of ‘serverless’.
    ¹ Authentication=Active Directory Default will try several ways to get a token, including managed identity and the Azure CLI; there is a different option for user-assigned managed identity, because then you have to tell it which one to use.
    ² Event Hubs is effectively a hosted Kafka, though it defaults to the AMQP 1.0 protocol.
    ³ Declared via an attribute on a parameter of the function to be called, so the client is constructed by the framework rather than by you directly.
    ⁴ A storage account groups four kinds of data: blobs (~files only accessible over HTTPS), files (also accessible over SMB and possibly NFS), queues (simple, but cheap) and tables (indexed by primary key only, no joins).
    ⁵ The log and metric collection service, based roughly on OpenTracing.
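
    To make the inconsistency concrete, here is a minimal sketch (not code from the app; the connection-setting prefixes “MyEventHubs” and “MyStorage”, the server/account names and the hub name are made up) of the three variants side by side, including the do-it-yourself client construction:

    // Sketch only — assumes the Microsoft.Data.SqlClient, Azure.Identity,
    // Azure.Messaging.EventHubs and Azure.Storage.Blobs packages.
    using System;
    using Azure.Identity;
    using Azure.Messaging.EventHubs.Producer;
    using Azure.Storage.Blobs;
    using Microsoft.Data.SqlClient;

    // Tries environment variables, managed identity, Azure CLI, … in turn.
    var credential = new DefaultAzureCredential();

    // 1. SQL: swap user/password for the Authentication keyword; nothing else changes.
    var sql = new SqlConnection(
        "Server=myserver.database.windows.net;Database=mydb;Authentication=Active Directory Default;");

    // 2. Event Hubs: the trigger wants MyEventHubs__fullyQualifiedNamespace set to the bare
    //    host name; constructing the client yourself means re-reading that same variable.
    var ehNamespace = Environment.GetEnvironmentVariable("MyEventHubs__fullyQualifiedNamespace");
    var producer = new EventHubProducerClient(ehNamespace, "my-hub", credential);

    // 3. Storage: the binding wants MyStorage__blobServiceUri (plus __queueServiceUri /
    //    __tableServiceUri for the other services) — full URLs this time, not host names.
    var blobUri = Environment.GetEnvironmentVariable("MyStorage__blobServiceUri");
    var blobs = new BlobServiceClient(new Uri(blobUri), credential);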



  • @Bulb That double-underscore prefix makes my C/C++ brain ... itch.

    Very few people outside the category "language uber-nerd" seem to have the slightest respect for (and/or knowledge of) the point in the standards for both C and C++ that all names with a double-underscore anywhere and/or an initial single underscore followed by a capital letter are reserved by the implementation for its own purposes.

    Granted, the specific names you cite are part of the system, but ...


  • Discourse touched me in a no-no place

    @Steve_The_Cynic They should reserve all names with a letter C anywhere in them.


  • Banned

    @Steve_The_Cynic said in Azure bites:

    all names with a double-underscore anywhere

    Nope, it's just the leading double underscore that's reserved.


  • Grade A Premium Asshole

    @Steve_The_Cynic said in Azure bites:

    standards for both C and C++ that all names with a double-underscore anywhere and/or an initial single underscore followed by a capital letter are reserved by the implementation for its own purposes.

    Not quite. The "double-underscore anywhere" thing is only true in C++. In C, __ is only reserved at the start of identifiers.


  • And then the murders began.

    @Bulb said in Azure bites:

    • bonus :wtf:: Apropos permissions. There is a “Contributor” role that can do anything to a resource like a storage account except a) grant permissions and b) manipulate the data inside. But it can read the API key (and the connection string that includes it), and that allows full access to the data. So someone with that role—and it's the traditionally recommended role for managing a resource—can't e.g. connect another resource via managed identity, where each such connection would be visible in the resource configuration, but they can take the connection string and wire it up that way with no checking at all.

    That's where you need to use Azure Policy to deny deploying a storage account with Shared Key authorization enabled. With that disabled, everything is forced to authenticate with managed identity. :half-trolling:

    Ignore poor Azure Files sitting in the corner, sobbing.



  • @Steve_The_Cynic said in Azure bites:

    C/C++

    They are environment variables. And it's for .нет, which does not follow C/C++ conventions anyway.

    The functionapp configuration is a bit of a :wtf: too.

    See, in normal asp.нет, each service can declare a configuration class, and the dependency injection will automatically deserialize it from appsettings.json, with overrides from the environment. Most application servers in all languages have a similar mechanism. And in asp.нет, __ in the environment variable name is the scope separator. So Modules__DataAccess__ConnectionString corresponds to { "Modules": { "DataAccess": { "ConnectionString": "…" }}} and the DataAccess class will get a configuration object passed to its constructor with ConnectionString filled in. That makes sense.
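
    A minimal sketch of that standard binding (the section and class names are made up):

    using Microsoft.Extensions.Configuration;

    var config = new ConfigurationBuilder()
        .AddJsonFile("appsettings.json", optional: true)
        .AddEnvironmentVariables()   // Modules__DataAccess__ConnectionString overrides the JSON
        .Build();

    // The same path is written with ':' in code and '__' in environment variable names.
    var options = config.GetSection("Modules:DataAccess").Get<DataAccessOptions>();
    // options.ConnectionString now holds whichever value won.

    public class DataAccessOptions
    {
        public string ConnectionString { get; set; }
    }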

    But in functionapps I haven't seen a mechanism to automatically bind the configuration (always passed in environment variables) to objects. The use of __ suggests they use it somewhere inside the framework, but all the samples just tell you to Environment.GetEnvironmentVariable("…") it. Long live 👻.



  • An Azure fuckery thread, eh? Welcome to a world of pain. You've barely scratched the surface. Between upgrading to newer versions of libraries and shit buried under a ton of documentation, you're gonna have a great time. When you search for solutions, check if there is an issue logged for it on GitHub. That's how I solved 90% of my Azure issues.



  • @Bulb said in Azure bites:

    but it's kinda drowned in the wall of text. That's my general gripe with this Azure documentation—the useful information is always drowned in walls of quintuplicated text.

    Generally with MS documentation, this is the thing I always have a problem with. There are plenty of docs, but the relevant info is hidden under so much fluff you'd most probably miss it.

    Like how hard you hold it is always a matter of delicate balance when it comes to jerking off with sandpaper around your dick, so is relevant and easily discoverable documentation I suppose.

    GCP gets it right somehow IMO.



  • @stillwater said in Azure bites:

    You've barely scratched the surface.

    I know. I already have another two things to rant about! I just have to get around to posting them.



  • @Bulb said in Azure bites:

    @stillwater said in Azure bites:

    You've barely scratched the surface.

    I know. I already have another two things to rant about! I just have to get around to posting them.

    I've literally paused my work for this my guy. Please post it soon!!!



  • @Unperverted-Vixen said in Azure bites:

    That's where you need to use Azure Policy to deny deploying a storage account with Shared Key authorization enabled. With that disabled, everything is forced to authenticate with managed identity. :half-trolling:

    … most of my use-cases need to be able to grant temporary SAS tokens though. But if I can prohibit keys and still allow a “Storage Blob Data Delegator” to issue SAS tokens (even better if I can limit the expiration time), I would probably use it.
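
    Something like this is the flow I'd want to keep working — a user-delegation SAS needs only an AAD token, no account key (a sketch; account, container and blob names are made up):

    using System;
    using Azure.Identity;
    using Azure.Storage.Blobs;
    using Azure.Storage.Sas;

    var service = new BlobServiceClient(
        new Uri("https://myaccount.blob.core.windows.net"),
        new DefaultAzureCredential());

    // Needs the generate-user-delegation-key permission (the delegator-style role), not the account key.
    var key = service.GetUserDelegationKey(DateTimeOffset.UtcNow, DateTimeOffset.UtcNow.AddHours(1));

    var sas = new BlobSasBuilder
    {
        BlobContainerName = "uploads",
        BlobName = "report.pdf",
        Resource = "b",
        ExpiresOn = DateTimeOffset.UtcNow.AddMinutes(30),   // the short expiry I'd want to enforce
    };
    sas.SetPermissions(BlobSasPermissions.Read);

    // Sign with the delegation key and build the final URL to hand out.
    var blobUri = service.GetBlobContainerClient("uploads").GetBlobClient("report.pdf").Uri;
    var sasUri = new BlobUriBuilder(blobUri)
    {
        Sas = sas.ToSasQueryParameters(key.Value, service.AccountName)
    }.ToUri();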


  • Considered Harmful

    @stillwater said in Azure bites:

    That's how I solved 90% of my Azure issues.

    I have a 100% effective solution- use AWS.


  • And then the murders began.

    @Gribnit That is an effective solution, because they make it too painful to build anything so you just quit your job.


  • Considered Harmful

    @Unperverted-Vixen said in Azure bites:

    @Gribnit That is an effective solution, because they make it too painful to build anything so you just quit your job.

    Can't hear you, you're not in my security policy!



  • I had a number of university clients in the Moodle era of my life who wanted to be on Azure because sweet sweet MS educational discounts.

    They all wanted to move off Azure in under a year because it was just a shitshow and gladly took the extra price from AWS because shit just worked. And it wasn’t actually that much extra in the long run.



  • @Gribnit said in Azure bites:

    I have a 100% effective solution- use AWS.

    That warrants a whole 'nother thread. Don't get me started.



  • @Bulb said in Azure bites:

    @Steve_The_Cynic said in Azure bites:

    C/C++

    They are environment variables. And it's for .нет, which does not follow C/C++ conventions anyway.

    For sure. It was just a comment about the collision between my C/C++ brain and this stuff.



  • So … for one project we want to set up a separate Azure tenant and subscription. The reason is twofold: so that the customer (usually guest) accounts don't mix with employee accounts and so that the product (or the subsidiary created for it) can be potentially sold in future without disrupting it (since the “app registration” can't be moved).

    So, after some internal discussion it was approved and the colleague responsible for Azure asked the cloud service provider (CSP) to create it. It's been a month and … it's not done yet.

    • :wtf: 1: Azure is provided by Microsoft. You can take a credit card and sign up with them directly and have it working in a couple of minutes, plus the time to find the relevant information (like which domain name you want to use in the account names and the right contacts to fill in). But if you don't want to pay with a credit card, you go through an intermediary, a ‘cloud service provider’. I don't think they bring any other added value, but apparently some managers see it differently.

    • :wtf: 2: For two weeks we couldn't get hold of anybody at the CSP, because the contact we knew was out of office and didn't bother to set up an automatic reply pointing to a substitute.

    • :wtf: 3: When we finally got a hold of them, we ran into another issue. The project is to be (also) sold through the Azure Marketplace, so a colleague wanted to start testing the integration (on the existing subscription, with the ‘offer’ in private preview mode) and got this error:

      One or more of your subscriptions is a Cloud Solution Provider (CSP) subscription that cannot deploy Private Offers.

      Waaaat? Apparently you can actually provide private offers (because the partner registration is not tied to a specific tenant), but you can't purchase them! If you are using a CSP, that is.

    • So colleagues fished out or set up some testing subscription which can get those offers (though it probably couldn't be actually charged for them if they were not for $0) and we asked the CSP to proceed with setting up the production subscription. They … are hopefully working on it now.


  • And then the murders began.

    @Bulb That sounds like a really long-winded way to say “don’t use a Cloud Service Provider, do stuff yourself”.



  • @Arantor said in Azure bites:

    I had a number of university clients in the Moodle era of my life who wanted to be on Azure because sweet sweet MS educational discounts.

    They all wanted to move off Azure in under a year because it was just a shitshow and gladly took the extra price from AWS because shit just worked. And it wasn’t actually that much extra in the long run.

    We have a number of clients that want to use Azure because they are in the retail industry and Amazon is their competitor.

    Which, unfortunately, means that it does not matter how much money they sink in that endeavor; as long as not a single penny goes to Jeff Bezos, it's a win.


  • Discourse touched me in a no-no place

    @Bulb said in Azure bites:

    I don't think they bring any other added value

    The value brought is the ability to use invoices (a genuine BFD at many large organizations!) and pay on a slightly different schedule (such as yearly instead of strictly monthly).



  • @dkf Yeah, using invoices is a big effing deal. It's more expensive and more complicated, but companies playing corporate still do it that way.



  • @Unperverted-Vixen said in Azure bites:

    @Bulb That sounds like a really long-winded way to say “don’t use a Cloud Service Provider, do stuff yourself”.

    Yes, it is. Unfortunately some corporate types have a mental block when it comes to credit cards.



  • @Unperverted-Vixen said in Azure bites:

    @Bulb That sounds like a really long-winded way to say “don’t use a Cloud Service Provider, do stuff yourself”.

    On the other hand, some books on cloud architecture tell you that setting up and maintaining services like Kubernetes yourself will be an extreme pain in the ass - better rent it from M$.
    :surprised-pikachu:
    In the end, you will suffer, no matter how you decide.


  • And then the murders began.

    @BernieTheBernie said in Azure bites:

    On the other hand, some books on cloud architecture tell you that setting up and maintaining services like Kubernetes yourself will be an extreme pain in the ass - better rent it from M$.

    You should still rent it from M$.

    You just shouldn't hire some other company to sit between you and M$.



  • @Kamil-Podlesak said in Azure bites:

    as long as not a single penny goes to Jeff Bezos, it's a win.

    Your ideas are intriguing to me and I wish to subscribe to your newsletter.



  • @BernieTheBernie said in Azure bites:

    @Unperverted-Vixen said in Azure bites:

    @Bulb That sounds like a really long-winded way to say “don’t use a Cloud Service Provider, do stuff yourself”.

    On the other hand, some books on cloud architecture tell you that setting up and maintaining services like Kubernetes yourself will be an extreme pain in the ass - better rent it from M$.

    The term “cloud service provider” here really means an “intermediary to the actual cloud service provider”. That is, you can order the service directly from M$, who is the actual service provider, but (except for really big customers, I guess) you have to pay with a credit card. Since many companies don't like that, there are intermediaries that resell the M$ service under terms slightly more palatable to financial directors, and those are the ones M$ calls “cloud service providers”.

    :surprised-pikachu:
    In the end, you will suffer, no matter how you decide.

    This layer of indirection only gets in the way of the technical work, for all the wrong legal and accounting reasons (not financial ones; it obviously adds a bit of extra cost).


    As for actual kubernetes, well

    1. The point of kubernetes is that you can rent it from M$ or Google or Amazon or at least a dozen other hosting providers and it behaves close enough to the same in each of them. So even if you rent it managed by the provider, it gives you the option to switch providers any time.
    2. Installing kubernetes on your own servers isn't that bad. The problem is that as it evolves, you have to do relatively frequent upgrades, and for each upgrade you have to review whether there are any changes to the installation procedure and new options to set. And there are a lot of options, so choose an installation method that fixes most of the choices for you; the common path is better tested.
    3. And if you have a lot of infrastructure (which is the point where you want to go from rented clusters to your own), there are tools for managing multiple kubernetes clusters over a bunch of servers (like gardener or rancher). And by then you have enough admin staff to deal with the complexity too.


  • @Kamil-Podlesak said in Azure bites:

    We have a number of clients that want to use Azure because they are in the retail industry and Amazon is their competitor.

    Which, unfortunately, means that it does not matter how much money they sink in that endeavor; as long as not a single penny goes to Jeff Bezos, it's a win.

    They still have Google (that's an advertising company, it shouldn't be a competitor to them) and a lot of smaller cloud/hosting providers (probably around 20 options). Those won't have as many services, but going with just a kubernetes and maybe a database (almost everyone offers at least postgresql) and deploying everything else yourself on top of that—which you want anyway to avoid vendor lock-in—they would be plenty fine.



  • So Kubernetes, since it internally uses OAuth2, can be federated with another identity provider, and Azure supports enabling such integration with their AAD. Since the other identity provider can do anything it wants, the kubernetes client, kubectl, just calls a helper to get the token. So Microsoft wrote a helper, kubelogin. So far, so good. But

    Using Azure CLI login mode

    az login
    
    # by default, this command merges the kubeconfig into ${HOME}/.kube/config
    az aks get-credentials -g ${RESOURCE_GROUP_NAME} -n ${AKS_NAME}
    
    
    # kubelogin by default will use the kubeconfig from ${KUBECONFIG}. Specify --kubeconfig to override
    # this converts to use azurecli login mode
    kubelogin convert-kubeconfig -l azurecli
    
    # voila!
    kubectl get nodes
    

    The az aks get-credentials command writes appropriate configuration into the config file for kubectl. So obviously you are logged in with az. But no, they can't just put the token in the config, or write the helper invocation that gets it from az which already has it. No, they configure it to get it via the device login flow. And immediately tell you to use a helper command to switch it yourself. :wtf: :angry:

    And yes, the transformation is somewhat non-trivial, because some other arguments need to be removed in addition to changing the --login.

    :wtf:² Most of their libraries and clients support automatically checking all the potential ways to get the token—environment variables, managed identity, shared token cache, Azure CLI, interactive—and using the first one that looks like it should work. But no, this client just uses the one you tell it and that's it.
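
    For contrast, the chained behaviour the rest of the SDK gets from Azure.Identity looks roughly like this (a sketch; the client-id variable only matters for a user-assigned managed identity):

    using System;
    using Azure.Identity;
    using Azure.Storage.Blobs;

    var credential = new DefaultAzureCredential(new DefaultAzureCredentialOptions
    {
        // Only needed to pick a user-assigned managed identity; hypothetical setting.
        ManagedIdentityClientId = Environment.GetEnvironmentVariable("AZURE_CLIENT_ID"),
    });

    // Environment variables, managed identity, shared token cache, Azure CLI, interactive browser …
    // whichever yields a token first wins. kubelogin could have defaulted to the same chain, but doesn't.
    var client = new BlobServiceClient(new Uri("https://myaccount.blob.core.windows.net"), credential);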



  • So a colleague has been trying to create an “api management” resource in Azure with terraform for quite a while now. It … just times out every time. So he created it manually and tried to import it. And … even that times out.

    Ok, the terraform-provider-azurerm is maintained by HashiCorp (who maintains terraform), not Microsoft, though of course they use Microsoft-provided client libraries, so it can be the fault of either of them.



  • @Bulb said in Azure bites:

    So … for one project we want to set up a separate Azure tenant and subscription. The reason is twofold: so that the customer (usually guest) accounts don't mix with employee accounts and so that the product (or the subsidiary created for it) can be potentially sold in future without disrupting it (since the “app registration” can't be moved).

    So, after some internal discussion it was approved and the colleague responsible for Azure asked the cloud service provider (CSP) to create it. It's been a month and … it's not done yet.

    • :wtf: 1: Azure is provided by Microsoft. You can take a credit card and sign up with them directly and have it working in a couple of minutes, plus the time to find the relevant information (like which domain name you want to use in the account names and the right contacts to fill in). But if you don't want to pay with a credit card, you go through an intermediary, a ‘cloud service provider’. I don't think they bring any other added value, but apparently some managers see it differently.

    • :wtf: 2: For two weeks we couldn't get hold of anybody at the CSP, because the contact we knew was out of office and didn't bother to set up an automatic reply pointing to a substitute.

    • :wtf: 3: When we finally got a hold of them, we ran into another issue. The project is to be (also) sold through the Azure Marketplace, so a colleague wanted to start testing the integration (on the existing subscription, with the ‘offer’ in private preview mode) and got this error:

      One or more of your subscriptions is a Cloud Solution Provider (CSP) subscription that cannot deploy Private Offers.

      Waaaat? Apparently you can actually provide private offers (because the partner registration is not tied to a specific tenant), but you can't purchase them! If you are using a CSP, that is.

    • So colleagues fished out or set up some testing subscription which can get those offers (though it probably couldn't be actually charged for them if they were not for $0) and we asked the CSP to proceed with setting up the production subscription. They … are hopefully working on it now.

    Aaaaaand… of course it was

    P-E-B-K-A-C

    … the marketplace offer has a checkbox to prevent it from being ordered through CSPs. Why? :mlp_shrug: It was checked, and the colleague setting it up didn't realize it was.



  • @Bulb said in Azure bites:

    If you get it wrong, it does not log any error, it just does not work.

    Welcome to the cloud!



  • So we have this project here that has some assorted stuff, mostly logic apps, deployed using ARM templates. We created them so that all have the same two parameters used to derive the resource names—so we can use the templates on the development, staging and production environments—but some also need a reference to the sharepoint.

    Fine, so the parameters are the same, so let's do

    for arm_template in azure/*.json; do
      az deployment group create -n "$build_id-$(basename $arm_template .json)" -f "$arm_template" -p @arm-parameters.json
    done
    

    with the same arm-parameters.json. That's how we work with all the other deployment tools like helm and terraform—the same parameter has the same meaning for all components and those tools don't mind extra parameters.

    But no, az has to mind. It just spits

    ERROR: {"code": "InvalidTemplate", "message": "Deployment template validation failed: 'The template parameters 'sharepointSiteId' in the parameters file are not valid; they are not present in the original template and can therefore not be provided at deployment time. The only supported parameters for this template are 'environment, prefix, suffix'. Please see https://aka.ms/arm-pass-parameter-values for usage details.'.", "additionalInfo": [{"type": "TemplateViolation", "info": {"lineNumber": 0, "linePosition": 0, "path": ""}}]}
    

    and 🖕 exits with non-zero status. Aaaaaand, of course, yeah, that's documented behavior.

    Fine, morons, I'll filter out the values that are not needed. That's what [jq] is for:

    jq --slurpfile t "$arm_template" \
        '.parameters |= with_entries(select(.key | in($t[0].parameters)))' \
        arm-parameters.json
    

    even looks pretty simple, see? Oh, my sweet summer child, you wish:

    jq: Bad JSON in --slurpfile t …/whatever.json: Invalid string: control characters from U+0000 through U+001F must be escaped at line 37, column 26
    

    Ok, of course :facepalm:, the template is not-strictly-a-JSON (the first issue it hit was a multi-line string, but it can also have comments and some other stuff) :headdesk:.

    🖕 Microsoft with a rusty spork!



  • @Bulb Is the problem described, that the files contain naked ASCII control characters, accurate?

    That said, in $JOB, we did have a problem that was eventually tracked to a naked NUL ('\000') character in a C source file. Gcc somehow didn't notice this, and did strange things as a result.



  • @Steve_The_Cynic said in Azure bites:

    @Bulb Is the problem described, that the files contain naked ASCII control characters, accurate?

    Well, yes. Where the control character is a newline (U+000A) inside a string. Strict JSON does not allow embedding literal newlines in strings, only as \n, but these ARM templates do allow them, because otherwise the longer embedded expressions would be Unreadable™. These expressions have a very LISPy feel, and the one I hit was broken over at least 5 lines to make it at least somewhat readable.



  • @Bulb said in Azure bites:

    for arm_template in azure/*.json; do
      az deployment group create -n "$build_id-$(basename $arm_template .json)" -f "$arm_template" -p @arm-parameters.json
    done
    

    So here's the :wtf:-worthy CodeSOD-style solution:

    for arm_template in azure/*.json; do
      params=$(
        while read param; do
            sed -n "/variables/q;s!.*\"${param%%=*}\".*!-p ${param}!p" "$arm_template"
        done <<EOS
    environment=stg
    prefix=bflm-psvz-
    sharepointSiteId=sites/mscerticon.sharepoint.com,some-uuid-bflmpsvz,other-uuid-bflmpsvz
    EOS
      )
      az deployment group create -n "staging-$build_id-$(basename $arm_template .json)" -f "$arm_template" $params
    done
    

    So what does this line of line-noise, sed -n "/variables/q;s!.*\"${param%%=*}\".*!-p ${param}!p" "$arm_template", do?

    • /variables/q stops processing when we hit the "variables" member—because the template always has members parameters, variables, resources and possibly dependencies and they are always written in that order. So by stopping on the variables key we only process the parameters.
    • If we hit the line with the parameter name on it, we print the full parameter value including the -p option to be added to the command line.
    • Fortunately the parameter values don't need to contain spaces, so we can let shell split the result on whitespace.
    • The parameters themselves are actually substituted here by the CI/CD server from a different parameter, but that's not really relevant to the gross hack in question.

    When all else fails, use sed!


  • ♿ (Parody)

    @Steve_The_Cynic said in Azure bites:

    @Bulb Is the problem described, that the files contain naked ASCII control characters, accurate?

    That said, in $JOB, we did have a problem that was eventually tracked to a naked NUL ('\000') character in a C source file. Gcc somehow didn't notice this, and did strange things as a result.

    The real tragedy is that @blakeyrat is no longer here to gloat.



  • Azure AD bite:

    So some time ago Microsoft introduced this federated identity credential thing, where you can log in to an app registration or a user-assigned managed identity with an access token issued by another OAuth2 provider, using the client credentials flow. So far, a good idea.

    :wtf: 1: This says:

    Creating a federation between two Azure AD identities from the same or different tenants isn't supported.

    Warum, kurwa‽

    … if they didn't explicitly prohibit it, it would allow:

    • Authenticating to a user assigned managed identity in one tenant with a managed identity in another tenant, therefore allowing the workload (VM, function app or similar) to access resources in the other tenant.
    • Authenticating to an app registration with managed identity, which would again allow the workload to access resources in other tenants, because an app registration can be granted access in other tenants, unlike managed identity, which can't.

    A highly requested feature for ages. But no, it's explicitly excluded and prohibited. In the dumbest way possible too; you can set it up and then it rejects the request when you try to use it.

    This by the way gives a distinct advantage to Kubernetes (AKS) pods over function apps, app services, their container variants and VMs, because in Kubernetes the workload identity uses Kubernetes' internal identity provider and that is supported for logging into app registrations.
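
    The AKS path is, roughly, the pod reading its projected Kubernetes token and handing it over as a client assertion (a sketch; the environment variable names are the ones the workload-identity webhook injects, as far as I know):

    using System;
    using System.IO;
    using Azure.Core;
    using Azure.Identity;

    // Kubernetes issues the pod a token, Azure.Identity trades it for an AAD token
    // for the app registration (or user-assigned managed identity) it is federated with.
    var credential = new ClientAssertionCredential(
        Environment.GetEnvironmentVariable("AZURE_TENANT_ID"),
        Environment.GetEnvironmentVariable("AZURE_CLIENT_ID"),
        () => File.ReadAllText(Environment.GetEnvironmentVariable("AZURE_FEDERATED_TOKEN_FILE")));

    // From here any SDK client accepts it like any other TokenCredential.
    var token = credential.GetToken(
        new TokenRequestContext(new[] { "https://management.azure.com/.default" }));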

    :wtf: 2: This federation depends on knowing the subject claim in the token. In most identity providers, this is the unique user identifier. But not in AAD. In AAD, the sub is a random jumble specific to the combination of the principal and the audience and apparently cannot be retrieved other than from an actual token (yeah, it's an older question, but I haven't found any newer with a different conclusion).

    … so if someone else wanted to do the same thing using their identity provider as the primary one, they'd have a hard time divining the necessary parameters. Ok, Microsoft being 🍆s is kinda expected, but still, the impossibility of just querying the subject identifiers for a bunch of principals is annoying as :fu:.
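
    In practice the least painful way to learn the sub seems to be to just get a token for the audience in question and look inside (a sketch; the scope is a hypothetical app registration):

    using System.IdentityModel.Tokens.Jwt;
    using Azure.Core;
    using Azure.Identity;

    // Acquire a token as the principal whose sub we want to register on the other side …
    var token = new DefaultAzureCredential()
        .GetToken(new TokenRequestContext(new[] { "api://my-app-registration/.default" }));

    // … and read the pairwise subject claim out of it.
    var sub = new JwtSecurityTokenHandler().ReadJwtToken(token.Token).Subject;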


  • Discourse touched me in a no-no place

    @boomzilla said in Azure bites:

    @Steve_The_Cynic said in Azure bites:

    @Bulb Is the problem described, that the files contain naked ASCII control characters, accurate?

    That said, in $JOB, we did have a problem that was eventually tracked to a naked NUL ('\000') character in a C source file. Gcc somehow didn't notice this, and did strange things as a result.

    The real tragedy is that @blakeyrat is no longer here to gloat.

    A naked NUL in source does hit the "Dude. WTF." button, especially in C.



  • @boomzilla Discord is :arrows:



  • @dkf said in Azure bites:

    @boomzilla said in Azure bites:

    @Steve_The_Cynic said in Azure bites:

    @Bulb Is the problem described, that the files contain naked ASCII control characters, accurate?

    That said, in $JOB, we did have a problem that was eventually tracked to a naked NUL ('\000') character in a C source file. Gcc somehow didn't notice this, and did strange things as a result.

    The real tragedy is that @blakeyrat is no longer here to gloat.

    A naked NUL in source does hit the "Dude. WTF." button, especially in C.

    That was my reaction when it was found. Félicitations to Philippe, though, for finding it.


  • Considered Harmful

    We don't always delete our SQL Server snapshots, but when we do, we delete them with the entire server!


  • Considered Harmful

    @LaoC :hanzo: in the other news yesterday.


  • Considered Harmful

    @Applied-Mediocrity said in Azure bites:

    @LaoC :hanzo: in the other news yesterday.

    :oh: I think I've been ignoring that thread since it got a bit too garagey.



  • @LaoC said in Azure bites:

    We don't always delete our SQL Server snapshots, but when we do, we delete them with the entire server!

    The funniest part is that they shot themselves in the foot with their own churn of refucktoring libraries. It was during the mass-rename, when they replaced the obsoleted version of the client library with a newer one, that they made the mistake.



  • So I was updating some X.509 certificates, which were about to expire, in app gateways.

    Microsoft has a “certificate order” resource, which generates a certificate and gets it signed by Go Daddy, and that resource did re-generate the certificate and get it re-signed a month before expiry, as expected. But then I still had to update the gateway, because:

    • The certificate order writes the certificate (as a pkcs#12 package) to a key-vault secret, and the gateway can pull the certificate from a key-vault,

    • but if you do it in the portal (as you usually do the first time around) it can

      • only pull it from a certificate item, whereas the certificate order stores it as a secret, and
      • will pull the current version and not update it when the key-vault entry is updated,

      so obviously that's how it had been set up.

    • You can set it to reference the key-vault and automatically fetch updates from there. But you can only do that through the REST API (with the command-line client or the powershell client or terraform). So you only find out it's possible if you have a hunch that it should be and STFW (all else didn't fail, so why would you read the docs?).

    • As a bonus, the setting via the API only accepts references to secrets, but apparently it looks up certificates as well, because—under the impression from the portal that it wants a certificate—I had uploaded the certificate as one, and the app gateway still found it.

    So of course I first did it the manual way, which meant downloading the certificate from the secret and uploading it back as a certificate (roughly what the sketch after this list does, except I clicked through the portal).

    • From the secret, you get a PKCS#12 package with an unencrypted private key.
    • When uploading it to a certificate item, there is an option to provide a password for decrypting the private key, but it accepts leaving that empty if the private key is not encrypted.
    • But if you opt to use the option (that both the portal and the API also offer) to upload the certificate directly into the application gateway, the password is mandatory. So you have to encrypt the private key. Which with openssl involves this magic incantation: openssl pkcs12 -in cert.pfx -passin pass: | openssl pkcs12 -export -out cert-password.pfx -passout pass:1234. Because the pkcs12 subcommand can't specify input and output formats separately, it can only expand PKCS#12 to cert+key or compose cert+key into PKCS#12.
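
    The same shuffle in SDK form, for reference (I clicked through the portal, but these are the moving parts; vault and item names are made up):

    using System;
    using Azure.Identity;
    using Azure.Security.KeyVault.Certificates;
    using Azure.Security.KeyVault.Secrets;

    var vault = new Uri("https://my-vault.vault.azure.net/");
    var credential = new DefaultAzureCredential();

    // The certificate order drops the PKCS#12 package into a secret, base64-encoded.
    var secret = new SecretClient(vault, credential).GetSecret("my-cert-order");
    byte[] pfx = Convert.FromBase64String(secret.Value.Value);

    // Re-import it as a certificate item; the private key in the package is unencrypted,
    // so no password is needed here (unlike the direct upload to the app gateway).
    new CertificateClient(vault, credential)
        .ImportCertificate(new ImportCertificateOptions("my-cert", pfx));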

    So I eventually found out how to set it up so it updates automatically next time, but there were entirely too many :footgun-1:s on the way.


  • Discourse touched me in a no-no place

    @Bulb said in Azure bites:

    there were entirely too many :footgun-1:s on the way.

    Sounds like about the normal number for Microsoft.



  • @dkf I'd say it's slightly above average even for them, because in most cases the portal behaves consistently with the other clients (after all the portal calls the same API underneath).


  • And then the murders began.

    @Bulb said in Azure bites:

    • As a bonus, the setting via the API only accepts references to secrets, but apparently it looks up certificates as well, because—under the impression from the portal that it wants a certificate—I had uploaded the certificate as one, and the app gateway still found it.

    Installing a certificate in Key Vault creates a hidden secret with the same name, with the contents of the private key. This allows you to grant an identity read access to the certificate but not the secret; they can then see the public portion (equivalent to the .cer file) but not the private portion (equivalent to the .pfx file).

    • But if you opt to use the option (that both the portal and the API also offer) to upload the certificate directly into the application gateway, the password is mandatory. So you have to encrypt the private key. Which with openssl involves this magic incantation: openssl pkcs12 -in cert.pfx -passin pass: | openssl pkcs12 -export -out cert-password.pfx -passout pass:1234. Because the pkcs12 subcommand can't specify input and output formats separately, it can only expand PKCS#12 to cert+key or compose cert+key into PKCS#12.

    Or just install it on your local machine via MMC and export it again. (Although that has a tendency to fuck up the cert chain. App Gateway is fine with it but SSL Labs will whine.)

