Azure bites



  • @BernieTheBernie said in Azure bites:

    :mlp_shrug: Azure sql is "serverless" :mlp_shrug:

    The Azure SQL Database indeed is. There is also Azure SQL Managed Instance, for which you request a dedicated VM for your workload and which behaves more, though still not completely, like a server you install yourself.

    I could do a "backup" with the help of SSMS: for tables and views, have it build create scripts which I copied into a text file. And for contents, I selected output to file for the queries...
    :um-nevermind:

    For some database engines, like MariaDB or PostgreSQL, that is the normal way of doing backups, but for MSSQL that would be a lot of work when you have many tables and views and stored procedures and roles and whatnot.

    @BernieTheBernie said in Azure bites:

    The Azure SQL Server goes into sleep mode quite quickly. And then it takes about 55 seconds to wake it up...
    💤
    Had to prepare for that with a couple of retries in my äpps.

    It's a shared cluster for a lot of customers; it shouldn't really be going to sleep. I think rather the connection sometimes slips through a crack in the load balancer or somewhere.


  • Discourse touched me in a no-no place

    @Bulb said in Azure bites:

    I think rather the connection sometimes slips through a crack in the load balancer or somewhere.

    The border load balancers/gateways for Azure are very aggressive. As soon as they think a connection has been idle for "long enough", they start blackholing everything to do with it. It's really important not to hold connections open and idle for a long time, or even for what seems like a really quite short time.

    This has caused me no end of trouble with HTTP optimisers inside GitHub Actions workflows.



  • And then the murders began.

    @Bulb said in Azure bites:

    For some database engines, like MariaDB or PostgreSQL, that is the normal way of doing backups, but for MSSQL that would be a lot of work when you have many tables and views and stored procedures and roles and whatnot.

    And yet that's all a BACPAC is under the hood, inside the ZIP. Microsoft seems to think it's just fine, and ignore all these databases with multiple terabytes of data in them...

    It's a shared cluster for a lot of customers, it shouldn't really be going to sleep.

    If you pick the "serverless" mode, it does go to sleep, though the parameters for doing so can be configured.


  • Notification Spam Recipient

    @BernieTheBernie said in Azure bites:

    result

    Why the hell is it taking that long to complete? Is it waiting for the invocation to complete before itself completes? And why?


  • Notification Spam Recipient

    @BernieTheBernie said in Azure bites:

    @Bulb said in Azure bites:

    Sometimes you run that command and the task hangs until your patience runs out

    The Azure SQL Server goes into sleep mode quite quickly. And then it takes about 55 seconds to wake it up...
    💤
    Had to prepare for that with a couple of retries in my äpps.

    Normally not an issue because IIS also sleeps and "warming up" usually takes 2 minutes... :tro-pop:


  • Java Dev

    @dkf My ~/.ssh/config has ServerAliveInterval 60. This sends keepalives inside the encrypted channel, confusing the production access VPN into thinking the session is still doing something. Otherwise it gets killed after 5 minutes or so. Just TCPKeepAlive yes is not sufficient.


  • Considered Harmful

    @Bulb said in Azure bites:

    For some database engines, like MariaDB or PostgreSQL, that is the normal way of doing backups, but for MSSQL that would be a lot of work when you have many tables and views and stored procedures and roles and whatnot.

    This "normal way" of using text format for table data is braindead wrong, and having fast and compact binary format backups is one of the reasons why I've elected to stay with MS Shit Queer Languid even though it costs bloody money.

    Anyway, wasn't @Mason_Wheeler doing perhaps not exactly, but something along those lines? 🤔



  • @Unperverted-Vixen said in Azure bites:

    and ignore all these databases with multiple terabytes of data in them...

    Fortunately a simple playground database: 2 tables with ~60 rows (constant) and ~2,000 rows (growing) respectively, and a view joining the tables. That's not yet terabytes...
    Btw, how compatible are those newfangled bacpac files? Could I recreate the database on a locally running MS SQL Server with that? Or Azure SQL only?



  • @Tsaukpaetra said in Azure bites:

    @BernieTheBernie said in Azure bites:

    result

    Why the hell is it taking that long to complete? Is it waiting for the invocation to complete before itself completes? And why?

    Seems to be milliseconds. It is a script that runs once per night, looks at all my virtual machines, and deallocates them if they are not already deallocated. That may take a minute or so (yes, I set the call to deallocate to wait for completion), but sometimes the Azure Resource Manager seems a little sleepy, too.



  • @PleegWat said in Azure bites:

    @dkf My ~/.ssh/config has ServerAliveInterval 60. This sends keepalives inside the encrypted channel, confusing the production access VPN into thinking the session is still doing something. Otherwise it gets killed after 5 minutes or so. Just TCPKeepAlive yes is not sufficient.

    So long‽ I usually use 30.

    And yes, TCPKeepAlive yes does not help one iota, for two reasons:

    1. It only affects the transport layer, so it can only help with a masquerade, not with a proxy, which sits on the application layer.
    2. The default for TCP keepalive is to send a burst of packets after two hours of inactivity, which is far longer than most masquerades are willing to keep an idle connection around, so in practice it does not help with those either.

    I do use TCP keepalive, usually as curl --keepalive-time 20, when downloading from rate-limited connections, and it does seem to keep the congestion algorithm from throttling the connection down to a grinding halt, as it is otherwise prone to do. But through proxies you definitely need protocol-level pings like the ssh ServerAliveInterval.
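
    For reference, this is roughly what curl's --keepalive-time does at the socket layer. A minimal sketch, assuming Linux, where the per-socket knobs are TCP_KEEPIDLE, TCP_KEEPINTVL and TCP_KEEPCNT (option names and defaults differ on other platforms); enable_keepalive is a hypothetical helper name.

    /* Enable TCP keepalive on an already-connected socket and shorten the
     * probe timing, roughly what curl --keepalive-time 20 does.
     * Assumes Linux; TCP_KEEPIDLE & co. are not portable option names. */
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>

    int enable_keepalive(int fd)
    {
        int on = 1;
        int idle = 20;     /* seconds of idle before the first probe */
        int interval = 20; /* seconds between subsequent probes */
        int count = 3;     /* failed probes before the connection is dropped */

        if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof on) < 0)
            return -1;
        if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof idle) < 0)
            return -1;
        if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof interval) < 0)
            return -1;
        if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &count, sizeof count) < 0)
            return -1;
        return 0;
    }

    These probes are empty TCP segments, which is exactly why an application-layer proxy never sees them and you still need protocol-level pings like ServerAliveInterval there.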



  • @Applied-Mediocrity said in Azure bites:

    @Bulb said in Azure bites:

    For some database engines, like MariaDB or PostgreSQL, that is the normal way of doing backups, but for MSSQL that would be a lot of work when you have many tables and views and stored procedures and roles and whatnot.

    This "normal way" of using text format for table data is braindead wrong, and having fast and compact binary format backups is one of the reasons why I've elected to stay with MS Shit Queer Languid even though it costs bloody money.

    Gzipped text is almost as compact as most “compact” binary formats, and has the benefit of not needing the project to maintain another format compatible across versions.



  • @BernieTheBernie said in Azure bites:

    @Unperverted-Vixen said in Azure bites:

    and ignore all these databases with multiple terabytes of data in them...

    Fortunately a simple playground database: 2 tables with ~60 rows (constant) and ~2,000 rows (growing) respectively, and a view joining the tables. That's not yet terabytes...
    Btw, how compatible are those newfangled bacpac files? Could I recreate the database on a locally running MS SQL Server with that? Or Azure SQL only?

    I think it works between the variants if you avoid any features the other variant might not have.


  • Considered Harmful

    @Bulb I would weep for you, but frankly I can't be arsed.


  • Discourse touched me in a no-no place

    @Bulb said in Azure bites:

    Gzipped text is almost as compact as most “compact” binary formats, and has the benefit of not needing the project to maintain another format compatible across versions.

    Unless you know something significant about the real information content, yes; gzip's great at repeated bit patterns, but has more trouble with other ways in which data can vary.

    A former project would transfer statistical descriptions of data together with RNG seeds and rebuild the random dataset on the other side in parallel. That was quite a few orders of magnitude faster than moving lots of random data (less than a minute instead of several hours to several days, depending on exact dataset size), and gzip wouldn't have helped at all; random data compresses terribly. Knowing your data is important.
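
    A minimal sketch of that trick, purely illustrative rather than the actual system: ship only a seed and a length, and both sides rebuild the identical dataset with an agreed-upon deterministic generator (xorshift64 here, chosen just for the example; rebuild is a hypothetical helper).

    /* Instead of transferring LEN random 64-bit values, send only
     * (seed, LEN) and let the receiver regenerate the identical dataset
     * with the same agreed-upon generator. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static uint64_t xorshift64(uint64_t *state)
    {
        uint64_t x = *state;
        x ^= x << 13;
        x ^= x >> 7;
        x ^= x << 17;
        return *state = x;
    }

    static void rebuild(uint64_t seed, size_t len, uint64_t *out)
    {
        uint64_t state = seed;                  /* must be non-zero */
        for (size_t i = 0; i < len; i++)
            out[i] = xorshift64(&state);
    }

    int main(void)
    {
        enum { LEN = 1000 };
        uint64_t seed = 0x9E3779B97F4A7C15ULL;  /* the only thing "sent" */
        static uint64_t a[LEN], b[LEN];

        rebuild(seed, LEN, a);  /* "sender" side */
        rebuild(seed, LEN, b);  /* "receiver" side */
        printf("datasets match: %d\n", memcmp(a, b, sizeof a) == 0);
        return 0;
    }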


  • Java Dev

    @Bulb said in Azure bites:

    @Applied-Mediocrity said in Azure bites:

    @Bulb said in Azure bites:

    For some database engines, like MariaDB or PostgreSQL, that is the normal way of doing backups, but for MSSQL that would be a lot of work when you have many tables and views and stored procedures and roles and whatnot.

    This "normal way" of using text format for table data is braindead wrong, and having fast and compact binary format backups is one of the reasons why I've elected to stay with MS Shit Queer Languid even though it costs bloody money.

    Gzipped text is almost as compact as most “compact” binary formats, and has the benefit of not needing the project to maintain another format compatible across versions.

    Binary formats are much faster to parse and do not carry an implicit SQL injection risk.


  • Banned

    @PleegWat said in Azure bites:

    @Bulb said in Azure bites:

    @Applied-Mediocrity said in Azure bites:

    @Bulb said in Azure bites:

    For some database engines, like MariaDB or PostgreSQL, that is the normal way of doing backups, but for MSSQL that would be a lot of work when you have many tables and views and stored procedures and roles and whatnot.

    This "normal way" of using text format for table data is braindead wrong, and having fast and compact binary format backups is one of the reasons why I've elected to stay with MS Shit Queer Languid even though it costs bloody money.

    Gzipped text is almost as compact as most “compact” binary formats, and has the benefit of not needing the project to maintain another format compatible across versions.

    Binary formats are much faster to parse and do not carry an implicit SQL injection risk.

    Pay no attention to that buffer overflow behind the curtain.


  • Discourse touched me in a no-no place

    @Gustav said in Azure bites:

    @PleegWat said in Azure bites:

    Binary formats are much faster to parse and do not carry an implicit SQL injection risk.

    Pay no attention to that buffer overflow behind the curtain.

    Or the super-:fun: of handling interoperable variable length data structures in binary.


  • Banned

    @dkf the most :fun: bits are unused parts of a protocol, related to features that were dropped early on so they were never even smoke-tested, that later end up being reused for another purpose. I worked with such a protocol once. You know variable-length C structures, right? With a 0-sized array at the end (or 1-sized if you're :pendant:) that is actually allocated with possibly multiple entries? Guess what: a variable-length array of variable-length structures doesn't work too well. This fundamentally fucked up message that could never possibly have worked lived in the protocol for over 5 years, until one day I, a junior dev with only one year under my belt, was tasked with subscribing to and handling those messages. It took me a moment, but I did realize the message definition was nonsensical - nevertheless, after many hours I made it work with some minor pointer magic. I was too inexperienced to realize the other side didn't.


  • Discourse touched me in a no-no place

    @Gustav said in Azure bites:

    (or 1-sized if you're :pendant:)

    In some versions of the C spec, you can have "unsized" arrays at the end of structs. That's useful provided you understand the padding and size calculation rules.

    The worst is where you have an "array" of variable length entities, where those entities consist of several variable length arrays. The arrangement needs careful indexing and careful address arithmetic and is rather platform-specific, and is just plain horrible overall in all languages I've seen... except it permits very efficient DMA engine use for writebacks as you can avoid writing constant data back and forth, so it turns out to be very worth the effort. Making the whole thing "nice" by making the types regular imposes a very large performance and space penalty. (Also, it's all running in a hardware interrupt handler. Of course it is.)

    It wasn't even the trickiest thing that group came up with. Not even close...


  • Banned

    @dkf said in Azure bites:

    The worst is where you have an "array" of variable length entities, where those entities consist of several variable length arrays.

    Described above. At this point you might drop the pretense, save yourself the trouble of defining structs and index into a flat char[] array directly.
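
    A minimal sketch of that flat-buffer approach, under assumptions the thread does not spell out: each record is a 32-bit count in host byte order followed by that many 32-bit values, and walk_records is a hypothetical name. Reading through memcpy means record boundaries need no particular alignment.

    /* Walk variable-length records packed into a flat byte buffer.
     * Hypothetical layout: [u32 count][count x u32 values], repeated. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static void walk_records(const unsigned char *buf, size_t len)
    {
        size_t off = 0;
        while (off + sizeof(uint32_t) <= len) {
            uint32_t count;
            memcpy(&count, buf + off, sizeof count);
            off += sizeof count;

            if (count > (len - off) / sizeof(uint32_t))
                break;                      /* truncated or corrupt record */

            for (uint32_t i = 0; i < count; i++) {
                uint32_t value;
                memcpy(&value, buf + off + i * sizeof value, sizeof value);
                printf("%u ", (unsigned)value);
            }
            printf("\n");
            off += (size_t)count * sizeof(uint32_t);
        }
    }

    int main(void)
    {
        /* Two records: (1, 2, 3) and (42), packed back to back. */
        uint32_t packed[] = { 3, 1, 2, 3,  1, 42 };
        unsigned char buf[sizeof packed];
        memcpy(buf, packed, sizeof packed);
        walk_records(buf, sizeof packed);
        return 0;
    }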


  • Java Dev

    @dkf said in Azure bites:

    @Gustav said in Azure bites:

    (or 1-sized if you're :pendant:)

    In some versions of the C spec, you can have "unsized" arrays at the end of structs. That's useful provided you understand the padding and size calculation rules.

    The worst is where you have an "array" of variable length entities, where those entities consist of several variable length arrays. The arrangement needs careful indexing and careful address arithmetic and is rather platform-specific, and is just plain horrible overall in all languages I've seen... except it permits very efficient DMA engine use for writebacks as you can avoid writing constant data back and forth, so it turns out to be very worth the effort. Making the whole thing "nice" by making the types regular imposes a very large performance and space penalty. (Also, it's all running in a hardware interrupt handler. Of course it is.)

    It wasn't even the trickiest thing that group came up with. Not even close...

    The original primary dev on my previous project was fond of these constructs. Zero or unspecified-length arrays are invalid in C89, so they would be specified with length 1. Luckily, the only place where we put them in arrays was where it could actually only have one element in the variable part. Might've been a nested struct too, I don't recall.

    At some point we got rid of most of them because they tended to hold human-readable strings, string size was bounded in practice, the bound wasn't too expensive, and the static code analysis tool which was imposed on us immediately started yelling critical buffer overflow issue on every single usage of the pattern as soon as it was introduced.


  • Banned

    @PleegWat said in Azure bites:

    @dkf said in Azure bites:

    @Gustav said in Azure bites:

    (or 1-sized if you're :pendant:)

    In some versions of the C spec, you can have "unsized" arrays at the end of structs. That's useful provided you understand the padding and size calculation rules.

    The worst is where you have an "array" of variable length entities, where those entities consist of several variable length arrays. The arrangement needs careful indexing and careful address arithmetic and is rather platform-specific, and is just plain horrible overall in all languages I've seen... except it permits very efficient DMA engine use for writebacks as you can avoid writing constant data back and forth, so it turns out to be very worth the effort. Making the whole thing "nice" by making the types regular imposes a very large performance and space penalty. (Also, it's all running in a hardware interrupt handler. Of course it is.)

    It wasn't even the trickiest thing that group came up with. Not even close...

    The original primary dev on my previous project was fond of these constructs. Zero or unspecified-length arrays are invalid in C89, so they would be specified with length 1.

    This is still true in C99, and IIRC C11. But accessing elements past the 1st one is totally legal per the newer standards. Not in C++, though. Of course none of that matters because the compilers will do the right thing anyway.



  • @PleegWat said in Azure bites:

    Zero...-length arrays are invalid

    Serious question: in what case would you ever want to declare such a thing? Unspecified, sure. But "this array can never hold anything"? What's the use case?

    Edit: unless we're talking about "strings" (the C version anyway). But don't those always contain at least one character, the null character?


  • Discourse touched me in a no-no place

    @Gustav said in Azure bites:

    @PleegWat said in Azure bites:

    @dkf said in Azure bites:

    @Gustav said in Azure bites:

    (or 1-sized if you're :pendant:)

    In some versions of the C spec, you can have "unsized" arrays at the end of structs. That's useful provided you understand the padding and size calculation rules.

    The worst is where you have an "array" of variable length entities, where those entities consist of several variable length arrays. The arrangement needs careful indexing and careful address arithmetic and is rather platform-specific, and is just plain horrible overall in all languages I've seen... except it permits very efficient DMA engine use for writebacks as you can avoid writing constant data back and forth, so it turns out to be very worth the effort. Making the whole thing "nice" by making the types regular imposes a very large performance and space penalty. (Also, it's all running in a hardware interrupt handler. Of course it is.)

    It wasn't even the trickiest thing that group came up with. Not even close...

    The original primary dev on my previous project was fond of these constructs. Zero or unspecified-length arrays are invalid in C89, so they would be specified with length 1.

    This is still true in C99, and IIRC C11. But accessing elements past the 1st one is totally legal per the newer standards. Not in C++, though. Of course none of that matters because the compilers will do the right thing anyway.

    In C99, you use a flexible array member (look it up; that is the formal phrase for them):

    struct Foo {
        size_t len;
        // more ordinary fields can go here if you want
        Bar ary[];
    };
    

    That is, an array without length goes as the last element (and no, it can't be elsewhere). The size of the structure includes nothing for the array, but has any required padding so that the elements of the array are placed correctly (if the structure also is). By contrast, you need nasty stuff with a length-1 array, and length-0 arrays were always only ever a GNU extension.

    This sort of thing is great because it allocates space much more efficiently than using several allocations, and it is quite a bit more cache friendly too.


    We weren't doing that. Or rather we were doing it partially... but not really. We wanted to have two flexible array members in a row (of different types with different widths, naturally) because that let us put the read-write parts contiguously. And we wanted an array of those where they were all of different sizes. Our needs required doing rather exotic stuff not normally envisaged by language designers. We could have made it all into a classic rectangular matrix (or maybe 3-tensor) — we had the stats to work out how large — but that would have been disgustingly sparse and slow (we tried!).


  • Discourse touched me in a no-no place

    @dkf Note, if you're doing these games then you need to be good at your allocation sizes and address arithmetic. Flexible array members make it a bit easier (well... quite a lot easier really!) but you still need care. It is very much up to you to ensure that you never access off either end of the space you've allocated, or read from uninitialised memory. But malloc() is very much not the only way to get such things; that's just a commonplace API...
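
    A minimal allocation sketch for a single flexible array member, repeating the struct from above so it is self-contained; Bar is typedef'd to int purely for illustration, and foo_new is a hypothetical helper. With a real flexible array member the size is sizeof(struct Foo) plus len elements (the length-1-array hack needs the messier (len - 1) arithmetic instead).

    #include <stddef.h>
    #include <stdlib.h>

    typedef int Bar;                /* stand-in element type for the sketch */

    struct Foo {
        size_t len;
        Bar ary[];                  /* flexible array member (C99) */
    };

    struct Foo *foo_new(size_t len)
    {
        /* sizeof(struct Foo) already includes any padding needed so that
         * ary starts correctly aligned; just add room for len elements. */
        struct Foo *f = malloc(sizeof *f + len * sizeof(Bar));
        if (f != NULL) {
            f->len = len;
            for (size_t i = 0; i < len; i++)
                f->ary[i] = 0;
        }
        return f;
    }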


  • Java Dev

    @dkf said in Azure bites:

    @Gustav said in Azure bites:

    @PleegWat said in Azure bites:

    @dkf said in Azure bites:

    @Gustav said in Azure bites:

    (or 1-sized if you're :pendant:)

    In some versions of the C spec, you can have "unsized" arrays at the end of structs. That's useful provided you understand the padding and size calculation rules.

    The worst is where you have an "array" of variable length entities, where those entities consist of several variable length arrays. The arrangement needs careful indexing and careful address arithmetic and is rather platform-specific, and is just plain horrible overall in all languages I've seen... except it permits very efficient DMA engine use for writebacks as you can avoid writing constant data back and forth, so it turns out to be very worth the effort. Making the whole thing "nice" by making the types regular imposes a very large performance and space penalty. (Also, it's all running in a hardware interrupt handler. Of course it is.)

    It wasn't even the trickiest thing that group came up with. Not even close...

    The original primary dev on my previous project was fond of these constructs. Zero or unspecified-length arrays are invalid in C89, so they would be specified with length 1.

    This is still true in C99, and IIRC C11. But accessing elements past the 1st one is totally legal per the newer standards. Not in C++, though. Of course none of that matters because the compilers will do the right thing anyway.

    In C99, you use a flexible array member (look it up; that is the formal phrase for them):

    struct Foo {
        size_t len;
        // more ordinary fields can go here if you want
        Bar ary[];
    };
    

    That is, an array without length goes as the last element (and no, it can't be elsewhere). The size of the structure includes nothing for the array, but has any required padding so that the elements of the array are placed correctly (if the structure also is). By contrast, you need nasty stuff with a length-1 array, and length-0 arrays were always only ever a GNU extension.

    Yeah, malloc(sizeof(struct Foo) + (len - 1) * sizeof(Bar)). I've personally always preferred malloc(offsetof(struct Foo, ary[len])).

    This sort of thing is great because it allocates space much more efficiently than using several allocations, and it is quite a bit more cache friendly too.


    We weren't doing that. Or rather we were doing it partially... but not really. We wanted to have two flexible array members in a row (of different types with different widths, naturally) because that let us put the read-write parts contiguously. And we wanted an array of those where they were all of different sizes. Our needs required doing rather exotic stuff not normally envisaged by language designers. We could have made it all into a classic rectangular matrix (or maybe 3-tensor) — we had the stats to work out how large — but that would have been disgustingly sparse and slow (we tried!).

    As far as I recall we had two cases of these. One stored a pointer to the second array despite it being in the same memory block, the other stored an offset from the struct's base address and hid the nasties in a macro. I'm definitely glad the mandate for an ARM port never materialized there though, since the alignment was a mess.


  • Discourse touched me in a no-no place

    @PleegWat said in Azure bites:

    I'm definitely glad the mandate for an ARM port never materialized there though, since the alignment was a mess.

    I was dealing with code that was always for ARM. Alignment... was an issue because the first array was of short-sized elements and the second array was of int-sized elements, and ARM really wants those all to be correctly aligned (especially when running without an MMU). We coped.



  • @dkf said in Azure bites:

    ARM really wants those all to be correctly aligned

    ~10 years ago we had an application that ran on WindowsCE and Android. Both ARM, but on one of them—not sure exactly which it was—unaligned access worked, presumably handled in software in the trap, and on the other it caused hard crash.


  • Discourse touched me in a no-no place

    @Bulb said in Azure bites:

    ~10 years ago we had an application that ran on WindowsCE and Android. Both ARM, but on one of them—not sure exactly which it was—unaligned access worked, presumably handled in software in the trap, and on the other it caused hard crash.

    It depends on whether there is an MMU in the hardware. That's where ARM systems handle unaligned accesses (as well as other things like caches and virtual memory). Those are optional hardware modules, common on A-series cores (A = Application) and rare on M-series cores (M = Microcontroller, really more like eMbedded). Which one you got depends on what the customer of ARM specified.



  • @dkf I don't think either WindowsCE or Linux would even run without an MMU. They both require both virtual memory mapping and memory protection. But it is possible that on some of the devices the MMU was simpler and didn't handle unaligned access.


  • Discourse touched me in a no-no place

    @Bulb said in Azure bites:

    @dkf I don't think either WindowsCE or Linux would even run without an MMU. They both require both virtual memory mapping and memory protection. But it is possible that on some of the devices the MMU was simpler and didn't handle unaligned access.

    That might have been an earlier version of the MMU. It was of academic interest to me only; we didn't have it. No L1 cache either; the memory of that speed was configured at a normal address. Everything on our platform ran out of it because it had superb access speeds, whereas the rest of memory was so slow it was used almost like a memory-mapped disk...

    Fun times.



  • @dkf Given that we hit the problem after quite a while developing the software, I'm pretty sure it was the newer device that did not like unaligned access.


  • Discourse touched me in a no-no place

    @Bulb Most of the time you can ignore alignment when coding in C or higher languages. Almost everything is either aligned by default or of a type where alignment is not expected (because it is smaller than a word). It's only when you start to play "clever games" that you can run into trouble.


  • I survived the hour long Uno hand

    @dkf said in Azure bites:

    @Bulb Most of the time you can ignore alignment when coding in C or higher languages. Almost everything is either aligned by default or of a type where alignment is not expected (because it is smaller than a word). It's only when you start to play "clever games" that you can run into trouble.

    If you’re not playing “clever games”, are you really a coder that’s any better than a framework slinger AI? :thonking:



  • @dkf said in Azure bites:

    It's only when you start to play "clever games" that you can run into trouble.

    This was deserialization code. Written by an idiot who considered himself clever.

    The serialization format needed to be compact, so it did not include padding to align things, but when the value was an integer, it was read by casting the char * pointing into the buffer to an int * or an unsigned *. So of course it wasn't always aligned.

    I fixed the code by doing a memcpy into the target variable instead—the (custom, of course) format wasn't great, but it was already debugged except for the alignment issue.
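
    A minimal before-and-after sketch of that fix, with hypothetical helper names; the actual custom format and field layout are not reproduced here.

    #include <stdint.h>
    #include <string.h>

    /* Broken: p may point anywhere inside the packed buffer, so the cast
     * is undefined behaviour and traps on strict-alignment targets. */
    int32_t read_i32_cast(const char *p)
    {
        return *(const int32_t *)p;
    }

    /* Fixed: copy the bytes into a properly aligned local. Well-defined
     * everywhere, and compilers typically turn it into a single load. */
    int32_t read_i32_memcpy(const char *p)
    {
        int32_t v;
        memcpy(&v, p, sizeof v);
        return v;
    }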



  • @Bulb said in Azure bites:

    an idiot who considered himself clever.

    Most of them do.



  • @HardwareGeek This guy was particularly notable, because he strongly preferred writing his own everything over learning the stuff that already existed.

    So he wrote his own lists, based on base classes (it was C++, but he didn't know templates, so the approach was C-with-classes), and his own strings that implicitly converted between utf-8 and utf-16 (or rather, kept both versions most of the time). The lists were a niche things, everybody else just used vectors, but the strings were all over the place … until it turned out he had zero understanding of complexity, so of course the amortized cost of append was quadratic. That's when I started replacing it with standard string, adding the missing unicode conversions as Windows CE never got C++11, and what is available in C++11 isn't comprehensive anyway.



  • @BernieTheBernie said in Azure bites:

    Since we do not do kink shaming here, I dare to admit:
    I love Azure's Secret Magic™

    I created a KeyVault, and stored the connection string for a SQL Server there. Then I added an app setting named secretconnection to a function app (which was registered in Entra ID and granted the privileges to read secrets from a keyvault) with the value of @Microsoft.KeyVault(VaultName=bernieskeyvault;SecretName=berniessecret).
    Now I can access it in the app with a simple Environment.GetEnvironmentVariable("secretconnection")

    It worx. It retrieves the value stored in the keyvault (i.e. the clear text of berniessecret).
    But only when the app is running on Azure. When I run it locally, the environment variable just comes back as the literal @Microsoft.KeyVault(VaultName=bernieskeyvault;SecretName=berniessecret)....
    ☁ :yell-at-cloud:

    But then suddenly, the Secret Magic™ failed.
    What's happening here?
    Ah, perhaps I forgot to add the new äpp to the Secrets Users group of my KeyVault. Right. Let me add it.
    Still fails.
    :wtf: ?
    Let's ask  .
     : "The next morning it strangely worked, without any changes."
    I did not need to wait so long. After an hour, it worked.
    :um-actually: Bernie, haven't you ranted about EVENTUAL CONSISTENCY?
    That's how EVENTUAL CONSISTENCY works.
    :um-nevermind:



  • How did I see that something did not work correctly with the Secret Magic™?
    After a month of really free Azure services, I had to upgrade to a Pay as you go subscription to continue using my virtual machine (it is too :airquotes: powerful :airquotes: for the free tier which allows only 1 vCPU and 2(?) GB RAM, while I use 2 CPU and 8 GB RAM).
    So I could also "afford" the free tier of Twilio Sendgrid (100 emails per day - far more than I need). Did the Sendgrid integration in code, and it worked (i.e. I received the email). Then I placed all "secrets" into the keyvault, and I did not receive any emails anymore.
    But also no error messages. So I had to debug (i.e. add additional log messages), and so I found the issue.
    But...
    Why tf did I not see an error message? Why does Sendgrid just keep quiet when the API Key is wrong?
    :um-actually: Did you check the return value of the Send method?
    BernieTheBernie: await client.SendEmailAsync(msg);
    :um-nevermind:



  • Next little :wtf:.
    I decided to let my Function Äpps send me the logs per email with Sendgrid. Added a logger which collects the messages in memory and can then give me a long string which becomes the message body.

    Works, but....

    My Function Äpps are timer (crontab) triggered and run once per day. ONCE.
    But I received 8 emails during the last 12 hours, even though the function did nothing. Contents of the mails:

    2024-01-15 08:05:18.297 [Information] Application started. Press Ctrl+C to shut down.
    2024-01-15 08:05:18.316 [Information] Hosting environment: Production
    2024-01-15 08:05:18.316 [Information] Content root path: C:\home\site\wwwroot\
    2024-01-15 08:06:15.963 [Information] Application is shutting down...
    

    :wtf: is Azure doing here? It starts the app at irregular intervals. The timer does not fire, and the application gets shut down a couple of minutes (irregular interval) later on.
    :wtf_owl:
    ☁ 🌁 :yell-at-cloud:



  • @BernieTheBernie said in Azure bites:

    Ah, perhaps I forgot to add the new äpp to the Secrets Users group of my KeyVault. Right. Let me add it.
    Still fails.
    :wtf: ?
    Let's ask  .
     : "The next morning it strangely worked, without any changes."
    I did not need to wait so long. After an hour, it worked.

    :facepalm:, ock fourse, cached tokens.

    The functionapp server caches the access token and IIRC the expiration period for access tokens is indeed an hour. And the token contains groups, so if you add a principal to a group, it will only take effect after the service gets a fresh token, i.e. in an hour.


  • And then the murders began.

    @Bulb The expiration period on an access token is one hour. However, managed identities have their own caching, which means changes for their permissions can take up to 24 hours to take effect.



  • @Unperverted-Vixen Yeah, the famous two hardest problems of software engineering: cache invalidation and cache invalidation.



    @Bulb you were off by one as well, the two hardest problems are cache, invalidation, cache invalidation and off by one errors (which might also cause or be caused by cache invalidation issues)



  • @Arantor what about naming things?



  • @Benjamin-Hall said in Azure bites:

    @Arantor what about naming things?

    And misplaced commas?



  • @HardwareGeek said in Azure bites:

    @Benjamin-Hall said in Azure bites:

    @Arantor what about naming things?

    And misplaced, commas?

    FTFY 🏆



  • @HardwareGeek said in Azure bites:

    @Benjamin-Hall said in Azure bites:

    @Arantor what about naming things?

    And misplaced commas?

    In my defence, I recently updated my iPad to iOS 17 and the autocorrect needs reteaching. It no longer knows the word fuckery :(



  • @Benjamin-Hall said in Azure bites:

    @Arantor what about naming things?

    And scope creep?

