Which RCS is the best James Blunt?


  • :belt_onion:

    @wft said:

    I'm starting to feel pretty much compelled to subscribe to their "Full pack" (I already have PyCharm and PHPStorm). Probably in spring.

    Same... I'm waiting for CLion to mature a little, then I'll probably be all-in



  • @blakeyrat said:

    Ok; then you pay me for it.

    Then I choose currency and amount.


  • I survived the hour long Uno hand

    @blakeyrat said:

    Release early, release often

    I love that one. If you tell people "test early, test often," they'll bitch and moan until kingdom come about how useless unit tests are. "release early, release often" is a mantra adopted easily by people who want to try and shift the testing burden to the end user so they can save themselves the "tedium" of writing tests for their own goddamn code. What ever happened to pride in your work?



  • @blakeyrat said:

    But all open source projects do them. And they're actually proud! of some of them. Like release early, release often.

    Wrong. Take this very same PostgreSQL we are talking about here, or FreeBSD, or the Linux distributions. But I bet you only know about the Linux kernel, so you can't possibly have a clue.

    Also, you keep forgetting that correlation does not imply causation. And you fail to show causation in your rants.



  • @wft said:

    Wrong. Take this very same PostgreSQL we are talking about here, or FreeBSD, or the Linux distributions. But I bet you only know about the Linux kernel, so you can't possibly have a clue.

    I fail to see how this paragraph demonstrates my wrongness.



  • @blakeyrat said:

    I don't see how participating in open source makes the world better in any way

    Well, it is making the world better for me. And if all of these things were closed source, I wouldn't have the money to pay for all of them.



  • So for all practical purposes PHP and MySQL go hand in hand, not being able to have one without the other. Looks like it's better to keep away from both.

    Recently I've had some tree-structured data which I guess could be handled via recursive queries. I've been introduced to the method of left-right numbering of nodes, which admittedly looks fast for retrieving the full branch of child nodes, but sounds either error-prone or like it requires a trigger that updates the whole left and right columns each time a node is added, which sounds slow as the table gets bigger. Is it really a fast and efficient method for handling trees, or is it merely a workaround for the lack of Common Table Expressions in MySQL?
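
    For reference, here's a minimal sketch of that left-right (nested set) numbering as I understand it; the table, column names, and values are all invented for illustration:

    -- Hypothetical table using left-right (nested set) numbering.
    CREATE TABLE category (
        id   INT PRIMARY KEY,
        name VARCHAR(100) NOT NULL,
        lft  INT NOT NULL,  -- left boundary of this node's interval
        rgt  INT NOT NULL   -- right boundary; descendants fall between lft and rgt
    );

    -- Retrieving a whole branch is a single non-recursive query:
    SELECT child.*
    FROM category AS parent
    JOIN category AS child ON child.lft BETWEEN parent.lft AND parent.rgt
    WHERE parent.name = 'Electronics'
    ORDER BY child.lft;

    -- The worrying part: inserting a new leaf under a parent means shifting
    -- every boundary to the parent's right, which touches much of the table.
    SET @r = (SELECT rgt FROM category WHERE name = 'Electronics');
    UPDATE category SET rgt = rgt + 2 WHERE rgt >= @r;
    UPDATE category SET lft = lft + 2 WHERE lft > @r;
    INSERT INTO category (id, name, lft, rgt) VALUES (42, 'New leaf', @r, @r + 1);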


  • :belt_onion:

    @Haloquadratum said:

    not being able to have one without the other

    O.o

    I use MySQL without PHP all the time.

    And I use PHP without MySQL too.

    I've hilariously never used them together to date though.......



  • Which server language and framework do you use alongside MySQL? Genuinely interested.



  • Use PHP+MySQL when cheap hosting matters to you.
    Use the Microsoft stack when money isn't a problem, with SQL Server Enterprise and the rest.
    If money isn't a problem but you avoid Microsoft for whatever reason, use Oracle + Java.
    Very large and critical datasets usually live in Oracle databases.

    The rest may or may not have a good reason for not having replaced the status quo yet.


  • :belt_onion:

    I've seen several in use. I personally use Java (quiet in the peanut gallery!), but there are connector libraries for pretty much any language out there.



  • Largely depends on how frequently nodes are added/removed etc., though MySQL having CTEs wouldn't really fix this, as I understand it.

    A tree is not really "relational" data; you're just turning it through an angle to make something useful without having to query recursively.

    Alternative: store it as if you were going to query recursively, then simply query for the entire table at load time and recursively re-sort it in PHP.
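
    A rough sketch of that, with invented names. The plain parent-pointer table is trivial to keep up to date; the recursive CTE at the end is the query MySQL lacked at the time (PostgreSQL has them, and MySQL eventually added them in 8.0):

    -- Adjacency list: each row just points at its parent.
    CREATE TABLE category (
        id        INT PRIMARY KEY,
        parent_id INT NULL,
        name      VARCHAR(100) NOT NULL
    );

    -- The load-time approach: one cheap query for everything,
    -- then rebuild the tree in application code (PHP, say).
    SELECT id, parent_id, name FROM category;

    -- With CTE support, the database can walk the branch itself:
    WITH RECURSIVE branch AS (
        SELECT id, parent_id, name FROM category WHERE id = 1
        UNION ALL
        SELECT c.id, c.parent_id, c.name
        FROM category AS c
        JOIN branch AS b ON c.parent_id = b.id
    )
    SELECT * FROM branch;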



  • If you don't need "infinite" recursion, you can just join the table to itself a few times. It sounds awful, but it's surprisingly fast.
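
    A sketch, assuming a parent-pointer table like category(id, parent_id, name) and a known maximum depth of three:

    -- Three levels in one query via self-joins; no recursion involved.
    SELECT c1.name AS level1, c2.name AS level2, c3.name AS level3
    FROM category AS c1
    LEFT JOIN category AS c2 ON c2.parent_id = c1.id
    LEFT JOIN category AS c3 ON c3.parent_id = c2.id
    WHERE c1.parent_id IS NULL;  -- start from the roots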



  • Depends on the table size, and MySQL won't be entirely smart about it, though it won't be entirely fucktarded about it either.

    The point at which it becomes retarded is the threshold at which it won't keep all the current result set in memory. Which is sadly lower than it should be.



  • @blakeyrat said:

    Seriously. What's in it for me? What's my incentive?

    You're asking me to do something I currently charge six digits a year for, for free. You're going to have to have a pretty damned good reason.

    You run this line a lot; it seems that among the many things you don't understand are the implications of the distinction between a software project conceived of and run to make money, and an open source tool conceived of and run to solve a problem for its author(s).

    @blakeyrat said:

    That's part of running a software product: identifying the areas where your product is weak, and improving on those at a higher priority than areas where it's already strong.

    PostgreSQL is not a software product. It's not sold. It needs no marketing team. It does not compete for dollars in the software marketplace. Improvements in PostgreSQL, like improvements to any piece of free software, arise from the intersection between people for whom the software would be a good fit if only it had features X Y and Z and people capable of implementing features X Y and Z, with a certain amount of input from people who want to implement X Y and Z because they think doing so is interesting.

    If PostgreSQL gets used as part of a money-making business, that reflects a bet made by the people running the business that PostgreSQL's lack of licensing fees will end up costing them less than its present lack of certain features available to users of commercial DB engines.

    @blakeyrat said:

    In open source you just ship shit for decades, confused as to why nobody's fucking using it.

    No. In open source you implement stuff that meets your own requirements, and if others find it useful as well, that's a bonus. It's the Motörhead principle.

    The requirement that Postgres was initially designed to meet was to be a platform for research in new database techniques. It succeeded in meeting that requirement. Everything else it's done since is gravy.



  • @blakeyrat said:

    I don't see how participating in open source makes the world better in any way.

    https://www.youtube.com/watch?v=Cf38gOcfrYQ&t=2s


  • Discourse touched me in a no-no place

    @flabdablet said:

    In open source you implement stuff that meets your own requirements, and if others find it useful as well, that's a bonus. It's the Motörhead principle.

    There's a huge number of open source projects that never find use beyond their creator or the immediate circle of people around them. That's the normal state. Some are much more useful to other people, and become much more visible. Some even transcend their original creator. Yet all of them start from the position of being something that one person wanted to do for themselves. They are all labours of love. The overriding principle is this: you can't please everyone, so please yourself.

    Sometimes people try to build a business on top of this. Sometimes that even succeeds. The code does not care; it is not tied to the business model.

    Blakey gives the impression of being someone who believes that nothing has value unless it is purchased. There are some managers at work who think like this as well; they'll merrily spend millions on something that could be got for (almost) free simply because they cannot conceive of anything other than an economic basis for every interaction.



  • I can get behind that posture. It doesn't have to be an abstract or general economic belief a priori: the point is that there are parts of programming, like debugging or adding a slick user interface, that are necessary but not interesting or enough of an intellectual challenge for a programmer to tackle on his own initiative in his free time, and the tried and tested way to get someone to do them is to pay him; hence the policy of trusting only paid software, because that is the only way of guaranteeing it also includes the necessary features no programmer would implement for free, instead of having to rely on a product that shifts the burden of debugging onto the user under the guise of open source, or having to spend extra time bending it to your exact needs because it was originally conceived to satisfy the narrow set of cases that interested its creator.

    There's also the Joker's saying: "If you are good at something, never do it for free".


  • Discourse touched me in a no-no place

    Which sentence were you replying to? Who were you agreeing with?

    And did you remember to breathe when typing out that over-long sentence? 😃

    @Haloquadratum said:

    There's also the Joker's saying: "If you are good at something, never do it for free".

    Some things I do for me. I then sometimes give you a chance to share.



  • @dkf said:

    Some things I do for me. I then sometimes give you a chance to share.

    Because, having done the thing for me, all the effort already put into it is (a) already paid for, else I wouldn't have done it, and (b) a sunk cost. Therefore, giving the results away to other people in the hope that they may be useful, but with no such guarantee, is an exercise that costs exactly nothing.

    And who knows? Somebody else might find your contribution useful enough to improve it, or even fund other people to improve it.

    The fact that this happens often enough to be notable is free software's most remarkable feature. It would take extreme naive optimism to stick a new project up on GitHub with the prior expectation that it will quickly become dominant in its sector. Open source people are surprised when projects do take off, not when they don't.



  • @Haloquadratum said:

    there are parts of programming like debugging or adding a slick user interface that are necessary, but not interesting or enough of an intellectual challenge for a programmer to do them by his own initiative in his free time

    Debugging - the process of understanding a failure in order to fix it - has always been the most enjoyable aspect of programming for me, which is probably why I've ended up moving from designer/coder to technician/netadmin. I'm sure there are others for whom UI is the most enjoyable part. Programming is a broad church.



  • @flabdablet said:

    Therefore, giving the results away to other people in the hope that they may be useful, but with no such guarantee, is an exercise that costs exactly nothing.

    Nothing to you.

    It costs them, as they now have to sort through your shitty broken library, and those of the other 47 people who also made shitty broken open source libraries, in the vain effort to find one that kind of maybe half-works.

    And if you made an application, that means you've ignored all rules of usability, accessibility, etc. It's like saying "hey I built this gym, anybody can use it for free! But there's no wheelchair ramp. Fuck people in wheelchairs." Guess what? That's fucking illegal in any other context, but in software that's praised and lauded.


  • ♿ (Parody)

    @blakeyrat said:

    It's like saying "hey I built this gym, anybody can use it for free! But there's no wheelchair ramp. Fuck people in wheelchairs."

    Can you give us an equivalent list (as a handy reference guide) that's like saying, "hey I built this gym, anybody can use it for free! But there's no blakeyfeature. Fuck blakeyrat."



  • I can't even parse that sentence. List of what?



  • @blakeyrat said:

    It costs them

    Quite so. And exactly how much it costs them, compared to how much comparable commercial software might cost them, is a decision that can only be made by them.

    I also note in passing that every commercial library I ever encountered while working in embedded systems design was at least as shitty as the shittiest open source library I've ever had the misfortune to use. Shittiness is not a property exclusive to free software; paying somebody to do a job doesn't guarantee they'll do it properly. If that were untrue, this site would not exist.



  • @flabdablet said:

    Shittiness is not a property exclusive to free software;

    Who said otherwise?



  • @Yamikuronue said:

    @blakeyrat said:
    Release early, release often

    I love that one. If you tell people "test early, test often," they'll bitch and moan until kingdom come about how useless unit tests are. "release early, release often" is a mantra adopted easily by people who want to try and shift the testing burden to the end user so they can save themselves the "tedium" of writing tests for their own goddamn code. What ever happened to pride in your work?

    Wish I knew... Our management is PROUD they're getting onto the damn release train schedule. In my year-end review, I was criticized for being a bit negative (it was a fair criticism).

    edit: Forgot to mention, we're closed source.


  • Discourse touched me in a no-no place

    A release train can be OK, provided everyone remembers that if a fix or feature change isn't in a state where it works when the train leaves the station, it'll have to wait for the next one, no matter how much they've promised anyone else that it would be done. (Having frequent releases reduces the sting of missing one, of course.)

    If you can stick with that, it can work very well indeed.

    Otherwise, we'll be seeing you back here soon as the subject of a front page article, probably hacked around by the editors so much that even you won't recognise the submission.



  • @FrostCat said:

    In addition to the reason you mentioned, SQL Express has (IIRC) a 10GB database size limit.

    This is absolutely correct, and I can't stress this enough: when the DB size limit is reached, it almost invariably corrupts your DB, making it basically impossible to recover your data (even if you have access to a fully licensed instance). So take regular backups, and start to plan your migration to a fully licensed edition well before you get to that point.

    Aside from that, SQL Server's licensing model is just batshit crazy, and has become so complex you literally need to engage a lawyer to ensure you comply with it. Furthermore, it only runs on Windows, so your licensing / hosting costs will be higher.

    From a dev's perspective, MSSQL is great. It does the right thing most of the time, has good tooling, and the .NET ORMs are better than anything I've seen in the OSS world, but "express edition" really is crack-addiction marketing at its best. Devs love it and use it prolifically; however, by the time you realize you need to move to a licensed edition, you're already backed into a corner.

    From an operations perspective, scaling it is expensive (almost on par with Oracle, and for a far inferior solution by comparison).

    On a serious note: if anyone knows of a consultant in the South Pacific with proven experience in large-scale SQL Server to pgsql migrations on a several-hundred-table schema, with a few different C# application servers (all having a reasonable DAL), send me a PM.


  • Discourse touched me in a no-no place

    @caffiend said:

    This is absolutely correct, and I can't stress this enough: when the DB size limit is reached, it almost invariably corrupts your DB, making it basically impossible to recover your data (even if you have access to a fully licensed instance). So take regular backups, and start to plan your migration to a fully licensed edition well before you get to that point.

    :facepalm:

    Damn. Progress databases suffer from a remnant of a 16-bit limitation or something, where the database has to be made of multiple physical files if the DB is over 2GB, because the engine can't access files greater than 2GB[1]. If you do something that would cause an extent to go over 2GB, you'll crash the database, which, admittedly, is lame, but it's almost guaranteed you will not suffer corruption. All you do is run a DB tool that adds a new extent. When you start the server, it always does a crash recovery check.

    [1] this isn't 100% true any more but that's not relevant to what I was talking about.


  • Discourse touched me in a no-no place

    @caffiend said:

    by the time you realize you need to move to a licensed edition, you're already backed into a corner.

    You should be able to--I realize you'd need to know this ahead of time--set some kind of watch on the database size or the file size to warn you before you hit the limit, I would assume.
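
    Something like this from a scheduled job ought to do it, I'd guess; a sketch only (sys.database_files reports sizes in 8 KB pages, and IIRC the Express cap counts data files but not the log):

    -- Approximate data-file size in MB for the current database.
    SELECT SUM(size) * 8 / 1024 AS data_mb
    FROM sys.database_files
    WHERE type_desc = 'ROWS';  -- the log file doesn't count toward the cap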



  • @caffiend said:

    This is absolutely correct, and I can't stress this enough: when the DB size limit is reached, it almost invariably corrupts your DB

    Or you could use transactions. But hey whatever.



  • @FrostCat said:

    If you do something that would cause an extent to go over 2GB, you'll crash the database

    It's similar with SQL Server. I'm not 100% certain, but my guess is that the size limitation is enforced at a fairly low level, so write-ahead logging and the other techniques used to ensure that data is recoverable fail in weird ways, because only some writes stop working. Writes that don't need to increase the size of the .mdf work, and writes to the .ldf work, but it never ends well.

    Thus far, I've had dozens of clients with this problem (despite recommendations against using the express edition to store unbounded datasets which grow continually over time and make no provision for archiving or partitioning). But when it happens, the consequences are usually catastrophic, and they usually occur at really inconvenient times (like 2am on the Saturday before a long weekend).


  • Discourse touched me in a no-no place

    @caffiend said:

    It's similar with SQL Server.

    In the case of Progress, it's a legacy limit of some kind from when the software was 16-bit. There's now a setting that lets you go past the 2GB file limit, but people with really big databases who also want the best performance tend to bung the extents onto a RAID array, making sure that each extent is on its own spindle. Well, they did that when disks were single-digit GB in size, anyway; these days I tend to work with DBs that are only a few GB at most, so I'm not sure what the current advice is.

    Fun fact: I once moved a highly transactional database with dozens of simultaneous users from a 30-disk RAID5 SAN to a pair of striped disks in the server, and nobody noticed any kind of performance hit.


  • Discourse touched me in a no-no place

    @caffiend said:

    But when it happens, the consequences are usually catastrophic, and they usually occur at really inconvenient times (like 2am on the Saturday before a long weekend).

    Yup, or in the middle of a payroll run, minutes before the bank's ACH deadline.


  • Discourse touched me in a no-no place

    @blakeyrat said:

    Or you could use transactions.

    SQL Express doesn't use transactions unless you ask for it?! :wtf: Every sane DB out there uses transactions, even if it has to magically make them and commit them behind the scenes for you.

    Sounds to me more like the code is built to only use the 32-bit file offset API, and when things get too large, it just merrily starts scribbling over the beginning of the file. Which is purest evil. (It might get slightly over the 2GB limit, depending on exactly how the seeks and writes are performed, but not by much.) Because it is such a low-level limitation, the transaction stuff never sees it at all; by the point when the transactions start failing, the DB is already corrupted and using a backup is the only reasonable answer.



  • @dkf said:

    SQL Express doesn't use transactions unless you ask for it?!

    ? No, not generally.

    @dkf said:

    Every sane DB out there uses transactions, even if it has to magically make them and commit them behind the scenes for you.

    That doesn't make sense. How can the database know (in a business logic sense) whether a transaction is finished or not? Telepathic database?

    The best it can do is assume a single query is a transaction.
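
    To be concrete: for the database to group two statements, you have to tell it where the boundary is. A minimal T-SQL sketch, with a hypothetical account table:

    BEGIN TRY
        BEGIN TRANSACTION;
        UPDATE account SET balance = balance - 100 WHERE id = 1;
        UPDATE account SET balance = balance + 100 WHERE id = 2;
        COMMIT TRANSACTION;  -- both changes land together...
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;  -- ...or neither does
    END CATCH;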

    @dkf said:

    Sounds to me more like the code is built to only use the 32-bit file offset API, and when things get too large, it just merrily starts scribbling over the beginning of the file.

    I highly doubt Microsoft would make that mistake.

    I think it's more likely that caffiend is full of shit, and/or the corruption he refers to is corruption of business data caused ultimately by failing to use transactions. (So part of the business transaction was committed and another part wasn't.)



  • @FrostCat said:

    Yup, or in the middle of a payroll run, minutes before the bank's ACH deadline.

    Pfft, I'm an incorporated contractor, so my care factor for payroll isn't particularly high.

    Fun fact though: this time of year is great for contractors. Everyone's accounts payable department seems to go on holiday for 3 months, during which time Terence McKenna's machine elves break into their offices and steal all your invoices from November and December, meaning you have to resubmit them all in February. This causes a knock-on effect whereby paying them all would exceed some departmental expenditure limit, meaning you get strung out until June.


  • Discourse touched me in a no-no place

    @blakeyrat said:

    That doesn't make sense. How can the database know (in a business logic sense) whether a transaction is finished or not? Telepathic database?

    In Progress, if you do your data access in the 4GL, you get transactions automatically, scoped similarly to the way variables are scoped in C-like languages.


  • Discourse touched me in a no-no place

    @caffiend said:

    Pfft, I'm a contractor, my care factor isn't particularly high.

    Snort. The employees you work with feel differently, I'll bet. (As far as I can tell, my company pays contractors on the same schedule as the employees, but we rarely use them.)



  • @FrostCat said:

    In Progress, if you do your data access in the 4GL, you get transactions automatically,

    What's the scope of the transaction? The current connection? How does that work with connection pooling? (If at all.)

    I don't know what "the 4GL" is and Wiki's page on it is delightfully vague and useless. I guess Progress has multiple query languages in different "generations"? And 4GL is a newer/older one than the default? Or...?



  • Unfortunately, I do most of my work in the banking sector, and like everything else in banking, their payroll systems use something that was designed before I was born, probably running on a VAX emulator, which is maintained on the principle of "don't touch it, lest you accidentally stop it from working, and the last guy who knew how to fix it died in 2008".



  • @dkf said:

    SQL Express doesn't use transactions unless you ask for it?! :wtf: Every sane DB out there uses transactions, even if it has to magically make them and commit them behind the scenes for you.

    This statement is incorrect. All editions of SQL Server since SQL Server 2000 wrap each unenclosed insert or update statement in an implicit transaction at the default isolation level. Read statements are the same: they are executed at the default isolation level, which in a vanilla installation is "read committed".
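
    If you want to check what a given session is actually running at, a quick sketch:

    -- 2 = READ COMMITTED (the vanilla default); 1 would be READ UNCOMMITTED.
    SELECT transaction_isolation_level
    FROM sys.dm_exec_sessions
    WHERE session_id = @@SPID;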


  • Discourse touched me in a no-no place

    @blakeyrat said:

    What's the scope of the transaction? The current connection? How does that work with connection pooling? (If at all.)

    In Progress, you access tables in a manner similar to the way you access variables, and the scope of the transaction is the scope of the outermost record you access with a write-lock. Typically that's going to be either a loop statement or the function the access is in. If you don't want function scope, but something a bit smaller, you can use an optional transaction statement to reduce the scope. Traditionally in a Progress app you connect to the DB on startup and maintain a connection for the life of the application instance, although you can connect and disconnect (roughly) on the fly. So let's say, I dunno, I want to move every user to Florida for some reason:

    for each user exclusive-lock:
      user.zip = "33071".
      user.state = "FL".
    end.
    

    the scope is the for each statement. If there's an extra table I wanna access, I'd do it inside the for statement, and then that becomes part of the transaction automatically. If you want to expand scope, you can use the transaction keyword again:

    do transaction:
      for each table1 exclusive-lock:
        ..
      end.
    
      for each table2 exclusive-lock:
        find first table3 exclusive-lock no-error.
        ..
      end.
    end.
    

    That's one transaction.



  • Ok, but that's not even SQL at all.

    SQL Server is (oddly enough, given its name) a SQL database. It uses SQL.

    SQL.


  • Discourse touched me in a no-no place

    @blakeyrat said:

    I don't know what "the 4GL" is and Wiki's page on it is delightfully vague and useless. I guess Progress has multiple query languages in different "generations"? And 4GL is a newer/older one than the default? Or...?

    Sorry. Progress's data access/query language is a fourth-generation language; there's only one, although they add features from time to time. One of the hallmarks of a major version-number change of the product is new language features: GUI in version 7, COM access in version 8, a native facility similar to MSXML in version 9, and so on.


  • Discourse touched me in a no-no place

    @blakeyrat said:

    Ok, but that's not even SQL at all.

    SQL Server is (oddly enough, given its name) a SQL database. It uses SQL.

    That's true. If you access a Progress database in SQL then you have to take a certain amount of responsibility for your transactions. If you write your code in Progress, you get transactions for free, including rollback. Also, if the database crashes mid-transaction, you can safely assume the entire transaction will be rolled back automatically the next time the database starts up.



  • @FrostCat said:

    If you write your code in Progress

    And why would someone actually want to do that?


  • Discourse touched me in a no-no place

    @caffiend said:

    And why would someone actually want to do that?

    Because it's the easiest way to access the data in a Progress database?



  • @FrostCat said:

    Because it's the easiest way to access the data in a Progress database?

    Have I been living under a rock or something? Because until this thread, I'd never even heard of a "Progress database." I just assumed that either you or your autocorrect had misspelled Postgres, and you were referring to some stupidly old version of it when talking about file-size limitations.

