Peas, jest here mii out awn these won


  • FoxDev

    I'm serious, just hear me out. I don't want to see a million posts saying 'No' out of some idea that somehow I'm trying to spoil people's fun; I'm not. I just want to see if we can at least isolate some of the causes of server cooties, which we all know annoys everyone.

    So, here's what I'm thinking:

    Lock The Official Likes Thread (not forever!)


    Now you've picked yourself up off the floor, here's why:

    • We know for a fact that thread has caused issues before.
    • Every time there's a push for a landmark number, the entire forum suffers; we've all seen it.
    • The amount of load and traffic generated by simply viewing the thread is known to be well above what it should be.
    • The entire forum was down for over 40 minutes just moments after someone was catching up with Likes, and it had already gone through two 15-minute downtimes within the previous two hours.

    Now, I know what some of you will say:

    How do we know t/1000 is the cause?

    To which I reply:

    How do we know it isn't?

    At least this way, we'll know.


    I don't ask for unanimous support, or even widespread agreement; I'm just asking you to think about this idea before rejecting it.

    I will have this thread on mute for 24 hours, simply because I don't want to get involved with any flamewar that may brew up. Once that 24-hour period is over, I will revisit this thread and read to the end before replying. If, after careful consideration, the general consensus is against it, I will drop the idea, and that will be the end of it.


    We now return to our regular ~~spamming~~ programming.



  • @RaceProUK said:

    The entire forum was down for over 40 minutes just moments after someone was catching up with Likes

    As said elsewhere,
    #[CITATION NEEDED]


  • BINNED

    @RaceProUK said:

    > How do we know it isn't?

    We wait for someone to check the logs. Seriously. That's the first thing we need to do before taking any action.

    Then, we fix this shit. Not because I'm in love with /t/1000 but because the suspected bug that's causing this is fucking stupid.

    And if having a thread like /t/1000 is against someone's ideology, I couldn't give a rat's ass. A bug is a bug is a bug.

    I'll fix the fucking shit if I have to. Even if we close the thread. And even if it never gets pulled. Out of the personal fucking principle of not leaving shit broken just because someone's pantyhose gets in a bunch when someone mentions 42k posts in a single thread.



  • @Onyx said:

    We wait for someone to check the logs. Seriously. That's the first thing we need to do before taking any action.

    QFT, :likelike:, etc.



  • @RaceProUK said:

    Lock The Official Likes Thread

    No.

    If a dog shits on the carpet, you don't just shampoo it and pet the dog.

    You hold the dog's nose right down in that shit.

    We need Discourse to fail, and to fail due to its own fucking failures, because Discourse is the shit.

    If you're going to campaign for something to reduce our pain, campaign to switch to forum software that isn't the shit.


  • kills Dumbledore

    Since I ignore the likes thread, it wouldn't affect me one way or the other.

    Having said that, locking it sounds a bit like capitulating to Jeff's "Nooo, stop that. It's not CIVILIZED!!eleven" toddler tantrums, so on principle, and because it's been an effective QA tool in finding some pretty silly bugs, I think it's best to keep it open.



  • @Jaloopa said:

    Since I ignore the likes thread, it wouldn't affect me one way or the other.

    But it does affect you. If the rest of us are spamming /t/1000 and the whole site goes down, you can't see any of the other threads.



  • With a heavy heart, I have to concede to @RaceProUK's point. I know this sacrifice will be hard on everyone. We've all gotten used to the endless stream of delight that is the t1000 thread. But all good things must come to an end.

    So let us join hands in celebration as we, for one last time, scroll through kilometers of infinitiscroll-powered drivel and remind ourselves of the good old times. The times when we could run 20 bots and not have the forums crash. When we were still pretending the Likes thread was about testing.

    Ahh, all the likes given. All the laughs had. Goodbye, old friend. You will be missed.

    Filed under: Kill the fucker, @boomzilla



  • One alternative is to keep breaking the forum in order to put pressure on the devs to fix it. In my opinion, they won't be swayed by that pressure. They've already told us that 42K posts is Doing It Wrong™, they've already told us to break the thread into smaller threads (basically telling us to implement pagination, which goes against their claim that pagination is Doing It Wrong™).

    And the other alternative is to fix it ourselves. And here's where @blakeyrat will (correctly) call us suckers for doing work for Jeff for free.

    I guess I'm saying we can't have both a usable forum and the likes thread at the same time.



  • @RaceProUK said:

    Once that 24-hour period is over, I will revisit this thread

    Hi @RaceProUK, I bet you're reading this early! You can't resist.... 😉



  • @NedFodder said:

    In my opinion, they won't be swayed by that pressure. They've already told us that 42K posts is Doing It Wrong™, they've already told us to break the thread into smaller threads (basically telling us to implement pagination, which goes against their claim that pagination is Doing It Wrong™).

    That is @wood who is not being swayed by that pressure -- Sam and Riking have been much more responsive to our issues/concerns, and if we can keep bypassing @wood while getting Sam and Riking to keep fixing things we find, we'll continue to make progress towards Discourse That Works™.

    My vote is to keep /t/1000 open (even though I'm not active in it) and keep the heat under the Discodev seats to fix the perf issues.

    Filed under: If I used a time machine to fetch a version of Discourse that worked perfectly, would Blakey still rant about it?


  • BINNED

    @NedFodder said:

    I guess I'm saying we can't have both a usable forum and the likes thread at the same time.

    And you don't think that's bullshit? In software FOR THE NEXT TEN YEARS!?

    Again, I had my fun in there. I can have fun outside of it, too. I don't even participate that much any more, really.

    But what's next? Status? Oh, it's a silly thread, ok, fine. Bad ideas? Well, it was a bad idea to open it har-har.

    Oh, BTW, guess what? Nested quotes work. Well, as close to working as Discourse gets. Wasn't that a no-no? A thing that won't get fixed? Do you know meta.d has a likes column? The one that got removed previously?

    You know what, I invested some time in this place. I made the original raw button. I helped with servercooties.com. I'd be making more shit right now but I don't have the time. I'm not bragging, just stating the facts because it reinforces my point: I like this community. You guys helped me learn by showing me my mistakes. I can't teach most of you anything at this point (maybe, one day, when my beard goes grey, and even then I doubt it). So I try to make cool stuff for us to use.

    I like this community. There was talk about how it changed and went to shit. Maybe. I like it now still. But I don't like the defeatist attitude. Fucking hell people, we got banned by fucking Samsung because we kept shitting on their stuff because they deserved it. And now we're going to give up here, against some bikeshedding ideologue? What the hell happened to us?

    No. Fuck it. I'm a stubborn bastard. And that's probably the only reason I'm not completely agreeing with blakey here about changing software. This shit is either getting fixed or burned to the ground. Closing /t/1000 goes against that. If it happens, I'm likely out of here. Again, not because of the thread. Fuck it. I can back that up and read it in fucking vim if it so strikes my fancy. But because that's admitting defeat by shit.

    Fuck. That. Noise.



  • So I don't know if server cooties happen frequently or reliably enough for this to make sense, but what about temporarily locking it for diagnostic purposes?

    If it's locked and server cooties go away, it doesn't have to stay locked; if anything, that just gives us better data to argue that whatever bug is causing it should be fixed.

    But I don't know if it happens reliably enough for that to provide much help.


  • BINNED

    @EvanED said:

    what about temporarily locking it for diagnostic purposes?

    That can be gleaned from the logs much more efficiently. It was always visible in the past anyway.



  • @tarunik said:

    My vote is to keep /t/1000 open (even though I'm not active in it) and keep the heat under the Discodev seats to fix the perf issues.

    👍

    @tarunik said:

    we'll continue to make progress towards Discourse That Works™

    I have my doubts that we'll ever get there, but (somewhat unsteady) movement in that direction does occur.

    @tarunik said:

    If I used a time machine to fetch a version of Discourse that worked perfectly, would Blakey still rant about it?

    That assumes that a working version of Discourse would ever exist.



    /t/1000 has uncovered how many performance issues now? And despite what Jeff & Co may believe about how megathreads are Doing It Wrong™, virtually every forum I'm a member of has one or two megathreads, and in fact our favorite megathread is actually pretty dang small as far as megathreads go.

    This crap needs to be fixed. We were only the first to encounter these issues; other Discourse forums will run into the exact same thing in a few more years. And if we can manage to kill the site with a relatively small megathread on a forum with a relatively small user base, what would happen on real forums, for example ar15.com or overclock.net, where they often have 5000+ simultaneous active users (and, I should add, perform quite well on ancient toxic hellstew PHP systems)?!?! Discourse's performance as it stands really limits it to tiny, relatively inactive communities. They can either fix the performance, or yell at us that we're Doing It Wrong™ and in doing so condemn themselves to never making it big.

    Lock it or not, I'm actually neutral on that. But I am proud that we've (ab)used /t/1000 to uncover so many issues, not because I like rubbing the DiscoDevs' noses in their crap but because a huge part of my job is testing, and while /t/1000 is a rather silly method of testing, it has proven surprisingly effective in uncovering actual issues in the software.



  • Keep it open. It is part of what makes WTDWTF what it is. Breakage and all.


  • Discourse touched me in a no-no place

    @EvanED said:

    So I don't know if server cooties happen frequently or reliably enough for this to make sense, but what about temporarily locking it for diagnostic purposes?

    I don't have a problem with this. Nobody says it has to stay locked, even if it is the cause of server cooties.

    I don't have a ~~dog~~ like in this ~~hunt~~ thread, since I don't even participate in that topic.


  • Discourse touched me in a no-no place

    @mott555 said:

    not because I like rubbing the DiscoDevs' noses in their crap

    Right. That's just gravy, so to speak.



  • I think someone mentioned this example before, but I can't find it via discosearch. This xkcd forum topic is currently at 92311 posts, or 3208 "pages", if you will. It's been growing for over two years, and has several new posts today. The site is quite responsive, even with most posts containing images. No "likes" though, maybe that's the problem here. /s

    http://forums.xkcd.com/viewtopic.php?f=7&t=101043


  • BINNED

    @nightware said:

    No "likes" though, maybe that's the problem here.

    That was a problem in the past. Got fixed. Currently it's Discourse being a non-paginated piece of shit that still uses what it calls "batches" of however many (20?) posts it loads as you scroll. That's fine. The problem is that it queries and then sends you the ENTIRE list of ALL post IDs in the thread. For every 20 posts you read it actually fetches, in /t/1000's case, more than 42k records from the DB. Just so it can show you 20 of them.
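
    In rough ActiveRecord terms, the pattern is something like this (just a sketch; the model and column names are my guesses, not actual Discourse source):

    # Sketch only -- model/column names are guesses, not the real Discourse code

    # What appears to happen on every batch request: pull the full, ordered id list
    all_ids = Post.where(topic_id: 1000).order(:post_number).pluck(:id)  # ~42k ids for /t/1000

    # ...of which only one 20-post window actually gets rendered
    window_ids = all_ids[2_099 * 20, 20]  # 20 ids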


  • Java Dev

    I don't /t/1000, but I still think we should not surrender.



  • @blakeyrat said:

    If a dog shits on the carpet, you don't just shampoo it and pet the dog.

    To me, the suggestion here is more like this:

    Someone has shit on their shoe, and there's shit on the floor.

    1. Someone could have put shit on the floor and they stepped on it.
    2. There could be a pile of shit in another room, and the person stepped on that, then stepped on the spot in this floor.

    If we just clean the floor and say "look, there's no more shit on this floor", we'll miss the shit in the other room until there's shit in every room.

    And that's with the dog shitting on more floors.

    And if you hold the dog's nose to the shit on this floor, the dog is going to be like "I didn't shit here, :wtf:". Then the dog is going to go around smelling shit, because he thinks that's what you want him to do. (If you blame pagination, and that isn't the problem, they'll be like "it ain't pagination, it's your thread", and they'll continue to blame everything but themselves for the shit.)


    The answer is to follow the trail of shit.



  • @Onyx said:

    we got banned by fucking Samsung

    What? When? Where?


  • BINNED

    @PleegWat said:

    I don't /t/1000, but I still think we should not surrender.

    https://www.youtube.com/watch?v=SJ2hJezvd2I




  • FoxDev

    I know I said I wasn't going to check this, but I'm glad I did.

    The lock will not be permanent! A small detail I… kinda left out of the OP 😊

    Once we know whether or not it's the cause, we reopen regardless.


    I have edited the OP to reflect this omission; please, continue as before ;)


    And @xaade, that title edit wasn't cool. This is a serious issue I have raised here, and I want it treated properly.
    I have reversed your edit; I do not expect to have to do so again.



  • Heretical Hedgehog Suggests Science!



  • @RaceProUK said:

    The lock will not be permanent!

    Correlation != causation.
    You could easily assume the thread is the bug just because locking it makes the cooties go away.

    A better solution is just to fix the known bug.

    @RaceProUK said:

    I do not expect to have to do so again.

    😑 Yes... ma'am.

    But to be fair, you created a clickbait title.



  • I never even looked in /t/1000, but with respect to bugs: People shouldn't cry—bugs must die.



  • @RaceProUK said:

    that title edit wasn't cool

    But is that what brought you back in here? 😄


  • :belt_onion:

    @mott555 said:

    And if we can manage to kill the site with a relatively small megathread on a forum with a relatively small user base,

    Not to mention that our megathread only has like 10 active contributors.

    Imagine a forum with a megathread that people actually bother with because the content isn't just stupid spam... 50+ people reading real content in a dischorse megathread would destroy all of dischorse as we know it........ SOMEONE DO IT.



  • that was epic



  • @NedFodder said:

    @accalia said:
    indeed, and i think it's working? i tend to get at most two notifications from the bots now, often just one.

    does that mirror your observations?

    Wait a sec... I thought the whole point of this thread was to stress test Discourse. Why dumb down the bots when we should be putting more pressure on the Discodevs to improve their product? I say turn the bots on full blast until they fix it so that we only get one notification per post!

    So I made this comment when people were complaining about getting multiple like notifications from the bots. The response I got was "meh", and all bot owners scaled back on the bot activity. So, if the consensus here is to fire away at /t/1000 until they fix it, why not turn the bots back on full blast?

    I don't care really, just looking for some consistency from the community.


  • Discourse touched me in a no-no place

    @Onyx said:

    For every 20 posts you read it actually fetches, in /t/1000's case, more than 42k records from the DB. Just so it can show you 20 of them.

    Have any of the discodevs explained why they'd do such a seemingly-stupid thing? (I'm open to the idea there could be a good, or even any, reason for it, but it doesn't seem like a good idea.)


  • Discourse touched me in a no-no place

    @RaceProUK said:

    I have reversed your edit; I do not expect to have to do so again.

    Now that's just trolling.


  • Discourse touched me in a no-no place

    @NedFodder said:

    So, if the consensus here is to fire away at /t/1000 until they fix it, why not turn the bots back on full blast?

    Because that could be the difference between sporadic and continuous problems.

    Also, fuck off discotoaster, etc.


  • BINNED

    @FrostCat said:

    Have any of the discodevs explained why they'd do such a seemingly-stupid thing? (I'm open to the idea there could be a good, or even any, reason for it, but it doesn't seem like a good idea.)

    I don't think there was a definite answer, but I have a hypothesis.

    First of all, I saw the Discourse database structure. It's 3NF, it has indexes set up. That's about it. No stored procs, no triggers, no foreign keys. Now, they have both a Postgres and a MySQL (WHY?!?) backend. Stored procs in MySQL are shit. MyISAM (which doesn't support foreign keys) is faster than InnoDB, IIRC. So, that's why that is like that.

    So... referential integrity and all that? Yeah, all in Ruby. Now, Postgres doesn't like OFFSETs much. MySQL's query optimizer loves to miss the sensible index to use. So, my guess is they decided to give up on trying to optimize queries for two separate RDBMSs (they use an ORM of some kind anyway) and do all the crunching in... wait for it... JavaScript! To reduce the server load, I guess?
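
    To illustrate the OFFSET bit (again, a sketch with guessed model/column names; keyset paging is just the standard alternative, not something I know they evaluated):

    # Sketch only -- guessed names, not Discourse source
    last_seen_post_number = 41_980

    # OFFSET paging: Postgres still has to walk past (and discard) every row before the window
    page = Post.where(topic_id: 1000)
               .order(:post_number)
               .offset(last_seen_post_number)
               .limit(20)

    # Keyset paging: seek straight to the window via an index on post_number
    page = Post.where(topic_id: 1000)
               .where("post_number > ?", last_seen_post_number)
               .order(:post_number)
               .limit(20)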

    Now, since it takes you a while to read those 20 posts, and everything is updated live, there might be posts moved, deleted or whatever in that time. So, the most sensible thing? Do the whole thing over once you load the next batch, of course!

    Note that I'm guessing a lot here. But it sounds sensible to me.



  • @Onyx said:

    The problem is that it queries and then sends you the ENTIRE list of ALL post IDs in the thread. For every 20 posts you read it actually fetches, in /t/1000's case, more than 42k records from the DB. Just so it can show you 20 of them.

    Seriously?
    My God, man, I could eat a handful of iron filings and puke a better system than that - and I've never done a 'responsive' website in my life.

    That is so ****ing stupid. Who thought of that, and when are they changing it?

    It's such a blatantly "Brute-force" solution that it screams "We don't know what we're doing and/or don't want to spend any time on this product".
    Is there even any attempt to optimise anything at all?


  • Banned

    Just to explain some stuff here.

    We have this code:

    # grabs the id of every post in the (filtered) topic, in the requested order
    def filter_post_ids_by(sort_order)
      @filtered_posts.order(sort_order).pluck(:id)
    end
    

    Looks innocuous enough ... however

    176 samples are there just doing plucking, 'cause ActiveRecord is just slow that way. Add to that post counts by user and a bunch of other stuff, and yes, the likes topic is causing extreme pain.

    The more severe part is that it's crippling app code, blocking 3 web front ends, and stalling requests.


    I am fixing stuff as I go; in fact, I think I may just put some hard limits in to protect the app and the db.
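
    Something along these lines, roughly (just a sketch of the shape; the constant name and cutoff are invented for illustration, not the actual change):

    # Sketch of the idea only -- constant name and cutoff are made up
    MAX_PLUCKED_POST_IDS = 20_000

    def filter_post_ids_by(sort_order)
      # cap how many ids a single request is allowed to pluck, so one huge
      # topic can't tie up the web workers
      @filtered_posts.order(sort_order).limit(MAX_PLUCKED_POST_IDS).pluck(:id)
    end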

    But yeah ... the likes topic is definitely hurting stuff here.

    Will download a db now and see if there are any easyish wins, but I agree with @codinghorror here: the topic was there to "prove" you can break Discourse.

    You "broke" Discourse hence you "win", if it was me I would just archive and close the topic.


    That said, I am using information here to optimise these processes; it does take time and is complex work.


  • FoxDev

    42k posts in a thread shouldn't bring a forum down so regularly. Or at all.

    Here are two threads from a forum that runs IP.Board:

    Both around twice the length of the Likes thread. Performance issues: zero.

    There's another forum I used to frequent that uses XenForo. The longest thread I can find there is shorter than t/1000, but it's still over 32k posts. Performance issues: zero.

    Hopefully this explains why people find the issues Discourse has with long threads ludicrous.



  • @sam said:

    You "broke" Discourse hence you "win", if it was me I would just archive and close the topic.

    I am sorry, but if I can successfully DoS-attack your software from a single computer, IMO that is a serious problem.


  • Discourse touched me in a no-no place

    @sam said:

    176 samples are there just doing plucking

    What does that even mean, for those of us who don't know Ruby or whatever that code snippet is? And what about it takes up so much resource?


  • Banned

    At some point it is not about being "right", it's about being pragmatic.


  • Banned

    approx 176ms of CPU work on the server grabbing every post number in the topic (discounting cheap db work)


  • Discourse touched me in a no-no place

    @sam said:

    You "broke" Discourse hence you "win", if it was me I would just archive and close the topic.

    Sam, have you ever addressed the concerns of people who point out that plenty of forums legitimately have threads like that?


  • Banned

    Sure, I want this to work, perfectly.

    But... the reality is that out of our 200 customers, the largest topic, globally, is 5688 posts. So it's not exactly my top priority at the moment.


  • FoxDev

    Some of this was written before your last post, and I'm not wasting it 😛


    @sam said:

    At some point it is not about being "right", it's about being pragmatic.

    At some point you have to admit that somebody somewhere is going to use your software in ways you don't expect, and will inevitably break it.
    At some point you have to admit that forums have long threads.
    And at some point you have to admit that the core of Discourse is broken.

    @sam said:

    Sure, I want this to work, perfectly.

    I like you @sam; you're one of the good guys, and you've already fixed so much for us. And that statement only confirms it. So what I say here is not directed at you, but at this attitude that such fundamental issues are acceptable because, as seems to be a catchphrase, we're Doing It Wrong™. The thing is, we're not doing it wrong; countless communities have their forum games, and we're no different.

    @sam said:

    But... the reality is that out of our 200 customers, the largest topic, globally, is 5688 posts. So it's not exactly my top priority at the moment.

    I'd say a fundamental bottleneck takes priority over whatever shiny is being asked for. And before the inevitable 'but they pay the bills', think about this: fix our issues with t/1000, and those 200 paying customers will also benefit.


    TBH, sounds like you need someone working full-time on optimisations and performance tuning.

    And no, I'm not volunteering; my experience with Postgres, Ruby, and Ember is non-existent 😆


  • I survived the hour long Uno hand

    I think you're misunderstanding the point of the Likes thread. They're not spam, they're legitimate posts. Right now there's a discussion about coding practices and patterns; recently, it's been Discourse bugs, cleaning up the discopedia thread, video games, emoticons, user scripts... basically the same crap we talk about everywhere else, but more so.


  • Banned

    @Yamikuronue said:

    I think you're misunderstanding the point of the Likes thread. They're not spam, they're legitimate posts.

    I understand that... one aspect of it is abuse. The whole liking thing. But the content itself is legit; it's just that, as it stands, the engine is struggling with it.

