You merely adopted the 500. I was born in it, molded by it. I didn't see a fully rendered Discourse page until I was already a man
-
why you chose to switch to a forum package written in Ruby
Eh, I don't particularly care. Personally I think it's a terrible, elitist language -- maybe like the French of programming? -- but the reality is that's what programmers want to use these days. So if you want modern software, it's gonna be built in Ruby and run on Linux.
That said, @sam and @codinghorror are both great programmers, so despite their terrible choice of platforms, they managed to make some pretty damn good software and have a solid vision for their product. Plus, they stand by it and support it. But, we've gone down that discussion road already (see every thread posted like 6 months ago).
-
True, but he's also more active here than any of the Discodevs
And that means what exactly? Look according to shadowmod the most active person here is some weird girl called @RaceProUK She is more active than pjh and all the discodevs together. According to your logic, she should be the first to talk to... She will also not blame the bots or the t/1000/ because she partakes in both...
He's definitely nailed some of them.
But facts are anathema to bad jokes… I thought you knew this. You aren't new here, after all.
-
they managed to make some pretty damn good software
The last few days would suggest otherwise.
and have a solid vision for their product
Bit difficult to see the vision from inside Jeff's bike shed though.
-
Yeah.
Their vision is about as solid as jelly.
-
And that means what exactly? Look according to shadowmod the most active person here is some weird girl called @RaceProUK She is more active than pjh and all the discodevs together. According to your logic, she should be the first to talk to... She will also not blame the bots or the t/1000/ because she partakes in both...
-_-…touché…
the reality is that's what programmers want to use these days. So if you want modern software, it's gonna be built in Ruby and run on Linux.
It certainly seems that way… But anyway, like I said, I don't know why Discourse was chosen. And at this point, tbh, it's all rather moot; we have what we have, we just need to find a way to kick it into shape.
-
but the reality is that's what programmers want to use these days.
Reality is what you make of it. If you think it's a bad idea, DON'T DO IT, YOU STUPID DUMBFUCK. How is that hard? Welcome to 3rd grade.
That said, @sam and @codinghorror are both great programmers,
You're either delusional or a fucking flat-out liar.
If they're "great programmers", WHY THE FUCK IS THIS FORUM SO BROKEN!? How does your definition of "great programmer" allow for that? Is it just based on what they say about themselves? Or, like, maybe a cosmic ray struck your brain and short-circuited some of your neurons?
so despite their terrible choice of platforms,
Oh so they're "great programmers" but you say IN THE SAME SENTENCE they make really shitty decisions about what tools they use.
they managed to make some pretty damn good software and have a solid vision for their product.
Now I know you're a fucking liar.
The only work that ever gets done around here from the dumbfucks at Discourse Inc is undoing their own design decisions from a month ago.
"Let's use gravatar! Let's break our implementation of gravatar because we're incompetent idiots and don't understand how it's supposed to work! Let's just remove gravatar altogether because our 'solid vision' apparently changed in the course of 3 months!"
"Let's have square avatars, because we have a SOLID VISION! ... now wait now let's have round avatars, because we have a SOLID VISION!"
WHAT ARE YOU BASING THESE STATEMENTS ON?! IS THERE ANOTHER DISCOURSE? AM I IN A PARALLEL UNIVERSE?!
Plus, they stand by it and support it.
THEY DON'T EVEN QA IT! THEY DON'T EVEN TRY!!!
You are the BIGGEST liar. It's impossible for someone to be this deluded.
You know, Alex, I used to think you were smarter than my cat. That's changed.
-
-
She will also not blame the bots or the t/1000/ because she partakes in both...
The data shows that /t/1000 is relatively slow to load, granted.
Also, as someone who dabbled a bit with poking around the interface the bots use: bots who sit there and "read" the threads are about as much load as me leaving a tab open. Actually, here, I'll reinstall Opera 12 and set autorefresh to 5 seconds. Or install a plugin for that here. That will be more load on the system than a bot autoliking /t/1000. If I crash the whole damned thing like that, is it a stable system?
Jellypotatoes?
++, 'd because I was replying to @Kuro
-
Bit difficult to see the vision from inside Jeff's bike shed though.
One like is not enough for that! :)
-
The data shows that /t/1000 is relatively slow to load, granted.
That might have something to do with the 40,000+ posts in it… and the million+ likes… ;)
@Onyx said: Also, as someone who dabbled a bit with poking around the interface the bots use: bots who sit there and "read" the threads are about as much load as me leaving a tab open.
Well, a little more than that; still, it should be less load than a normal user places on the system. Anyway, we have, what, 44 bot accounts, of which only 5 are currently active; compared to the number of actual human (and vulpine and erinacean and feline) users, it's a pretty small percentage ;) Anyway, @Kuro was only teasing me ;)
-
It's impossible for someone to be this deluded.
We're not even on the same level of delusion.
You know that this is a (former) hobby site of mine that I continue to pay the bills for and keep running because I really enjoy the community that was created. I don't code any more, and hell, I don't even own a personal computer. If you or anyone else would like to take over, I'm totally open to that.
Until then, please continue directing your feedback this way; I sincerely value your opinion and would love to take the opportunity to satisfy and serve you as a valued member!
-
Actually, here, I'll reinstall Opera 12 and set autorefresh to 5 seconds. Or install a plugin for that here. That will be more load on the system than a bot autoliking /t/1000.
Now you have to turn yourself off because we are having a "no bot"-weekend.
++, 'd because I was replying to @Kuro
Sorry ;D
@Sam I am still getting 500s while composing posts sometimes. I know it's weekend and all but could you take a look at the logs?
That would be much appreciated!
Filed Under: Number of 500s while submitting this post: 1
-
of which only 5 are currently active
Is that not 5 more than would normally apply to a "No bot" weekend?
-
If you or anyone else would like to take over, I'm totally open to that.
He did propose to pay the bill for a server hosting the CS forums in read-only mode.
-
You know that this is a (former) hobby site of mine that I continue to pay the bills for and keep it running because I enjoy really the community that was created.
Yes. Now could you do it well?
Call me crazy, but I don't see "I do it as a hobby" as being equivalent to "I do a really shitty job at it and that's ok."
If you or anyone else would like to take over, I'm totally open to that.
I'd gladly take over. But I'm not touching Discourse with a 50' pole, so the first step in that would be for me to set up a new server running sane software. Somehow I don't think you're actually "open" to that, since you've ignored that option since day-fucking-one.
I already offered to host the archived old forums in perpetuity; you ignored that offer, and now they're offline, the Google rank has gone to shit, my work is deleted, and FUCK YOU.
FUCK
-
Now you have to turn yourself off because we are having a "no bot"-weekend.
What the Belgium are you talking about, {{username}}?
-
Is that not 5 more than would normally apply to a "No bot" weekend?
I can only control what mine do
-
Yeah - it wasn't aimed at you as such, just replying to your post.
-
It's Saturday, don't you people have anything better to do than complain about some issues on some site?
Bunch of sociopaths.
-
I can only control what mine do
PR for @accalia: add a remote global death-switch for all the bots!
Filed Under: Terrible ideas topic follow this line: O
-
It's Saturday, don't you people have anything better to do than complain about some issues on some site?
Something is WRONG on the Internet!
Bunch of sociopaths.
-
-
I already offered to host the archived old forums in perpetuity, you ignored that offer and now they're offline
Actually I never saw that offer, otherwise I would have gladly taken you up on it.
The only reason they're down is because they killed the performance on the server. I don't know why, and I couldn't spend any more time to find out. I just turned them on now as a test, and now I have to reboot the server.
Does the offer still stand? If so I'll send you the database and the CS files. Once it's up and running I can change the DNS.
-
Does the offer still stand?
Yes.
If so I'll send you the database and the CS files.
Sure.
Once it's up and running I can change the DNS.
Well the Google ranking's already somewhere between "shit" and "deleted from the index outright" so at this point, that step's almost optional. But yeah, if we're gonna do it we might as well do it right and not break the links.
-
ok! I just PM'd you the ftp info to download the CS database and website files.
-
Memory on ruby processes is fine, I am about to go travelling for a week.
I lowered memory usage for pg just in case, enabled slow query tracking, and the log just goes crazy.
There are two queries that just keep on running and hitting the 100 threshold:
SELECT action_type, COUNT(*) count
FROM user_actions a
JOIN topics t ON t.id = a.target_topic_id
LEFT JOIN posts p ON p.id = a.target_post_id
JOIN posts p2 ON p2.topic_id = a.target_topic_id AND p2.post_number = 1
LEFT JOIN categories c ON c.id = t.category_id
WHERE (a.user_id = 294)
  AND (t.deleted_at IS NULL)
  AND (p.deleted_at IS NULL AND p2.deleted_at IS NULL)
  AND (NOT COALESCE(p.hidden, false) OR p.user_id = 922)
  AND (a.action_type NOT IN (3))
  AND (t.visible)
  AND (t.archetype != 'private_message')
  AND ((c.read_restricted IS NULL OR NOT c.read_restricted OR (c.read_restricted AND c.id IN (23,24,26,28,28,28,29,30,30,30))))
GROUP BY action_type
in particular something keeps hitting the user page for user 922 over and over... I wonder who and why
And the second more severe
2015-04-18 21:42:17 UTC [136-341] discourse@discourse LOG: duration: 184.165 ms execute a6: SELECT "posts"."id" FROM "posts" WHERE ("posts"."deleted_at" IS NULL) AND "posts"."topic_id" = $1 ORDER BY "posts"."sort_order" ASC
2015-04-18 21:42:17 UTC [136-342] discourse@discourse DETAIL: parameters: $1 = '1000'
T/1000 is fucking us big time: we have a query that grabs every post number from that topic each time you visit it and sends it back with the page.
There are a few things we can do here, we need to figure out a way of stopping that massive query AND avoiding that expensive counting (or partially caching) for user pages.
But pg is just not handling the particular load we are throwing at it here, and we need app changes to accommodate.
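For reference, the slow-query tracking described above corresponds to a couple of postgresql.conf settings; a sketch with illustrative values (the 100 ms threshold matches the "100 threshold" mentioned in the post, and the prefix matches the log format shown, but this is an assumption, not the site's actual config):

```
# postgresql.conf -- illustrative sketch, not the production config
log_min_duration_statement = 100        # log any statement that runs longer than 100 ms
log_line_prefix = '%t [%p-%l] %u@%d '   # yields "2015-04-18 21:42:17 UTC [136-341] discourse@discourse"
```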
-
in particular something keeps hitting the user page for user 922 over and over... I wonder who and why
What the fuck?!
922 is me, but I don't have any bots or so. None. At all.
And I'm not the only user with a user card that has broken HTML.
-
And those of us that do have bots don't have them looking at profile pages all the time. Or ever. Certainly no SockBot-based bot.
-
user 922 over and over
So, who is user 922?
Oh, solved. It's @aliceif again, obviously! I knew she was up to no good, making queries loop. I knew it all along!
T/1000 is fucking us big time: we have a query that grabs every post number from that topic each time you visit it and sends it back with the page.
This is confusing me. Isn't this exactly what infinite scroll is supposed to solve? Load the topic in small chunks and then just piece them together when I scroll?
Filed Under: genuinly confused here
-
We are loading in small chunks, but there is a query that selects every post id in the stream and passes it to the client. This strategy is not scaling for a 16k-post topic; we need to fix it.
-
-
I Jeffed 12 posts to an existing topic: Awful fan fics
-
It loads the individual posts in small chunks, but gives you all the ID numbers with every chunk. And I think this madness may be my fault; I suggested adding T/1000 to our uptime tracker http://servercooti.es/, so it's getting loaded every 8 seconds now. I'll set it to some tinier topic...
-
-
Crap, and only @accalia can kill it. Though this was happening even before we did that.
Also, @sam, can the user profile thing be connected to a topic owner? There is this weird situation where this thread:
Is attributed to @aliceif, yet I am the OP (she moved the posts though).
That thread has been pretty active, too. Any chance there's a connection?
-
so it's getting loaded every 8 seconds now
Meh, while I appreciate you trying to take the blame here, I think 5 or more bots not reading /t/1000 against one site reading it every 8 seconds (+ @Onyx) should not make the system kill itself more than usual.
Filed Under: Still made me laugh, though
-
-
I suggest replacing 1000 with 3025. That should avoid killing the site.
-
user.json will trigger this; again, something we've got to fix. Perhaps a userscript causes this?
-
Whoops. I was actually still running my testing instance. Server off now.
-
user.json will trigger this; again, something we've got to fix. Perhaps a userscript causes this?
Hmmm... @aliceif, running anything like that?
One of my old ones does request user.json, but not only for the user using it; it should be calling it across the board. And it only runs on post loading.
-
Temporarily make /t/1000 private and see if the problems stop?
-
Oh ... right.
Stats thingie. I switched it off now.
-
Stats thingie.
Shouldn't continually hit your own profile IIRC, but yeah, let's see if that changes the logs.
@sam - just for reference, in case it was that userscript, this is the code:
No idea why it would hit the owner's user.json though.
-
I blame the immigrants abusing our public health care /t/1000
Except t/1000 hasn't been highly active since yesterday afternoon. The activity there is minimal.
-
The slowness is in read, not write. Which makes it a prime candidate for caching.
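A read-through cache is the usual shape of that fix. A minimal plain-Ruby sketch of the idea follows; TtlCache and the cache key are made up for illustration (in Discourse itself this would presumably go through Rails' cache layer, not a hand-rolled class):

```ruby
# Hedged sketch: a tiny read-through cache with a TTL, standing in for
# what Rails.cache.fetch does. All names here are illustrative.
class TtlCache
  def initialize(ttl)
    @ttl = ttl       # seconds a cached value stays fresh
    @store = {}
  end

  # Returns the cached value if still fresh; otherwise runs the block,
  # stores the result with a timestamp, and returns it.
  def fetch(key)
    entry = @store[key]
    return entry[:value] if entry && Time.now - entry[:at] < @ttl
    value = yield
    @store[key] = { value: value, at: Time.now }
    value
  end
end

cache = TtlCache.new(60)  # cache reads for 60 seconds
calls = 0
post_ids = cache.fetch("topic/1000/post_ids") { calls += 1; [1, 2, 3] }
post_ids = cache.fetch("topic/1000/post_ids") { calls += 1; [1, 2, 3] }
# The expensive read ran only once; the second hit came from the cache.
```

The point is just that the expensive read runs once per TTL window instead of once per request; invalidating on writes is the part that needs care, which is why a read-heavy, write-light topic is the "prime candidate".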
-
Ok, checking of /t/1000 from the outside has been neutered, just in case.
The results are fine for now. But let's give it some time before we claim anything for sure.
-
Which might be a problem here:
INSERTs seem to suffer more than UPDATEs. At least, actions that are connected to them fail more often or more rarely, respectively. I don't think there really are many UPDATEs. Because posts have edit histories, new versions of posts must be kept separate from old versions of the same post; therefore, UPDATEs probably only happen when there's an edit.
So if you want modern software, it's gonna be built in Ruby and run on Linux.
Really? I work on modern software (granted, it isn't forum software) and I use .NET.
@arantor works on what I assume is relatively modern software in PHP.
At work, we are adopting a new collaboration platform to help us work with an overseas team. It's written in Python.
So what was that about Ruby?
We are loading in small chunks but there is a query that selects every post id in the stream and passes to the client, this strategy is not scaling for a 16k post topic, we need to fix it
Ok, I see what you're doing. Still, the better way would be for the client to say "I'm at post x. Send the next chunk." Then the server should be able to relatively easily figure out the posts based on that. And neutering the query like that should (in theory) reduce the load on the server.
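What's being described here is essentially keyset ("seek") pagination. A minimal plain-Ruby sketch of the idea, with the posts table faked as an array and the SQL shown only as an illustrative comment (not Discourse's actual schema or query):

```ruby
# Hedged sketch of keyset pagination: instead of shipping every post id
# to the client up front, the client sends the sort key of the last post
# it has, and the server returns only the next fixed-size chunk.
CHUNK_SIZE = 20

# Stand-in for the posts table: 16k post ids in sort order.
POSTS = (1..16_000).to_a

# The SQL equivalent would look roughly like (illustrative only):
#   SELECT id FROM posts
#   WHERE topic_id = $1 AND sort_order > $2
#   ORDER BY sort_order ASC LIMIT 20
def next_chunk(after_id)
  POSTS.select { |id| id > after_id }.first(CHUNK_SIZE)
end

chunk = next_chunk(40)  # client says "I'm at post 40"
# => posts 41..60, a constant-size response regardless of topic length
```

The response size stays constant no matter how big the topic grows, which is exactly the property the 16k-post topic is violating.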
-
That might have something to do with the 40,000+ posts in it… and the million+ likes…
if only there was a forum out there that had a way of loading small chunks of posts in huge topics without overhead caused by the topic's size… oh wait, that's every paginated forum ever.
GO DISCHORSE!!!111
-
T/1000 is fucking us big time: we have a query that grabs every post number from that topic each time you visit it and sends it back with the page
What the... what?
See previous post.