Refreshrefreshrefreshrefresh


  • Discourse touched me in a no-no place

    @ben_lubar said in Refreshrefreshrefreshrefresh:

    updates since the update where I made the cache buster not change without a docker container change.

    When did you join the Order of Whispers? "Change what cannot be changed."
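
    (For reference, a minimal sketch of what such a cache buster could look like, assuming NodeBB-style Node.js; inside Docker, os.hostname() defaults to the short container ID, so the value survives process restarts but changes when the container is rebuilt:)

    // Hypothetical sketch, not ben_lubar's actual code: a cache-buster
    // value that only changes when the Docker container changes.
    const os = require('os');
    const cacheBuster = os.hostname(); // short container ID inside Docker

    function bust(url) {
      // Clients refetch assets only after a container change,
      // not after every restart of the node process.
      return url + (url.includes('?') ? '&' : '?') + 'v=' + cacheBuster;
    }

    console.log(bust('/assets/app.js')); // e.g. /assets/app.js?v=3f9c2a1b77de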


  • Discourse touched me in a no-no place

    @HardwareGeek Did you try rebooting your phone? Or replacing it with a more modern one? 🍹



  • @FrostCat said in Refreshrefreshrefreshrefresh:

    @ben_lubar said in Refreshrefreshrefreshrefresh:

    updates since the update where I made the cache buster not change without a docker container change.

    When did you join the Order of Whispers? "Change what cannot be changed."

    http://wiki.guildwars2.com/images/f/fa/Blighted_Tybalt_Leftpaw.jpg



  • @FrostCat said in Refreshrefreshrefreshrefresh:

    Or replacing it with a more modern one?

    Fuck you. Give me phone.


  • Discourse touched me in a no-no place

    @ben_lubar Do you have any apples?


  • Discourse touched me in a no-no place

    @HardwareGeek said in Refreshrefreshrefreshrefresh:

    Fuck you. Give me phone.

    Pfft. If you didn't live in San Francisco you could probably afford one of your own.



  • @FrostCat said in Refreshrefreshrefreshrefresh:

    If you didn't live in San Francisco

    I don't live in San Francisco. I still reside in Washington, although I am living temporarily in the Bay Area, just not in San Francisco itself.


  • Discourse touched me in a no-no place

    @HardwareGeek said in Refreshrefreshrefreshrefresh:

    I am living temporarily in the Bay Area

    Close enough.



  • @ben_lubar sup.


  • Notification Spam Recipient



  • @Magus

    54/4568 timed out
    2016-07-19T17:58:02,088432971+0000
    [New LWP 62]
    [New LWP 61]
    [New LWP 60]
    [New LWP 59]
    [New LWP 58]
    [New LWP 57]
    [New LWP 56]
    [New LWP 55]
    [Thread debugging using libthread_db enabled]
    Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
    0x00001ccf02630f31 in ?? ()
    
    Thread 9 (Thread 0x7f1b47be2700 (LWP 55)):
    #0  sem_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/sem_wait.S:85
    #1  0x0000000000fc8188 in v8::base::Semaphore::Wait() ()
    #2  0x0000000000e65949 in v8::platform::TaskQueue::GetNext() ()
    #3  0x0000000000e65a9c in v8::platform::WorkerThread::Run() ()
    #4  0x0000000000fc9140 in v8::base::ThreadEntry(void*) ()
    #5  0x00007f1b47f960a4 in start_thread (arg=0x7f1b47be2700) at pthread_create.c:309
    #6  0x00007f1b47ccb87d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
    
    Thread 8 (Thread 0x7f1b473e1700 (LWP 56)):
    #0  sem_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/sem_wait.S:85
    #1  0x0000000000fc8188 in v8::base::Semaphore::Wait() ()
    #2  0x0000000000e65949 in v8::platform::TaskQueue::GetNext() ()
    #3  0x0000000000e65a9c in v8::platform::WorkerThread::Run() ()
    #4  0x0000000000fc9140 in v8::base::ThreadEntry(void*) ()
    #5  0x00007f1b47f960a4 in start_thread (arg=0x7f1b473e1700) at pthread_create.c:309
    #6  0x00007f1b47ccb87d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
    
    Thread 7 (Thread 0x7f1b46be0700 (LWP 57)):
    #0  sem_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/sem_wait.S:85
    #1  0x0000000000fc8188 in v8::base::Semaphore::Wait() ()
    #2  0x0000000000e65949 in v8::platform::TaskQueue::GetNext() ()
    #3  0x0000000000e65a9c in v8::platform::WorkerThread::Run() ()
    #4  0x0000000000fc9140 in v8::base::ThreadEntry(void*) ()
    #5  0x00007f1b47f960a4 in start_thread (arg=0x7f1b46be0700) at pthread_create.c:309
    #6  0x00007f1b47ccb87d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
    
    Thread 6 (Thread 0x7f1b463df700 (LWP 58)):
    #0  sem_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/sem_wait.S:85
    #1  0x0000000000fc8188 in v8::base::Semaphore::Wait() ()
    #2  0x0000000000e65949 in v8::platform::TaskQueue::GetNext() ()
    #3  0x0000000000e65a9c in v8::platform::WorkerThread::Run() ()
    #4  0x0000000000fc9140 in v8::base::ThreadEntry(void*) ()
    #5  0x00007f1b47f960a4 in start_thread (arg=0x7f1b463df700) at pthread_create.c:309
    #6  0x00007f1b47ccb87d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
    
    Thread 5 (Thread 0x7f1b44add700 (LWP 59)):
    #0  pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
    #1  0x0000000000fbe709 in uv_cond_wait (cond=cond@entry=0x19aa020 <cond>, mutex=mutex@entry=0x19a9fe0 <mutex>) at ../deps/uv/src/unix/thread.c:380
    #2  0x0000000000faf5d8 in worker (arg=arg@entry=0x0) at ../deps/uv/src/threadpool.c:75
    #3  0x0000000000fbe269 in uv__thread_start (arg=<optimized out>) at ../deps/uv/src/unix/thread.c:49
    #4  0x00007f1b47f960a4 in start_thread (arg=0x7f1b44add700) at pthread_create.c:309
    #5  0x00007f1b47ccb87d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
    
    Thread 4 (Thread 0x7f1b37fff700 (LWP 60)):
    #0  pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
    #1  0x0000000000fbe709 in uv_cond_wait (cond=cond@entry=0x19aa020 <cond>, mutex=mutex@entry=0x19a9fe0 <mutex>) at ../deps/uv/src/unix/thread.c:380
    #2  0x0000000000faf5d8 in worker (arg=arg@entry=0x0) at ../deps/uv/src/threadpool.c:75
    #3  0x0000000000fbe269 in uv__thread_start (arg=<optimized out>) at ../deps/uv/src/unix/thread.c:49
    #4  0x00007f1b47f960a4 in start_thread (arg=0x7f1b37fff700) at pthread_create.c:309
    #5  0x00007f1b47ccb87d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
    
    Thread 3 (Thread 0x7f1b377fe700 (LWP 61)):
    #0  pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
    #1  0x0000000000fbe709 in uv_cond_wait (cond=cond@entry=0x19aa020 <cond>, mutex=mutex@entry=0x19a9fe0 <mutex>) at ../deps/uv/src/unix/thread.c:380
    #2  0x0000000000faf5d8 in worker (arg=arg@entry=0x0) at ../deps/uv/src/threadpool.c:75
    #3  0x0000000000fbe269 in uv__thread_start (arg=<optimized out>) at ../deps/uv/src/unix/thread.c:49
    #4  0x00007f1b47f960a4 in start_thread (arg=0x7f1b377fe700) at pthread_create.c:309
    #5  0x00007f1b47ccb87d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
    
    Thread 2 (Thread 0x7f1b36ffd700 (LWP 62)):
    #0  pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
    #1  0x0000000000fbe709 in uv_cond_wait (cond=cond@entry=0x19aa020 <cond>, mutex=mutex@entry=0x19a9fe0 <mutex>) at ../deps/uv/src/unix/thread.c:380
    #2  0x0000000000faf5d8 in worker (arg=arg@entry=0x0) at ../deps/uv/src/threadpool.c:75
    #3  0x0000000000fbe269 in uv__thread_start (arg=<optimized out>) at ../deps/uv/src/unix/thread.c:49
    #4  0x00007f1b47f960a4 in start_thread (arg=0x7f1b36ffd700) at pthread_create.c:309
    #5  0x00007f1b47ccb87d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
    
    Thread 1 (Thread 0x7f1b48fea740 (LWP 54)):
    #0  0x00001ccf02630f31 in ?? ()
    #1  0x00000000000002db in ?? ()
    #2  0x00000000000001a6 in ?? ()
    #3  0x0000000000000000 in ?? ()
    [cluster] Child Process (54) has exited (code: null, signal: SIGKILL)
    [cluster] Spinning up another process...
    19/7 17:58 [2890] - warn: You have no mongo password setup!
    19/7 17:58 [2890] - info: [database] Checking database indices.
    19/7 17:58 [2890] - info: [plugins/spam-be-gone] Settings loaded
    Waiting for 2890/4568
    49/4567 timed out
    2016-07-19T17:58:10,513552759+0000
    [New LWP 66]
    [New LWP 65]
    [New LWP 64]
    [New LWP 63]
    [New LWP 53]
    [New LWP 52]
    [New LWP 51]
    [New LWP 50]
    [Thread debugging using libthread_db enabled]
    Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
    0x000039272dbf69b4 in ?? ()
    
    Thread 9 (Thread 0x7f509390a700 (LWP 50)):
    #0  sem_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/sem_wait.S:85
    #1  0x0000000000fc8188 in v8::base::Semaphore::Wait() ()
    #2  0x0000000000e65949 in v8::platform::TaskQueue::GetNext() ()
    #3  0x0000000000e65a9c in v8::platform::WorkerThread::Run() ()
    #4  0x0000000000fc9140 in v8::base::ThreadEntry(void*) ()
    #5  0x00007f5093cbe0a4 in start_thread (arg=0x7f509390a700) at pthread_create.c:309
    #6  0x00007f50939f387d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
    
    Thread 8 (Thread 0x7f5093109700 (LWP 51)):
    #0  sem_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/sem_wait.S:85
    #1  0x0000000000fc8188 in v8::base::Semaphore::Wait() ()
    #2  0x0000000000e65949 in v8::platform::TaskQueue::GetNext() ()
    #3  0x0000000000e65a9c in v8::platform::WorkerThread::Run() ()
    #4  0x0000000000fc9140 in v8::base::ThreadEntry(void*) ()
    #5  0x00007f5093cbe0a4 in start_thread (arg=0x7f5093109700) at pthread_create.c:309
    #6  0x00007f50939f387d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
    
    Thread 7 (Thread 0x7f5092908700 (LWP 52)):
    #0  sem_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/sem_wait.S:85
    #1  0x0000000000fc8188 in v8::base::Semaphore::Wait() ()
    #2  0x0000000000e65949 in v8::platform::TaskQueue::GetNext() ()
    #3  0x0000000000e65a9c in v8::platform::WorkerThread::Run() ()
    #4  0x0000000000fc9140 in v8::base::ThreadEntry(void*) ()
    #5  0x00007f5093cbe0a4 in start_thread (arg=0x7f5092908700) at pthread_create.c:309
    #6  0x00007f50939f387d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
    
    Thread 6 (Thread 0x7f5092107700 (LWP 53)):
    #0  sem_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/sem_wait.S:85
    #1  0x0000000000fc8188 in v8::base::Semaphore::Wait() ()
    #2  0x0000000000e65949 in v8::platform::TaskQueue::GetNext() ()
    #3  0x0000000000e65a9c in v8::platform::WorkerThread::Run() ()
    #4  0x0000000000fc9140 in v8::base::ThreadEntry(void*) ()
    #5  0x00007f5093cbe0a4 in start_thread (arg=0x7f5092107700) at pthread_create.c:309
    #6  0x00007f50939f387d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
    
    Thread 5 (Thread 0x7f5090805700 (LWP 63)):
    #0  pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
    #1  0x0000000000fbe709 in uv_cond_wait (cond=cond@entry=0x19aa020 <cond>, mutex=mutex@entry=0x19a9fe0 <mutex>) at ../deps/uv/src/unix/thread.c:380
    #2  0x0000000000faf5d8 in worker (arg=arg@entry=0x0) at ../deps/uv/src/threadpool.c:75
    #3  0x0000000000fbe269 in uv__thread_start (arg=<optimized out>) at ../deps/uv/src/unix/thread.c:49
    #4  0x00007f5093cbe0a4 in start_thread (arg=0x7f5090805700) at pthread_create.c:309
    #5  0x00007f50939f387d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
    
    Thread 4 (Thread 0x7f5083fff700 (LWP 64)):
    #0  pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
    #1  0x0000000000fbe709 in uv_cond_wait (cond=cond@entry=0x19aa020 <cond>, mutex=mutex@entry=0x19a9fe0 <mutex>) at ../deps/uv/src/unix/thread.c:380
    #2  0x0000000000faf5d8 in worker (arg=arg@entry=0x0) at ../deps/uv/src/threadpool.c:75
    #3  0x0000000000fbe269 in uv__thread_start (arg=<optimized out>) at ../deps/uv/src/unix/thread.c:49
    #4  0x00007f5093cbe0a4 in start_thread (arg=0x7f5083fff700) at pthread_create.c:309
    #5  0x00007f50939f387d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
    
    Thread 3 (Thread 0x7f50837fe700 (LWP 65)):
    #0  pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
    #1  0x0000000000fbe709 in uv_cond_wait (cond=cond@entry=0x19aa020 <cond>, mutex=mutex@entry=0x19a9fe0 <mutex>) at ../deps/uv/src/unix/thread.c:380
    #2  0x0000000000faf5d8 in worker (arg=arg@entry=0x0) at ../deps/uv/src/threadpool.c:75
    #3  0x0000000000fbe269 in uv__thread_start (arg=<optimized out>) at ../deps/uv/src/unix/thread.c:49
    #4  0x00007f5093cbe0a4 in start_thread (arg=0x7f50837fe700) at pthread_create.c:309
    #5  0x00007f50939f387d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
    
    Thread 2 (Thread 0x7f5082ffd700 (LWP 66)):
    #0  pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
    #1  0x0000000000fbe709 in uv_cond_wait (cond=cond@entry=0x19aa020 <cond>, mutex=mutex@entry=0x19a9fe0 <mutex>) at ../deps/uv/src/unix/thread.c:380
    #2  0x0000000000faf5d8 in worker (arg=arg@entry=0x0) at ../deps/uv/src/threadpool.c:75
    #3  0x0000000000fbe269 in uv__thread_start (arg=<optimized out>) at ../deps/uv/src/unix/thread.c:49
    #4  0x00007f5093cbe0a4 in start_thread (arg=0x7f5082ffd700) at pthread_create.c:309
    #5  0x00007f50939f387d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
    
    Thread 1 (Thread 0x7f5094d12740 (LWP 49)):
    #0  0x000039272dbf69b4 in ?? ()
    #1  0xffffffffffffaa78 in ?? ()
    #2  0xffffffffffffd29c in ?? ()
    #3  0xffffffffffffccfc in ?? ()
    #4  0xffffffffffffc478 in ?? ()
    #5  0xffffffffffffaa78 in ?? ()
    #6  0xffffffffffffaa7a in ?? ()
    #7  0xffffffffffffaa78 in ?? ()
    #8  0x0000000000000000 in ?? ()
    [cluster] Child Process (49) has exited (code: null, signal: SIGKILL)
    [cluster] Spinning up another process...
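
    The "[cluster]" lines above are Node's cluster module respawning the killed worker. A minimal sketch of that mechanism (an illustration, not NodeBB's actual code):

    // The master forks a worker and re-forks whenever one dies, which is
    // what produces the "exited ... Spinning up another process" lines.
    // (code comes through as null when the child dies from a signal.)
    const cluster = require('cluster');
    const http = require('http');

    if (cluster.isMaster) {
      cluster.fork();
      cluster.on('exit', (worker, code, signal) => {
        console.log(`[cluster] Child Process (${worker.process.pid}) has exited (code: ${code}, signal: ${signal})`);
        console.log('[cluster] Spinning up another process...');
        cluster.fork();
      });
    } else {
      http.createServer((req, res) => res.end('ok')).listen(4567);
    }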
    


  • @ben_lubar Nice logs. Why do they mean we have to refresh, though?



  • @Magus you can ignore it if there's not a corresponding post in the updates thread.



  • @ben_lubar No, that's the point of this thread. I don't like the constant refreshes. The system has been going down less lately, and you said only updates would be causing refreshes. I've had three refreshes today, only one of which was legit. Why?


  • Notification Spam Recipient

    @Magus said in Refreshrefreshrefreshrefresh:

    @ben_lubar No, that's the point of this thread. I don't like the constant refreshes. The system has been going down less lately, and you said only updates would be causing refreshes. I've had three refreshes today, only one of which was legit. Why?

    Maybe it's the autokiller that was put in to stop cooties?

    Was there ever a logger function put on that?



  • @Magus said in Refreshrefreshrefreshrefresh:

    The system has been going down less lately

    it's gone down the same number of times, but the watchdog script has restarted it faster than the admins could have.
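
    (For reference, a sketch of what a watchdog like that might look like; the probe URL, interval, and gdb dump are assumptions inferred from the log above, not the script's actual contents:)

    // Hypothetical watchdog: probe the forum over HTTP; on timeout, dump
    // a gdb backtrace of the node process and SIGKILL it so the cluster
    // master respawns a worker (the "[cluster]" lines in the log above).
    const http = require('http');
    const execSync = require('child_process').execSync;

    function probe(cb) {
      let done = false;
      const finish = ok => { if (!done) { done = true; cb(ok); } };
      const req = http.get('http://localhost:4567/', res => {
        res.resume();
        finish(res.statusCode < 500);
      });
      req.setTimeout(5000, () => req.abort()); // abort surfaces as 'error'
      req.on('error', () => finish(false));
    }

    setInterval(() => probe(ok => {
      if (ok) return;
      const pid = execSync('pgrep -of node').toString().trim();
      console.log(new Date().toISOString());
      execSync("gdb -p " + pid + " -batch -ex 'thread apply all bt'",
               { stdio: 'inherit' });
      process.kill(Number(pid), 'SIGKILL');
    }), 10000);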



  • @Tsaukpaetra said in Refreshrefreshrefreshrefresh:

    @Magus said in Refreshrefreshrefreshrefresh:

    @ben_lubar No, that's the point of this thread. I don't like the constant refreshes. The system has been going down less lately, and you said only updates would be causing refreshes. I've had three refreshes today, only one of which was legit. Why?

    Maybe it's the autokiller that was put in to stop cooties?

    Was there ever a logger function put on that?

    The logging isn't visible to non-admins. Maybe I should get some kind of automated edit of the cooties topic set up.
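
    (A sketch of that idea; the endpoint and auth are entirely hypothetical placeholders, since NodeBB's write API isn't shown anywhere in this thread:)

    // Hypothetical: after each watchdog kill, append a post to the
    // cooties topic. The path and token below are made up.
    const http = require('http');

    function logRestart(when) {
      const body = JSON.stringify({ content: 'Watchdog restart at ' + when });
      const req = http.request({
        host: 'localhost', port: 4567, method: 'POST',
        path: '/api/topics/1000/reply',       // hypothetical endpoint
        headers: {
          'Content-Type': 'application/json',
          'Authorization': 'Bearer <token>',  // hypothetical token
        },
      }, res => res.resume());
      req.end(body);
    }

    logRestart(new Date().toISOString());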


  • Notification Spam Recipient

    @ben_lubar said in Refreshrefreshrefreshrefresh:

    get some kind of automated edit of the cooties topic set up.

    Maybe a post in One Post for "Days without a NodeBB reboot"? ;)



  • @HardwareGeek said in Refreshrefreshrefreshrefresh:

    I am living temporarily

    Technically true, but slightly morbid.



  • @Tsaukpaetra better make that "seconds".


  • Notification Spam Recipient

    @anotherusername said in Refreshrefreshrefreshrefresh:

    @Tsaukpaetra better make that "seconds".

    Seconds without a NodeBB reboot: 4278

    ???
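
    (That counter would be trivial to produce, for what it's worth: process.uptime() resets to zero every time the watchdog kills and respawns node, so a hypothetical snippet like this could feed it:)

    // Minimal sketch of the joke counter above.
    setInterval(() => {
      console.log('Seconds without a NodeBB reboot: ' +
                  Math.floor(process.uptime()));
    }, 60 * 1000);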



  • @Tsaukpaetra Sounds about right.



  • @anotherusername said in Refreshrefreshrefreshrefresh:

    @HardwareGeek said in Refreshrefreshrefreshrefresh:

    I am living temporarily

    Technically true, but slightly morbid.

    Given the death of my nephew this morning, it feels all too, too true right now.



  • I'm at 3 so far today. Will we get four? Five? Dare I say six?



  • Well, we're at 5 at 20 past noon. You going for 10, Ben? You know, there's this kids show...


  • Notification Spam Recipient

    @Magus said in Refreshrefreshrefreshrefresh:

    Well, we're at 5 at 20 past noon. You going for 10, Ben? You know, there's this kids show...

    I think it's a nice reminder that "cooties were prevented by this server restart".



  • @Magus said in Refreshrefreshrefreshrefresh:

    You going for 10, Ben?

    Will you please shut the fuck up. Ben is not updating the forum (except when there's a giant banner that says he's updating the forum). You're seeing the refresh toaster because the forum crashed and had to be restarted.



  • @Tsaukpaetra I'd rather have something post a topic on nodeBB's site that something is causing a crash than interrupt my usage for no good reason.

    I don't care if a restart recovered from a crash: it shouldn't be crashing 5 times in one day.



  • @NedFodder said in Refreshrefreshrefreshrefresh:

    Will you please shut the fuck up. Ben is not updating the forum (except when there's a giant banner that says he's updating the forum). You're seeing the refresh toaster because the forum crashed and had to be restarted.

    He said he'd done an update to fix that. I'm complaining because things are broken. Fix them, or don't expect me to stop posting.


  • Notification Spam Recipient

    @Magus said in Refreshrefreshrefreshrefresh:

    @Tsaukpaetra I'd rather have something post a topic on nodeBB's site that something is causing a crash than interrupt my usage for no good reason.

    I don't care if a restart recovered from a crash: it shouldn't be crashing 5 times in one day.

    And efforts are being made to fix it; pay attention!



  • @Tsaukpaetra Which ones? I was shown some logs and told not to worry about it, that it's normal. No, it's not normal. Unless you're @end.


  • Notification Spam Recipient

    @Magus said in Refreshrefreshrefreshrefresh:

    told not to worry about it, that it's normal.

    Quote that please. I hardly believe anyone here (even the sane ones) would ever tell you "crashing? Don't worry about it, that's normal."



  • @Tsaukpaetra Look, if:

    @ben_lubar said in Refreshrefreshrefreshrefresh:

    it's gone down the same number of times, but the watchdog script has restarted it faster than the admins could have.

    and

    @ben_lubar said in Refreshrefreshrefreshrefresh:

    you can ignore it if there's not a corresponding post in the updates thread.

    tell you something different than what they say, that's not my fault. Does that sound at all like 'hmm, that's unusual, we'll get right on fixing that!' to you?

    And to preempt the usual jeffish 'we can't account for the whims of every user' - the forum is crashing. Multiple times daily. If that doesn't drive you to fix things, if that doesn't make you utterly ashamed of the quality of the software you put out, you don't belong in this industry.



  • @Tsaukpaetra I might, said with tongue firmly in cheek.


  • Notification Spam Recipient

    @Magus said in Refreshrefreshrefreshrefresh:

    tell you something different than what they say, that's not my fault

    You are free to make your own interpretations. Taken out of context, it's easy to construe them as saying whatever you think they say.

    I never said crashing was good or even acceptable, don't start putting words in my mouth.



  • @Tsaukpaetra said in Refreshrefreshrefreshrefresh:

    I never said crashing was good or even acceptable, don't start putting words in my mouth.

    That part is a general statement on anyone's philosophy on software, with an if.



  • @Magus if you can figure out how to make the forum stop crashing, you're better at this than the entire forum staff and the entire NodeBB dev team.


  • Notification Spam Recipient

    @Magus said in Refreshrefreshrefreshrefresh:

    That part is a general statement

    I would recommend placing "general statements" outside of targeted replies. From context it sounded like you expected me to reply like that; if that was not your intent, it should not have been included in a reply directed at me, TYVM.

    Edit: or, at the very least, specifically direct the statement at the general audience, i.e.: "look guys, blah blah blah"



  • @ben_lubar I'm not going to try to learn node.js. This forum's current state is one of the reasons I intend to avoid it forever. My point is that this isn't a minor thing. This is a 'we seriously have to fix this' kind of issue. It's completely unacceptable to release software in this state, and completely unbelievable to me that no one can find the cause after months, and would still update other things while such a major issue remains. This is stupid.



  • The worst part of this is that it's essentially avoidable. This is basically a poster child for scrum: your product should be releasable each sprint, passing all tests, with no glaring major bugs, and you should only release it if you trust it.

    Discourse's CI branch wasn't this bad.

    @blakeyrat am I being unfair to the nodebb devs here?


  • ♿ (Parody)

    @Magus said in Refreshrefreshrefreshrefresh:

    The system has been going down less lately, and you said only updates would be causing refreshes.

    I don't recall anyone saying that. In fact, I recall much yelling trying to get you to understand that most refreshes are not, and will not be for the foreseeable future, due to updates.


  • ♿ (Parody)

    @ben_lubar said in Refreshrefreshrefreshrefresh:

    but the watchdog script has restarted it faster than the admins could have.

    And sometimes it might have recovered in the time between the hang and a manual restart. I know I've logged in to kill it and found that it had recovered all on its own. But now it's killed automatically, so it doesn't get that chance. Overall cooties time is reduced at the expense of some more refreshes (one way to soften that tradeoff is sketched below).

    Also, there's no obvious pattern to the cooties. We've gone several days between them on occasion, and only minutes at other times.
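
    (One way to soften that tradeoff, sketched here as a hypothetical tweak rather than anything the staff has said they run: require several consecutive failed probes before killing, so a hang that clears on its own gets the chance to do so:)

    // Hypothetical: threshold = 1 reproduces the current immediate kill;
    // threshold = 3 tolerates ~30s of hang (at a 10s probe interval)
    // before restarting.
    function makeWatchdog(kill, threshold) {
      let fails = 0;
      return ok => {
        fails = ok ? 0 : fails + 1;
        if (fails >= threshold) { kill(); fails = 0; }
      };
    }

    const onProbe = makeWatchdog(() => console.log('SIGKILL here'), 3);
    [false, false, false].forEach(onProbe); // third failure triggers the kill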


  • ♿ (Parody)

    @Magus said in Refreshrefreshrefreshrefresh:

    I'm at 3 so far today. Will we get four? Five? Dare I say six?

    I can make that happen!


  • ♿ (Parody)

    @Magus said in Refreshrefreshrefreshrefresh:

    Fix them, or don't expect me to stop posting.

    Help us figure out how to fix it, or expect us to keep posting back telling you the same thing.


  • ♿ (Parody)

    @Magus said in Refreshrefreshrefreshrefresh:

    @Tsaukpaetra Which ones? I was shown some logs and told not to worry about it, that it's normal. No, it's not normal. Unless you're @end.

    Fuck off with that @Fox interpretation of what people have told you.


  • ♿ (Parody)

    @Magus said in Refreshrefreshrefreshrefresh:

    the forum is crashing.

    No, it is not, actually.



  • @boomzilla hang, crash, it's all the same to @Magus.



  • @boomzilla said in Refreshrefreshrefreshrefresh:

    @Magus said in Refreshrefreshrefreshrefresh:

    the forum is crashing.

    No, it is not, actually.

    I said that before @Magus did. So would you instead say the forum is hanging? There's really no distinction from a user's point of view.



  • @ben_lubar A giant unstable blob is a giant unstable blob. If it isn't programmed in such a way that when something goes wrong you have some idea where to look, it's designed wrong.


  • ♿ (Parody)

    @NedFodder said in Refreshrefreshrefreshrefresh:

    So would you instead say the forum is hanging?

    Yes. A process goes to 100% CPU.

    @NedFodder said in Refreshrefreshrefreshrefresh:

    There's really no distinction from a user's point of view.

    That's true, sort of. But there are other subtleties involved, like how it sometimes recovers. Or used to before we made it automatic.

    Still, he knows better now.
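
    (For anyone unclear on the distinction: a hang means the process is still alive but stuck, typically with one core pinned. A toy illustration, not NodeBB code:)

    // This process never crashes (no exception, no exit), but a
    // synchronous busy loop blocks the event loop forever, pins a core
    // at 100% CPU, and the server stops answering requests. That's a
    // hang, and only an external watchdog can detect and end it.
    const http = require('http');
    http.createServer((req, res) => res.end('ok')).listen(4567, () => {
      while (true) {}   // simulate the hang
    });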

