I converted NodeBB to use a PostgreSQL database. AMA.


  • BINNED

    @ben_lubar said in I converted NodeBB to use a PostgreSQL database. AMA.:

    REFRESH MATERIALIZED VIEW
    Time: 2944649.876 ms

    Yeah, that will happen. Postgres doesn't have incremental materialized views, it needs to rebuild the whole damned thing on every insert.



  • @onyx said in I converted NodeBB to use a PostgreSQL database. AMA.:

    Postgres doesn't have incremental materialized views, it needs to rebuild the whole damned thing on every insert.

    Jesus.

    Ben L, why didn't you convert NodeBB to use a SQL Server database, AMA? Because indexed views in SQL Server actually fucking work.



  • @blakeyrat said in I converted NodeBB to use a PostgreSQL database. AMA.:

    Ben L, why didn't you convert NodeBB to use a SQL Server database

    Several people tried. Apparently it's easier said than done.


  • FoxDev

    @anotherusername said in I converted NodeBB to use a PostgreSQL database. AMA.:

    Apparently it's easier said than done.

    as is usually the case.

    i'd be out of a jorb if it was as easy said as done.


  • ♿ (Parody)

    @anotherusername said in I converted NodeBB to use a PostgreSQL database. AMA.:

    Several people tried. Apparently it's easier said than done.

    I think he meant just do whatever he did with pg except use sqlserver. Not making an actual relational schema, though obviously that would be best.



  • @boomzilla said in I converted NodeBB to use a PostgreSQL database. AMA.:

    I think he meant just do whatever he did with pg except use sqlserver. Not making an actual relational schema, though obviously that would be best.

    Yeah, although it would be interesting to try to hook NodeBB up to a SELECT * FROM ALL_TABLES_JOINED view and then slowly try to untangle the relations the NodeBB developers are too stupid and lazy to model without breaking the schema of the view. Would be a lot of work though.


  • ♿ (Parody)

    @blakeyrat Yeah, although an iterative approach like that should allow you to get something up and working, which means you'd probably be less likely to throw your hands up in despair and abandon the whole attempt.



  • @boomzilla You can INSERT and UPDATE views composed of joined tables (as long as each statement only touches one base table), but you can't DELETE from them, so that might cause some complication. (Hopefully NodeBB uses soft-deletes anyway, which probably would work.)



  • @boomzilla well, one possible reason is that it doesn't look like SQL Server supports JSON indices... apparently to create an index on something in the JSON column, you have to extract it by adding a computed column to the table and then create an index on that column.

    Also, there are a shitton of restrictions on what indexed views (SQL Server's version of materialized views) may contain... for instance:

    [image: list of restrictions on SQL Server indexed views]
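
    The computed-column workaround mentioned above can be sketched roughly like this (table and column names are invented, not NodeBB's actual schema):

    ```sql
    -- Hypothetical table holding NodeBB's JSON blobs.
    CREATE TABLE dbo.Objects (
        Id   INT IDENTITY PRIMARY KEY,
        Data NVARCHAR(MAX) CHECK (ISJSON(Data) = 1)
    );

    -- Step 1: surface the JSON property as a computed column.
    ALTER TABLE dbo.Objects
        ADD ObjectKey AS JSON_VALUE(Data, '$._key');

    -- Step 2: index the computed column, since the JSON itself
    -- can't be indexed directly.
    CREATE INDEX IX_Objects_ObjectKey ON dbo.Objects (ObjectKey);
    ```

    Queries filtering on `JSON_VALUE(Data, '$._key')` can then use the index, but every indexed JSON property needs its own computed column.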


  • ♿ (Parody)

    @blakeyrat said in I converted NodeBB to use a PostgreSQL database. AMA.:

    Hopefully NodeBB uses soft-deletes anyway, which probably would work

    There are some cases where things are deleted deleted. A moderator (or admin?) can purge a post, for instance. I think you can delete accounts (e.g., spammers). Not the end of the world if those things end up implemented last.


  • ♿ (Parody)

    @anotherusername said in I converted NodeBB to use a PostgreSQL database. AMA.:

    well, one possible reason is that it doesn't look like SQL Server supports JSON indices... apparently to create an index on something in the JSON column, you have to extract it by adding a computed column to the table and then create an index on that column.

    Yeah, I think that's a deal breaker.



  • @blakeyrat said in I converted NodeBB to use a PostgreSQL database. AMA.:

    @boomzilla said in I converted NodeBB to use a PostgreSQL database. AMA.:

    I think he meant just do whatever he did with pg except use sqlserver. Not making an actual relational schema, though obviously that would be best.

    Yeah, although it would be interesting to try to hook NodeBB up to a SELECT * FROM ALL_TABLES_JOINED view and then slowly try to untangle the relations the NodeBB developers are too stupid and lazy to model without breaking the schema of the view. Would be a lot of work though.

    I think he's basically trying to do that, but starting from the opposite direction: import the entire blob into SQL Server without modification, hook NodeBB up to its giant blob and get it working, then start piecing off sanely-designed tables for specific stuff, but still blobbing them back in with a view like you described. Then after the whole blob has been deconstructed into a set of sane tables, it'd be time to start rewriting NodeBB to start hitting those instead of the blob view.



  • @anotherusername said in I converted NodeBB to use a PostgreSQL database. AMA.:

    @boomzilla well, one possible reason is that it doesn't look like SQL Server supports JSON indices... apparently to create an index on something in the JSON column, you have to extract it by adding a computed column to the table and then create an index on that column.

    Also, there are a shitton of restrictions on what indexed views (SQL Server's version of materialized views) may contain... for instance:

    [image: list of restrictions on SQL Server indexed views]

    So convert the JSON to XML and then one can take advantage of SQL Server's excellent XML support.



  • @anotherusername Who's "he"? Ben L or the NodeBB developers?

    If Ben L's trying to do that, he's not yet mentioned anything about it. Also he's using Postgres.

    And what you just outlined is exactly what I proposed. Make a view like NodeBBTableView, start it out as a SELECT * FROM NodeBBBackingData, then slowly relation-out the backing data while updating the view definition so as to avoid changing the view's schema. I probably communicated it badly.



  • @boomzilla said in I converted NodeBB to use a PostgreSQL database. AMA.:

    @blakeyrat said in I converted NodeBB to use a PostgreSQL database. AMA.:

    Hopefully NodeBB uses soft-deletes anyway, which probably would work

    There are some cases where things are deleted deleted. A moderator (or admin?) can purge a post, for instance. I think you can delete accounts (e.g., spammers). Not the end of the world if those things end up implemented last.

    Soft deletes could still be used, even if there's no way to undelete something from the software level. At the DB level, you're just updating a flag that says it's deleted. If your view excludes records that you've marked as deleted, then it should pretty much work as it's supposed to.
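
    As a sketch, assuming a hypothetical IsDeleted flag column (names invented), the view simply filters the flag out of the application's sight:

    ```sql
    -- Hypothetical posts table with a soft-delete flag.
    -- The application reads from the view, so "deleted" rows
    -- just vanish without ever being physically removed.
    CREATE VIEW dbo.VisiblePosts AS
    SELECT PostId, TopicId, Content
    FROM dbo.Posts
    WHERE IsDeleted = 0;
    ```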



  • @blakeyrat he posted it in some other thread somewhere. Hang on and I'll see if I can find it.



  • @anotherusername Right but my point is you'd have to modify the NodeBB code to not attempt to run its equivalent of DELETE FROM X WHERE Y, because if it tried it'd get nasty SQL Server exceptions it likely wouldn't know how to handle.

    AFAIK there's no way to tell SQL Server to "intercept" a DELETE and convert a hard-delete into a soft-delete. But I could be wrong.

    Other than that, I think you could "fix" the DB without touching the NodeBB source.



  • @blakeyrat

    CREATE TRIGGER trSoftDelete ON dbo.SomeTable
    INSTEAD OF DELETE
    AS
    UPDATE t
    SET t.isDeleted = 1
    FROM dbo.SomeTable t
    JOIN deleted d ON d.PK = t.PK;
    


  • @blakeyrat said in I converted NodeBB to use a PostgreSQL database. AMA.:

    Ben L, why didn't you convert NodeBB to use a SQL Server database, AMA?

    He's waiting for the new and improved NoSQL Server.



  • @izzion Wow, I'd never heard of INSTEAD OF triggers. That's... amazing and also kind of scary, really.



  • @blakeyrat
    Yeah, you can do some really powerful stuff with triggers, both INSTEAD OF and AFTER.

    Three major pitfalls most people run into:

    1. Triggers are SET BASED operations, NOT row-by-row. If you don't write your logic correctly, you can wind up doing a "row by row" operation on the whole table for every row that came into the trigger, effectively turning a 1,000 row update into 1,000,000 and corrupting your data by applying the values of the last row in the set (which isn't guaranteed to be in any particular sort order) to all 1,000 rows in the set.
    2. À la the above, trigger performance can be a real hidden landmine. If you write a trigger that relies on a shitty scalar UDF to do its business logic, all of a sudden your IUD (insert, update, delete) operations just blow up when you do more than a few rows, and it's pretty much impossible to find out what's going on from execution plans or SET STATISTICS IO,TIME ON for the client statistics, since the trigger stats won't show up anywhere except traces and the DMVs.
    3. Cascading triggers are the devil. And actually usually disabled by default on new databases... there's a specific "I know what I'm doing" setting you have to enable to allow cascading triggers, because of the risk of loops and the above performance problems. If you think you need a cascading trigger, go slam your head in a car door, get a coffee, and then come back and see if you can find any way at all to not need it.
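
    Pitfall 1 in a nutshell, as a T-SQL sketch (table and column names invented): the trigger body must join on the inserted pseudo-table instead of assuming a single row came in.

    ```sql
    -- Set-based AFTER UPDATE trigger: correct whether the
    -- triggering UPDATE touched 1 row or 1,000.
    CREATE TRIGGER trPosts_TouchModified ON dbo.Posts
    AFTER UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;
        UPDATE p
        SET ModifiedAt = SYSUTCDATETIME()
        FROM dbo.Posts p
        JOIN inserted i ON i.PostId = p.PostId;
    END;
    ```

    The broken version of this reads a single value out of inserted into a variable and applies it to the whole set, which is exactly the data corruption described above.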


  • @izzion said in I converted NodeBB to use a PostgreSQL database. AMA.:

    If you think you need a cascading trigger, go slam your head in a car door, get a coffee, and then come back and see if you can find any way at all to not need it.

    Honestly I usually think of all triggers that way. But I understand that some people develop SQL databases without access to the code that's interacting with the schema, and so stuff like this needs to exist.



  • @anotherusername said in I converted NodeBB to use a PostgreSQL database. AMA.:

    @blakeyrat he posted it in some other thread somewhere. Hang on and I'll see if I can find it.

    I can't find it. I can't even remember if it was here, or on GitHub, or on the NodeBB community forum. I'm starting to think it wasn't Ben, though... maybe @julianlam, or one of the other NodeBB devs?


  • ♿ (Parody)

    @anotherusername

    @julianlam said in I converted NodeBB to use a PostgreSQL database. AMA.:

    @Weng The one collection business is only because Redis doesn't really have a concept of collections or tables, just objects. Mongo does, but we wanted to support both for the sake of supporting both.

    Now with a workable Postgres adapter, then I'm thinking deprecation of the Redis driver may come sooner rather than later. Getting our mongo driver to use collections is something we do want to do...



  • @blakeyrat
    The one good reason I see to use triggers, and one that's usually fairly easy to keep performant, is to enforce a "housekeeping column" and/or audit table to let you track who made changes to what. Doing it in a trigger means you don't lose auditing just because some developer saw the audit code as a speedup loop or made an honest mistake and forgot to hook it up.

    It can also be pretty useful to convert to soft deletes as per the above, especially if your application is already performing operations on views rather than the underlying tables, at which point the soft delete can be completely transparent to the application (application sends a standard delete from the view, SQL translates that to a soft delete, and subsequent select statements don't get the deleted rows because the view filters them out by design).
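
    A minimal sketch of the audit-table pattern, with invented names (the real schema would carry whatever columns you need to reconstruct history):

    ```sql
    -- Hypothetical audit setup: every change to dbo.Posts gets
    -- recorded, no matter which application path performed it.
    CREATE TABLE dbo.PostsAudit (
        AuditId   INT IDENTITY PRIMARY KEY,
        PostId    INT NOT NULL,
        ChangedBy SYSNAME NOT NULL,
        ChangedAt DATETIME2 NOT NULL,
        Operation CHAR(1) NOT NULL  -- 'U' = update, 'D' = delete
    );

    CREATE TRIGGER trPosts_Audit ON dbo.Posts
    AFTER UPDATE, DELETE
    AS
    BEGIN
        SET NOCOUNT ON;
        -- Set-based insert from the deleted pseudo-table; on a
        -- DELETE the inserted pseudo-table is empty.
        INSERT INTO dbo.PostsAudit (PostId, ChangedBy, ChangedAt, Operation)
        SELECT d.PostId, SUSER_SNAME(), SYSUTCDATETIME(),
               CASE WHEN EXISTS (SELECT 1 FROM inserted) THEN 'U' ELSE 'D' END
        FROM deleted d;
    END;
    ```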



  • @boomzilla said in I converted NodeBB to use a PostgreSQL database. AMA.:

    @anotherusername

    @julianlam said in I converted NodeBB to use a PostgreSQL database. AMA.:

    @Weng The one collection business is only because Redis doesn't really have a concept of collections or tables, just objects. Mongo does, but we wanted to support both for the sake of supporting both.

    Now with a workable Postgres adapter, then I'm thinking deprecation of the Redis driver may come sooner rather than later. Getting our mongo driver to use collections is something we do want to do...

    AH! It was the post that he replied to:

    @weng said in I converted NodeBB to use a PostgreSQL database. AMA.:

    Incidentally, I expect the "path forward" to a real DBMS is to take this setup and, over time, build individual table schemas per object and have your stupid "collection" table be exposed as a view that unions JSON composed on the fly from the individual table schemas and from an underlying collection table for object types you haven't converted yet.

    Once every object class has a table, then you hit the Node code and rewire everything to hit the base tables until, one day, you can strangle the collection view.

    Handling inserts to the collection view is a matter left to the reader.
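
    In Postgres terms, a sketch of that union view might look like this (all names invented; as the quote says, inserts are left to the reader):

    ```sql
    -- Hypothetical: topics have been split out into a real table,
    -- while everything else still lives in the legacy key/value blob.
    CREATE VIEW objects_compat AS
    SELECT jsonb_build_object(
               '_key',  'topic:' || t.tid,
               'title', t.title,
               'uid',   t.uid
           ) AS data
    FROM topics t
    UNION ALL
    -- Object types not yet converted still come from the blob.
    SELECT data
    FROM objects_legacy
    WHERE data->>'_key' NOT LIKE 'topic:%';
    ```

    Each object class converted to a real table gets its own branch in the union, and the legacy branch's WHERE clause shrinks accordingly.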


  • ♿ (Parody)

    @anotherusername @Weng started trying that a while ago. I believe he mentioned his effort upthread.



  • @boomzilla yeah, that was who posted it.


  • BINNED

    @blakeyrat said in I converted NodeBB to use a PostgreSQL database. AMA.:

    @onyx said in I converted NodeBB to use a PostgreSQL database. AMA.:

    Postgres doesn't have incremental materialized views, it needs to rebuild the whole damned thing on every insert.

    Jesus.

    Ben L, why didn't you convert NodeBB to use a SQL Server database, AMA? Because indexed views in SQL Server actually fucking work.

    To clarify, materialized views seem to be a slightly different beast from indexed views, from my limited understanding at least. Materialized views should theoretically perform faster than plain indexed views because they actually (shallow?) copy the data itself, while indexed views only seem to create efficient indexes on the underlying data (unless I misread it). Materialized views are really intended for fast querying of historical data, not realtime stuff.

    It's all doable in Postgres as far as I can see, but it's admittedly more work; SQL Server seems to have a nice syntax for doing this shit automatically.



  • @onyx said in I converted NodeBB to use a PostgreSQL database. AMA.:

    It's all doable in Postgres as far as I can see

    There are indexes for non-materialized views in Postgres? Can you point me to the documentation? All I can find is a bunch of mailing lists saying there isn't.


  • BINNED

    @ben_lubar I must say I'm not sure of the exact method and terminology off the top of my head. Would have to look into it.

    On the other hand, why don't you explain what exactly you're trying to do and what the bottleneck is, maybe there's a better method than what you're trying to do.



  • @Onyx Rather than trying to guess the bottleneck or anything objective like that, here's what I'm going to try later:

    • Write a new database driver that wraps an existing database driver and logs all database queries that were executed as a runnable JavaScript file with a single async.series call and a bunch of async.apply calls.
    • Browse the test forum for a few minutes.
    • With as many things as possible turned off on the test machine, run the file for MongoDB and PostgreSQL N+1 times each, dropping the first run because it's not really fair to include disk performance in the benchmark. (Posts and topics are generally accessed many times or not accessed at all in a short period of time.)
    • Run the benchmark again with the slow query log enabled in each database.

    Right now, the indexes and data structures in PostgreSQL and MongoDB are roughly equivalent. I could probably improve DBSearch performance on Postgres by inserting tsvector values instead of computing them in the index, but that's probably not a big part of what takes time on the forum.
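
    The tsvector idea, sketched (column and text-search config names are assumptions, not the actual adapter code):

    ```sql
    -- Store the tsvector once at write time instead of recomputing
    -- it inside an expression index on every rebuild.
    ALTER TABLE objects ADD COLUMN search_vector tsvector;

    UPDATE objects
    SET search_vector = to_tsvector('english', coalesce(data->>'content', ''));

    -- GIN index over the stored column for full-text lookups.
    CREATE INDEX objects_search_idx ON objects USING GIN (search_vector);

    -- Searches then hit the stored column directly:
    -- SELECT ... FROM objects
    -- WHERE search_vector @@ plainto_tsquery('english', 'cooties');
    ```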

    Based on what @boomzilla identified as the cause of the latest cooties, I'd say it's pretty safe to say that people don't hurt forum performance - search engine robots that don't act like people hurt forum performance.


  • Winner of the 2016 Presidential Election

    @ben_lubar So... You're trying to do some actual research instead of asserting your personal opinion is correct and all others are wrong?

    Are you sure you didn't accidentally log in to the wrong forum?


  • Notification Spam Recipient

    @asdf said in I converted NodeBB to use a PostgreSQL database. AMA.:

    @ben_lubar So... You're trying to do some actual research instead of asserting your personal opinion is correct and all others are wrong?

    Are you sure you didn't accidentally log in to the wrong forum?

    @ben_lubar's shoulder aliens reply:
    Portal 2 - I'm Different – 00:06
    — LaDuelo



  • @ben_lubar said in I converted NodeBB to use a PostgreSQL database. AMA.:

    @Onyx Rather than trying to guess the bottleneck or anything objective like that, here's what I'm going to try later:

    • Write a new database driver that wraps an existing database driver and logs all database queries that were executed as a runnable JavaScript file with a single async.series call and a bunch of async.apply calls.
    • Browse the test forum for a few minutes.
    • With as many things as possible turned off on the test machine, run the file for MongoDB and PostgreSQL N+1 times each, dropping the first run because it's not really fair to include disk performance in the benchmark. (Posts and topics are generally accessed many times or not accessed at all in a short period of time.)
    • Run the benchmark again with the slow query log enabled in each database.

    Right now, the indexes and data structures in PostgreSQL and MongoDB are roughly equivalent. I could probably improve DBSearch performance on Postgres by inserting tsvector values instead of computing them in the index, but that's probably not a big part of what takes time on the forum.

    Based on what @boomzilla identified as the cause of the latest cooties, I'd say it's pretty safe to say that people don't hurt forum performance - search engine robots that don't act like people hurt forum performance.

    Progress so far:

    $ wc db_profile.js
      19686   56662 2208828 db_profile.js
    $ sudo bash -c 'sync; echo 3 > /proc/sys/vm/drop_caches'; for i in {0..10}; do node db_profile.js --database=postgres; done; sudo bash -c 'sync; echo 3 > /proc/sys/vm/drop_caches'; for i in {0..10}; do node db_profile.js --database=mongo; done
    


  • @ben_lubar Results:

    PostgreSQL      MongoDB
    126032.325ms    104096.779ms
    34882.688ms     12677.505ms
    33315.507ms     12828.950ms
    33306.246ms     12607.897ms
    33373.309ms     12330.360ms
    34197.505ms     12526.826ms
    33296.983ms     12442.498ms
    33374.571ms     12221.642ms
    33589.139ms     12583.906ms
    33307.129ms     12300.606ms
    33261.017ms     12189.755ms

  • Notification Spam Recipient

    @ben_lubar said in I converted NodeBB to use a PostgreSQL database. AMA.:

    @ben_lubar Results:

    PostgreSQL      MongoDB
    126032.325ms    104096.779ms
    34882.688ms     12677.505ms
    33315.507ms     12828.950ms
    33306.246ms     12607.897ms
    33373.309ms     12330.360ms
    34197.505ms     12526.826ms
    33296.983ms     12442.498ms
    33374.571ms     12221.642ms
    33589.139ms     12583.906ms
    33307.129ms     12300.606ms
    33261.017ms     12189.755ms

    So basically about three times worse?



  • @tsaukpaetra Now to fix that!



  • @tsaukpaetra said in I converted NodeBB to use a PostgreSQL database. AMA.:

    So basically about three times worse?

    Even worse than I expected.

    Curious to see how much it can be improved.


  • Notification Spam Recipient

    @ben_lubar said in I converted NodeBB to use a PostgreSQL database. AMA.:

    @tsaukpaetra Now to fix that!

    How are you going to spay a database so that it performs better??? That seems counter-intuitive...



  • With "slow query" defined as "takes longer than 50ms",

    PostgreSQL:

    LOG:  duration: 301.035 ms  execute getSortedSetRangeAsc: SELECT ARRAY(SELECT "data"->>'value' FROM "objects" WHERE "data"->>'_key' = ANY($1::text[]) ORDER BY ("data"->>'score')::numeric ASC LIMIT $3::integer OFFSET $2::integer) s
    DETAIL:  parameters: $1 = '{tid:11847:posts}', $2 = '66559', $3 = '20'
    LOG:  duration: 145.505 ms  execute getSortedSetRangeAsc: SELECT ARRAY(SELECT "data"->>'value' FROM "objects" WHERE "data"->>'_key' = ANY($1::text[]) ORDER BY ("data"->>'score')::numeric ASC LIMIT $3::integer OFFSET $2::integer) s
    DETAIL:  parameters: $1 = '{tid:11847:posts}', $2 = '0', $3 = '19'
    LOG:  duration: 160.795 ms  execute getSortedSetRangeAsc: SELECT ARRAY(SELECT "data"->>'value' FROM "objects" WHERE "data"->>'_key' = ANY($1::text[]) ORDER BY ("data"->>'score')::numeric ASC LIMIT $3::integer OFFSET $2::integer) s
    DETAIL:  parameters: $1 = '{tid:11847:posts}', $2 = '19', $3 = '20'
    LOG:  duration: 167.504 ms  execute getSortedSetRangeAsc: SELECT ARRAY(SELECT "data"->>'value' FROM "objects" WHERE "data"->>'_key' = ANY($1::text[]) ORDER BY ("data"->>'score')::numeric ASC LIMIT $3::integer OFFSET $2::integer) s
    DETAIL:  parameters: $1 = '{tid:11847:posts}', $2 = '39', $3 = '20'
    LOG:  duration: 174.369 ms  execute getSortedSetRangeAsc: SELECT ARRAY(SELECT "data"->>'value' FROM "objects" WHERE "data"->>'_key' = ANY($1::text[]) ORDER BY ("data"->>'score')::numeric ASC LIMIT $3::integer OFFSET $2::integer) s
    DETAIL:  parameters: $1 = '{tid:11847:posts}', $2 = '59', $3 = '20'
    LOG:  duration: 177.853 ms  execute getSortedSetRangeAsc: SELECT ARRAY(SELECT "data"->>'value' FROM "objects" WHERE "data"->>'_key' = ANY($1::text[]) ORDER BY ("data"->>'score')::numeric ASC LIMIT $3::integer OFFSET $2::integer) s
    DETAIL:  parameters: $1 = '{tid:11847:posts}', $2 = '79', $3 = '20'
    LOG:  duration: 185.880 ms  execute getSortedSetRangeAsc: SELECT ARRAY(SELECT "data"->>'value' FROM "objects" WHERE "data"->>'_key' = ANY($1::text[]) ORDER BY ("data"->>'score')::numeric ASC LIMIT $3::integer OFFSET $2::integer) s
    DETAIL:  parameters: $1 = '{tid:11847:posts}', $2 = '99', $3 = '20'
    LOG:  duration: 185.663 ms  execute getSortedSetRangeAsc: SELECT ARRAY(SELECT "data"->>'value' FROM "objects" WHERE "data"->>'_key' = ANY($1::text[]) ORDER BY ("data"->>'score')::numeric ASC LIMIT $3::integer OFFSET $2::integer) s
    DETAIL:  parameters: $1 = '{tid:11847:posts}', $2 = '119', $3 = '20'
    LOG:  duration: 190.103 ms  execute getSortedSetRangeAsc: SELECT ARRAY(SELECT "data"->>'value' FROM "objects" WHERE "data"->>'_key' = ANY($1::text[]) ORDER BY ("data"->>'score')::numeric ASC LIMIT $3::integer OFFSET $2::integer) s
    DETAIL:  parameters: $1 = '{tid:11847:posts}', $2 = '139', $3 = '20'
    LOG:  duration: 191.187 ms  execute getSortedSetRangeAsc: SELECT ARRAY(SELECT "data"->>'value' FROM "objects" WHERE "data"->>'_key' = ANY($1::text[]) ORDER BY ("data"->>'score')::numeric ASC LIMIT $3::integer OFFSET $2::integer) s
    DETAIL:  parameters: $1 = '{tid:11847:posts}', $2 = '159', $3 = '20'
    LOG:  duration: 193.688 ms  execute getSortedSetRangeAsc: SELECT ARRAY(SELECT "data"->>'value' FROM "objects" WHERE "data"->>'_key' = ANY($1::text[]) ORDER BY ("data"->>'score')::numeric ASC LIMIT $3::integer OFFSET $2::integer) s
    DETAIL:  parameters: $1 = '{tid:11847:posts}', $2 = '179', $3 = '20'
    LOG:  duration: 195.170 ms  execute getSortedSetRangeAsc: SELECT ARRAY(SELECT "data"->>'value' FROM "objects" WHERE "data"->>'_key' = ANY($1::text[]) ORDER BY ("data"->>'score')::numeric ASC LIMIT $3::integer OFFSET $2::integer) s
    DETAIL:  parameters: $1 = '{tid:11847:posts}', $2 = '199', $3 = '20'
    LOG:  duration: 198.089 ms  execute getSortedSetRangeAsc: SELECT ARRAY(SELECT "data"->>'value' FROM "objects" WHERE "data"->>'_key' = ANY($1::text[]) ORDER BY ("data"->>'score')::numeric ASC LIMIT $3::integer OFFSET $2::integer) s
    DETAIL:  parameters: $1 = '{tid:11847:posts}', $2 = '219', $3 = '20'
    LOG:  duration: 198.426 ms  execute getSortedSetRangeAsc: SELECT ARRAY(SELECT "data"->>'value' FROM "objects" WHERE "data"->>'_key' = ANY($1::text[]) ORDER BY ("data"->>'score')::numeric ASC LIMIT $3::integer OFFSET $2::integer) s
    DETAIL:  parameters: $1 = '{tid:11847:posts}', $2 = '239', $3 = '20'
    LOG:  duration: 201.655 ms  execute getSortedSetRangeAsc: SELECT ARRAY(SELECT "data"->>'value' FROM "objects" WHERE "data"->>'_key' = ANY($1::text[]) ORDER BY ("data"->>'score')::numeric ASC LIMIT $3::integer OFFSET $2::integer) s
    DETAIL:  parameters: $1 = '{tid:11847:posts}', $2 = '259', $3 = '20'
    LOG:  duration: 203.190 ms  execute getSortedSetRangeAsc: SELECT ARRAY(SELECT "data"->>'value' FROM "objects" WHERE "data"->>'_key' = ANY($1::text[]) ORDER BY ("data"->>'score')::numeric ASC LIMIT $3::integer OFFSET $2::integer) s
    DETAIL:  parameters: $1 = '{tid:11847:posts}', $2 = '279', $3 = '20'
    LOG:  duration: 205.646 ms  execute getSortedSetRangeAsc: SELECT ARRAY(SELECT "data"->>'value' FROM "objects" WHERE "data"->>'_key' = ANY($1::text[]) ORDER BY ("data"->>'score')::numeric ASC LIMIT $3::integer OFFSET $2::integer) s
    DETAIL:  parameters: $1 = '{tid:11847:posts}', $2 = '299', $3 = '20'
    LOG:  duration: 205.415 ms  execute getSortedSetRangeAsc: SELECT ARRAY(SELECT "data"->>'value' FROM "objects" WHERE "data"->>'_key' = ANY($1::text[]) ORDER BY ("data"->>'score')::numeric ASC LIMIT $3::integer OFFSET $2::integer) s
    DETAIL:  parameters: $1 = '{tid:11847:posts}', $2 = '319', $3 = '20'
    LOG:  duration: 205.072 ms  execute getSortedSetRangeAsc: SELECT ARRAY(SELECT "data"->>'value' FROM "objects" WHERE "data"->>'_key' = ANY($1::text[]) ORDER BY ("data"->>'score')::numeric ASC LIMIT $3::integer OFFSET $2::integer) s
    DETAIL:  parameters: $1 = '{tid:11847:posts}', $2 = '339', $3 = '20'
    LOG:  duration: 207.629 ms  execute getSortedSetRangeAsc: SELECT ARRAY(SELECT "data"->>'value' FROM "objects" WHERE "data"->>'_key' = ANY($1::text[]) ORDER BY ("data"->>'score')::numeric ASC LIMIT $3::integer OFFSET $2::integer) s
    DETAIL:  parameters: $1 = '{tid:11847:posts}', $2 = '359', $3 = '20'
    LOG:  duration: 211.134 ms  execute getSortedSetRangeAsc: SELECT ARRAY(SELECT "data"->>'value' FROM "objects" WHERE "data"->>'_key' = ANY($1::text[]) ORDER BY ("data"->>'score')::numeric ASC LIMIT $3::integer OFFSET $2::integer) s
    DETAIL:  parameters: $1 = '{tid:11847:posts}', $2 = '379', $3 = '20'
    LOG:  duration: 209.803 ms  execute getSortedSetRangeAsc: SELECT ARRAY(SELECT "data"->>'value' FROM "objects" WHERE "data"->>'_key' = ANY($1::text[]) ORDER BY ("data"->>'score')::numeric ASC LIMIT $3::integer OFFSET $2::integer) s
    DETAIL:  parameters: $1 = '{tid:11847:posts}', $2 = '399', $3 = '20'
    LOG:  duration: 209.912 ms  execute getSortedSetRangeAsc: SELECT ARRAY(SELECT "data"->>'value' FROM "objects" WHERE "data"->>'_key' = ANY($1::text[]) ORDER BY ("data"->>'score')::numeric ASC LIMIT $3::integer OFFSET $2::integer) s
    DETAIL:  parameters: $1 = '{tid:11847:posts}', $2 = '419', $3 = '20'
    LOG:  duration: 211.126 ms  execute getSortedSetRangeAsc: SELECT ARRAY(SELECT "data"->>'value' FROM "objects" WHERE "data"->>'_key' = ANY($1::text[]) ORDER BY ("data"->>'score')::numeric ASC LIMIT $3::integer OFFSET $2::integer) s
    DETAIL:  parameters: $1 = '{tid:11847:posts}', $2 = '439', $3 = '20'
    LOG:  duration: 211.814 ms  execute getSortedSetRangeAsc: SELECT ARRAY(SELECT "data"->>'value' FROM "objects" WHERE "data"->>'_key' = ANY($1::text[]) ORDER BY ("data"->>'score')::numeric ASC LIMIT $3::integer OFFSET $2::integer) s
    DETAIL:  parameters: $1 = '{tid:11847:posts}', $2 = '459', $3 = '20'
    LOG:  duration: 213.716 ms  execute getSortedSetRangeAsc: SELECT ARRAY(SELECT "data"->>'value' FROM "objects" WHERE "data"->>'_key' = ANY($1::text[]) ORDER BY ("data"->>'score')::numeric ASC LIMIT $3::integer OFFSET $2::integer) s
    DETAIL:  parameters: $1 = '{tid:11847:posts}', $2 = '479', $3 = '20'
    LOG:  duration: 213.677 ms  execute getSortedSetRangeAsc: SELECT ARRAY(SELECT "data"->>'value' FROM "objects" WHERE "data"->>'_key' = ANY($1::text[]) ORDER BY ("data"->>'score')::numeric ASC LIMIT $3::integer OFFSET $2::integer) s
    DETAIL:  parameters: $1 = '{tid:11847:posts}', $2 = '499', $3 = '20'
    LOG:  duration: 209.172 ms  execute getSortedSetRangeDesc: SELECT ARRAY(SELECT "data"->>'value' FROM "objects" WHERE "data"->>'_key' = ANY($1::text[]) ORDER BY ("data"->>'score')::numeric DESC LIMIT $3::integer OFFSET $2::integer) s
    DETAIL:  parameters: $1 = '{users:joindate}', $2 = '0', $3 = '24'
    LOG:  duration: 185.024 ms  execute getSortedSetRangeDesc: SELECT ARRAY(SELECT "data"->>'value' FROM "objects" WHERE "data"->>'_key' = ANY($1::text[]) ORDER BY ("data"->>'score')::numeric DESC LIMIT $3::integer OFFSET $2::integer) s
    DETAIL:  parameters: $1 = '{users:reputation}', $2 = '0', $3 = '24'
    LOG:  duration: 188.736 ms  execute getSortedSetRangeDesc: SELECT ARRAY(SELECT "data"->>'value' FROM "objects" WHERE "data"->>'_key' = ANY($1::text[]) ORDER BY ("data"->>'score')::numeric DESC LIMIT $3::integer OFFSET $2::integer) s
    DETAIL:  parameters: $1 = '{users:postcount}', $2 = '0', $3 = '24'
    LOG:  duration: 98.529 ms  execute sortedSetCount: SELECT COUNT("data") c FROM "objects" WHERE "data"->>'_key' = $1::text AND (("data"->>'score')::numeric >= $2::numeric OR $2::numeric IS NULL) AND (("data"->>'score')::numeric <= $3::numeric OR $3::numeric IS NULL)
    DETAIL:  parameters: $1 = 'ip:recent', $2 = '1499973114307', $3 = NULL
    LOG:  duration: 127.405 ms  execute sortedSetCount: SELECT COUNT("data") c FROM "objects" WHERE "data"->>'_key' = $1::text AND (("data"->>'score')::numeric >= $2::numeric OR $2::numeric IS NULL) AND (("data"->>'score')::numeric <= $3::numeric OR $3::numeric IS NULL)
    DETAIL:  parameters: $1 = 'users:joindate', $2 = '1499973114308', $3 = NULL
    LOG:  duration: 1070.922 ms  execute sortedSetCount: SELECT COUNT("data") c FROM "objects" WHERE "data"->>'_key' = $1::text AND (("data"->>'score')::numeric >= $2::numeric OR $2::numeric IS NULL) AND (("data"->>'score')::numeric <= $3::numeric OR $3::numeric IS NULL)
    DETAIL:  parameters: $1 = 'posts:pid', $2 = '1502478714308', $3 = NULL
    LOG:  duration: 1069.678 ms  execute sortedSetCount: SELECT COUNT("data") c FROM "objects" WHERE "data"->>'_key' = $1::text AND (("data"->>'score')::numeric >= $2::numeric OR $2::numeric IS NULL) AND (("data"->>'score')::numeric <= $3::numeric OR $3::numeric IS NULL)
    DETAIL:  parameters: $1 = 'posts:pid', $2 = '1501960314308', $3 = NULL
    LOG:  duration: 1083.056 ms  execute sortedSetCount: SELECT COUNT("data") c FROM "objects" WHERE "data"->>'_key' = $1::text AND (("data"->>'score')::numeric >= $2::numeric OR $2::numeric IS NULL) AND (("data"->>'score')::numeric <= $3::numeric OR $3::numeric IS NULL)
    DETAIL:  parameters: $1 = 'posts:pid', $2 = '1499973114308', $3 = NULL
    LOG:  duration: 659.028 ms  execute sortedSetCount: SELECT COUNT("data") c FROM "objects" WHERE "data"->>'_key' = $1::text AND (("data"->>'score')::numeric >= $2::numeric OR $2::numeric IS NULL) AND (("data"->>'score')::numeric <= $3::numeric OR $3::numeric IS NULL)
    DETAIL:  parameters: $1 = 'ip:recent', $2 = '1502478753415', $3 = NULL
    LOG:  duration: 581.621 ms  execute sortedSetCount: SELECT COUNT("data") c FROM "objects" WHERE "data"->>'_key' = $1::text AND (("data"->>'score')::numeric >= $2::numeric OR $2::numeric IS NULL) AND (("data"->>'score')::numeric <= $3::numeric OR $3::numeric IS NULL)
    DETAIL:  parameters: $1 = 'ip:recent', $2 = '1501960353415', $3 = NULL
    LOG:  duration: 612.971 ms  execute sortedSetCount: SELECT COUNT("data") c FROM "objects" WHERE "data"->>'_key' = $1::text AND (("data"->>'score')::numeric >= $2::numeric OR $2::numeric IS NULL) AND (("data"->>'score')::numeric <= $3::numeric OR $3::numeric IS NULL)
    DETAIL:  parameters: $1 = 'ip:recent', $2 = '1499973153415', $3 = NULL
    LOG:  duration: 128.894 ms  execute sortedSetCount: SELECT COUNT("data") c FROM "objects" WHERE "data"->>'_key' = $1::text AND (("data"->>'score')::numeric >= $2::numeric OR $2::numeric IS NULL) AND (("data"->>'score')::numeric <= $3::numeric OR $3::numeric IS NULL)
    DETAIL:  parameters: $1 = 'users:joindate', $2 = '1502478753415', $3 = NULL
    LOG:  duration: 123.242 ms  execute sortedSetCount: SELECT COUNT("data") c FROM "objects" WHERE "data"->>'_key' = $1::text AND (("data"->>'score')::numeric >= $2::numeric OR $2::numeric IS NULL) AND (("data"->>'score')::numeric <= $3::numeric OR $3::numeric IS NULL)
    DETAIL:  parameters: $1 = 'users:joindate', $2 = '1501960353415', $3 = NULL
    LOG:  duration: 112.536 ms  execute sortedSetCount: SELECT COUNT("data") c FROM "objects" WHERE "data"->>'_key' = $1::text AND (("data"->>'score')::numeric >= $2::numeric OR $2::numeric IS NULL) AND (("data"->>'score')::numeric <= $3::numeric OR $3::numeric IS NULL)
    DETAIL:  parameters: $1 = 'users:joindate', $2 = '1499973153415', $3 = NULL
    LOG:  duration: 1070.829 ms  execute sortedSetCount: SELECT COUNT("data") c FROM "objects" WHERE "data"->>'_key' = $1::text AND (("data"->>'score')::numeric >= $2::numeric OR $2::numeric IS NULL) AND (("data"->>'score')::numeric <= $3::numeric OR $3::numeric IS NULL)
    DETAIL:  parameters: $1 = 'posts:pid', $2 = '1502478753415', $3 = NULL
    LOG:  duration: 1072.489 ms  execute sortedSetCount: SELECT COUNT("data") c FROM "objects" WHERE "data"->>'_key' = $1::text AND (("data"->>'score')::numeric >= $2::numeric OR $2::numeric IS NULL) AND (("data"->>'score')::numeric <= $3::numeric OR $3::numeric IS NULL)
    DETAIL:  parameters: $1 = 'posts:pid', $2 = '1501960353415', $3 = NULL
    LOG:  duration: 1080.441 ms  execute sortedSetCount: SELECT COUNT("data") c FROM "objects" WHERE "data"->>'_key' = $1::text AND (("data"->>'score')::numeric >= $2::numeric OR $2::numeric IS NULL) AND (("data"->>'score')::numeric <= $3::numeric OR $3::numeric IS NULL)
    DETAIL:  parameters: $1 = 'posts:pid', $2 = '1499973153415', $3 = NULL
    LOG:  duration: 664.631 ms  execute sortedSetCount: SELECT COUNT("data") c FROM "objects" WHERE "data"->>'_key' = $1::text AND (("data"->>'score')::numeric >= $2::numeric OR $2::numeric IS NULL) AND (("data"->>'score')::numeric <= $3::numeric OR $3::numeric IS NULL)
    DETAIL:  parameters: $1 = 'ip:recent', $2 = '1502478862777', $3 = NULL
    LOG:  duration: 633.124 ms  execute sortedSetCount: SELECT COUNT("data") c FROM "objects" WHERE "data"->>'_key' = $1::text AND (("data"->>'score')::numeric >= $2::numeric OR $2::numeric IS NULL) AND (("data"->>'score')::numeric <= $3::numeric OR $3::numeric IS NULL)
    DETAIL:  parameters: $1 = 'ip:recent', $2 = '1501960462777', $3 = NULL
    LOG:  duration: 619.196 ms  execute sortedSetCount: SELECT COUNT("data") c FROM "objects" WHERE "data"->>'_key' = $1::text AND (("data"->>'score')::numeric >= $2::numeric OR $2::numeric IS NULL) AND (("data"->>'score')::numeric <= $3::numeric OR $3::numeric IS NULL)
    DETAIL:  parameters: $1 = 'ip:recent', $2 = '1499973262777', $3 = NULL
    LOG:  duration: 127.547 ms  execute sortedSetCount: SELECT COUNT("data") c FROM "objects" WHERE "data"->>'_key' = $1::text AND (("data"->>'score')::numeric >= $2::numeric OR $2::numeric IS NULL) AND (("data"->>'score')::numeric <= $3::numeric OR $3::numeric IS NULL)
    DETAIL:  parameters: $1 = 'users:joindate', $2 = '1502478862777', $3 = NULL
    LOG:  duration: 113.030 ms  execute sortedSetCount: SELECT COUNT("data") c FROM "objects" WHERE "data"->>'_key' = $1::text AND (("data"->>'score')::numeric >= $2::numeric OR $2::numeric IS NULL) AND (("data"->>'score')::numeric <= $3::numeric OR $3::numeric IS NULL)
    DETAIL:  parameters: $1 = 'users:joindate', $2 = '1501960462777', $3 = NULL
    LOG:  duration: 111.823 ms  execute sortedSetCount: SELECT COUNT("data") c FROM "objects" WHERE "data"->>'_key' = $1::text AND (("data"->>'score')::numeric >= $2::numeric OR $2::numeric IS NULL) AND (("data"->>'score')::numeric <= $3::numeric OR $3::numeric IS NULL)
    DETAIL:  parameters: $1 = 'users:joindate', $2 = '1499973262777', $3 = NULL
    LOG:  duration: 1061.358 ms  execute sortedSetCount: SELECT COUNT("data") c FROM "objects" WHERE "data"->>'_key' = $1::text AND (("data"->>'score')::numeric >= $2::numeric OR $2::numeric IS NULL) AND (("data"->>'score')::numeric <= $3::numeric OR $3::numeric IS NULL)
    DETAIL:  parameters: $1 = 'posts:pid', $2 = '1502478862778', $3 = NULL
    LOG:  duration: 1060.660 ms  execute sortedSetCount: SELECT COUNT("data") c FROM "objects" WHERE "data"->>'_key' = $1::text AND (("data"->>'score')::numeric >= $2::numeric OR $2::numeric IS NULL) AND (("data"->>'score')::numeric <= $3::numeric OR $3::numeric IS NULL)
    DETAIL:  parameters: $1 = 'posts:pid', $2 = '1501960462778', $3 = NULL
    LOG:  duration: 1070.387 ms  execute sortedSetCount: SELECT COUNT("data") c FROM "objects" WHERE "data"->>'_key' = $1::text AND (("data"->>'score')::numeric >= $2::numeric OR $2::numeric IS NULL) AND (("data"->>'score')::numeric <= $3::numeric OR $3::numeric IS NULL)
    DETAIL:  parameters: $1 = 'posts:pid', $2 = '1499973262778', $3 = NULL
    
    EXPLAIN:
                                                                                 QUERY PLAN                                                                              
    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------
     Result  (cost=386001.25..386001.26 rows=1 width=32) (actual time=314.395..314.396 rows=1 loops=1)
       Output: $0
       InitPlan 1 (returns $0)
         ->  Limit  (cost=386001.19..386001.25 rows=24 width=64) (actual time=314.336..314.370 rows=24 loops=1)
               Output: ((data ->> 'value'::text)), (((data ->> 'score'::text))::numeric)
               ->  Sort  (cost=386001.19..386364.82 rows=145453 width=64) (actual time=314.333..314.344 rows=24 loops=1)
                     Output: ((data ->> 'value'::text)), (((data ->> 'score'::text))::numeric)
                     Sort Key: (((objects.data ->> 'score'::text))::numeric) DESC
                     Sort Method: top-N heapsort  Memory: 26kB
                     ->  Bitmap Heap Scan on public.objects  (cost=4399.82..381939.44 rows=145453 width=64) (actual time=26.044..218.505 rows=142101 loops=1)
                           Output: (data ->> 'value'::text), ((data ->> 'score'::text))::numeric
                           Recheck Cond: ((objects.data ->> '_key'::text) = ANY ('{users:reputation}'::text[]))
                           Heap Blocks: exact=1794
                           ->  Bitmap Index Scan on uniq__objects__key__value  (cost=0.00..4363.46 rows=145453 width=0) (actual time=25.734..25.734 rows=142101 loops=1)
                                 Index Cond: ((objects.data ->> '_key'::text) = ANY ('{users:reputation}'::text[]))
     Planning time: 0.502 ms
     Execution time: 314.462 ms
    
                                                                               QUERY PLAN                                                                            
    -----------------------------------------------------------------------------------------------------------------------------------------------------------------
     Aggregate  (cost=47021.31..47021.32 rows=1 width=8) (actual time=43.965..43.965 rows=1 loops=1)
       Output: count(data)
       ->  Bitmap Heap Scan on public.objects  (cost=432.17..46989.21 rows=12840 width=186) (actual time=30.158..37.321 rows=13057 loops=1)
             Output: data
             Recheck Cond: (((objects.data ->> '_key'::text) = 'posts:pid'::text) AND (((objects.data ->> 'score'::text))::numeric >= '1499973262778'::numeric))
             Heap Blocks: exact=194
             ->  Bitmap Index Scan on idx__objects__key__score  (cost=0.00..428.96 rows=12840 width=0) (actual time=30.099..30.099 rows=13057 loops=1)
                   Index Cond: (((objects.data ->> '_key'::text) = 'posts:pid'::text) AND (((objects.data ->> 'score'::text))::numeric >= '1499973262778'::numeric))
     Planning time: 1.694 ms
     Execution time: 44.053 ms
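
    The plans above hit expression indexes over the JSONB `data` column (`idx__objects__key__score` and `uniq__objects__key__value` in the Index Cond lines). The actual DDL isn't shown in the thread, but definitions consistent with those conditions would look roughly like this (a sketch, not the real schema):

    ```sql
    -- Sketch only: expressions match the Index Cond lines in the plans above.
    -- The real definitions in the port may differ.
    CREATE INDEX idx__objects__key__score
        ON objects ((data->>'_key'), (((data->>'score'))::numeric));

    CREATE UNIQUE INDEX uniq__objects__key__value
        ON objects ((data->>'_key'), (data->>'value'));
    ```

    Because the predicate in `sortedSetCount` is written against exactly those expressions, the planner can satisfy both the `_key` equality and the `score` range from the first index via a bitmap scan.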
    


  • MongoDB:

    2017-08-12T20:11:00.855+0000 I COMMAND  [conn2] command 0.objects command: find { find: "objects", filter: { _key: "tid:11847:posts" }, sort: { score: 1 }, projection: { _id: 0, value: 1 }, skip: 66559, limit: 20 } planSummary: IXSCAN { _key: 1, score: -1 } keysExamined:66579 docsExamined:66579 fromMultiPlanner:1 replanned:1 cursorExhausted:1 numYields:1089 nreturned:20 reslen:647 locks:{ Global: { acquireCount: { r: 2180 } }, Database: { acquireCount: { r: 1090 } }, Collection: { acquireCount: { r: 1090 } } } protocol:op_query 1930ms
    2017-08-12T20:11:06.462+0000 I COMMAND  [conn3] command 0.objects command: find { find: "objects", filter: { _key: "events:time" }, sort: { score: -1 }, projection: { _id: 0, value: 1 }, skip: 4220, limit: 20 } planSummary: IXSCAN { _key: 1, score: -1 } keysExamined:4226 docsExamined:4226 fromMultiPlanner:1 replanned:1 cursorExhausted:1 numYields:74 nreturned:6 reslen:223 locks:{ Global: { acquireCount: { r: 150 } }, Database: { acquireCount: { r: 75 } }, Collection: { acquireCount: { r: 75 } } } protocol:op_query 221ms
    
    EXPLAIN:
    {
            "queryPlanner" : {
                    "plannerVersion" : 1,
                    "namespace" : "0.objects",
                    "indexFilterSet" : false,
                    "parsedQuery" : {
                            "_key" : {
                                    "$eq" : "tid:11847:posts"
                            }
                    },
                    "winningPlan" : {
                            "stage" : "LIMIT",
                            "limitAmount" : 20,
                            "inputStage" : {
                                    "stage" : "SKIP",
                                    "skipAmount" : 0,
                                    "inputStage" : {
                                            "stage" : "PROJECTION",
                                            "transformBy" : {
                                                    "_id" : 0,
                                                    "value" : 1
                                            },
                                            "inputStage" : {
                                                    "stage" : "FETCH",
                                                    "inputStage" : {
                                                            "stage" : "IXSCAN",
                                                            "keyPattern" : {
                                                                    "_key" : 1,
                                                                    "score" : -1
                                                            },
                                                            "indexName" : "_key_1_score_-1",
                                                            "isMultiKey" : false,
                                                            "multiKeyPaths" : {
                                                                    "_key" : [ ],
                                                                    "score" : [ ]
                                                            },
                                                            "isUnique" : false,
                                                            "isSparse" : false,
                                                            "isPartial" : false,
                                                            "indexVersion" : 1,
                                                            "direction" : "backward",
                                                            "indexBounds" : {
                                                                    "_key" : [
                                                                            "[\"tid:11847:posts\", \"tid:11847:posts\"]"
                                                                    ],
                                                                    "score" : [
                                                                            "[MinKey, MaxKey]"
                                                                    ]
                                                            }
                                                    }
                                            }
                                    }
                            }
                    },
                    "rejectedPlans" : [
                            {
                                    "stage" : "SKIP",
                                    "skipAmount" : 66579,
                                    "inputStage" : {
                                            "stage" : "PROJECTION",
                                            "transformBy" : {
                                                    "_id" : 0,
                                                    "value" : 1
                                            },
                                            "inputStage" : {
                                                    "stage" : "SORT",
                                                    "sortPattern" : {
                                                            "score" : 1
                                                    },
                                                    "limitAmount" : 66599,
                                                    "inputStage" : {
                                                            "stage" : "SORT_KEY_GENERATOR",
                                                            "inputStage" : {
                                                                    "stage" : "FETCH",
                                                                    "inputStage" : {
                                                                            "stage" : "IXSCAN",
                                                                            "keyPattern" : {
                                                                                   "_key" : 1,
                                                                                   "value" : -1
                                                                            },
                                                                            "indexName" : "_key_1_value_-1",
                                                                            "isMultiKey" : false,
                                                                            "multiKeyPaths" : {
                                                                                   "_key" : [ ],
                                                                                   "value" : [ ]
                                                                            },
                                                                            "isUnique" : true,
                                                                            "isSparse" : true,
                                                                            "isPartial" : false,
                                                                            "indexVersion" : 1,
                                                                            "direction" : "forward",
                                                                            "indexBounds" : {
                                                                                   "_key" : [
                                                                                   "[\"tid:11847:posts\", \"tid:11847:posts\"]"
                                                                                   ],
                                                                                   "value" : [
                                                                                   "[MaxKey, MinKey]"
                                                                                   ]
                                                                            }
                                                                    }
                                                            }
                                                    }
                                            }
                                    }
                            }
                    ]
            },
            "serverInfo" : {
                    "host" : "32cc8256626f",
                    "port" : 27017,
                    "version" : "3.4.2",
                    "gitVersion" : "3f76e40c105fc223b3e5aac3e20dcd026b83b38b"
            },
            "ok" : 1
    }
    
    {
            "queryPlanner" : {
                    "plannerVersion" : 1,
                    "namespace" : "0.objects",
                    "indexFilterSet" : false,
                    "parsedQuery" : {
                            "_key" : {
                                    "$eq" : "events:time"
                            }
                    },
                    "winningPlan" : {
                            "stage" : "LIMIT",
                            "limitAmount" : 20,
                            "inputStage" : {
                                    "stage" : "SKIP",
                                    "skipAmount" : 0,
                                    "inputStage" : {
                                            "stage" : "PROJECTION",
                                            "transformBy" : {
                                                    "_id" : 0,
                                                    "value" : 1
                                            },
                                            "inputStage" : {
                                                    "stage" : "FETCH",
                                                    "inputStage" : {
                                                            "stage" : "IXSCAN",
                                                            "keyPattern" : {
                                                                    "_key" : 1,
                                                                    "score" : -1
                                                            },
                                                            "indexName" : "_key_1_score_-1",
                                                            "isMultiKey" : false,
                                                            "multiKeyPaths" : {
                                                                    "_key" : [ ],
                                                                    "score" : [ ]
                                                            },
                                                            "isUnique" : false,
                                                            "isSparse" : false,
                                                            "isPartial" : false,
                                                            "indexVersion" : 1,
                                                            "direction" : "forward",
                                                            "indexBounds" : {
                                                                    "_key" : [
                                                                            "[\"events:time\", \"events:time\"]"
                                                                    ],
                                                                    "score" : [
                                                                            "[MaxKey, MinKey]"
                                                                    ]
                                                            }
                                                    }
                                            }
                                    }
                            }
                    },
                    "rejectedPlans" : [
                            {
                                    "stage" : "SKIP",
                                    "skipAmount" : 4220,
                                    "inputStage" : {
                                            "stage" : "PROJECTION",
                                            "transformBy" : {
                                                    "_id" : 0,
                                                    "value" : 1
                                            },
                                            "inputStage" : {
                                                    "stage" : "SORT",
                                                    "sortPattern" : {
                                                            "score" : -1
                                                    },
                                                    "limitAmount" : 4240,
                                                    "inputStage" : {
                                                            "stage" : "SORT_KEY_GENERATOR",
                                                            "inputStage" : {
                                                                    "stage" : "FETCH",
                                                                    "inputStage" : {
                                                                            "stage" : "IXSCAN",
                                                                            "keyPattern" : {
                                                                                   "_key" : 1,
                                                                                   "value" : -1
                                                                            },
                                                                            "indexName" : "_key_1_value_-1",
                                                                            "isMultiKey" : false,
                                                                            "multiKeyPaths" : {
                                                                                   "_key" : [ ],
                                                                                   "value" : [ ]
                                                                            },
                                                                            "isUnique" : true,
                                                                            "isSparse" : true,
                                                                            "isPartial" : false,
                                                                            "indexVersion" : 1,
                                                                            "direction" : "forward",
                                                                            "indexBounds" : {
                                                                                   "_key" : [
                                                                                   "[\"events:time\", \"events:time\"]"
                                                                                   ],
                                                                                   "value" : [
                                                                                   "[MaxKey, MinKey]"
                                                                                   ]
                                                                            }
                                                                    }
                                                            }
                                                    }
                                            }
                                    }
                            }
                    ]
            },
            "serverInfo" : {
                    "host" : "32cc8256626f",
                    "port" : 27017,
                    "version" : "3.4.2",
                    "gitVersion" : "3f76e40c105fc223b3e5aac3e20dcd026b83b38b"
            },
            "ok" : 1
    }
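
    For reference, the first logged command corresponds to roughly this shell query (reconstructed from the logged `filter`/`sort`/`projection`/`skip`/`limit` fields; not copied from NodeBB source):

    ```js
    // Deep skip: the winning plan still walks ~66k index keys
    // (keysExamined:66579) before returning 20 documents.
    db.objects.find(
        { _key: "tid:11847:posts" },
        { _id: 0, value: 1 }
    ).sort({ score: 1 }).skip(66559).limit(20)
    ```

    The 1930 ms figure is mostly that skip: offset pagination makes the index scan examine every key before the requested page.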
    


  • Does anyone know if there's a way to tell Postgres to index something like rank() over (partition by foo order by bar asc)?


  • FoxDev

    @ben_lubar said in I converted NodeBB to use a PostgreSQL database. AMA.:

    Does anyone know if there's a way to tell Postgres to index something like rank() over (partition by foo order by bar asc)?

    You want Postgres to index something that isn't a column? I'm not sure that's po-

    thought occurs

    Hmm… Is this going to be a column in a view? If so, it may be possible to define an index on that. Never defined an index on a view myself, but I see no reason it can't be done.



  • @raceprouk said in I converted NodeBB to use a PostgreSQL database. AMA.:

    @ben_lubar said in I converted NodeBB to use a PostgreSQL database. AMA.:

    Does anyone know if there's a way to tell Postgres to index something like rank() over (partition by foo order by bar asc)?

    You want Postgres to index something that isn't a column? I'm not sure that's po-

    thought occurs

    Hmm… Is this going to be a column in a view? If so, it may be possible to define an index on that. Never defined an index on a view myself, but I see no reason it can't be done.

    I'd like to define an index on a view, but all the documentation I can find just says it only works on materialized views. @Onyx said something about it further up the thread, but nobody's been able to find what he was thinking of yet.


  • Winner of the 2016 Presidential Election

    @ben_lubar said in I converted NodeBB to use a PostgreSQL database. AMA.:

    @raceprouk said in I converted NodeBB to use a PostgreSQL database. AMA.:

    @ben_lubar said in I converted NodeBB to use a PostgreSQL database. AMA.:

    Does anyone know if there's a way to tell Postgres to index something like rank() over (partition by foo order by bar asc)?

    You want Postgres to index something that isn't a column? I'm not sure that's po-

    thought occurs

    Hmm… Is this going to be a column in a view? If so, it may be possible to define an index on that. Never defined an index on a view myself, but I see no reason it can't be done.

    I'd like to define an index on a view, but all the documentation I can find just says it only works on materialized views. @Onyx said something about it further up the thread, but nobody's been able to find what he was thinking of yet.

    Have you tried just going ahead and declaring one on the view to see what happens?



  • @dreikin said in I converted NodeBB to use a PostgreSQL database. AMA.:

    @ben_lubar said in I converted NodeBB to use a PostgreSQL database. AMA.:

    @raceprouk said in I converted NodeBB to use a PostgreSQL database. AMA.:

    @ben_lubar said in I converted NodeBB to use a PostgreSQL database. AMA.:

    Does anyone know if there's a way to tell Postgres to index something like rank() over (partition by foo order by bar asc)?

    You want Postgres to index something that isn't a column? I'm not sure that's po-

    thought occurs

    Hmm… Is this going to be a column in a view? If so, it may be possible to define an index on that. Never defined an index on a view myself, but I see no reason it can't be done.

    I'd like to define an index on a view, but all the documentation I can find just says it only works on materialized views. @Onyx said something about it further up the thread, but nobody's been able to find what he was thinking of yet.

    Have you tried just going ahead and declaring one on the view to see what happens?

    ERROR: "sorted_sets" is not a table or materialized view
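
    Consistent with the documentation mentioned above, the usual workaround is to make the view materialized, which can then carry an index. A sketch with hypothetical names (and the caveat from earlier in the thread that Postgres has no incremental refresh, so every refresh rebuilds the whole thing):

    ```sql
    -- Hypothetical workaround: materialize the ranking so it is indexable.
    CREATE MATERIALIZED VIEW sorted_sets_ranked AS
    SELECT data->>'_key'  AS _key,
           data->>'value' AS value,
           rank() OVER (PARTITION BY data->>'_key'
                        ORDER BY ((data->>'score'))::numeric ASC) AS rank
    FROM objects;

    CREATE INDEX ON sorted_sets_ranked (_key, rank);

    -- No incremental maintenance: this recomputes the entire view.
    REFRESH MATERIALIZED VIEW sorted_sets_ranked;
    ```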


  • BINNED

    @ben_lubar said in I converted NodeBB to use a PostgreSQL database. AMA.:

    Does anyone know if there's a way to tell Postgres to index something like rank() over (partition by foo order by bar asc)?

    Yes, you can create indexes on expressions:
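
    For example (hypothetical column names), an expression index looks like:

    ```sql
    -- The indexed value is computed per row from an immutable expression.
    CREATE INDEX idx_objects_score
        ON objects ((((data->>'score'))::numeric));
    ```

    One caveat: index expressions must be immutable functions of a single row, so a window function like `rank() OVER (...)` can't itself appear in one.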





  • @raceprouk said in I converted NodeBB to use a PostgreSQL database. AMA.:

    You want Postgres to index something that isn't a column? I'm not sure that's po-

    In SQL Server you can add a computed column and index that.

    @onyx said in I converted NodeBB to use a PostgreSQL database. AMA.:

    Yes, you can create indexes on expressions:

    And of course SQL Server will do that too, although I don't know how complex the expressions can be. EDIT: no, nevermind, it basically just directs you to make an index on a computed column, like I first suggested. Same thing really, just the computed column has a name and will show up in SELECT *. You can put expressions in constraints, but not in indexes.
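
    For example (hypothetical table and columns), the SQL Server pattern is a persisted computed column plus an ordinary index on it:

    ```sql
    -- Sketch: name the expression as a computed column, then index it.
    ALTER TABLE orders
        ADD line_total AS (price * quantity) PERSISTED;

    CREATE INDEX IX_orders_line_total ON orders (line_total);
    ```

    The expression must be deterministic and precise for the index to be allowed, and the column then shows up in `SELECT *` as noted above.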

