A critical look at Marvel vs. Capcom....


  • Discourse touched me in a no-no place

    @Kamil-Podlesak said in A critical look at Marvel vs. Capcom....:

    But MVC on top of MVC does not make any sense.

    For some reason, I'm now thinking about :disco:🐎…


  • And then the murders began.

    @error said in A critical look at Marvel vs. Capcom....:

    @Unperverted-Vixen said in A critical look at Marvel vs. Capcom....:

    And that you’re limited to however many parameters the SQL server supports. I think for Microsoft the limit is 3000?

    Wha...? I've never gone past 8.

    Then you must be using narrow tables and only sending one INSERT to SQL Server at a time. Inserting 150 rows into a table with 20 columns will require 3000 parameters.

    (The solution was, as I said, to reduce the batch size - only send 10 insert statements to SQL Server at a time instead of 150, or more in our case.)

    @Jaime Interesting idea. Are you doing that just for SELECTs or for INSERT/UPDATEs too?
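    The arithmetic in the exchange above (150 rows × 20 columns = 3000 parameters) generalizes directly. A minimal sketch of the batch-size reduction being described, in Python for brevity (the parameter limit is whatever your server/driver actually enforces; the function names are illustrative):

```python
def max_rows_per_batch(param_limit: int, columns_per_row: int) -> int:
    # Each row consumes one parameter per column, so this many rows
    # fit in a single multi-row INSERT without exceeding the limit.
    return param_limit // columns_per_row

def insert_batches(rows: list, param_limit: int, columns_per_row: int) -> list:
    # Split the full row set into chunks, each small enough to send
    # as one parameterized INSERT statement.
    size = max_rows_per_batch(param_limit, columns_per_row)
    return [rows[i:i + size] for i in range(0, len(rows), size)]
```

    With the numbers from the post, 3000 parameters over 20 columns gives batches of 150 rows, which is exactly where the limit bites.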



  • @Kamil-Podlesak said in A critical look at Marvel vs. Capcom....:

    RPC-over-HTTP (usually called "REST" today,

    REST is supposed to mean something different from RPC, or at least to be more constrained than RPC. In REST you are supposed to have just getters (the GET method) to read the data (without complicated parameters, mostly just by ID) and setters (PUT) to update it. Of course, if you have more complicated business logic, you end up with arbitrary operations (POST), and at that point you probably shouldn't be calling it REST any more, because it isn't.

    In either case, though, if you have full MVC in the front-end, the API-over-HTTP, no matter how RESTful it is, should be a fairly thin layer over the DAL.
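    That "thin layer over the DAL" idea can be sketched like this, with a hypothetical `dal` object standing in for the data-access layer (all names here are illustrative, not from any real framework):

```python
def handle(method: str, resource_id, body, dal):
    # GET/PUT map cleanly onto DAL reads and writes; POST is the
    # escape hatch for arbitrary operations, i.e. the point where
    # "REST" quietly becomes RPC-over-HTTP.
    if method == "GET":
        return dal.get(resource_id)           # simple read, mostly by ID
    if method == "PUT":
        return dal.put(resource_id, body)     # idempotent update/replace
    if method == "POST":
        return dal.invoke(resource_id, body)  # arbitrary business operation
    raise ValueError(f"unsupported method: {method}")
```

    The point is how little lives in this layer: authentication, validation, and shaping aside, it mostly forwards to the DAL.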


  • Considered Harmful

    @Unperverted-Vixen said in A critical look at Marvel vs. Capcom....:

    Then you must be using narrow tables and only sending one INSERT to SQL Server at a time. Inserting 150 rows into a table with 20 columns will require 3000 parameters.

    Nope. As I said, for bulk updates I use a table-valued parameter.



  • @boomzilla said in A critical look at Marvel vs. Capcom....:

    @Jaime said in A critical look at Marvel vs. Capcom....:

    @boomzilla said in A critical look at Marvel vs. Capcom....:

    Huh. I use Java and can do the equivalent of that code (using Hibernate).

    public List<Person> getPeople(List<String> personIDs){
        return getEm().createQuery("from Person where id in :ids")
            .setParameter("ids", personIDs)
            .getResultList();
    }
    

    Interestingly, you seem to have the same limitation as @Unperverted-Vixen mentioned above. See https://stackoverflow.com/questions/7758198/hibernate-oracle-in-clause-limitation-how-to-solve-it

    If I'm reading the StackOverflow thread correctly, Hibernate seems to be simply replacing the parameter with a comma delimited literal list.

    More or less. The IN clause is the real limiting factor there because it can't take more than 1,000 elements, whether hard-coded or as parameters, which, for the places where I've done that, isn't a problem in practice.

    Ummm, guys... are you telling me that you don't split these long lists into batches of strictly predefined sizes, padding them (when shorter) and invoking several queries (when the count is higher than the maximum batch size)???

    From this thread, it looks like you just generate N question marks (N = the length of the list/array) and fire the SQL query.

    This is quite a serious performance killer and this is reason #2343 why you don't let developers use the database directly these days!

    Don't do this!!

    Unless you are 110% sure that the database engine in question has special logic that prevents query-plan-cache flushing. I am not aware of any such engine, but I haven't done any research (and actually I have done very little SQL in the last 4 years).
    Also, some database engines do not have this problem because the query plan compiler and optimizer do not consume any significant amount of CPU (you know which one that is...).


  • And then the murders began.

    @error said in A critical look at Marvel vs. Capcom....:

    Nope. As I said, for bulk updates I use a table-valued parameter.

    Okay, now I understand what you do.

    I've flirted with them briefly but agree with @Jaime that they're a pain to keep synced. Any change would require a wholesale rewrite of code I don't own, so it's probably not going to happen anyway; in any case, something like TVPs that creates more ongoing work for me instead of saving work is not going to happen. 😛


  • ♿ (Parody)

    @Kamil-Podlesak said in A critical look at Marvel vs. Capcom....:

    @boomzilla said in A critical look at Marvel vs. Capcom....:

    @Jaime said in A critical look at Marvel vs. Capcom....:

    @boomzilla said in A critical look at Marvel vs. Capcom....:

    Huh. I use Java and can do the equivalent of that code (using Hibernate).

    public List<Person> getPeople(List<String> personIDs){
        return getEm().createQuery("from Person where id in :ids")
            .setParameter("ids", personIDs)
            .getResultList();
    }
    

    Interestingly, you seem to have the same limitation as @Unperverted-Vixen mentioned above. See https://stackoverflow.com/questions/7758198/hibernate-oracle-in-clause-limitation-how-to-solve-it

    If I'm reading the StackOverflow thread correctly, Hibernate seems to be simply replacing the parameter with a comma delimited literal list.

    More or less. The IN clause is the real limiting factor there because it can't take more than 1,000 elements, whether hard-coded or as parameters, which, for the places where I've done that, isn't a problem in practice.

    Ummm, guys... are you telling me that you don't split these long lists into batches of strictly predefined sizes, padding them (when shorter) and invoking several queries (when the count is higher than the maximum batch size)???

    No, I am not telling you that.

    From this thread, it looks like you just generate N question marks (N = the length of the list/array) and fire the SQL query.

    In some places, yes.

    This is quite a serious performance killer and this is reason #2343 why you don't let developers use the database directly these days!

    LOL. Then no queries would be done! Also, I know they're not performance bottlenecks because I've tested them and they work quite well, though I've never come anywhere close to thousands of parameters.


  • Discourse touched me in a no-no place

    @Bulb said in A critical look at Marvel vs. Capcom....:

    then you probably shouldn't be calling it REST any more, because it isn't

    As long as the semantics of getters, setters and deleters are right, the rest is fine. For proper REST, you also need to map errors correctly and you should make things discoverable with appropriate links in pages (when retrieved with the correct content type). REST doesn't mean just plain old CRUD.

    The usual problems with REST as implemented are in failure mapping and, especially, discovery.



  • @dkf I am usually the guy who does not care whether it is better called X or Y, as long as it solves the problem.

    Having the API described by an OpenAPI 3 JSON document tends to help a lot in practice.



  • @boomzilla said in A critical look at Marvel vs. Capcom....:

    @Kamil-Podlesak said in A critical look at Marvel vs. Capcom....:

    This is quite a serious performance killer and this is reason #2343 why you don't let developers use the database directly these days!

    LOL. Then no queries would be done! Also, I know they're not performance bottlenecks because I've tested them and they work quite well, though I've never come anywhere close to thousands of parameters.

    It's not really about the number of parameters. It's about the number of different queries executed in some (short) time window, relative to the size of the query-plan (or "prepared statement") cache. If some other, more critical queries get pushed out of that cache, you might have a problem... and this is something that might happen randomly or by nefarious means (as a DoS attack). BDDT

    So, yes, I was a little harsh, but I cannot stress my advice enough: be careful when generating SQL. There might be unexpected consequences.


  • ♿ (Parody)

    @Kamil-Podlesak said in A critical look at Marvel vs. Capcom....:

    @boomzilla said in A critical look at Marvel vs. Capcom....:

    @Kamil-Podlesak said in A critical look at Marvel vs. Capcom....:

    This is quite a serious performance killer and this is reason #2343 why you don't let developers use the database directly these days!

    LOL. Then no queries would be done! Also, I know they're not performance bottlenecks because I've tested them and they work quite well, though I've never come anywhere close to thousands of parameters.

    It's not really about the number of parameters. It's about the number of different queries executed in some (short) time window, relative to the size of the query-plan (or "prepared statement") cache. If some other, more critical queries get pushed out of that cache, you might have a problem... and this is something that might happen randomly or by nefarious means (as a DoS attack). BDDT

    So, yes, I was a little harsh, but I cannot stress my advice enough: be careful when generating SQL. There might be unexpected consequences.

    Well...it's doing operations that the users need to do when they push the proper buttons. When other users are also doing stuff...yeah, sometimes Oracle gets stupid but there's not much I can do about it. The alternative to what I'm doing would be to run the same query 50 times with a single id parameter instead of a single query with 50 ids all at once. The single query with lots of parameters way of doing stuff ends up being a lot faster in practice.



  • @boomzilla said in A critical look at Marvel vs. Capcom....:

    @Kamil-Podlesak said in A critical look at Marvel vs. Capcom....:

    @boomzilla said in A critical look at Marvel vs. Capcom....:

    @Kamil-Podlesak said in A critical look at Marvel vs. Capcom....:

    This is quite a serious performance killer and this is reason #2343 why you don't let developers use the database directly these days!

    LOL. Then no queries would be done! Also, I know they're not performance bottlenecks because I've tested them and they work quite well, though I've never come anywhere close to thousands of parameters.

    It's not really about the number of parameters. It's about the number of different queries executed in some (short) time window, relative to the size of the query-plan (or "prepared statement") cache. If some other, more critical queries get pushed out of that cache, you might have a problem... and this is something that might happen randomly or by nefarious means (as a DoS attack). BDDT

    So, yes, I was a little harsh, but I cannot stress my advice enough: be careful when generating SQL. There might be unexpected consequences.

    Well...it's doing operations that the users need to do when they push the proper buttons. When other users are also doing stuff...yeah, sometimes Oracle gets stupid but there's not much I can do about it. The alternative to what I'm doing would be to run the same query 50 times with a single id parameter instead of a single query with 50 ids all at once. The single query with lots of parameters way of doing stuff ends up being a lot faster in practice.

    You can prepare several variants: a query with one parameter, a query with 10 parameters, a query with 50 parameters. Choose the closest higher value, pad the parameters (just repeat the last value), and execute the query. If it's more than 50, just execute the query as many times as needed and concatenate the results. From the database's perspective, these are 3 queries -> 3 query plans to compile (and cache).
    The batch sizes might be auto-generated, of course... with any number of possible batches it's quite easy to hide this in a library.

    AFAIK Hibernate does that (but it's been some time since I investigated the sources, so I might remember it incorrectly).
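    The padding scheme described above can be sketched like this (Python for brevity; the batch sizes are illustrative, and callers must tolerate the duplicate values that padding introduces, which is harmless for IN-list semantics but means results may need deduplicating):

```python
BATCH_SIZES = (1, 10, 50)  # only these statement shapes ever reach the database

def padded_batches(values: list) -> list:
    # Split values into chunks whose sizes come from a fixed menu,
    # padding a short final chunk by repeating its last value, so the
    # server compiles and caches at most len(BATCH_SIZES) query plans.
    out, i = [], 0
    while i < len(values):
        remaining = len(values) - i
        # Smallest predefined size that covers the remainder,
        # falling back to the largest size for big remainders.
        size = next((s for s in BATCH_SIZES if s >= remaining), BATCH_SIZES[-1])
        chunk = values[i:i + size]
        chunk = chunk + [chunk[-1]] * (size - len(chunk))  # pad with last value
        out.append(chunk)
        i += size
    return out
```

    Seven IDs become one 10-parameter query; sixty IDs become one 50-parameter query plus one 10-parameter query, exactly as the post describes.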


  • Notification Spam Recipient

    @Kamil-Podlesak said in A critical look at Marvel vs. Capcom....:

    @MrL said in A critical look at Marvel vs. Capcom....:

    @Bulb said in A critical look at Marvel vs. Capcom....:

    @MrL said in A critical look at Marvel vs. Capcom....:

    @Kamil-Podlesak said in A critical look at Marvel vs. Capcom....:

    @Groaner said in A critical look at Marvel vs. Capcom....:

    @Unperverted-Vixen said in A critical look at Marvel vs. Capcom....:

    @Groaner said in A critical look at Marvel vs. Capcom....:

    What's a good name, then, for a design where you not only have models, views, and controllers, but viewmodels and mappers and services and repositories, not just on the backend, but the frontend as well?

    if by "frontend" you mean "ASP.NET MVC or ASP.NET Core code"?

    By "frontend" I mean "any code that directly handles user input, comprised of HTML and Javascript or things that compile to Javascript, and ends just before a MVC Controller method (which, from there on out, is where the 'backend' lives)."

    And yes, as it happens, there are Angular controllers, repositories, services, views, in addition to the controllers, repositories, services, etc. on the ASP.NET MVC side. Having fun yet?

    You should have started with this, because this is definitely :wtf:
    Actually, having two frontends on top of each other is a front-page material.

    Js framework on top of C# MVC? That's pretty normal.

    You have a JS front-end on top of a C# back-end, but then you shouldn't have all those layers in both of them.

    Which ones?

    The frontend ones. MVC is a frontend architecture.

    It's perfectly fine to have a full-blown Javascript framework with fancy MVC, connected to a C# (Java, python, PHP) server which serves data and implements the business logic layer via some kind of RPC-over-HTTP (usually called "REST" today, "REST-like" by those less pretentious).

    It's also perfectly fine (but unfashionable) to have C# (Java, python, PHP) MVC framework with some light Javascript (plain, maybe JQuery or something similar).

    But MVC on top of MVC does not make any sense.

    But it does. The way it works goes like this, starting from a request from the frontend:

    Controller - check authentication, validate request data, form parameters for BL (Service), call Service.
    Service - do BL stuff, communicating with DB via ORM, gather results, return them to Controller.
    Controller - convert results to Model, return them to View.
    View - js lives here.

    I don't care what's in View. If js guys want to build a tower of Babel there, that's their problem. I have request params as inbound contract and Model as outbound contract, that's it.

    And it doesn't really matter how many 'layers' there are, or whether you can assign a letter from a pattern acronym to each one. I've worked with 3-layer systems that made no sense, everything everywhere. And I've worked with 10-layer systems that were sensible.



  • @MrL said in A critical look at Marvel vs. Capcom....:

    @Kamil-Podlesak said in A critical look at Marvel vs. Capcom....:

    @MrL said in A critical look at Marvel vs. Capcom....:

    @Bulb said in A critical look at Marvel vs. Capcom....:

    @MrL said in A critical look at Marvel vs. Capcom....:

    @Kamil-Podlesak said in A critical look at Marvel vs. Capcom....:

    @Groaner said in A critical look at Marvel vs. Capcom....:

    @Unperverted-Vixen said in A critical look at Marvel vs. Capcom....:

    @Groaner said in A critical look at Marvel vs. Capcom....:

    What's a good name, then, for a design where you not only have models, views, and controllers, but viewmodels and mappers and services and repositories, not just on the backend, but the frontend as well?

    if by "frontend" you mean "ASP.NET MVC or ASP.NET Core code"?

    By "frontend" I mean "any code that directly handles user input, comprised of HTML and Javascript or things that compile to Javascript, and ends just before a MVC Controller method (which, from there on out, is where the 'backend' lives)."

    And yes, as it happens, there are Angular controllers, repositories, services, views, in addition to the controllers, repositories, services, etc. on the ASP.NET MVC side. Having fun yet?

    You should have started with this, because this is definitely :wtf:
    Actually, having two frontends on top of each other is a front-page material.

    Js framework on top of C# MVC? That's pretty normal.

    You have a JS front-end on top of a C# back-end, but then you shouldn't have all those layers in both of them.

    Which ones?

    The frontend ones. MVC is a frontend architecture.

    It's perfectly fine to have a full-blown Javascript framework with fancy MVC, connected to a C# (Java, python, PHP) server which serves data and implements the business logic layer via some kind of RPC-over-HTTP (usually called "REST" today, "REST-like" by those less pretentious).

    It's also perfectly fine (but unfashionable) to have C# (Java, python, PHP) MVC framework with some light Javascript (plain, maybe JQuery or something similar).

    But MVC on top of MVC does not make any sense.

    But it does. The way it works goes like this, starting from a request from the frontend:

    Controller - check authentication, validate request data, form parameters for BL (Service), call Service.
    Service - do BL stuff, communicating with DB via ORM, gather results, return them to Controller.
    Controller - convert results to Model, return them to View.
    View - js lives here.

    I don't care what's in View. If js guys want to build a tower of Babel there, that's their problem. I have request params as inbound contract and Model as outbound contract, that's it.

    And it doesn't really matter how many 'layers' there are, or whether you can assign a letter from a pattern acronym to each one. I've worked with 3-layer systems that made no sense, everything everywhere. And I've worked with 10-layer systems that were sensible.

    My view is that the MVC letters are not supposed to be layers at all; they are just parts of the same layer. In this view, marking some layers with M/V/C letters makes no sense.

    Of course, it's a philosophical term, so you can define it differently... but then it cannot be used for efficient communication, which is the primary purpose of "software patterns", so then it makes no sense. Unless we define "software pattern" differently...


  • Java Dev

    @boomzilla said in A critical look at Marvel vs. Capcom....:

    @Kamil-Podlesak said in A critical look at Marvel vs. Capcom....:

    @boomzilla said in A critical look at Marvel vs. Capcom....:

    @Kamil-Podlesak said in A critical look at Marvel vs. Capcom....:

    This is quite a serious performance killer and this is reason #2343 why you don't let developers use the database directly these days!

    LOL. Then no queries would be done! Also, I know they're not performance bottlenecks because I've tested them and they work quite well, though I've never come anywhere close to thousands of parameters.

    It's not really about the number of parameters. It's about the number of different queries executed in some (short) time window, relative to the size of the query-plan (or "prepared statement") cache. If some other, more critical queries get pushed out of that cache, you might have a problem... and this is something that might happen randomly or by nefarious means (as a DoS attack). BDDT

    So, yes, I was a little harsh, but I cannot stress my advice enough: be careful when generating SQL. There might be unexpected consequences.

    Well...it's doing operations that the users need to do when they push the proper buttons. When other users are also doing stuff...yeah, sometimes Oracle gets stupid but there's not much I can do about it. The alternative to what I'm doing would be to run the same query 50 times with a single id parameter instead of a single query with 50 ids all at once. The single query with lots of parameters way of doing stuff ends up being a lot faster in practice.

    There's also SELECT NAME INTO :name WHERE ID=:id and binding arrays.



  • @Unperverted-Vixen said in A critical look at Marvel vs. Capcom....:

    @Jaime Interesting idea. Are you doing that just for SELECTs or for INSERT/UPDATEs too?

    For any. Here's a simple insert example expressed as a batch.

    DECLARE @data xml
    SELECT @data = '
    <People>
      <Person FirstName="Henry" LastName="Aaron" />
      <Person FirstName="Babe" LastName="Ruth" />
    </People>'
    
    INSERT People (FirstName, LastName)
    SELECT
      people.person.value('@FirstName', 'varchar(30)'),
      people.person.value('@LastName', 'varchar(30)')
    FROM
      @data.nodes('People/Person') AS people(person)
    

    This supports a nearly infinite number of rows and performance is almost as good as BCP.
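    Client-side, the payload for that pattern is just XML assembly. A hypothetical Python sketch producing the `<People>` document the T-SQL above shreds (in real code the resulting string travels to the server as one parameter, regardless of how many rows it carries):

```python
import xml.etree.ElementTree as ET

def people_to_xml(rows):
    # Build the <People><Person .../></People> document that the
    # @data.nodes('People/Person') query unpacks server-side.
    root = ET.Element("People")
    for first, last in rows:
        ET.SubElement(root, "Person", FirstName=first, LastName=last)
    return ET.tostring(root, encoding="unicode")
```

    Because the whole batch is a single value, the statement shape never changes: one cached query plan no matter how many rows are inserted.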


  • ♿ (Parody)

    @PleegWat said in A critical look at Marvel vs. Capcom....:

    @boomzilla said in A critical look at Marvel vs. Capcom....:

    @Kamil-Podlesak said in A critical look at Marvel vs. Capcom....:

    @boomzilla said in A critical look at Marvel vs. Capcom....:

    @Kamil-Podlesak said in A critical look at Marvel vs. Capcom....:

    This is quite a serious performance killer and this is reason #2343 why you don't let developers use the database directly these days!

    LOL. Then no queries would be done! Also, I know they're not performance bottlenecks because I've tested them and they work quite well, though I've never come anywhere close to thousands of parameters.

    It's not really about the number of parameters. It's about the number of different queries executed in some (short) time window, relative to the size of the query-plan (or "prepared statement") cache. If some other, more critical queries get pushed out of that cache, you might have a problem... and this is something that might happen randomly or by nefarious means (as a DoS attack). BDDT

    So, yes, I was a little harsh, but I cannot stress my advice enough: be careful when generating SQL. There might be unexpected consequences.

    Well...it's doing operations that the users need to do when they push the proper buttons. When other users are also doing stuff...yeah, sometimes Oracle gets stupid but there's not much I can do about it. The alternative to what I'm doing would be to run the same query 50 times with a single id parameter instead of a single query with 50 ids all at once. The single query with lots of parameters way of doing stuff ends up being a lot faster in practice.

    There's also SELECT NAME INTO :name WHERE ID=:id and binding arrays.

    OK, I've never done anything like that. My only experience with INTO is in PL/SQL and cursors.


  • Java Dev

    @boomzilla You'd need to do some research - I know we have to use different SQL in our C and PHP code because the language bindings are different. On the C side, I bind the first value of an array, then on execute I tell it I actually want to execute X times; at that point every bind becomes an X-position array bind. On the PHP side, this multi-execute is not available and a PL/SQL wrapper is needed, with 'real' array binds. Provided the PL/SQL is written correctly, the performance benefit is the same.

    What we really use this for is bulk insert of raw data. I allocate thousands of times the amount of memory I need for a single insert, then write data into the arrays as I read the input records. I then actually execute, specifying the number of records I've prepared, when I hit that limit, when I need to resize one of the binds (because I guessed the required string buffer size per record too low), or when a separate insert into a different table with a REF constraint needs to be flushed.
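    The buffering described above, reduced to its skeleton (a Python stand-in, not the actual C code: `flush_fn` plays the role of the array-bind execute, and the resize and REF-constraint triggers are collapsed into an explicit `flush()`):

```python
class BulkBuffer:
    # Accumulate rows in memory and execute one bulk insert per flush,
    # instead of one round-trip per row.
    def __init__(self, flush_fn, capacity=1000):
        self.flush_fn = flush_fn    # stand-in for the array-bind execute
        self.capacity = capacity    # preallocated room, in rows
        self.rows = []

    def add(self, row):
        self.rows.append(row)
        if len(self.rows) >= self.capacity:
            self.flush()  # hit the preallocated limit

    def flush(self):
        # Also called explicitly, e.g. before a dependent table's insert
        # must see these rows, or at end of input.
        if self.rows:
            self.flush_fn(self.rows)
            self.rows = []
```

    The win is the same as with array binds: the statement is compiled once and the per-row cost is just a memory write until the buffer drains.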


  • ♿ (Parody)

    @PleegWat ah, yes, good point. In my C++ code I do some stuff like that where I can add multiple rounds of parameters before firing off the actual query. On the java side Hibernate does batching for that stuff automatically so I don't really need to worry about it (not that I really do any bulk inserts from java that would need that sort of thing). The stuff I posted above is really for either bulk updates or just bulk selects.


  • Java Dev

    @boomzilla Yeah, I don't think I've ever actually used SELECT this way. We do use out binds in a number of places for 'simple' SQL statements, but that's mostly for the RETURNING clause.


  • ♿ (Parody)

    @PleegWat it's useful for selects when you'd like to use HQL to fetch full objects but you can't properly express the conditions in HQL so you break it up into two queries, first SQL to get the IDs then the HQL to pull all the objects (which may be multiple queries under the covers). It sounds :wtf:y, and it kind of is, but it works well.


  • Java Dev

    @boomzilla We do everything with our own query builders and hardcoded queries. On the upside, this means no dealing with stuff like HQL. On the downside, it means maintaining our own queries and query builders. Our SQL performance issues are our own.


  • ♿ (Parody)

    @PleegWat the great benefit of HQL is that I get my Java objects hydrated and ready to use with basically no effort on my part. Changed the object or schema? You just update the class definition and it works everywhere you use it (assuming the query itself doesn't need changing due to conditions in the query, of course, but that's true for any query).



  • @Jaime said in A critical look at Marvel vs. Capcom....:

    @Unperverted-Vixen said in A critical look at Marvel vs. Capcom....:

    @Jaime Interesting idea. Are you doing that just for SELECTs or for INSERT/UPDATEs too?

    For any. Here's a simple insert example expressed as a batch.

    DECLARE @data xml
    SELECT @data = '
    <People>
      <Person FirstName="Henry" LastName="Aaron" />
      <Person FirstName="Babe" LastName="Ruth" />
    </People>'
    
    INSERT People (FirstName, LastName)
    SELECT
      people.person.value('@FirstName', 'varchar(30)'),
      people.person.value('@LastName', 'varchar(30)')
    FROM
      @data.nodes('People/Person') AS people(person)
    

    This supports a nearly infinite number of rows and performance is almost as good as BCP.

    Oh, yeah, I did something like that 15 years ago with SQL Server 2008. I recall that it was faster if you defined an XML schema for it in the database. Not that we would notice the difference with current hardware, probably.



  • @robo2 said in A critical look at Marvel vs. Capcom....:

    @Jaime said in A critical look at Marvel vs. Capcom....:

    @Unperverted-Vixen said in A critical look at Marvel vs. Capcom....:

    @Jaime Interesting idea. Are you doing that just for SELECTs or for INSERT/UPDATEs too?

    For any. Here's a simple insert example expressed as a batch.

    DECLARE @data xml
    SELECT @data = '
    <People>
      <Person FirstName="Henry" LastName="Aaron" />
      <Person FirstName="Babe" LastName="Ruth" />
    </People>'
    
    INSERT People (FirstName, LastName)
    SELECT
      people.person.value('@FirstName', 'varchar(30)'),
      people.person.value('@LastName', 'varchar(30)')
    FROM
      @data.nodes('People/Person') AS people(person)
    

    This supports a nearly infinite number of rows and performance is almost as good as BCP.

    Oh, yeah, I did something like that 15 years ago with SQL Server 2008. I recall that it was faster if you defined an XML schema for it in the database. Not that we would notice the difference with current hardware, probably.

    That, or insert the XML into an XML column. I remember running into performance issues shredding large XML documents stored in a scalar variable, which disappeared when that same scalar variable was inserted into a table and the same XQuery commands were run against that table column instead of the variable. YMMV.



  • @boomzilla said in A critical look at Marvel vs. Capcom....:

    @Zenith said in A critical look at Marvel vs. Capcom....:

    @Bulb said in A critical look at Marvel vs. Capcom....:

    @Jaime said in A critical look at Marvel vs. Capcom....:

    You don't get to say "I'm doing it my way until you convince me otherwise."

    Except many, many developers out there just say that anyway.

    So somebody else decides you'll do it their way until they're convinced otherwise. Seems like authority is often at odds with burden of proof. In other words, those that have authority don't have to answer/explain/prove anything.

    Edit: My work environment leaks into every post, doesn't it? So tired of being punished for doing stuff wrong like I was told even though I knew better...

    Having actual security constraints is a legitimate reason not to do something (even if the underlying reasons are nonsense, like with expiring passwords). It's out of your control. Still, it's good for the developers to understand the reasons behind the constraint so that they don't reimplement the same vulnerability in another way that the security guideline didn't think about.

    Stuff like, "Don't sftp to China" is a real concern, but you're not really preventing that in a meaningful way by not allowing some particular kind of code to be deployed. That's just security theater. Better to have defined network connections / firewalls and monitored ways to pierce through them (e.g., only the ports that talk to the DB, the web server is the only thing that can talk to the outside world, etc.). Because I have plenty of places other than the DB to hide nefarious code (and the DB, as already mentioned, is behind several firewalls, so trying to exfiltrate data from there is a non-starter).

    You can even get outside the firewalls if you pump data via the web interface, so 🍌ing the database from doing it directly doesn't protect against a hostile developer. If you have rules because you don't trust the people you're hiring, you are doing it very wrong.



  • @Carnage said in A critical look at Marvel vs. Capcom....:

    If you have rules because you don't trust the people you're hiring, you are doing it very wrong.

    Tell that to any company with an IT security department, auditors, or an ISO27001 certification.
    Not that you're wrong though.



  • @nerd4sale said in A critical look at Marvel vs. Capcom....:

    @Carnage said in A critical look at Marvel vs. Capcom....:

    If you have rules because you don't trust the people you're hiring, you are doing it very wrong.

    Tell that to any company with an IT security department, auditors, or an ISO27001 certification.
    Not that you're wrong though.

    Oh, I do tell them.
    At my current gig, the system I built was the first one ever to pass the audit/pen tests on the first try.
    I've also told them that having no authorization or validation at all between servers once inside the perimeter is a really bad idea, but they keep telling me that there is no way an attacker could find a vulnerable system to gain a foothold and jump further inside.
    And I've run into this so many times with different security departments. They just don't know what they're talking about.


  • Discourse touched me in a no-no place

    @Carnage said in A critical look at Marvel vs. Capcom....:

    If you have rules because you don't trust the people you're hiring, you are doing it very wrong.

    Never trust people not to fuck up from time to time. Minimise the scale of their little accidents instead, and help them recover easily so that problems don't become disasters.



  • @dkf said in A critical look at Marvel vs. Capcom....:

    @Carnage said in A critical look at Marvel vs. Capcom....:

    If you have rules because you don't trust the people you're hiring, you are doing it very wrong.

    Never trust people not to fuck up from time to time. Minimise the scale of their little accidents instead, and help them recover easily so that problems don't become disasters.

    Yeah, but I've never accidentally set up a data siphon to a chinese sftp.



  • @Carnage said in A critical look at Marvel vs. Capcom....:

    @dkf said in A critical look at Marvel vs. Capcom....:

    @Carnage said in A critical look at Marvel vs. Capcom....:

    If you have rules because you don't trust the people you're hiring, you are doing it very wrong.

    Never trust people not to fuck up from time to time. Minimise the scale of their little accidents instead, and help them recover easily so that problems don't become disasters.

    Yeah, but I've never accidentally set up a data siphon to a chinese sftp.

    There should be sanity checks and reviews and such in place to catch most mistakes, but against malice the time-proven method is legal action. The technical side just needs enough auditing to provide evidence for that legal action. In other words, you shouldn't try to prevent someone from siphoning the data to a chinese sftp, because that also often prevents them from getting work done; you should just make it very hard for them to cover their traces.
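    A tamper-evident audit trail is the usual technical backing for that: each entry commits to the hash of the previous one, so altering or deleting a past entry breaks the chain and the cover-up itself becomes evidence. A minimal sketch (function and field names are made up for illustration):

```python
import hashlib
import json

GENESIS = "0" * 64  # hypothetical sentinel hash for an empty log

def append_entry(log, actor, action):
    """Append an entry that commits to the previous entry's hash."""
    prev = log[-1]["hash"] if log else GENESIS
    body = {"actor": actor, "action": action, "prev": prev}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    log.append({**body, "hash": digest})

def verify(log):
    """Recompute every hash; an edited or removed middle entry breaks the chain."""
    prev = GENESIS
    for entry in log:
        body = {k: entry[k] for k in ("actor", "action", "prev")}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev"] != prev or entry["hash"] != digest:
            return False
        prev = entry["hash"]
    return True
```

    In practice you would also ship each entry to a separate append-only store the insider can't write to, since silently truncating the tail of a purely local chain is still undetectable.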



  • @Bulb said in A critical look at Marvel vs. Capcom....:

    @Carnage said in A critical look at Marvel vs. Capcom....:

    @dkf said in A critical look at Marvel vs. Capcom....:

    @Carnage said in A critical look at Marvel vs. Capcom....:

    If you have rules because you don't trust the people you're hiring, you are doing it very wrong.

    Never trust people not to fuck up from time to time. Minimise the scale of their little accidents instead, and help them recover easily so that problems don't become disasters.

    Yeah, but I've never accidentally set up a data siphon to a chinese sftp.

    There should be sanity checks and reviews and such in place to catch most mistakes, but against malice the time-proven method is legal action. The technical side just needs enough auditing to provide evidence for that legal action. In other words, you shouldn't try to prevent someone from siphoning the data to a chinese sftp, because that also often prevents them from getting work done; you should just make it very hard for them to cover their traces.

    It's probably a good idea to have a firewall in the way if you're dealing with anything sensitive, but other than that it's mostly just security theatre.
    One way I've seen a data siphon implemented is as part of a Jenkins job. There are so many ways to get around any limits that locking it all down will kill off any connectivity.



  • @Carnage I am all for a bunch of firewalls compartmentalizing the different systems, but that's against external attackers. Insiders won't be stopped by them.


  • Discourse touched me in a no-no place

    @Bulb said in A critical look at Marvel vs. Capcom....:

    Insiders won't be stopped by them.

    Having worked for a long time in a university, I know for sure that you absolutely cannot trust students not to go poking around where they shouldn't. Closing things off so that you're only providing the access that you explicitly want to is a really good plan. Firewalls are a key part of that: they're one of the places where you enforce security policy.
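    As a concrete illustration, "only providing the access you explicitly want" usually means a default-deny ruleset. A hypothetical nftables sketch (the table name, subnet, and ports are invented for the example):

```
table inet lab_policy {
    chain input {
        type filter hook input priority 0; policy drop;

        # allow loopback and return traffic for established connections
        iif "lo" accept
        ct state established,related accept

        # admin subnet may reach SSH; everyone else only the web front-end
        ip saddr 10.0.5.0/24 tcp dport 22 accept
        tcp dport 443 accept
    }
}
```

    With `policy drop`, anything not explicitly accepted never reaches the services behind it, which is the point being made above.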

    Of course, they're not the whole story. Nor is mere security the whole story either. If people can't do what they're supposed to, your system might be secure but your job isn't…



  • @dkf By insider I mean people who have some work to do with the systems. Of course the students shouldn't get to the systems at all, so a firewall stopping them from reaching the compartment at all is effective. But some people need a way through that firewall for work, and those people can find thousands of ways to exfiltrate data or do damage long after the technical measures have become strict enough to prevent them from actually doing the work they should.


  • Discourse touched me in a no-no place

    @Bulb said in A critical look at Marvel vs. Capcom....:

    Of course the students shouldn't get to the systems at all

    It's really not that simple. They can, but it's on a case-by-case basis depending on what resources their particular project needs. Some things are more heavily protected than others.



  • @Carnage said in A critical look at Marvel vs. Capcom....:

    If you have rules because you don't trust the people you're hiring, you are doing it very wrong.

    The whole idea that any of this comes from lack of trust in the IT folks is the beginning of the internal friction that's so damaging. Trusting your people has nothing to do with it.

    Let me give you an example of where these things actually matter:

    I work for a company that provides payment card solutions. You can come to us for cool stuff like "Give my people a VISA card they can use to pay all of their travel expenses". Let's say the Screen Actors Guild of America comes to us as a prospective customer for this service. As the conversation progresses, they say "It occurs to us that you will have private personal details on a lot of A-list celebrities and those details include things like where they are staying right now".

    We'll say "We've got top notch IT people, last week one of them saved an old lady from a burning building". They'll say "That seems to be common. We just met with four of your competitors and three of them said the same thing."

    Now we have to figure out how to convince them that we really do have good people and we really do put in due care to ensure that their data is handled responsibly. We go out and download a security framework, like the Center for Internet Security's Top 20 Controls. We go through all of the points, assess the objectives against our business, and come up with specific control activities. Then we make sure our people are producing a decent evidence trail as they are doing these tasks and put in appropriate reporting. Then we hire an external auditor and have them confirm that they saw this evidence trail and that it seems genuine.

    We do all this, hand the prospective client the audit report... and we win the business because our competitors simply tried to tell them over and over again that they had good people.

    This is such a common practice that there are industry standards that roll up a lot of external concerns into a single audit. It's quite common that all of the above plus more will be covered in a SOC 2 Type 2 audit report.

    That's what we do, we hand every client our most recent SOC 2, and win a lot of business, especially sensitive clients and governments.

    All the people above that wonder why I want blind compliance.... because it will be audited, and exceptions will end up on the report, and our clients will read about it. Only me and a small group of others actually have to go through the pain of documenting all these controls.


  • ♿ (Parody)

    @Jaime said in A critical look at Marvel vs. Capcom....:

    All the people above that wonder why I want blind compliance.... because it will be audited, and exceptions will end up on the report, and our clients will read about it. Only me and a small group of others actually have to go through the pain of documenting all these controls.

    Again, you've misread the room.


  • Discourse touched me in a no-no place

    @boomzilla said in A critical look at Marvel vs. Capcom....:

    Again, you've misread the room.

    I'm looking forward to the “we're so secure that we make sure even our own customer service people can't do their jobs!” moment…



  • @dkf If you're talking about me... I don't claim to be doing security any better than anyone else, I just claim that the reasonable security we are doing isn't founded on empty claims.



  • @Jaime said in A critical look at Marvel vs. Capcom....:

    All the people above that wonder why I want blind compliance.... because it will be audited, and exceptions will end up on the report, and our clients will read about it. Only me and a small group of others actually have to go through the pain of documenting all these controls.

    As with everything that is too complicated for one person, the audit view is much simplified.

    By following the check-list, without understanding it, you'll persuade your clients that you are doing a good job of it (irrespective of whether you actually are) and will probably not do a totally bad job of it. And that goes for the auditors too: by following the check-list, they produce an audit that looks professional and trustworthy while spending relatively little time on it.

    And the rules accumulate over time, until even the auditors have long forgotten why some of them were added. They may be obsolete, over-generalized, or flawed in many other ways.

    The password complexity rules are a good example. We've long known that a longer memorable password is better than a letter salad, and that password rotation policies are counter-productive because they just force more people to write their password on a post-it on the monitor. (We now also know that keeping passwords written down in a wallet is reasonable in many cases.) Yet many companies still hold on to those rules.

    So if you want the audit, you blindly stick to the rules that will get you the audit. But actual security requires understanding, and people who give enough fucks, both of which are always in short supply, so you end up taking the rules as at least a baseline.



  • @Bulb said in A critical look at Marvel vs. Capcom....:

    But actual security requires understanding

    Yes. But the problematic attitude is that people don't comply with the rules they selectively disagree with. Any sensible security team will be open to answering your questions and helping you understand the details, as time allows.

    However, implementers aren't in a position where they can say "Don't worry, I'm doing it better. I don't produce any audit trail and I won't help assure your auditors and/or client of anything, you'll simply have to trust that it's all good". There is literally no way to tell, from the outside, the difference between an incompetent cowboy and a rogue expert.

    You mention password expiration.... the guidance on that has changed. Lazy organizations that wrote their rules five years ago and have turned off their brains are still requiring frequent password rotation. NIST currently recommends checking candidate passwords against a blacklist and imposing a minimum length, but nothing else. No max age, no complexity requirements. They actually say this in NIST 800-63:

    As noted above, composition rules are commonly used in an attempt to increase the difficulty of guessing user-chosen passwords. Research has shown, however, that users respond in very predictable ways to the requirements imposed by composition rules. For example, a user that might have chosen “password” as their password would be relatively likely to choose “Password1” if required to include an uppercase letter and a number, or “Password1!” if a symbol is also required.
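    That 800-63 guidance reduces to a very small check. A sketch (the function name and blacklist contents are made up; a real deployment would use a corpus of breached passwords):

```python
def check_password(candidate: str, blacklist: set[str], min_length: int = 8) -> bool:
    """NIST 800-63B style check: minimum length plus a blacklist of
    known-bad passwords. No composition rules, no expiry."""
    if len(candidate) < min_length:
        return False
    if candidate.lower() in blacklist:
        return False
    return True

# Toy blacklist; NIST suggests values from breach corpuses, dictionary
# words, and context-specific strings like the name of the service.
COMMON = {"password", "password1", "password1!", "qwerty123"}
```

    Note that `check_password("Password1!", COMMON)` is rejected here even though that string satisfies typical composition rules, while a long memorable passphrase passes.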



  • @Jaime said in A critical look at Marvel vs. Capcom....:

    However, implementers aren't in a position where they can say "Don't worry, I'm doing it better. I don't produce any audit trail and I won't help assure your auditors and/or client of anything, you'll simply have to trust that it's all good". There is literally no way to tell, from the outside, the difference between an incompetent cowboy and a rogue expert.

    Of course not. Having an audit trail of how the system was secured is required. You won't get decent security without doing a security review. But it's much better to understand the actual risks, and describe how they are controlled, than just following rules to avoid this or that, because then you risk that your alternate solution will have exactly the same problem.

    @Jaime said in A critical look at Marvel vs. Capcom....:

    You mention password expiration.... the guidance on that has changed. Lazy organizations that wrote their rules five years ago and have turned off their brains are still requiring frequent password rotation.

    I know it did. Recently. Security experts had been saying it for much longer; it takes a long time to update standards like this. Understanding, and cross-checking other sources, helps actual security. Of course, if you need the standard, you are stuck with it.


  • Discourse touched me in a no-no place

    @Bulb said in A critical look at Marvel vs. Capcom....:

    you risk that your alternate solution will have exactly the same problem

    Or an even worse one. There's no difficulty that can't be made worse by following rules stupidly and blindly.



  • @Bulb said in A critical look at Marvel vs. Capcom....:

    But it's much better to understand the actual risks, and describe how they are controlled, than just following rules to avoid this or that, because then you risk that your alternate solution will have exactly the same problem.

    You don't seem to understand that IT contains so many drooling idiots that have made the same simple mistakes over and over again that the only sensible path is to give everyone simple rules (because the idiots don't know they're idiots) and let the handful of competent people make their individual cases... but until they do, they're lumped in with the idiots.

    For every ten people I hear asking to have an exception to a rule, nine of them are simply trying to get permission to remake a common mistake because they are not willing to learn how to do it right. I'm quite fine with the idea that I make life harder for people like you because it's the weakest link that causes the chain to break, not the strongest. Rules that eliminate the stupid provide more value than guidelines that let the smart innovate. If we ever manage to move the baseline up, this will change.



  • @Jaime said in A critical look at Marvel vs. Capcom....:

    @Bulb said in A critical look at Marvel vs. Capcom....:

    But it's much better to understand the actual risks, and describe how they are controlled, than just following rules to avoid this or that, because then you risk that your alternate solution will have exactly the same problem.

    You don't seem to understand that IT contains so many drooling idiots that have made the same simple mistakes over and over again that the only sensible path is to give everyone simple rules (because the idiots don't know they're idiots) and let the handful of competent people make their individual cases... but until they do, they're lumped in with the idiots.

    For every ten people I hear asking to have an exception to a rule, nine of them are simply trying to get permission to remake a common mistake because they are not willing to learn how to do it right. I'm quite fine with the idea that I make life harder for people like you because it's the weakest link that causes the chain to break, not the strongest. Rules that eliminate the stupid provide more value than guidelines that let the smart innovate. If we ever manage to move the baseline up, this will change.

    And here you go saying that the wrong people are being hired. I think that IT would actually start to move faster by culling the idiots. We'd get higher quality faster if the competent people were not constantly stuck fixing all the shit the army of idiots excretes.
    Rules also have a long tradition of not preventing stupid when confronted with a sufficiently motivated idiot.



  • @Carnage said in A critical look at Marvel vs. Capcom....:

    culling the idiots

    2 : to reduce or control the size of (something, such as a herd) by removal (as by hunting or slaughter) of especially weak or sick individuals

    also : to hunt or kill (individuals) for culling

    👍

