Connect to a central db server



  • Hello. I am teaching myself more .NET; I really love the technology.



    I've created a simple desktop application. Basically, it's a database frontend for an inventory management system.



    The database is simply an Access file (.mdb) which is in the same directory as the executable. In other words, my OleDbConnection objects look for the database (.mdb file) in the same directory.



    You see, this application will be independently run on different PCs
    (each PC is in a different branch of our shop). I want to make all the
    different PCs share the same database. Of course, the PCs will be connected (via LAN) to a central database server (where my .mdb file will be stored).



    In essence, all I want to do is make my application look for the .mdb
    file on the database server instead of the local disk. However, there are
    some issues which I don't know how to address:


    1. I guess the simplest way to achieve this is to create a shared
      network folder. But I don't think this solution is elegant... because it
      seems I have to hardcode the network path into the program
      (specifically, I have to hardcode the ConnectionString of the
      OleDbConnection object)


    2. How do I keep the data synchronized? (i.e., race conditions)

      The naive way to do this would be simply allow the database to throw an
      exception, and simply make the user "try again". But I want a more
      robust solution.

      In university, I learnt how to tackle race conditions within the same application
      using semaphores, monitors, etc. But I have no idea how to do it across a
      network, since I am no longer dealing with threads.



      I would sincerely appreciate if someone could guide me in the right
      direction. I am sure there are well-established methodologies for
      achieving this... a nudge in the right direction will help. A link, a
      book, anything.


  • The problem is that you're using Access. It's such a simple database that trying to use it the way you are is only going to cause problems.

    The "established" way of doing multiple connection databases is to use a database server that handles conflicts by wrapping updates together into TRANSACTIONS. By using tcp/ip connections over the network it also lets many client applications connect to the same tables at once.

    You can download free versions of the Microsoft SQL Server (express?), the IBM DB2 server, Oracle, PostgreSQL and MySQL. Some of those (especially the first three) will have restrictions to the size of the databases and the number of concurrent users.

    You use an ODBC driver to connect, just like with Access, but instead of opening a file the driver creates a socket. The server then ensures data consistency.

    If you really do want to use Access with multiple users then you'll have to write a server yourself. Only one PC should have the database, and then it should take requests from the others. You'd then have to take care of data consistency yourself. This is really a bad, bad idea.

    In general, file systems offer file locks that will grant one process exclusive read-/write access. How well that works for network shares is another question. I wouldn't rely on it for this.



  • Interesting... interesting.

    Ok, I have a few questions.

    1) I "link" (not sure if that's the right word) the tables in Access into a DataSet. Basically, each table in my database is represented by a DataAdapter, and then I include all these DataAdapters into a nice cozy DataSet.
    So my question is, if I do this whole MS-SQL server network thing you are talking about... will I still be able to use DataSets? Or will I have to go back to raw SQL queries? (Lately, I've really learnt to appreciate the elegance of DataSets.)

    2)
    Ok, so what you are saying is that MS-SQL server will take care of concurrency issues. But... well, let me present this scenario. Please tell me if I understand this correctly.

    Let's say there is a PRODUCTS table, with columns ProductName and StockAvailable.
    The PRODUCTS table has a row: ProductName=Soap, StockAvailable=10

    * Salesman1 queries the SQL server for the PRODUCTS table.
    * Salesman1 is now using my program which visually presents the PRODUCTS table with ListBoxes, TextFields, n all that crap (frontend stuff)
    * (30 seconds elapse)
    * Salesman2 also queries the SQL server for the PRODUCTS table.
    * Salesman1 sells 3 bars of Soap. My program sends this info to the server. The server updates the PRODUCTS table. Now, StockAvailable=7.
    * Salesman2 was also using my program... and my program shows that there are 10 soap bars available because he had queried the database BEFORE Salesman1 updated it. He then ends up selling all 10 bars.
    * Now StockAvailable = -3
    wtf? 😞

    What am I missing?



    1) Since the MS-SQL server shows up as 'just another' ODBC (or in the case of Java, a JDBC) driver, you can use anything that's based on that. I'm pretty sure that the DataSet class wouldn't be there if it wasn't for MS-SQL, as nobody would bother writing that just for Access 🙂


      2) This is more or less correct, and it's up to you to make sure that this doesn't happen. There are some tools, however, that will make your job easier.

      You can put a constraint on the StockAvailable column, and say that it must not be less than zero. Assuming the database checks this properly (MySQL, for example, doesn't seem to care about that at all) you will get an SQL error and can then inform the user with an appropriate error message.
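      A quick sketch of what that looks like, using SQLite from Python here purely because it's easy to run (SQL Server's syntax differs slightly, and the table and product names are made up for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE products (
        name TEXT PRIMARY KEY,
        available INTEGER NOT NULL CHECK (available >= 0)
    )
""")
conn.execute("INSERT INTO products VALUES ('Soap', 10)")

# Selling 3 bars is fine: 10 - 3 = 7.
conn.execute("UPDATE products SET available = available - 3 WHERE name = 'Soap'")

# Overselling violates the CHECK constraint; the database refuses the update,
# and the frontend can catch the error and show a friendly message instead.
try:
    conn.execute("UPDATE products SET available = available - 10 WHERE name = 'Soap'")
except sqlite3.IntegrityError as e:
    print("sale rejected:", e)
```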

      This is also where transactions come in. When you have to perform multiple sql queries, and any one of them may fail, and they are dependent on each other, you can wrap them in a transaction:

      (Forgive me for any syntax errors...)
      BEGIN TRANSACTION;
      UPDATE products SET available = available - 5 WHERE name = 'product1';
      UPDATE products SET available = available - 3 WHERE name = 'product2';
      UPDATE customers SET invoice = invoice + 500 WHERE customerid = 123;
      COMMIT;

      Now when there isn't enough of product1 or product2 and the stock drops below 0, you'd get an SQL exception and fail on line 1 or 2. Also, if you have a constraint on the invoice column and it got too high (say, higher than the customer's credit rating) the transaction might fail at that point.
      In your program you'd see this as some kind of SQL Exception being thrown. Your error handler can then roll back the database to the point where the transaction began, and revert any change that you've made.
      The database should also lock the rows that you're working with, so while your transaction isn't finished, nobody 'should' be able to come in and mess with the data that you're working with.
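      A runnable sketch of that all-or-nothing behaviour, again using Python/sqlite3 just for illustration (stock numbers invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE products (
        name TEXT PRIMARY KEY,
        available INTEGER NOT NULL CHECK (available >= 0)
    );
    INSERT INTO products VALUES ('product1', 10), ('product2', 2);
""")

try:
    with conn:  # one transaction: commits on success, rolls back on error
        conn.execute("UPDATE products SET available = available - 5 WHERE name = 'product1'")
        conn.execute("UPDATE products SET available = available - 3 WHERE name = 'product2'")
except sqlite3.IntegrityError:
    print("not enough product2; whole transaction rolled back")

# product1 is back to 10: its successful update was undone along with the failure.
```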

      As far as I know, there is no way for a client to be notified when the data you're looking at changes. So the best you can do is refresh the data periodically, and make sure it's still current before you actually work with it.

      If anyone more experienced with DB work has a better solution, I'd love to hear it too 🙂


  • @GizmoC said:


    Ok, so what you are saying is that MS-SQL server will take care of concurrency issues. But... well, let me present this scenario. Please tell me if I understand this correctly.

    Let's say there is a PRODUCTS table, with columns ProductName and StockAvailable.
    The PRODUCTS table has a row: ProductName=Soap, StockAvailable=10

    (snip)

    * Now StockAvailable = -3
    wtf? 😞

    What am I missing?


    This kind of problem is something that every database system worthy of the name can handle.
    But in most cases, Salesman2 will not notice that Salesman1 has already sold some bars of soap until he tries to do the same. The best you can accomplish is that the view on Salesman2's screen is automatically updated the moment Salesman1 confirms his order, but this takes some effort.



  • @Nandurius said:



    As far as I know, there is no way for a client to be notified when the data you're looking at changes. So the best you can do is refresh the data periodically, and make sure it's still current before you actually work with it.

    If anyone more experienced with DB work has a better solution, I'd love to hear it too 🙂


    SQL Server 2005 has a new feature called Notification Services that allows clients to subscribe to data changes.  It looks like you can create a service that automatically notifies the clients when any change is made to the inventory table.  I haven't played around with it myself so I might be wrong about its purpose, but it might be something worth looking into.

    For SQL Server, I typically use the Optimistic Concurrency approach.  I add a timestamp column to each table and I compare the timestamp values from the client with the database.  If they are equal, then no changes have been made since the client retrieved the data and the modifications go through.  If they are unequal, then modifications are denied and the client is notified the data is stale.  I typically work in applications where modifying the data for the same record simultaneously is extremely rare, so it works well.  I wouldn't recommend it for financial transactions though.
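    A minimal sketch of that approach, using a plain integer version column instead of SQL Server's timestamp type, in Python/sqlite3 (all names invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT PRIMARY KEY, available INTEGER, version INTEGER)")
conn.execute("INSERT INTO products VALUES ('Soap', 10, 1)")

# Both clients read the row earlier and remembered version=1.
remembered_version = 1

# The update only succeeds if nobody changed the row in the meantime.
cur = conn.execute(
    "UPDATE products SET available = ?, version = version + 1 "
    "WHERE name = ? AND version = ?",
    (7, "Soap", remembered_version))
print("first writer updated", cur.rowcount, "row(s)")   # 1: success

# A second client still holding the stale version now loses.
cur = conn.execute(
    "UPDATE products SET available = ?, version = version + 1 "
    "WHERE name = ? AND version = ?",
    (0, "Soap", remembered_version))
print("second writer updated", cur.rowcount, "row(s)")  # 0: data was stale
```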

    @GizmoC said:

    Let's say there is a PRODUCTS table, with columns ProductName and StockAvailable.
    The PRODUCTS table has a row: ProductName=Soap, StockAvailable=10

    * Salesman1 queries the SQL server for the PRODUCTS table.
    * Salesman1 is now using my program which visually presents the PRODUCTS table with ListBoxes, TextFields, n all that crap (frontend stuff)
    * (30 seconds elapse)
    * Salesman2 also queries the SQL server for the PRODUCTS table.
    * Salesman1 sells 3 bars of Soap. My program sends this info to the server. The server updates the PRODUCTS table. Now, StockAvailable=7.
    * Salesman2 was also using my program... and my program shows that there are 10 soap bars available because he had queried the database BEFORE Salesman1 updated it. He then ends up selling all 10 bars.
    * Now StockAvailable = -3
    wtf? 😞


    Most of the inventory management systems I've worked with had no constraints on the Available Qty field so it is allowed to go negative.  Since in my mind Qty Available is a calculated field (available = on hand - on order), you should apply constraints to the On Hand and On Order fields instead.  The only time I can think of where you would need to handle a negative Qty Available in a more restrictive manner is in the case of a discontinued item where there is no possibility of backordering.  There is some debate whether that logic should exist in the database or in the Business Logic Layer. 




  • @lpope187 said:


    Most of the inventory management systems I've worked with had no constraints on the Available Qty field so it is allowed to go negative.  Since in my mind Qty Available is a calculated field (available = on hand - on order), you should apply constraints to the On Hand and On Order fields instead.  The only time I can think of where you would need to handle a negative Qty Available in a more restrictive manner is in the case of a discontinued item where there is no possibility of backordering.  There is some debate whether that logic should exist in the database or in the Business Logic Layer. 


    I think that's just semantics, because GizmoC is using a much simplified approach and different terminology than the 'industry.'

    I think you need to think of the system as a 'shopping cart' type of process. The agent / customer takes the items from the inventory and places them into his account/cart. That gives him some time to finalize the deal, and he has some assurance that nobody is going to come in and take the items he's trying to sell (i.e. resell).

    That way you can write the GUI to reflect that, and first have a step to move inventory into the agents holding account/cart. When that bombs, inform the user that the quantity isn't available. But once it goes through, you won't have to worry about the further sale having problems.
    After X hours/days move the unsold items from the agents back to the inventory.     
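    A rough sketch of that cart/holding-account step, in Python/sqlite3 (table names and quantities made up): the reserve step either moves the stock into the hold inside one transaction, or fails cleanly and leaves everything untouched.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE products (
        name TEXT PRIMARY KEY,
        available INTEGER NOT NULL CHECK (available >= 0)
    );
    CREATE TABLE holds (agent TEXT, name TEXT, qty INTEGER);
    INSERT INTO products VALUES ('Soap', 10);
""")

def reserve(agent, name, qty):
    """Move stock into the agent's holding cart; False if not enough left."""
    try:
        with conn:  # deduct and record the hold in one transaction
            conn.execute("UPDATE products SET available = available - ? WHERE name = ?",
                         (qty, name))
            conn.execute("INSERT INTO holds VALUES (?, ?, ?)", (agent, name, qty))
        return True
    except sqlite3.IntegrityError:
        return False

print(reserve("salesman1", "Soap", 3))   # True:  7 left, 3 held
print(reserve("salesman2", "Soap", 10))  # False: only 7 left, nothing changes
```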



  • @lpope187 said:


    Most of the inventory management systems I've worked with had no constraints on the Available Qty field so it is allowed to go negative.  Since in my mind Qty Available is a calculated field (available = on hand - on order), you should apply constraints to the On Hand and On Order fields instead.  The only time I can think of where you would need to handle a negative Qty Available in a more restrictive manner is in the case of a discontinued item where there is no possibility of backordering.  There is some debate whether that logic should exist in the database or in the Business Logic Layer. 


    In my experience (and I have a lot of experience with such systems) the system should not be too strict regarding "quantity available", because the "quantity on hand" can unexpectedly decrease, for example, when a pallet crashes down 10m from a high rack storage area.

    What a system can do in such a case (and what our system does) is that it blocks delivery of such items until the overbooking of the remaining stock has somehow been adjusted; i.e. a salesperson must decide which customer's orders are cancelled. Of course, if all customers are considered equal, the system can automatically cancel the most recent orders to fix that.



  • @Nandurius said:



    I think that's just semantics, because GizmoC is using a much simplified approach and different terminology than the 'industry.'

    (snip)

    That way you can write the GUI to reflect that, and first have a step to move inventory into the agents holding account/cart. When that bombs, inform the user that the quantity isn't available. But once it goes through, you won't have to worry about the further sale having problems.
    After X hours/days move the unsold items from the agents back to the inventory.     


    That was my gut interpretation as well, but without more detail I wasn't quite sure.  I would guess technically it would be more along the lines of soft or hard inventory allocation to orders.

    @ammoQ said:

    In my experience (and I have a lot of experience with such systems) the
    system should not be too strict regarding "quantity available", because
    the "quantity on hand" can unexpectedly decrease, for example, when a
    pallet crashes down 10m from a high rack storage area.

    What a
    system can do in such a case (and what our system does) is that it
    blocks delivery of such items until the overbooking of the remaining
    stock has somehow been adjusted; i.e. a salesperson must decide which
    customer's orders are cancelled. Of course, if all customers are
    considered equal, the system can automatically cancel the most recent
    orders to fix that.


    Another circumstance that I've run into is large orders where the required date is far into the future because it is much larger than your typical in-stock quantity.  If you have 100 of Item A and you get an order for 1,000 but not due for 6 months, would you really want to block small orders for the foreseeable future?  I'm curious as to how that's handled in your system.



  • @lpope187 said:



    Another circumstance that I've run into is large orders where the required date is far into the future because it is much larger than your typical in-stock quantity.  If you have 100 of Item A and you get an order for 1,000 but not due for 6 months, would you really want to block small orders for the forseeable future?  I'm curious as to how that's handled in your system.



    The most obvious solution is, when calculating the available stock, to ignore orders which are far enough in the future that replenishment is possible early enough. Your production plant or supplier needs at most (say) 2 months to deliver, so you don't have to worry about that order for the next 4 months. That said, I know a company that really reserves stock many months, possibly years, in advance; they are producing very special biotech goods and they cannot say for sure whether or not they will be able to produce exactly the same quality ever again.
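    The calculation being described might look something like this (the dates, quantities and lead time are all invented for the example):

```python
from datetime import date, timedelta

on_hand = 100
lead_time = timedelta(days=60)   # supplier can replenish within ~2 months
today = date(2007, 1, 1)

# (quantity, required date) for open orders
orders = [
    (30, date(2007, 1, 20)),    # due inside the lead time: counts against stock
    (1000, date(2007, 7, 1)),   # due in 6 months: replenishable in time, ignore
]

# Only orders due before replenishment could arrive tie up current stock.
committed = sum(qty for qty, due in orders if due <= today + lead_time)
available = on_hand - committed
print(available)  # 70
```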



  • You need to add a timestamp column for optimistic concurrency, and read this: http://msdn.microsoft.com/library/default.asp?url=/library/en-us/sqlxml3/htm/updategram_7tir.asp

    Or another example in VB6:

    http://support.microsoft.com/kb/170380/EN-US/

     

     



  • @ammoQ said:


    The most obvious solution is, when calculating the available stock, to ignore orders which are far enough in the future that replenishment is possible early enough. Your production plant or supplier needs at most (say) 2 months to deliver, so you don't have to worry about that order for the next 4 months. That said, I know a company that really reserves stock many months, possibly years, in advance; they are producing very special biotech goods and they cannot say for sure whether or not they will be able to produce exactly the same quality ever again.


    Ah, kind of like timeboxing your MRP runs based on lead times.  Anything outside of the typical lead time for your vendor is disregarded in the calculations.  Must be nice to have a system and a process that works well in that instance.  I haven't yet begun to work on MRP at my current employer, but where I worked before, the manufacturing process was between made-to-order and repetitive.  That, coupled with a terrible ERP system and implementation, made MRP almost worthless.  If we actually trusted the MRP runs and an order was placed today and due in two weeks, the results were production plans that should have started three weeks ago.  In reality, we could fulfil the order in one shift's production.



  • @lpope187 said:


    Ah, kind of like timeboxing your MRP runs based on lead times.  Anything outside of the typical lead time for your vendor is disregarded in the calculations.  Must be nice to have a system and a process that works well in that instance.

    Maybe someone is going to hit me for this attitude, but IMO it's better to have a system with clear limits and clear rules and a possibility for manual override than an ultra-clever knows-it-all system that lives in its own fantasy world and has to be bypassed to get some work done.



  • Thank you everyone, your replies have been helpful. To summarise, here's what I've learnt.

    1) Move away from Access.
    Ok, I always knew Access was not very advanced. Is there any program out there that can convert my Access database to another database? I mean, not just transfer my table values, but also keep my constraints intact.

    2) The simplest way to implement "Optimistic Concurrency" is by adding a TIMESTAMP column to all my tables.



  • @GizmoC said:

    Thank you everyone, your replies have been helpful. To summarise, here's what I've learnt.

    1) Move away from Access.
    Ok, I always knew Access was not very advanced. Is there any program out there that can convert my Access database to another database? I mean, not just transfer my table values, but also keep my constraints intact.

    2) The simplest way to implement "Optimistic Concurrency" is by adding a TIMESTAMP column to all my tables.


    If you want to convert to SQL Server use the database upsizing wizard.  It will push both the schema and the data to the selected destination server.  I think it is under the Tools menu.  If you want to push it to another database, perhaps someone else has an easy method - I can't think of any at the moment.




  • @GizmoC said:


    He then ends up selling all 10 bars.

    What am I missing?




    There's your problem right there.



    When Salesman2 hits "buy", your program should

        a) check to make sure there are still 10 bars available

                if not, inform the buyer, apologize, and give them a chance to change their order based on the new inventory amount.

        b) remove the desired amount from the inventory

               this will prevent another purchase from claiming the inventory

        c) process payment info. If payment fails, return order amount to inventory.



    a) and b) should be done atomically if possible.
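    Steps a) and b) can in fact be collapsed into a single atomic UPDATE, sketched here in Python/sqlite3 (table and product names invented): the availability check and the decrement happen in one statement, so two buyers can't both claim the last items.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT PRIMARY KEY, available INTEGER)")
conn.execute("INSERT INTO products VALUES ('Soap', 10)")

def buy(name, qty):
    # The WHERE clause is the "check", the SET clause is the "remove";
    # the database applies both atomically or not at all.
    cur = conn.execute(
        "UPDATE products SET available = available - ? "
        "WHERE name = ? AND available >= ?", (qty, name, qty))
    return cur.rowcount == 1

print(buy("Soap", 3))   # True:  7 left
print(buy("Soap", 10))  # False: only 7 left, nothing deducted
```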



  • @lpope187 said:

    @GizmoC said:
    Thank you everyone, your replies have been helpful. To summarise, heres what I've learnt.

    1) Move away from Access.
    Ok, I always knew Access was not very advanced. Is there any program out there that can convert my Access database to another database? I mean, not just transfer my table values, but also keep my constraints intact.

    2) The simplest way to implement "Optimistic Concurrency" is by adding a TIMESTAMP column to all my tables.


    If you want to convert to SQL Server use the database upsizing wizard.  It will push both the schema and the data to the selected destination server.  I think it is under the Tools menu.  If you want to push it to another database, perhaps someone else has an easy method - I can't think of any at the moment.


    Since you are using .Net, use the tools available to you.  For example, all of the built-in data tools in Visual Studio automatically implement "batch optimistic" concurrency.  This means that data conflicts like the ones you described earlier can occur, but when an attempt is made to save the last change, the data objects will throw a concurrency violation exception.  It is best to learn how to deal with this situation rather than learning how to avoid it.  95% of the time, when people are frustrated by optimistic concurrency, it is because they haven't thought out their process well enough.

    Let me give an example -- air travel bookings.  It is very important to not sell the same seat to two people.  A lot of people look at it as a concurrency problem and try to lock a seat row in a table somewhere while a user is still finalizing their order on a web booking system.  Really, the seat should be reserved and the reservation converted to a booking or removed at checkout time (or when the session is abandoned).  By adding this extra step, explicit locking is no longer necessary.  Now we only have to deal with the fight over the booking between two prospective travellers.  This is easy.  Imagine a case where there is a very near tie between two users.  Both will load the row as "unbooked" when trying to book the seat.  The winner will successfully change the value to "booked".  Both the winner and loser will issue a statement that follows this pseudocode:

    UPDATE seats
    SET status = 'booked'
    WHERE seatid = '123456789' AND (the row hasn't been modified by anyone else)

    The check for modification could use a DB specific technology like SQL Server's timestamp datatype, or simply by checking if every column has the same value as when it was read into the application.  If the statement updates zero rows, you are the loser.  If it updates one row, you are the winner.  For the loser, simply refresh the data and tell the user to choose another seat.

    Visual Studio builds the above query automatically as long as you don't try to roll your own solution.  If you are rolling your own, you'd better make sure that your solution is better than the free one.

    BTW, attempting to lock data for more than an instant is a horrible idea in web applications and a bad one in desktop applications, unless you have a really really good reason to do so and no easy alternative.

    Also, don't worry about moving from Access in your situation.  It will work as well as a client/server system as long as the system load is light and there are few users.  It isn't as easy to maintain as a client/server system and backups are a royal pain without shutting down the application, but it generally won't cause you a great deal of grief.  Access has the benefit of being very simple to deploy.  As for your original question of "How do I connect to the database if it resides in the same directory as the application, assuming that will NOT be a consistent path?"  Here is your answer --

    VB:
    cn.ConnectionString = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" & _
              System.IO.Path.Combine(Application.StartupPath, "DB.mdb")

    C#:
    cn.ConnectionString = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" +
              System.IO.Path.Combine(Application.StartupPath, "DB.mdb");

    You should, however, refrain from calling the server that has the *.mdb file a "database server".  That will get you kicked out of an interview for any serious job.

    To answer question #1 above, Access has an "Upsizing Wizard" that will move the entire database including constraints to SQL Server.  It will do some wacky things to constraints and queries in order to preserve the Access behaviors that differ from SQL Server, so consider yourself warned.  My favorite is the "SELECT TOP 100%" queries that it makes to preserve the Access concept that queries can have a sort order defined, while SQL Server views cannot unless they have a "TOP" clause.



  • @GizmoC said:

    Hello. I am teaching myself more .NET, really love the technology.

    I've created a simple desktop application. Basically, it's a database frontend for an inventory management system.

    The database is simply an Access file (.mdb) which is in the same directory as the executable. In other words, my OleDbConnection objects look for the database (.mdb file) in the same directory.

    You see, this application will be independently run on different PCs (each PC is in a different branch of our shop). I want to make all the different PCs share the same database. Of course, the PCs will be connected (via LAN) to a central database server (where my .mdb file will be stored).

    In essence, all I want to do is make my application look for the .mdb file on the database server instead of the local disk. However, there are some issues which I don't know how to address:

    1) I guess the simplest way to achieve this is to create a shared network folder. But I don't think this solution is elegant... because it seems I have to hardcode the network path into the program (specifically, I have to hardcode the ConnectionString of the OleDbConnection object)

    2) How do I keep the data synchronized? (i.e., race conditions)
    The naive way to do this would be simply allow the database to throw an exception, and simply make the user "try again". But I want a more robust solution.
    In university, I learnt how to tackle race conditions within the same application using semaphores, monitors, etc. But I have no idea how to do it across a network, since I am no longer dealing with threads.

    I would sincerely appreciate if someone could guide me in the right direction. I am sure there are well-established methodologies for achieving this... a nudge in the right direction will help. A link, a book, anything.

    I have to mention this.....

    It frightens me that they taught you Semaphores and Monitors in your University classes, but never taught you database consistency models.  It's not your fault at all.  Let me just say that the database engine will take care of the race conditions for you, but you may need to give it help by choosing a locking strategy such as "optimistic" or "pessimistic".  It is also sometimes referred to as a "transaction isolation level".  Any database worth a nickel will either commit your change properly or return an error message and make no modification to the data, so there is no need for you to worry about the details.



  • @jsmith said:

    Both the winner and loser will issue a statement that follows this pseudocode:

    UPDATE seats
    SET status = 'booked'
    WHERE seatid = '123456789' AND (the row hasn't been modified by anyone else)

    The check for modification could use a DB specific technology like SQL Server's timestamp datatype, or simply by checking if every column has the same value as when it was read into the application.  If the statement updates zero rows, you are the loser.  If it updates one row, you are the winner.  For the loser, simply refresh the data and tell the user to choose another seat.



    IMO, you do not really need - or even want - to use a timestamp or something similar;

    UPDATE seats
    SET status = 'booked'
    WHERE seatid = '123456789' AND status='available';

    [Edit: you might want to check a few other important columns too, e.g. the price of the seat]

    is the way to go. I don't care if there have been any modifications in the meantime, provided that the current status of the seat makes it eligible for the booking. Let's say another customer booked the seat, but immediately cancelled the booking, while our customer was still considering. Does that mean we should tell our customer "sorry, this seat is no longer available, please choose another seat" but offer the same seat in the list of available seats again? I don't think so.


  • @ammoQ said:

    @jsmith said:

    Both the winner and loser will issue a statement that follows this pseudocode:

    UPDATE seats
    SET status = 'booked'
    WHERE seatid = '123456789' AND (the row hasn't been modified by anyone else)

    The check for modification could use a DB specific technology like SQL Server's timestamp datatype, or simply by checking if every column has the same value as when it was read into the application.  If the statement updates zero rows, you are the loser.  If it updates one row, you are the winner.  For the loser, simply refresh the data and tell the user to choose another seat.



    IMO, you do not really need - or even want - to use a timestamp or something similar;

    UPDATE seats
    SET status = 'booked'
    WHERE seatid = '123456789' AND status='available';

    [Edit: you might want to check a few other important columns too, e.g. the price of the seat]

    is the way to go. I don't care if there have been any modifications in the meantime, provided that the current status of the seat makes it eligible for the booking. Let's say another customer booked the seat, but immediately cancelled the booking, while our customer was still considering. Does that mean we should tell our customer "sorry, this seat is no longer available, please choose another seat" but offer the same seat in the list of available seats again? I don't think so.

    If the people trying to get seats had unique IDs, prospective passengers #42 and #128 could issue these updates:

    update seat
    set passengerID=42
    where seatID=32
    and passengerID is null
    

    update seat
    set passengerID=128
    where seatID=32
    and passengerID is null

     

    Now each passenger can check the seat, or even the rowcount, to see if they got the booking.
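    A runnable sketch of that rowcount check, in Python/sqlite3 (same made-up seat and passenger IDs as above): whichever UPDATE runs first wins, and the loser sees zero rows affected.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE seat (seatID INTEGER PRIMARY KEY, passengerID INTEGER)")
conn.execute("INSERT INTO seat VALUES (32, NULL)")

def claim(seat_id, passenger_id):
    # Only succeeds while the seat is still unclaimed (passengerID IS NULL).
    cur = conn.execute(
        "UPDATE seat SET passengerID = ? "
        "WHERE seatID = ? AND passengerID IS NULL",
        (passenger_id, seat_id))
    return cur.rowcount  # 1 = got the seat, 0 = someone beat you to it

print(claim(32, 42))   # 1: passenger 42 wins
print(claim(32, 128))  # 0: passenger 128 checks the rowcount and loses
```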



  • The WTF is using .Net on top of Access.  Access is a very nice prototyping and front-end tool.  The .Net examples only show you how to use Access because they know that not everyone has the money to pay for an SQL Server licence.  (Or they use Oracle.)  If you build your queries and forms in Access, it will take care of this kind of stuff for you - updating the second salesman's screen as soon as the database value changes.  It takes care of all the file-locking semaphores itself.  If you need to upgrade from data-and-app all in one file, you split the data into a second .mdb and have a pure front-end .mdb file.  When (not if) Access crashes the front end, you haven't lost any data.  The next upgrade from that is to push the data file into SQL Server, using the wizard suggested above.  The next upgrade - where you usually switch to .Net - is to rewrite from scratch.

    If this is a serious application that is too important to have crashing on a daily basis then your prototype should be used as the specification for the actual final production  version.  Do not let the client/user ever tell you "this one is good enough, why don't we keep it?"  You must always refer to the Access version as a prototype or proof-of-concept.  It's sort of like building a house with no roof or front door.  The clients can go into the house, walk between rooms, turn the lights on and believe that they're "testing" it.  They never look up to see that there's no roof.  They don't notice the lack of a front door because there is no doorway.  Customers won't pay a builder who builds a house without a roof and they shouldn't pay a developer who delivers an application with no documentation, no backup strategy or whatever. 

    If you're building a database for your mother to keep her recipes and she also keeps the paper copies, then Access is fine.  My favourite "Access Moment" was when I was explaining to a user why they couldn't just use the prototype and the *very next thing they clicked on* caused a crash.



  • So, Jet (the default Access database engine) could never be used for anything in production reliably?  Someone better tell that to Microsoft.  They've been using it successfully with their DHCP and WINS products for 15 years.  I don't see any widespread panic, just a bit of extra maintenance.  Microsoft Exchange also used a modified version of Jet successfully for years.

    The Microsoft examples could just as easily be SQL based without extra expense.  MSDE (or the new SQL 2005 Express Edition) is totally free and 100% code compatible with SQL Server.  They chose Jet because it is simple and popular.

    I agree all Access apps should have the front end/db split.  It is almost impossible to roll out updates and fixes otherwise.  Backups are a huge issue, but there are solutions to back up a Jet database out there.  Personally, I don't see anything wrong with a small internal app being an Access front end (as an ADP file, not an MDB) connected to MSDE or SQL Server on the back end.



  • Yes, Jet is fairly reliable in my experience.  Access is the problem.  That's why splitting it into two databases works - the front end is the one that usually crashes and corrupts itself while the data stays safe.

    As always, it's a tradeoff between initial investment and ongoing maintenance.  If you have considered the maintenance workload during the design process, then you will probably have a successful application.  Otherwise you will probably end up with a WTF like Windows ME, unable to shake off its DOS roots.



  • @jsmith said:

    So, Jet (the default Access database engine) could never be used for anything in production reliably?  Someone better tell that to Microsoft.  They've been using it successfully with their DHCP and WINS products for 15 years.  I don't see any widespread panic, just a bit of extra maintenance.  Microsoft Exchange also used a modified version of Jet successfully for years.

    The Microsoft examples could just as easily be SQL based without extra expense.  MSDE (or the new SQL 2005 Express Edition) is totally free and 100% code compatible with SQL Server.  They chose Jet because it is simple and popular.

    I agree all Access apps should have the front end/db split.  It is almost impossible to roll out updates and fixes otherwise.  Backups are a huge issue, but there are solutions to back up a Jet database out there.  Personally, I don't see anything wrong with a small internal app being an Access front end (as an ADP file, not an MDB) connected to MSDE or SQL Server on the back end.

     

    One small gotcha with the jet provider/Access is that MS limits it to approximately 10 concurrent connections.  I say approximately because it's changed back and forth between 10 and 15 in various versions of MDAC over the years.  If you have a SQL Server, my recommendation would be to just use the SQL Server and forget about Access entirely...  unless you really enjoy having weirdness start happening when it begins to scale. 

    Jay
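
    Switching an OleDbConnection from Jet to SQL Server is, for the most part, just a matter of swapping the connection string - the rest of the data-access code stays the same. A rough sketch (the server, share, and database names below are made up for illustration):

```text
' Jet: the "server" is just an .mdb file sitting on a network share
Provider=Microsoft.Jet.OLEDB.4.0;Data Source=\\server\share\inventory.mdb;

' SQL Server (or the free MSDE) through the SQL Server OLE DB provider
Provider=SQLOLEDB;Data Source=dbserver;Initial Catalog=Inventory;Integrated Security=SSPI;
```

    Keeping the connection string in a config file rather than hardcoded also addresses the original poster's first question about the network path.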



  • Hmmm..... I just made a simple application and was able to make 200 simultaneous connections to a Jet database; it will support up to 255.  It's been that way at least since Jet 3 (Access 97).  Jet has horrible concurrency, so I wouldn't recommend more than 10 users, but it will certainly work.  Another big problem with Jet is its failure mechanism.  When there is a locking issue, SQL Server will block until it can complete.  Sometimes this leads to a timeout error.  Jet will instantly throw a hissy fit and make you retry.  But that's just an annoyance -- Jet still works just fine.
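    The retry that Jet forces on you can be wrapped once and reused. A minimal Python sketch of the idea; `LockError` is a hypothetical stand-in for whatever locking exception your provider raises (with OleDb you would inspect the provider's error code instead), and the backoff parameters are arbitrary:

```python
import time

class LockError(Exception):
    """Hypothetical stand-in for a provider-specific locking error."""

def with_retry(operation, attempts=5, base_delay=0.01):
    """Call operation(); on a transient LockError, wait briefly and retry,
    doubling the delay each time.  Re-raise once attempts are exhausted."""
    for attempt in range(attempts):
        try:
            return operation()
        except LockError:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

    Routing every write through a wrapper like this turns Jet's "hissy fit" into an automatic pause-and-retry instead of a user-visible error.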

    Jet still has a place in the world.  If you build a little "track some stupid little thing" app in a day and plan to put it on everyone's desktop, Jet is a great way to do it.  Write an install routine that integrates the installation and configuration of MSDE, and compare that to the same process with a Jet database.

