MS Office adds a few extra wrinkles. The COM libraries for Office automation specifically support the concept of launching Word from an app, generating a document, and then detaching and leaving Word running. Because of this, even if you thoroughly destroy all of the COM objects, Word may still hang around. Excel is actually a worse offender than Word, but it can be tricky to get any of the Office apps to close. Usually the culprit is that you used a collection without creating an explicit reference to it; because you used it implicitly and have no variable for it, you never remember to destroy it. If you call .Quit(), the GUI will go away, but the automation server will still hang out in memory.
Posts made by jsmith
-
RE: One of the reasons why you shouldn't use Word automation on web server
-
RE: SQL 2005: How can I create "Insert Into" statements?
[quote user="DWalker59"]
I needed to create "Insert Into" statements for about 10 records of data (6 fields) to send to someone who has the same database I have. I was hoping SQL 2005 would have a way to do this, but I don't see it.
Google led to me to a couple of stored procs to generate the statements, but neither of them actually worked. Creating the statements manually is... tedious.
Is there a generally accepted, easy way to do this? (These are both development systems, so I don't know if we'll ever set up real replication between them or not.)
Thanks.
[/quote]
For a really quick and dirty solution, set a query window to "Output to Text" and do a select * from the table. Then add quotes and commas as necessary and paste an "INSERT (blah,blah,blah,...) VALUES(" in front of each line. You can even make the INSERT line from the header row.
However, I'd recommend that you build a little app in your favorite language to make INSERT statements. You'll use it a hundred times throughout your professional career. Eventually you can add stuff like special character handling, less common data types (varbinary is fun), an INSERT or UPDATE variation, and a bunch of other useful stuff.
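A minimal sketch of such a generator, here in Python (the table and column names are just examples, and real-world extras like varbinary handling are left out):

```python
# Minimal INSERT-statement generator (sketch; table/column names are
# examples). Strings get embedded quotes doubled, None becomes NULL,
# and numbers pass through unquoted.
def make_inserts(table, columns, rows):
    stmts = []
    for row in rows:
        vals = []
        for v in row:
            if v is None:
                vals.append("NULL")
            elif isinstance(v, (int, float)):
                vals.append(str(v))
            else:
                vals.append("'" + str(v).replace("'", "''") + "'")
        stmts.append("INSERT INTO %s (%s) VALUES (%s)" %
                     (table, ", ".join(columns), ", ".join(vals)))
    return stmts
```

From there, special character handling and an UPDATE variant are natural next steps.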
-
RE: MS SQL Query Performance
[quote user="KenW"]
[quote user="dhromed"]
A) avoid subqueries always, and use JOIN instead. This way I save the overhead for the subquery.
[/quote]
The difference in using a subquery and a JOIN is that the subquery will run once for every row in the main table of the SELECT, whereas a JOIN most likely won't. So use a JOIN wherever possible instead.
However, you'll find times where you have to use the subselect and absorb the performance hit.
[/quote]
Run this in Query Analyzer and see how many times it runs the inner query:
SELECT *
FROM Employees
WHERE Salary > (SELECT AVG(Salary) FROM Employees)
I assure you that the subquery will only run once. Of course, add a Salary column to the Employees table in Northwind first.
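The same behavior is easy to reproduce outside SQL Server, for example with Python's built-in sqlite3 (the Employees/Salary schema below mirrors the example; the data is made up):

```python
import sqlite3

# The AVG() subquery is uncorrelated, so the engine evaluates it
# once, not once per outer row. Schema mirrors the example above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Employees (Name TEXT, Salary REAL)")
conn.executemany("INSERT INTO Employees VALUES (?, ?)",
                 [("Ann", 100), ("Bob", 200), ("Cid", 600)])
rows = conn.execute(
    "SELECT Name FROM Employees "
    "WHERE Salary > (SELECT AVG(Salary) FROM Employees)"
).fetchall()
# Average salary is 300, so only Cid qualifies.
```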
-
RE: MS SQL Query Performance
In general, use subqueries where performance isn't the main concern and where they will enhance readability. Subqueries may improve performance when used with EXISTS or NOT EXISTS, but in general they will either have a negative impact or no impact on performance. Usually with small test batches of data there will be no performance difference, but when getting into large data sets where SQL starts to do hash joins, subqueries can cause performance degradation.
-
RE: When designing a website for a religious cult...
[quote user="R.Flowers"][quote user="shadowman"]
But the 11th is 1 day before the 12th, right? And September 12th was 0 days before the 12th.
Perhaps you were interpreting it as 1 day remaining until the day before the start of nuclear war?
[/quote]
Yes, it was a misinterpretation. But consider this:
Let's have the meeting next Friday.
Now if this is Wednesday, will the meeting be in 2 days, or 9 days?
At my last job, the staff met bi-monthly. At my new job, we meet bi-weekly.
At face value, I take bi-weekly and bi-monthly to mean every two weeks (or so). However, by that logic, some of the meetings could take place twice a week, every two weeks, or every two months.
I'm not sure if such idioms are in use outside the US, but they have always caused me (mild) confusion. I'm just saying, when it comes to dates, math, and the English language, I don't feel so bad for misinterpreting something.
[/quote]
Actually bi-monthly means every two months. Semi-monthly means twice a month. You have to figure out whether bi-monthly means every two months or twice a month in your case because the people who told it to you want you to figure out what they meant, not what they said. Correct them someday and marvel at the response.....
You: Ummm.... why do you want to see us at another bi-monthly meeting in two weeks? Bi-monthly means every other month, here's the dictionary reference.
Boss: Bi-monthly can mean twice a month under certain circumstances, it has become common usage. Are you one of those people who refuses to think outside the box? You are now demoted to the mail room.
Many people like to flap their lips and then want you to ignore everything they've said and do exactly what they want you to do. It has become so prevalent that most of us don't even bat an eye when something truly ridiculous is said. The best example is "I could care less." Most people say it improperly, but they assume that you will know what they mean because you recognize the phrase as a magic phrase meaning the utterer doesn't care, rather than actually listening to the words and figuring out the meaning. If you try to correct them, they say "but you know what I meant!!!".
-
RE: .NET string.ToLower() == TooSlower() ?
[quote user="Sunday Ironfoot"][quote user="danielpitts"]
Actually, if you keep an original document, and only search/replace on the original, but display the replaced text, it will work. You would search for the "item" "a|e|i|o|u", then it will highlight a,e,i,o, and u.
[/quote]
Cool, that works up to a point. However, if I search for a smaller string that's a substring of another search term later on, e.g. I search 'data' and 'database' (data is a substring of database), and it's in that order, it will highlight data but not database. I know I'm being picky here, but logically it should highlight the whole word database. I could maybe reorder the words in the array so that larger words are at the top; for instance, if you search database followed by data they are both highlighted fine. Also, what if the user types in characters used by regular expressions | \ + ? * etc.? Again I could probably strip those out (Is # a regex token? What if the user searches 'C#.NET'?).
Thanks for the help here people. I was thinking maybe my method was overly long and convoluted and that there must be a simpler method.
[/quote]
Search for "\s(\witem\w)" and use $1 instead of $& in the replace method.
As for tokens, there are 11 of them: \ [ ^ $ . | ? * + ( ). All of them can be fixed by inserting a \ before them. It would be kinda neat to inform the user that they can search by regular expression. That would add immense power to your search engine without any extra code on your part.
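The same escaping idea in Python, as a sketch (.NET has Regex.Escape for this; Python's re.escape does the equivalent, so user input like 'C#.NET' can be searched literally):

```python
import re

# Escape user-supplied search terms before building an alternation
# pattern, so metacharacters like #, ., |, + are matched literally.
terms = ["C#.NET", "data+base", "what?"]
pattern = "|".join(re.escape(t) for t in terms)
m = re.search(pattern, "I code in C#.NET daily")
```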
-
RE: .NET string.ToLower() == TooSlower() ?
[quote user="Sunday Ironfoot"]
[quote user="jsmith"]
Regex rx = new Regex(item, RegexOptions.IgnoreCase);
str = rx.Replace(text, "<span style='background-color: lightgray'>$&</span>");
[/quote]
That also creates another problem in that the text block that gets returned has its upper case letters converted to lowercase on the words that are searched. For instance, if the sentence is "Ajax performance techniques" and the user searches 'ajax' and 'Performance', it turns the sentence into "ajax Performance techniques".
Also, I assume the Replace() method replaces only a single item at a time, so you'd have to iterate through each item in the array of strings to search for. So if you searched for the word 'style', you'd end up with <span> tags wrapped around your style attribute in the already existing span tags (same goes for background, color, light, gray, etc.).
[/quote]
I tried it and it does preserve the case of the values being replaced. It also does replace all occurrences. I will admit that if it ends up replacing an attribute of an existing tag, it will create an invalid document and have unexpected results. I'm sure someone here with a lot of experience with regular expressions could help eliminate that problem.
I also timed it, and for a small string it took far less than 1ms to do two replacements.
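The case-preserving behavior is easy to check in Python too; this sketch mirrors the same approach (\g<0> in Python's re is the whole match, like $& in .NET):

```python
import re

# Case-insensitive highlight that keeps the original casing, because
# the replacement echoes the matched text rather than the search term.
def highlight(term, text):
    rx = re.compile(re.escape(term), re.IGNORECASE)
    return rx.sub(r"<span style='background-color: lightgray'>\g<0></span>", text)

result = highlight("ajax", "Ajax performance techniques")
# "Ajax" keeps its capital A inside the span.
```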
-
RE: .NET string.ToLower() == TooSlower() ?
[quote user="Sunday Ironfoot"][quote user="danielpitts"]
Ever hear of regex?
I don't know the .NET way, but in Java:
public String highlight(Collection<String> searchItems, String text) {
StringBuilder builder = new StringBuilder();
for (String item: searchItems) {
if (builder.length() != 0) {
builder.append('|');
}
builder.append(item); // You might want to escape item.
}
return text.replaceAll("(" + builder.toString() + ")", "<span style='background-color: lightgray'>$1</span>");
}[/quote]
The problem with the replaceAll() method is that it's case sensitive (same with .NET's Replace() method), so if the user searches 'ajax' it wouldn't highlight 'Ajax' or 'aJAx' etc.
[/quote]
Try:
Regex rx = new Regex(item, RegexOptions.IgnoreCase);
str = rx.Replace(text, "<span style='background-color: lightgray'>$&</span>");
-
RE: Select * is evil, still?
My biggest gripe about SELECT * is that it makes covering indexes impossible. Of all the tools in the bag of a database tuner, one of the most effective is the ability to make a multicolumn index that acts like a "mini table" and contains all the columns relevant to a particular query, but no more. Most other tools rely on tweaking the design or the SQL, or only work if a small number of rows are returned. Covering indexes work without changing the design, without tweaking the SQL, and regardless of how many rows are returned. When idiots use SELECT *, this technique doesn't work.
Even if you really want all the columns -- ask for them. Think about a situation where a column is dropped. Would you want your code to blow up when the query is run or wait until you access the dropped column? At least the first is easy to find and fix.
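The covering-index effect can be seen even with Python's built-in sqlite3 (the table and index names below are made up for the demo): selecting only the indexed columns lets the planner satisfy the query from the index alone, while SELECT * forces it back to the table.

```python
import sqlite3

# Covering-index illustration. IX_Cust covers (CustId, Total), so a
# query touching only those columns can be answered from the index;
# SELECT * cannot, because Notes lives only in the table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Orders (Id INTEGER, CustId INTEGER, Total REAL, Notes TEXT)")
conn.execute("CREATE INDEX IX_Cust ON Orders (CustId, Total)")
covered = conn.execute(
    "EXPLAIN QUERY PLAN SELECT CustId, Total FROM Orders WHERE CustId = 1"
).fetchall()
star = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM Orders WHERE CustId = 1"
).fetchall()
```

The plan text for the first query mentions a covering index; the SELECT * plan does not.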
-
RE: Good "introductory" book on RDB/SQL?
Careful. Learning from doing is normally an excellent idea. However, SQL is a weird animal. Most people build fully functional "pet" systems and still make every mistake in the book. The fact that the system works fine only reinforces the bad techniques.
I would concentrate on reports first. Get familiar with the Northwind sample that comes with all of Microsoft's products and try to get some results from it. After getting a good handle on querying, work on designing a "pet" project. After basic design, then worry about the stuff that 90% of the devs out there get wrong -- database integrity, locking, and optimization.
-
RE: Windows explorer FTP: One, global webhost: Zero.
[quote user="danielpitts"]
In truth, you should be using some sort of revision control system, and ask for secure shell (ssh) access. ssh is encrypted, so no one can sniff your database passwords/etc...
I personally think that there are a few WTF's here.
WTF#1: Certainly the sites shouldn't have died just because of an FTP client. (#1 rule: servers shouldn't rely on client behavior to be stable.)
WTF#2 is the support staff. They're hosting servers, they should be reliable.
WTF#3: Lack of source control. It's generally considered a good idea to have backed-up revision control. Unless you LIKE losing data (or like using Developmenstruction.)
WTF#4: managing your code with a Windows FTP client
WTF#5: Managing your code with FTP at all. FTP is okay (not great) for flat content, but anything that accesses a database that should be secure (handling money/personal information, etc...) must not be transmitted unencrypted, lest some hacker should sniff your password.
WTF#6: PHP. (j/k, I use PHP sometimes myself. for hobbyist sites only!)
[quote user="Graham"]
This is the story of a bug we ran into with our webhost.
I won't name the host, but I will say they are big. Very big. And proudly claim they are "the world's no.1 web host".
While working on a part of the website that wasn't critical, I was using FTP to make changes. Via windows explorer.
I was editing some php files, copying them, testing them. Repeat until they worked :-)
Then all of a sudden, after one copy-replace, the window refreshed, and every version of each file was listed. Say, 5 versions of each, only differing by size/date.
Very odd I thought. So I went to refresh
No reply.
Reconnect...?
Time out.
Check the webpage.
404.
!!?
The server had gone down. Completely. All domains on it (we had 6) were not responding; no doubt the other users on the server box went down too.
6 hours later it came back up again. From backup too, so all changes to the file system we had made that morning were gone. (This probably included the databases too, but we weren't in production yet, so I don't know for sure.)
At first I didn't click that it was their FTP server, until the exact same thing happened a few days later.
Needless to say this was a major problem to us.
We could get huge downtime, and lose data, simply by someone using FTP. No doubt it affected every other server too. We are dealing with money on our site; having the possibility of a day's transaction records wiped would be a problem. We couldn't rely on them.
So we sent their support an email, listing the exact steps to reproduce the problem, and its significant consequences.
The response we got was nothing short of amazing. It was quite long, so I'll paraphrase; basically it went like 'Well of course. You shouldn't be using Windows Explorer, don't you know how insecure Microsoft software is. You should be using [insert linux ftp client]'.
[/quote][/quote]
#1 and #2 are the only relevant WTF's here. It doesn't matter how Graham found out that IE crashed the server, it's the fact that the support team thinks that telling him not to use IE any more is a solution. What are they going to do, send this email to the entire Internet so that the server won't crash?
As for the other items you mentioned, nowhere in the original posting does it say that he does or doesn't use a source code control system. You have to get the files to the production environment somehow, and some hosting providers only support FTP. Same with sending encrypted data. This simply amounts to attacking the attacker, a method generally employed by those who know they are wrong and want to divert attention. I would expect it from the hosting provider (they actually did do it), but not from a poster here. #s 3 through 6 at most deserve a BTW in a response rather than a place in the core WTF list.
-
RE: When's the best time to take the test?
If the Reviewer™ consists of real test questions, then stay away from it. It's specifically against the rules of the certifying body, and a big risk if you happen to get the "new" round of questions instead of the old ones. If the Reviewer™ is a bunch of test-like questions, then have at it. Remember that anyone who may be professionally judging you won't know the actual content of the exams, but will compare your performance against others'. So, if you take 12 months to legitimately complete your 5 exams and your co-workers take 2 months, it may not look good for you. If you have a resource, use it (without stepping beyond your comfort level). Don't concern yourself with the fact that you may be certified yet not comfortable with your own abilities. If you get offered a job that you feel is over your head, you don't have to take it.
In my opinion, the vendors could do a lot to maintain the integrity of the certification process that they don't. The easiest thing to do would be to use a pool of 5000 questions to generate exams. That way, memorizing enough to pass would be harder than learning the material. Some vendors seem to think that if they make the questions reallllllllly hard or reallllllllly long, the quality of candidates will improve. It actually works the other way around. MS is in this situation now with the Windows 2003 MCSE exams. They are very hard for someone starting out in IT and taking the "hard road" of actually learning the stuff. Only the really persistent ones make it. But nearly all of the cheaters pass, because cheating on a hard test is just as easy as cheating on an easy test.
-
RE: Generating Unique Codes
[quote user="UncleMidriff"][quote user="tster"]
well, I know nothing about the system you are using, but I wouldn't use codes like that. I would add a list of students who are allowed to register for the class that only the professor can change.
if you can't do that then why not just do a random number generator?
[/quote]
That's a fine idea, but the instructors need to be able to give out invitation codes without having to wait for all the people they want to give them to to register with the site first.
I thought about using some sort of random number generator, but then I'd get a situation like this:
1. Generate a random number
2. If we've already got that number in the list, start over at Step 1
3. Add the random number to the list
4. Repeat NumberOfCodesNeeded times
where, although it's incredibly unlikely, we could get stuck on Steps 1 and 2 for eternity.
[/quote]
No need to have them register before assigning them to a class. Simply pre-create accounts for all relevant students and make the registration process into an account activation process. Use their school-assigned email for the activation code for the account, to prevent one student from taking another student's account. Then you will have a system based on authentication and authorization instead of shared secrets. This is far simpler to maintain.
-
RE: A computer technician's advice on removing Spyware
[quote user="Bob Janova"]All you should need is a tiny app which gets a process list, and allows you to 'mutli-select ... kill'. Then go rename/delete the EXEs. Hmm, maybe I should write one. The only thing is I don't know if you can forcibly kill a process from another application (other than Task Manager, obviously).
[/quote]
Lots of spyware hooks into the system calls for getting a process list and makes sure it never shows up on the list in the first place. Those are the really fun ones to deal with. I usually boot into the Recovery Console and rename exe and dll files with a recent last-modified date.
-
RE: Help me choose ASP.NET web hosting provider
I've been using them for a few years and they have worked out really well. Only one database in the $15 package, but you can add them for $5/month each.
-
RE: Marshalling structs
In that case, there's probably no Managed way to solve the problem. The Marshal class also requires a size of memory to be allocated. The only reasonable way to access an arbitrary number of arbitrarily sized strings is unsafe code.
You should need a license to write C code that uses strings.
-
RE: Marshalling structs
@luke727 said:
...FileInfo is dynamically allocated by the callee. Is there a way to do this without using unsafe code? I think it is impossible, but I'm not the most knowledgeable when it comes to marshalling.
BTW, this would disturb me greatly if I had to deal with it. If the library allocates the memory for the FileInfos, who frees it? Sure, you know the minimum amount of memory that it allocated, but it probably allocated more than it needs. It is generally bad practice for a library to allocate memory; it will cause memory leaks in your application if the library doesn't deallocate it, or if you don't use the same allocator to deallocate the memory that the library used to allocate it.
You may be reduced to taking the second member of the FileResult struct as an IntPtr and using unsafe code or the Marshal class to pick out the data. Yipee. At least you can do it as a member method of the struct. Make FileInfos private and build an accessor method that returns a C# array of FileInfos.
-
RE: Marshalling structs
First, I'll answer the easy question:
@luke727 said: FileInfo FileInfos[1]; // why???
That tells C that there is an array of FileInfo structures. The upper bound of 1 can be violated since traditional C doesn't check array bounds anyway.
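As a simplified illustration of that "array of size 1" idiom, here is how a count-plus-variable-length-tail buffer can be walked with Python's standard struct module (the real FileInfo holds string pointers; the fixed-size uint pairs below are made up for the demo):

```python
import struct

# Parse a buffer laid out as: uint count, then `count` fixed-size
# records. This mirrors the C trick of declaring a 1-element array
# at the end of a struct and indexing past it.
def parse_result(buf):
    (count,) = struct.unpack_from("<I", buf, 0)
    records = []
    for i in range(count):
        records.append(struct.unpack_from("<II", buf, 4 + 8 * i))
    return records

# count=2, then two made-up (id, status) pairs
buf = struct.pack("<IIIII", 2, 10, 1, 20, 0)
pairs = parse_result(buf)
```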
Second, try declaring the C# structures like this:
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential, Pack=2)]
public struct FileInfo
{
    [MarshalAs(UnmanagedType.LPStr)]
    public string Name;
    [MarshalAs(UnmanagedType.LPStr)]
    public string DisplayName;
    public uint Status;
}

[StructLayout(LayoutKind.Sequential, Pack=2)]
public struct FileResult
{
    public uint FileCount;
    [MarshalAs(UnmanagedType.AsAny)]
    public FileInfo[] FileInfos;
}

You can allocate an instance of these guys and use Marshal.ReadByte to see where it is storing data if it needs some tweaking.
-
RE: Faster: session variables or sql tables ?
What I'm saying is that it is extraordinarily unlikely that you will be able to make something that works better than what PHP has built in. You may be able to make something that's a little faster, but there will most likely be significant drawbacks to the alternative solution.
The reason I didn't answer the question directly was because you asked the wrong question. Rolling your own solutions to common problems in order to increase performance is generally a horrible idea and will eventually land your code on the front page of this site. If you really can do better than PHP's current session handling, the best course of action is to contribute your solution to the PHP project and then use it as a standard feature of PHP.
It's like when someone calls a suicide hotline asking which weapon they should use. The operator isn't going to directly answer the question because any answer is wrong. That's basically why I didn't answer your question.
Try this for more explanation:
-
RE: Script Obfuscation
@Sgt. Zim said:
@Albatross said:
Lock the VBA code down with a password: Tools > Project Properties > Protection (Tab).
Only give the password to "real" programmers.
The way I read this, he needs to give access to "programmers," but they're all of the sort that write code that shows up here ...
<flame-retardant underwear mode="on">
This falls into the "right tool for the job" realm. I'm presuming that since you're using Access '97, there's a logical reason, and any of the "use a better tool" comments are pointless... The job is to drive screws, so try to make the best of that screwdriver; don't reach for a hammer.
Were I in your (hugely uncomfortable) shoes, I'd probably take all of the things that I don't want getting misused, and wrap 'em in a VB6 DLL. The differences between VBA and VB6 aren't huge ... Of course, that might be a WTF in its own right. It's a bit more difficult from the "VBA programmer's" perspective to call a DLL than to use the code modules in the MDB, but it would at least prevent the script-kiddies from deleting random bits of code, and you could easily leave your date check in place.
Or do they need to actually change parts of it? If that's the case, your best bet is to scrub all references to yourself from the code, and hope that it never comes back to haunt you. Of course, you'll still know, but at least no one else will.
<flame-retardant underwear />
Good luck.
You can easily fix the "difficulty in calling a dll" by distributing a wrapper module. However, if he builds a COM dll in VB6, it is very easy to call it from VBA. So easy that building a wrapper module would seem silly.
-
RE: Faster: session variables or sql tables ?
@ItsAllGeekToMe said:
Anyone have any insight as to what might run faster on a webpage? I'm currently using MySQL tables to store shipping, billing, and order information. Depending on what stage of the transaction process you are in, there are any number of lookups or inserts/updates (typically no more than 2 per stage). I'm curious if just storing everything in a php object and storing it in a session variable would be faster/more efficient. This way, I figure, the queries are minimized (1 or 2 TOTAL), but does the php object add too much weight to the webpage for it to make a noticeable difference?
Anyways.....any thoughts would be great.
Thanks.
Stop!!!!
What you are doing is called "premature optimization". If you don't have a performance problem and you aren't doing something that seems to be way too slow, ignore performance. Here is the problem you will find for yourself:
Let's say performance actually matters here and the slight difference between the two implementations will actually make a measurable impact on the performance of your site. So, we pick the faster of the two methods (the session variable). Two months later, the site has even more visitors and you find that there aren't any more simple tweaks to help the server keep up with the demand. So, you buy two more servers and use two as application servers, with a web server and your PHP app on both, and the third as a database server. BTW, this solution isn't overboard -- two servers will likely cost less and have more of a positive effect than throwing the same amount of time and money at tuning the app. It doesn't take long to burn through $10,000 in development money.
Now, your session-based solution (which is faster) has to be tweaked to work with a server farm. It will either make the migration cost more, as you need to tweak the state mechanism to work across multiple servers, or force you into the suboptimal situation of making sure that every user is serviced by one and only one server. The faster solution can result in a lower performance end product.
Had you originally chosen the slightly slower, but more scalable, solution of putting state data in a database, then adding more servers would be seamless and would increase performance very easily and cheaply.
The ironic end result of almost any occurrence of "premature optimization" is that any time you go chasing after the last ounce of performance, you end up hurting scalability, which hurts performance in the long run. That's why anyone with a few years under their belt will cringe any time a question like this comes up. The best answer really is "shut up and follow the best practices, newbie", just like the gruff senior programmer at work will tell you if you work on a good development team.
I wonder if PHP has a solution similar to what ASP.NET has. In ASP.NET, there is a session object that stores data in memory by default, but can store the data in a database by changing one line of the config file. That way you can change the storage mechanism at any time without changing the app.
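The design point is worth spelling out: the app talks to one session interface and a single config value picks the storage backend. A minimal sketch of that shape (all names here are hypothetical, and the database store is a stand-in):

```python
# Swappable session storage behind one interface. The app code never
# names a backend; only the config does.
class MemoryStore(dict):
    """In-memory session store (fast, but tied to one server)."""

class DbStore(dict):
    """Stand-in for a database-backed store; a real one would
    read and write a sessions table instead of a local dict."""

BACKENDS = {"memory": MemoryStore, "database": DbStore}

def open_session(config):
    # One config line decides the mechanism; app code is unchanged.
    return BACKENDS[config["session_backend"]]()

session = open_session({"session_backend": "memory"})
session["user_id"] = 42
```

Switching to the scalable backend is then a config change, not a rewrite.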
-
RE: Hugs error messages
You think those error messages are bad? Try almost any flavor of SQL. Usually the message is stated as "error near XYZ", where XYZ is the last thing you did correctly before the parser got confused. My favorite variation is "Syntax error near ,". There's a comma on almost every line in SQL.
My general rule of thumb is to get the structure right first, then worry about the content. When I open a "(" or "{", I immediately close it and then back up and start typing in the middle. I find this especially necessary when writing Java in a lame editor like Notepad. A good IDE will solve 95% of these problems. I generally pick my current favorite language based on my available IDE choices. I can learn a language quickly, but I'm not going to be able to write a good IDE in a reasonable amount of time.
-
RE: Connect to a central db server
Hmmm..... I just made a simple application and was able to make 200 simultaneous connections to a Jet database; it will support up to 255. It's been that way at least since Jet 3 (Access 97). Jet has horrible concurrency, so I wouldn't recommend more than 10 users, but it will certainly work. Another big problem with Jet is its failure mechanism. When there is a locking issue, SQL Server will block until it can complete; sometimes this leads to a timeout error. Jet will instantly throw a hissy fit and make you retry. But that's just an annoyance -- Jet still works just fine.
Jet still has a place in the world. If you build a little "track some stupid little thing" app in a day and plan to put it on everyone's desktop, Jet is a great way to do it. Write an install routine that integrates the installation and configuration of MSDE, and compare that to the same process with a Jet database.
-
RE: Subnetting WTF.
Actually, that mask is technically usable. However, it would make anyone trying to manage that network go berserk in a few hours. I don't know of a DHCP server that will give out a discontiguous block of addresses, so that would make network management even harder.
BTW, that network has 14 hosts, the addresses are:
146.118.105.136
146.118.105.137
146.118.105.138
146.118.105.139
146.118.105.140
146.118.105.141
146.118.105.142
146.118.105.143
146.118.105.201
146.118.105.202
146.118.105.203
146.118.105.204
146.118.105.205
146.118.105.206
The broadcast address is:
146.118.105.207
Every other criticism you make is valid, and I agree that it is the lamest attempt ever at explaining subnetting. Who edits these books? I like where they say "Table 1-2 indicates how to determine it", yet Table 1-2 contains no clues whatsoever about how to determine the values. Books like this actually make people dumber. I've seen students on the cusp of "getting it" read a book or web page like this and suddenly "lose it".
-
RE: Connect to a central db server
So, Jet (the default Access database engine) could never be used for anything in production reliably? Someone better tell that to Microsoft. They've been using it successfully with their DHCP and WINS products for 15 years. I don't see any widespread panic, just a bit of extra maintenance. Microsoft Exchange also used a modified version of Jet successfully for years.
The Microsoft examples could just as easily be SQL based without extra expense. MSDE (or the new SQL 2005 Express Edition) is totally free and 100% code compatible with SQL Server. They chose Jet because it is simple and popular.
I agree all Access apps should have the front end/db split. It is almost impossible to roll out updates and fixes otherwise. Backups are a huge issue, but there are solutions to back up a Jet database out there. Personally, I don't see anything wrong with a small internal app being an Access front end (as an ADP file, not an MDB) connected to MSDE or SQL Server on the back end.
-
RE: Like Textpad with Folding?
Try Notepad++. I like a full featured IDE for coding, but I use Notepad++ for all my text file needs. It's a sourceforge project.
-
RE: Connect to a central db server
@GizmoC said:
Hello. I am teaching myself more .NET, really love the technology.
I've created a simple desktop application. Basically, it's a database frontend for an inventory management system.
The database is simply an Access file (.mdb) which is in the same directory as the executable. In other words, my oleDbConnection objects look for the database (.mdb file) in the same directory.
You see, this application will be independently run on different PCs (each PC is in a different branch of our shop). I want to make all the different PCs share the same database. Of course, the PCs will be connected (via LAN) to a central database server (where my .mdb file will be stored).
In essence, all I want to do is make my application look for the .mdb file on the database server, instead of local disk. However, there are some issues which I don't know how to address:
1) I guess the simplest way to achieve this is to create a shared network folder. But I don't think this solution is elegant... because it seems I have to hardcode the network path into the program (specifically, I have to hardcode the ConnectionString of the oleDbConnection object)
2) How do I keep the data synchronized? (ie, Race condition)
The naive way to do this would be simply allow the database to throw an exception, and simply make the user "try again". But I want a more robust solution.
In University, I learnt how to tackle race conditions in the same application using Semaphores, Monitors, etc. But I have no idea how to do it via networks.. since I am no longer dealing with Threads anymore.
I would sincerely appreciate if someone could guide me in the right direction. I am sure there are well-established methodologies for achieving this... a nudge in the right direction will help. A link, a book, anything.
I have to mention this.....
It frightens me that they taught you Semaphores and Monitors in your University classes, but never taught you database consistency models. It's not your fault at all. Let me just say that the database engine will take care of the race conditions for you, but you may need to give it help by choosing a locking strategy such as "optimistic" or "pessimistic". It is also sometimes referred to as a "transaction isolation level". Any database worth a nickel will either commit your change properly or return an error message and make no modification to the data, so there is no need for you to worry about the details.
-
RE: Connect to a central db server
@lpope187 said:
@GizmoC said:
Thank you everyone, your replies have been helpful. To summarise, heres what I've learnt.
1) Move away from Access.
Ok, I always knew Access was not very advanced. Is there any program out there that can convert my Access database to another database? I mean, not just transfer my table values, but also keep my constraints intact.
2) The simplest way to implement "Optimistic Concurrency" is by adding a TIMESTAMP column to all my tables.
If you want to convert to SQL Server use the database upsizing wizard. It will push both the schema and the data to the selected destination server. I think it is under the Tools menu. If you want to push it to another database, perhaps someone else has an easy method - I can't think of any at the moment.
Since you are using .Net, use the tools available to you. For example, all of the built in data tools in Visual Studio automatically implement "batch optimistic" concurrency. This means that data conflicts like the ones you described earlier can occur, but when an attempt is made to save the last change, the data objects will throw a concurrency violation exception. It is best to learn how to deal with this situation rather than learning how to avoid it. 95% of the time, when people are frustrated by optimistic concurrency, it is because they haven't thought out their process well enough.
Let me give an example -- air travel bookings. It is very important to not sell the same seat to two people. A lot of people look at it as a concurrency problem and try to lock a seat row in a table somewhere while a user is still finalizing their order on a web booking system. Really, the seat should be reserved and the reservation converted to a booking or removed at checkout time (or when the session is abandoned). By adding this extra step, explicit locking is no longer necessary. Now we only have to deal with the fight over the booking between two prospective travellers. This is easy. Imagine a case where there is a very near tie between two users. Both will load the row as "unbooked" when trying to book the seat. The winner will successfully change the value to "booked". Both the winner and loser will issue a statement that follows this pseudocode:
UPDATE seats
SET status = 'booked'
WHERE seatid = '123456789' AND (the row hasn't been modified by anyone else)
The check for modification could use a DB specific technology like SQL Server's timestamp datatype, or simply check whether every column has the same value as when it was read into the application. If the statement updates zero rows, you are the loser. If it updates one row, you are the winner. For the loser, simply refresh the data and tell the user to choose another seat.
Visual Studio builds the above query automatically as long as you don't try to roll your own solution. If you are rolling your own, you'd better make sure that your solution is better than the free one.
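To make the pseudocode concrete, here's a rough sketch of the same idea in Python with SQLite. The table, column names, and explicit version counter are invented for illustration -- SQL Server folks would typically use a timestamp/rowversion column instead:

```python
import sqlite3

# Hypothetical schema, just for this example.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE seats (seatid TEXT PRIMARY KEY, status TEXT, version INTEGER)")
con.execute("INSERT INTO seats VALUES ('123456789', 'unbooked', 1)")

def try_book(con, seatid, seen_version):
    """Optimistic update: succeed only if nobody changed the row since we read it."""
    cur = con.execute(
        "UPDATE seats SET status = 'booked', version = version + 1 "
        "WHERE seatid = ? AND version = ?",
        (seatid, seen_version))
    return cur.rowcount == 1  # 1 row updated = winner, 0 rows = loser

# Both users read the row at version 1; only the first update sticks.
assert try_book(con, "123456789", 1) is True   # winner
assert try_book(con, "123456789", 1) is False  # loser must refresh and retry
```

The loser never blocks and nothing is ever locked for longer than the UPDATE itself takes, which is the whole point.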
BTW, attempting to lock data for more than an instant is a horrible idea in web applications and a bad one in desktop applications, unless you have a really really good reason to do so and no easy alternative.
Also, don't worry about moving from Access in your situation. It will work as well as a client/server system as long as the system load is light and there are few users. It isn't as easy to maintain as a client/server system and backups are a royal pain without shutting down the application, but it generally won't cause you a great deal of grief. Access has the benefit of being very simple to deploy. As for your original question of "How do I connect to the database if it resides in the same directory as the application, assuming that will NOT be a consistent path?" Here is your answer --
VB:
cn.ConnectionString = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" & _
System.IO.Path.Combine(Application.StartupPath, "DB.mdb")

C#:
cn.ConnectionString = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" +
System.IO.Path.Combine(Application.StartupPath, "DB.mdb");

You should, however, refrain from calling the server that has the *.mdb file a "database server". That will get you kicked out of an interview for any serious job.
To answer question #1 above, Access has an "Upsizing Wizard" that will move the entire database including constraints to SQL Server. It will do some wacky things to constraints and queries in order to preserve the Access behaviors that differ from SQL Server, so consider yourself warned. My favorite is the "SELECT TOP 100%" queries that it makes to preserve the Access concept that queries can have a sort order defined, while SQL Server views cannot unless they have a "TOP" clause.
-
RE: Where do you stand on Licensing? (Flame advisory in effect)
The idea behind the GPL is that it's free only to those who agree to advance the cause of free software. BTW, there is the LGPL that strikes a middle ground. With the LGPL, you cannot enhance and sell the library but you can sell software that depends on the library. I think that's fair.
GPL software is not designed to be free to you, it's designed to be free to everyone. With BSD software, there is much less of an impetus to put enhancements back into the codebase, so it matures more slowly. With GPL software, it's in your best interests to simply give your enhancements to the parent project and have them maintain it. With BSD software, it's in your best interests to keep your enhancements private.
Detractors like to compare GPL software to a virus because it makes anything it touches free. However, many pieces of commercial software have viral licensing too. A lot of libraries charge runtime royalties which is the commercial variation of the same theme.
-
RE: Curious -- who blocks Ads, and Why?
@Alex Papadimoulis said:
Let's consider television. As a consumer, I want to be able to watch TV shows in HD at my convenience. I don't mind paying (with eyes or money, but would prefer the choice) for this experience because I believe it's reasonable for what I'm getting in return.
But TIVO and bit-torrent have made this impossible. Now I have to watch invasive ads and wait until broadcasts are fully monetized before the DVD is released. Wouldn't it be nice if there would be a way to prevent commercial-skipping and DVD-ripping? With DRM, this can and will happen.
...I've been one of the people knocking this thread off topic, so I'll get myself back on...
So, you think that if everyone played nice, then you would get better content. Isn't that a bit selfish? That's like refusing to put salt on your food in hopes that the poor sales will cause the manufacturer to add more salt at the factory. Then, blaming your bad experience on the rest of us for salting our food.
The argument that "you made a deal to watch it this way" has the same flaw. If I buy health food and add chocolate syrup, I might like it. That is entirely against the wishes of the manufacturer, but it's my food. The manufacturer shouldn't be allowed to put a license agreement on the back of the box prohibiting me from making the food non-healthy.
We didn't invent a revenue system that was built on the premise that people wouldn't skip the commercials -- they did. Yes, it is easy to skip ads. That's a fact of life. The solution isn't to disable our remote controls so we have to watch the content as it is delivered to us; the solution is for the market to find a happy place. Artificial government controls are just going to make finding that happy place take longer and be more painful and more expensive.
I'm all for protection of copyrighted material. But I feel it should be done in a traditional American way: don't restrict people from breaking the law; find the law breakers and hold them accountable for their actions. That's why I'm all for watermarks and against DRM. In my world, new cool devices will come to the market regularly and they will live or die at the whim of the market; the average consumer will be king and be catered to. In a DRM world, my choices will be made by lobbyists and the only ones who will really be able to enjoy the content will be those who have no qualms about breaking the law. In a DRM world, manufacturers will have a guarantee that their products will succeed, because better products will be against the law. Because of this lack of feedback, we will have a guarantee of inferior products.
Sure, we (TIVO people) are screwing up your TV shows. But the people who started the American Revolution got a lot of innocent people's houses burnt down. Did that make it a bad thing? The solution is rarely to sit back and take it. (Yes, I just compared TIVO to the American Revolution. I like extreme analogies.)
-
RE: VB WTF
It's changing the temporary variable created internally by VB to hold the result of the expression. Since this variable is inaccessible to the programmer, it looks like the value is being discarded. VB isn't the only language to do this, Transact-SQL in MS SQL Server does the same with OUTPUT parameters on stored procedures.
-
RE: VB WTF
@dhromed said:
...Scrap the distinction between Sub and Function, and all problems go away....
Actually, Subs and Functions suffer from this problem equally. The biggest mistake people make when trying to deal with this problem is to try to break the rules down into one set of rules for Subs and one for Functions. All you do is make twice as many things to remember. There is only one rule and it applies to both:
"If you use the return value, you must use parentheses. If you ignore the return value, then you must not use parentheses."
Just consider a Sub to be like a C function that returns void. It does return something, but that something is nothing.
-
RE: VB WTF
@RiX0R said:
My shot:
In CallMe a, the variable a is passed by reference as you would expect. In CallMe (a), (a) is first evaluated as an expression, because of the parentheses, then the result of that expression is passed by reference... yielding no change in the variable if the value is changed.
Though why it would be legal to pass an expression by reference, I don't know.
This syntactical oddity of VB has annoyed me more than once... you are required to use parentheses in a function call (just like most other languages), but are required to omit them in a sub call (why??). However, because parentheses are also used to group expressions, if the sub has only a single argument you can get away with putting parens in the function call and not be aware that you're misusing the syntax.
CallMe a ' valid
CallMe(a) ' valid
c = CallMe(a, b) ' valid
CallMe(a, b) ' invalid
Bingo. (a) is seen as an expression, so it cannot be successfully passed by reference. I can answer your why question too -- backwards compatibility. In VB6 and earlier, there is only one rule for using parentheses on calls:
Rule: You must use parentheses around arguments if you are using the return value; if you are ignoring the return value (or there is none), then parentheses must be omitted.
This is the rule because reallllllllly old BASIC didn't have functions. Old BASIC also didn't use parentheses for sub calls (unless you used the Call statement). So, when functions were added, it was noticed that a function couldn't be called while embedded in an expression without parentheses or the statement would be ambiguous. Example:
MsgBox MsgBox "Are You Sure", vbYesNo
It is ambiguous which MsgBox gets the vbYesNo parameter.
MsgBox MsgBox ("Are You Sure", vbYesNo)
Is unambiguous and acceptable to VB. Notice how the inner MsgBox must have parentheses, but the outer MsgBox must omit them. By the same rule this:
MsgBox (MsgBox ("Are You Sure", vbYesNo))
is legal. The extra parentheses are useless, but don't cause an error. However, this:
MsgBox (MsgBox ("Are You Sure", vbYesNo), vbOK)
is not legal because the only valid interpretation of the outer parentheses is that they enclose parameters, and that is not legal since the return value is being ignored.
-
VB WTF
I'm usually the one defending VB as at least an OK language and blaming all the bad VB code in the world on the hordes of inexperienced programmers who pick VB as the language with which to torment the rest of the world.
However, VB does have some funkiness, especially in the older versions. My favorite is a VB6 "feature". It turns out that only about 2% of the VB population actually knows when to use parentheses when calling routines in VB6; that's why they changed the rules in VB.Net. Here is a little snippet to illustrate the WTF:
Sub Test()
    Dim a As Integer
    a = 5
    CallMe a
    MsgBox a
    CallMe (a)
    MsgBox a
End Sub

Sub CallMe(ByRef x As Integer)
    x = x + 1
End Sub

If Sub Test is called, it will behave strangely. I'll leave it up to you guys to explain why it will raise two MsgBoxes, both displaying "6". This behavior confused VB6 programmers so much that most simply guessed whether they should use parens or not and fixed the code when it blew up rather than actually understanding how it worked. It used to take me at least 30 minutes to explain this behavior in classes, and a lot of people still didn't get it.
BTW, this behavior isn't dead yet. Until Office switches over to a .Net version of VBA, macro programmers will have to continue to wrestle with it.
-
RE: I know VB is a WTF, but...
@lpope187 said:
I typically don't even bother upgrading VB6 to VB.NET due to the paradigm shift from object based to object oriented (not to mention most VB6ers don't even know the difference).
The last VB6 upgrade I did literally resulted in every line of code having some sort of upgrade warning comment inserted above it. Most of them were due to the fact that every object was either not declared or declared as variant/object. The rest were default property changes such as Me.TextBox = "New Value" was invalid because you need to explicitly set the Text property in .Net.
Another slight WTF is naming the CommonDialog CDO. When I see CDO, I immediately think Collaboration Data Objects. I guess you could tell from context though.
Larry
As you probably know, the blame lies entirely with the programmers, not the language or the upgrade process. They could have at least read this article on how to code in VB6 so that an upgrade would go more smoothly:
I had to link to the Google cache since the article is so old that it has moved to the subscription-only archives. Their #1 suggestion is to stop using default properties (the Me.TextBox = "New Value" syntax). I taught VB extensively since 1995 and I always suggested staying away from default properties. It's not that I'm a genius; every sane developer advocated the same. We also advocated Option Explicit and avoiding Variants. However, the general VB population generally ignored us. Another big one was to avoid the "Dim foo as New bar" shortcut. Most people didn't realize the side effects of the shortcut, and it upgraded into a horrible mess that preserved all of the side effects.
-
RE: IIS6 pulls a WTF
You don't really expect net.exe to give you help on every service it's supposed to start, do you? Should Explorer.exe give you help on everything you could possibly click on? Should IE give you help on every page on the Internet? Sure, the way it didn't give help is a little cryptic, but that's because the tool is 20 years old and that's the same message you get when net.exe encounters many errors. It's not a great message, but those looking for a polished interface should use the "Services" administrative tool.
The message you are looking at is a holdover from the 80s, back when net.exe was the client for all MS LAN Manager networking functions. Microsoft has left it essentially the same so that they didn't break people's batch files, just added some new stuff where appropriate.
-
RE: A look back at Y2K - Share your WTF stories here
I saw customers create issues due to Y2K....
Novell Netware's NDS uses timestamps to help merge replicated data. Well, if you set the date of a production server forward beyond the year 2000, nothing happens. Except, internally, it is making timestamps with the year 2000 on them. When you set the clock back to 199x, the system needs a way to deal with the out-of-order timestamps. Fortunately, Novell thought of this. The system will never issue a timestamp older than any existing timestamp. So, in 1998, it will start putting Y2K timestamps on data. They call this "synthetic time", and the server beeps and gives an error message something like "Synthetic time is being issued on partition: XYZ" every two minutes. Nothing bad happens, and synthetic time runs slower than real time, so eventually the system gets back on real time. But with synthetic time a full two years ahead of real time, it would take four to six years for the beeping to stop.
Fixing this problem isn't very difficult, but it does take a very long time to explain to most people what is wrong with their system, and most people are pretty freaked out by the error message.
-
RE: Curious -- who blocks Ads, and Why?
@tster said:
@jsmith said:
Oh, and as for the top comment, consumers really don't care about picture quality that much. Last year I had a choice. I have an HD-ready TV, so I could get programming in HDTV. However, with standard TV, I can do anything I want with it. If I got HDTV, a TIVO would cost hundreds more, I would need a lot more hard drive space, a new capture card, and a system upgrade in my PVR, I couldn't do multiple simultaneous streams over my existing network. And for several thousand dollars up front, and ten more bucks a month I would get the peace of mind that the new DRM schemes coming out in the next years will break everything I have. Yippie. I'm not selling my freedom for a few more pixels.
so wait. do you seriously think that most people have a setup like you? When you say "consumers really don't care about picture quality that much" are you referencing some big survey, or are you just stating that you don't care about it? judging from your argument I'd say you are talking for a single person and not "customers" in general. I'd wager that in general customers do care about picture quality. If they didn't why would they be replacing their VHS collection with DVDs at such a rapid pace.
DVDs are better than VHS because they are smaller, easier to rewind, and cheaper to produce (and therefore generally cheaper to buy). DVDs really aren't very much better quality than VHS tapes. If a cable network advertised that it had new "480p" HD channels, it would get laughed out of the building by the press. Yet 480p is the maximum DVD quality, and compressed at that. Dark movies often look bad on DVD or digital channels because of the compression.
If consumers care so much about picture quality, then why did BetaMax and LaserDisc die a horrible death? VHS crushed them despite having a lower quality picture and sound.
MP3 is another great example. I've seen people pay even for low bitrate MP3s. How about people that watch ABC shows on their video iPods? Do you think they do it for the quality? TV on cellphones, streaming video on the 'net? Pretty much every successful new technology for media except HDTV has worse quality than was previously available. Even HDTV is having a hard time being successful, the FCC keeps moving the date of the mandatory HDTV broadcast switchover because of slower than expected adoption.
Customers want convenience. Very few technologies have won market acceptance if you couldn't record your own stuff. Most people didn't throw away their VCRs until recordable DVD drives were readily available. You say I'm in a unique situation, yet everyone who lives a "TIVO lifestyle" faces the same choice I do -- spend $1000 (in addition to the HDTV itself) and get less recording time, give up the TIVO while making the switch to HDTV, or stick with SDTV and the old TIVO.
-
RE: Curious -- who blocks Ads, and Why?
@Alex Papadimoulis said:
@ammoQ said:
the "camcorder in a theatre" example shows that consuments don't care that much about quality
I've read that this is becoming less and less the case with surround sound and HD becoming the norm.
@ammoQ said:
All it takes is just one HD-DVD player which is less-than-perfect tamper-proof
I haven't thought about this too much, but from a theoretical standpoint, would it be feasible to sign all hi-fidelity media and have TVs that will only display content in hi-fi if it has a valid signature? Would that require all DVD print shops to have the key? Or, perhaps, just some central organization that signs the content?
The only way to make it so that media cannot be copied is to take away our freedom to watch it the way we like. There would have to be a closed system where only a select few organizations can create devices that consume media. If that weren't the case, then anyone could create a video driver that records the video stream to a drive.
Right now I have a home-brew PVR setup that allows me to record stuff and play it on any device throughout my house. I can change recording schedules from work. Sure, I could buy a system that does this, but I like to build them. In the very near future, my system is going to become useless and the tools to create the same system for the new media will not be available to the general public. Already, putting my DVD library on my system would require breaking the DMCA.
What bothers me is that all this is unnecessary. If the only goal was to stop copyright violation, then watermarking is the way to go. Just watermark all delivered content so that the original source can be identified. Every DVD ever sold could be traced back. No loss of flexibility or freedom, no inconveniences. Let the courts figure out who is guilty and hold them accountable.
The big problem is that copyright protection is being used to leverage other unnecessary restrictions on us. There is absolutely no reason for unskippable ads in the front of purchased DVDs, no reason to prevent me from recording TV for my personal use, no reason to try to force me to buy the same song for my CD player, my iPod, my phone, and for use as a ringtone. Yet all these things are coming. They are coming because we let them, not because they are needed.
As long as content producers continue to overstep their bounds, I'm going to try to find a way to make life bad for them. I haven't bought (or downloaded) a CD from a major record label in five years. I just don't want their music anymore. I used to buy a lot; I have hundreds of CDs from the 80s and 90s. I used to go to the movies a lot; now I go once or twice a year. I make sure to educate everyone I know about abuses like the Sony rootkit fiasco so they know what is coming.
Oh, and as for the top comment, consumers really don't care about picture quality that much. Last year I had a choice. I have an HD-ready TV, so I could get programming in HDTV. However, with standard TV, I can do anything I want with it. If I got HDTV, a TIVO would cost hundreds more, I would need a lot more hard drive space, a new capture card, and a system upgrade in my PVR, and I couldn't do multiple simultaneous streams over my existing network. And for several thousand dollars up front, and ten more bucks a month, I would get the peace of mind that the new DRM schemes coming out in the next years will break everything I have. Yippie. I'm not selling my freedom for a few more pixels.
-
RE: The internet is closing in 5 minutes.
@Iago said:
@KenW said:
Ok... Can't take it. :-)
It can't be "24/7/365". It can be 24/7, or 24/365, but not 24/7/365.
Why the hell not? It's shorthand for "24 hours a day, 7 days a week, 365 days a year". That is to say: no daily downtime, no weekend downtime, no downtime on public holidays. This is a perfectly logical set of guarantees, and makes perfect sense even to a pedant such as myself.
(Either that, or the guy was promising that it would be possible to use the mainframe on the 24th of July, 365 AD. Was this related to some kind of time travel research?)
They're complaining about the redundancy. How could you be available 365 days a year, yet not be available 7 days a week? 24/365 says everything that 24/7/365 says.
-
RE: Create Database Production;
What about development "oopses"? Would you want that accidental DROP COLUMN to be propagated to production? It's better to deal with them manually or generate the scripts with a compare tool after QA. BTW, a test environment and a good unit test suite will prevent that oops from making it to production.
-
RE: Help me o' SQL gurus for my faith is weak
@lpope187 said:
It will work under very strict conditions, but you should be using SCOPE_IDENTITY(). @@IDENTITY is a global variable and will give you the last identity generated across all connections to the database. SCOPE_IDENTITY() will return the last identity generated from the current connection.
Larry
Close.
@@IDENTITY is connection specific, but not scope specific. The biggest downfall of @@IDENTITY is that if you insert a row into a table and a trigger inserts a row into a log table because of your insert, you'll get the log table's identity instead of the identity from the table you inserted into. SCOPE_IDENTITY, however, will return the identity value from your insert, not the trigger's. You should pretty much always use SCOPE_IDENTITY. If you want a cross-session identity for an object, use IDENT_CURRENT.
BTW, doing a select after insert may feel a little weird, but it is one of the few ways to eliminate insert locking hotspots. Oracle's SEQUENCE does the same thing, but in a totally different way.
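For anyone who wants to see the scoping behavior in action without a SQL Server handy, here's a rough equivalent in Python with SQLite (schema invented for illustration). If I'm reading the SQLite docs right, its last_insert_rowid() reverts to the outer statement's rowid once a trigger finishes -- i.e. it behaves like SCOPE_IDENTITY(), which is exactly the behavior you want:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT);
CREATE TABLE audit  (id INTEGER PRIMARY KEY, note TEXT);
-- Pre-seed the audit table so its next rowid differs from orders'.
INSERT INTO audit (id, note) VALUES (50, 'seed');
CREATE TRIGGER log_insert AFTER INSERT ON orders
BEGIN
    INSERT INTO audit (note) VALUES ('inserted ' || NEW.item);
END;
""")

cur = con.execute("INSERT INTO orders (item) VALUES ('widget')")
# The trigger's insert got rowid 51 in audit, but lastrowid still reports
# the orders insert -- the SCOPE_IDENTITY-style answer, not the @@IDENTITY one.
assert cur.lastrowid == 1
```

In T-SQL you'd get the same safe answer from SELECT SCOPE_IDENTITY() immediately after the INSERT.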
-
RE: Have you ever implimented a "related pages" engine?
Most content indexing systems call it a stop list and it's a common feature. You might also want to at least give word normalization a shot -- ex. convert dried and dries to dry, but not dryad. That way the search returns more accurate hits.
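Here's a toy sketch in Python of what I mean by a stop list plus normalization. The suffix rules are invented for this example -- a real system would use a proper stemmer:

```python
# Tiny illustration of a stop list plus naive, rule-based normalization.
STOP_WORDS = {"the", "a", "an", "and", "of", "to"}

def normalize(word):
    # Made-up rules: "-ied"/"-ies" collapse to "-y", everything else passes.
    for suffix in ("ied", "ies"):
        if word.endswith(suffix) and len(word) > len(suffix) + 1:
            return word[:-len(suffix)] + "y"
    return word

def index_terms(text):
    # Lowercase, drop stop words, normalize what's left.
    return [normalize(w) for w in text.lower().split() if w not in STOP_WORDS]

assert normalize("dried") == "dry"
assert normalize("dries") == "dry"
assert normalize("dryad") == "dryad"   # short/unmatched words pass through
assert index_terms("the flowers dried") == ["flowers", "dry"]
```

The point is just that even crude normalization collapses variants onto one index term, so searches for "dry" find documents that say "dried".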
You might also want to consider the intentions of the content creators. If the content creators are in a position to try to unfairly influence the system, then they are likely to put a bunch of bait words on a document. If it's an internal system used simply for the convenience of everyone, then a word search might be fine.
Finally, you might already own a system like this. If you have Windows 2000 server or Windows 2003, then you already have a sophisticated indexing service that can be queried through ADO. If you have SQL 2000 or later, then you have a system that can index and search database content. Both are really easy to use, highly programmable and likely to be far better than anything you or I could come up with in 100 hours or so.
-
RE: SQL WTF. Yeah, another one
PIVOT is a nice addition to SQL, but it really only works for trivial examples. I can't think of any good reporting tool that would be happy with a variable number of columns being returned from the data source.
90% of the time pivoting is a UI issue. Do a GROUP BY query to get the data and then dynamically design the report taking into account the number of groups returned. The other 10% of the time can be taken care of by returning a pivoted result set and making a simple HTML table out of them or binding to a generic (no specific column definitions) DataGrid. If the query language doesn't do pivots, then dynamically build the query. And yes, it will look a lot like the given query.
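To show how little work the UI-side pivot actually is, here's a quick Python sketch (sample data invented) -- pivoting a tall group-by result is just a couple of dictionary inserts:

```python
from collections import defaultdict

# Sample rows as (region, month, sales) -- the shape a GROUP BY query returns.
rows = [
    ("East", "Jan", 100), ("East", "Feb", 120),
    ("West", "Jan", 90),  ("West", "Feb", 95),
]

def pivot(rows):
    """Turn tall group-by rows into one mapping per region, keyed by month."""
    table = defaultdict(dict)
    for region, month, sales in rows:
        table[region][month] = sales
    return dict(table)

result = pivot(rows)
assert result == {"East": {"Jan": 100, "Feb": 120},
                  "West": {"Jan": 90, "Feb": 95}}
```

From there, rendering an HTML table or binding to a generic grid is trivial, and the number of columns can vary freely because the presentation layer discovers them at runtime.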
-
RE: C# verses VB.Net
You missed my reasons above:
Reasons for VB:
1. Intellisense updates without rebuilding
2. Easier to source events.
3. Prior to 2005, easier to make overrides
4. No case sensitivity (at least for names)
5. Better IDE experience (example: make a property in VB and C# and you'll see that even though the VB code is twice as long, you still end up typing only half as much; the rest is generated by the IDE as you type).
Reasons for C#:
1. Support for unsafe code
2. C# 3.0 looks realllllly cool, so better get practiced up.
3. Less typing
4. I'm sure there are more, but 99% of the other reasons boil down to less typing.
-
RE: [C#] Connect to DB in constructor
@ammoQ said:
At least in Oracle, making the connection is relatively slow and expensive. You wouldn't want to do that repeatedly, unless you have some kind of transparent connection pooling like the one Alex described.
That's true of SQL Server too. Still, best practice in .Net is to never hold a connection open unnecessarily. Try this someday -- repeatedly connect and disconnect in a method while monitoring the database with whatever tool the database engine provides. You'll see that ADO.Net is smart enough to not actually disconnect when you call Close, so it doesn't have to reconnect when you call Open. Voila, expensive operation avoided.
Then try this -- write a method that creates two connections, holds them open and executes a statement on each. Then write a method that does the same but doesn't hold them open. You'll see that in the second case, only one connection will actually be opened -- ADO.Net will simply reuse it wherever appropriate.
It would be VERY difficult for you to build a connection management strategy that works better than the stock stuff in ADO.Net. Simply follow the rule "only hold a connection open as long as you have to" and everything will work out wonderfully.
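If it helps to see the idea, here's a toy pool in Python that mimics the Close-doesn't-really-close behavior. This is a sketch of the concept only, not how ADO.Net actually implements its pooling:

```python
# A toy pool: open/close hand back a pooled connection instead of really
# reconnecting -- roughly the trick ADO.Net plays behind SqlConnection.
class Pool:
    def __init__(self):
        self.idle = []
        self.opened = 0          # counts *physical* connects only

    def open(self):
        if self.idle:
            return self.idle.pop()   # reuse a warm connection
        self.opened += 1
        return object()              # stand-in for a real connection

    def close(self, conn):
        self.idle.append(conn)       # keep it warm for the next open()

pool = Pool()
c1 = pool.open()
pool.close(c1)
c2 = pool.open()                     # reuses c1 -- no second physical connect
assert c2 is c1
assert pool.opened == 1
```

The application code opens and closes freely, and the expensive physical connect only happens when the pool is empty -- which is why "only hold a connection open as long as you have to" costs you nothing.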
-
RE: ADO.NET: spawn a thread and sleep?
Nope, executing a query actually uses your thread. You can't use it until ADO.Net is done with it. If you want to do it asynchronously, spawn a thread and call ADO.Net from that thread.
Asynchronous programming is more complex in web applications. It's a bit tricky to give a client a "waiting" message and then have the refresh reconnect to the thread that client spawned previously. Console applications rarely use threads unless you want to make a little spinner.
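A minimal sketch of the spawn-a-thread approach, in Python since it's language neutral (run_query is a stand-in for the real ADO.Net call; the names are invented for illustration):

```python
import threading
import time

results = {}

def run_query(name):
    # Stand-in for a long-running database call.
    time.sleep(0.1)
    results[name] = [1, 2, 3]

# Spawn a worker so the calling thread stays free for other work,
# then join before touching the results.
t = threading.Thread(target=run_query, args=("orders",))
t.start()
# ... caller could update a "waiting" UI or do other work here ...
t.join()
assert results["orders"] == [1, 2, 3]
```

In .Net the shape is the same: hand the ADO.Net call to a worker thread and synchronize before reading the results.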
-
RE: C# verses VB.Net
Another point is that it is easier to translate C# to VB than the other way around. VB uses parentheses to mean too many things to narrow it down without reflecting the original VB code. So, if you do end up with a half and half project, it will be easier to make it all VB than all C#. It's also easier to translate C# found on the Internet to VB for the same reason. Good commercial converters have no problem, but the free stuff on the Internet likes C# to VB better.
Sometimes C# can be a little better -- I use C# for our image library because of its support for unsafe code. GetPixel is really slow, so unsafe code is a lifesaver there.
However, some things are easier in VB. Raising a custom event is ten times easier in VB than it is in C#. The IDE is also better at generating overloads in VB (prior to 2005). The VB IDE is far nicer than the C# one: it doesn't force you to rebuild to update Intellisense, and it can even give you some Intellisense for code that won't compile. VB also fixes casing where the C# IDE thinks you meant something different. All the little things add up to a smoother experience and let you concentrate on what you are doing.
Being closer to Java is not really a good thing: programmers may keep thinking they are writing Java and screw things up. The real work is learning the framework, which is the same for VB and C# and very different from Java.
At the end of the day, whoever sticks to their guns the hardest is wrong. Neither language is the wrong choice; thinking that this is a major decision is the big mistake.
-
RE: Error: an out parameter or a return value?
@xrT said:
@ammoQ said:
I think your design is flawed.
The best option is to return the success status where appropriate. Other than that, it's generally not a good idea to pass too much information around, as this would create dependencies that make maintenance more difficult.
I agree about the passing-too-much-information part. So, OK, let's say we've narrowed it down to the methods that need error checking -- for example, two methods, one of which returns a value. What should be used then? :-/ Probably the "global" one like dhromed suggested is the most suitable...
Stay away from anything global. All ASP.Net apps are multithreaded, and you'll end up on the front page of this site if you train yourself to use globals for error aggregation in an ASP.Net application. Even sticking an errors collection in Session is asking for trouble if a user opens two browser windows, or if you end up with something out of the ordinary like frames or popups. I'd find some type of organization for the methods that need this functionality, then instantiate some sort of "CallContext" object. The methods would live on the "CallContext" object, and errors would be aggregated as a property of it. Error clearing and checking would then be methods on the "CallContext" as well.
I'd also caution against the whole multi-error approach. By definition, in order to create a multi-error system like this, you have to have silent errors (errors that you have to intentionally look for). Passing an error as a parameter or a return code is asking for trouble: if you forget to check it, all hell breaks loose. C programmers get away with it because everything works that way in C, so they don't forget (well, sometimes they still do). If you forget about an exception, it'll come and find you. Besides, how do you then return a legitimate value? That makes code ugly. This:
```
try
{ MyLabel.Text = MyMethod(MyParameter).ToString(); }
catch { ... }
```

becomes:

```
int ret;
if (MyMethod(MyParameter, out ret) == 0)   // 0 = success
{ MyLabel.Text = ret.ToString(); }
else
{ ... }
```

And this:

```
try
{ MyLabel.Text = MyMethod2(MyMethod(MyParameter)).ToString(); }
catch (ExType1 ex) { ... }
catch (ExType2 ex) { ... }
```

becomes a total mess.

So, for the methods that perform the actual work, I'd strongly recommend exceptions. Then, for those specific processes where you want to collect a bunch of errors, wrap the process in a big method and throw a custom exception that contains a collection of the things that went wrong.
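That last idea can be sketched as a custom exception carrying a collection of failures. The names here (ValidationException, Validate) are illustrative, not from any framework:

```csharp
// Sketch: one custom exception aggregates everything that went wrong,
// so the caller gets all the errors in a single throw instead of
// silently accumulating them in a global.
using System;
using System.Collections.Generic;

class ValidationException : Exception
{
    public IList<string> Errors { get; }

    public ValidationException(IList<string> errors)
        : base(errors.Count + " validation error(s)")
    {
        Errors = errors;
    }
}

static void Validate(string name, string email)
{
    var errors = new List<string>();
    if (string.IsNullOrEmpty(name)) errors.Add("Name is required.");
    if (string.IsNullOrEmpty(email)) errors.Add("Email is required.");
    if (errors.Count > 0) throw new ValidationException(errors);
}
```

The caller catches ValidationException once and iterates Errors; nothing is silent, and the happy path still returns values normally.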
Take all of the advice to avoid exceptions with a grain of salt. It's reallllly hard to throw enough exceptions to significantly slow down a system, unless everything always throws an exception. Besides, it's worth buying a little extra hardware to avoid the pain of debugging silent failures. Also, 95% of all business-type apps are I/O bound, not CPU or memory bound, which means that even on a system running at 100% of capacity, a significant exception load is unlikely to impact performance at all. Exceptions are just objects; yes, they consume some CPU time and memory, but not enough to matter here.
-
RE: Salaries
I feel like jumping in here with another perspective because right now it looks like CodeWhisperer beating up on CPound...
CPound,
It looks like you are using the wrong yardstick to evaluate good code. You seem to think that if it functions, then it is as good as it can get. That's like a custom car painter being happy if the paint sticks to the car.
The absolute lowest level of acceptability for code is that it performs its intended function. Next, it must be maintainable, and then extensible. Good programmers write maintainable code without even thinking about it. They wouldn't dream of copying and pasting the same chunk of code nine times -- not because they were taught that loops/functions/classes/whatever were "good" in school, but because they know that some day the requirements will change or a bug will be discovered, and they'll never find the other eight copies.
Good programmers also write methods that are "reusability friendly", meaning they can be used in many situations without a whole lot of research into whether the method will cause unexpected issues. We do have fancy words for this stuff, like idempotent, but we don't use them to scare away 23-year-olds who didn't get a CS degree. We name the things that are important to us; those things aren't important to the rest of the world, so the names seem odd to most people and it looks like we are trying to build some kind of clique. This is not ivory-tower stuff, but a set of real issues that matter even in 1000-line apps. All of us have libraries of code that we reuse across many projects, and we know from experience that methods with side effects make us work late at the least convenient times.
We also know the common pitfalls of modern languages and the available solutions. The interview question about string building was designed to see whether the candidate was familiar with the perils of string concatenation, the available alternatives, and their tradeoffs. That is very important stuff for writing reusable code. You might write a library in a small app today and reuse it in a large app tomorrow. If the library builds strings (say, a JavaScript generator), choosing the correct string-building technique should be second nature; making the wrong choice leads to horribly slow code very quickly. Some questions like this could be considered "trick questions": for example, the .Net GetPixel() method of the Bitmap class is horribly slow, but expecting someone to know that at an interview would be a trivia question. Expecting them to know that string concatenation is slow is not trivia. It is far more common, and it is a direct result of the way most modern languages handle strings.
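A minimal sketch of the concatenation pitfall (method names are illustrative):

```csharp
// Sketch: .Net strings are immutable, so each += allocates a brand-new
// string and copies everything so far -- O(n^2) work over the loop.
// StringBuilder appends into a growable buffer instead.
using System.Text;

static string BuildSlow(int n)
{
    string s = "";
    for (int i = 0; i < n; i++)
        s += "x";                  // new string allocated every pass
    return s;
}

static string BuildFast(int n)
{
    var sb = new StringBuilder();
    for (int i = 0; i < n; i++)
        sb.Append("x");            // amortized O(1) per append
    return sb.ToString();
}
```

Both produce the same result; only the second scales when n gets large.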
As an example, I recently picked up some C# code that was written to wrap some common TWAIN API calls. The code did perform its intended function, but it was still amateur code. Whenever the API returned an error code, the library hosed the form's message pump, locking it up (BTW, the asynchronous parts of TWAIN use the message pump to signal state changes -- yippee!). This was very annoying and not much fun to debug. A few hours of coding later, it still worked just as it did before from a functional perspective, but now I could drop it into an app without expecting clients to call back saying "there was no error message, it just locked up".
Also, if you've never heard of polymorphism, it means you haven't spent much time thinking about how interfaces can help build extensible apps. I've seen code written by guys like you: after five or six years of maintenance, there are a hundred if tests that check a configuration option and run a different chunk of code. A good programmer would immediately see the advantage of interfaces, and probably of run-time-loadable implementations (or "plugins", as people like to call them). Not knowing the word doesn't mean a person doesn't know about these things, but it would be very hard to read and talk about interfaces and plugins without bumping into the word polymorphism a few hundred times. Someone who knows the word polymorphism is more likely to do extensibility right than someone who doesn't. At an interview, the interviewer would go a step further and find out not just whether the candidate knew the word, but whether he knew how to use it and where it applies. I want to be able to tell my higher-end programmers "change that config switch to a plugin" and have them know what I'm talking about and go do it.
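The "config switch to a plugin" idea can be sketched with an interface and a registry of implementations. All the names here (IExporter, Exporters, etc.) are illustrative:

```csharp
// Sketch: one polymorphic lookup replaces scattered
// "if (config == "csv") ... else if (config == "xml") ..." tests.
using System.Collections.Generic;

interface IExporter
{
    string Export(string data);
}

class CsvExporter : IExporter
{
    public string Export(string d) => "csv:" + d;
}

class XmlExporter : IExporter
{
    public string Export(string d) => "<x>" + d + "</x>";
}

static class Exporters
{
    // In a real plugin system this registry could be populated at
    // run time (e.g. from assemblies found in a folder).
    static readonly Dictionary<string, IExporter> Registry =
        new Dictionary<string, IExporter>
        {
            ["csv"] = new CsvExporter(),
            ["xml"] = new XmlExporter(),
        };

    public static string Export(string format, string data)
        => Registry[format].Export(data);
}
```

Adding a new format means writing one new class and one registry entry; none of the existing call sites change.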
Finally, I don't have a degree in CS. I still managed to learn all the big words and find value in them. I read a lot -- not just API references, but also about design patterns and architecture. I try new stuff, and sometimes make a bigger mess than I started with. I also make the kind of salary mentioned earlier in the thread, in a small US market, as a full-time employee with benefits. And I feel I'm far cheaper than two programmers at half my salary.