Click the link below.
You are in Google Street View. In front of you are an obelisk and a water tower.
Your task: get to the obelisk in Street View (you are not allowed to go back to the map).
@t_wheeler said:
Here's why I like checked exceptions. [...] Without checked exceptions, I'm going to end up with something like:
try {
    payrollSystem.printPaycheck(emp);
} catch(Exception e) {
    // show a dialog box indicating things went wrong
}
Why? Because I have no idea what exceptions are being thrown by printPaycheck() and no way to find out. But if I have checked exceptions I can do something more user-friendly:
try {
    payrollSystem.printPaycheck(emp);
} catch(DatabaseException e) {
    // let the user know the database appears to be offline and they should try again later
} catch(NetworkException e) {
    // let the user know there was an issue connecting to the network
} catch(PrintException e) {
    // let the user know there was a problem printing; maybe the printer is offline or needs paper?
}
And this has to do with checked exceptions... what? You're confusing typed exceptions with checked exceptions. You can perfectly well write typed catches in environments that don't support checked exceptions.
In addition, what you are doing here is wrong. You should not catch those exceptions at all at that point if all you do is report errors. Reporting errors should only be done at the top level (the entry point), not at the code point that calls printPaycheck. The top level knows whether there is a user at all (instead of a deaf service call) and how to report (log? web page? message box?). That is something your code point does not know. The top level also knows from the stack trace where the error occurred (printPaycheck). One top-level handler can handle all method invocations: printInvoice, shipOrder,...
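A minimal sketch of that single top-level handler; the PayrollApp frame and the run() dispatch are hypothetical, not from the thread:

public class PayrollApp {
    public static void main(String[] args) {
        try {
            run(args); // printPaycheck, printInvoice, shipOrder... all happen below this point
        } catch (Exception e) {
            // Only this level knows there is a console user and how to report;
            // the stack trace tells us where it failed.
            System.err.println("Operation failed: " + e.getMessage());
            e.printStackTrace();
        }
    }

    private static void run(String[] args) throws Exception {
        // ... dispatch to printPaycheck(emp), printInvoice(order), etc.
    }
}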
@blakeyrat said:
@Renan said: I suggest using some good GUI, like Tower.
I'd love to. Know of one for the platform 95% of the people on Earth use? (Github for Windows? Not good. Before you suggest that.)
Git Extensions is ugly as hell and somewhat clumsy but functional. No need to ever use a CLI. The error and warning messages remain Git's and therefore incomprehensible but the UI's visual cues are helpful enough that you can safely ignore Git core's babbling. Don't think you would have run into your troubles using this UI. Been using it for three years without ever reading any fucking manual and quite happy about it.
@Zecc said:
I don't think most people fully understand JavaScript's "var" (ie, the whole thing about variable hoisting), while Delphi's "with" keyword doesn't have much to understand (I think? I'm no Delphi expert, or even noob for that matter).
It's just easier to abuse the latter than the former, that's all. You can't nest var declarations, unlike with statements and ternary operators.
Yes you can and this creates the same kind of scope soup as nested withs.
var x = function() { var x = function() { return x; }; return x; };
@Cassidy said:
@OhNoDevelopment said: So my suspicions are true! It's all meaningless (at least unit tests) and we should all live chaotically
Try a controlled test: have one group do unit testing and another omit it completely. Compare the results.
@blakeyrat said:
Saying "it works but it's hard to use" is the same thing as saying "it doesn't work."
Saying "it's easy to use but it doesn't work" is also the same thing as saying "it doesn't work."
@boomzilla said:
@JvdL said: The supermarket across the road sells beer for 0.38 euros per liter, which is about 1 labor-minute per pint of beer, beating the USA by seven laps.
There's gotta be something funny going on there. Well, maybe that's the cheapest they sell? What kind of beer is it, anyways? That's roughly the price of generic (store brand) soda here. I also think you've inflated the median wage in Spain a bit. Definitely more than their calculation.
The brand is a no-name store brand which I've never tried, but it makes you piss and gives you the same headache as any other beer.
The exact calculation is:
500 ml beer = 0.19 euros = 0.25 dollars. Median wage (according to The Economist / UBS (*)) = $3.05 per 15 minutes = $12.20 per hour = 9.50 euros/hr. That buys 48.75 pints of beer per hour = one pint every 1.2 minutes = 0.81 pints per minute.
Any which way you round this, it falls in the one pint per minute range. Cheers.
(*) And then there are Swiss banks.
@dhromed said:
@JvdL said: Spain
@JvdL said: The supermarket across the road sells beer for 0.38 euros per liter
That's just because your economy collapsed overnight. I bet you're considering switching to bottle caps for currency.
At least we're drunk, really.
There are lies, damned lies and statistics (*)
Spain's ranking of beer/minute assumes a retail price of $3.05 per pint to arrive at 15 labor-minutes per pint. The supermarket across the road sells beer for 0.38 euros per liter, which is about 1 labor-minute per pint of beer, beating the USA by seven laps.
SPAIN!! SPAIN!! SPAIN!!
(*) And then there are Swiss banks.
@Salamander said:
@JvdL said: Logging to non-streams (email, SMS, event monitors) would be impossible.
Whoever designed your logging framework is a moron anyway. You are not logging stream data; you are logging messages. Design a message handler interface and write a handler for streams, email, sms, whatever. Then, your logging framework doesn't have to give a shit about where it's sending data, just that it's sending the correct priority message to the correct handler. Whether it is a stream or an email no longer makes any difference.
That moron is me and it is designed more or less the way you describe. To log, call
Logger.Warning("{0} is a moron because {1}", jvdl, designException);
The logger handles such an invocation based on the severity (warning) and the category, derived by the logger from CallingAssembly attributes and reflection. When the sevcat merits logging, it compiles a message with time stamp, log text, user, source, machine and stack traces, and dispatches that to one or more handlers associated with that sevcat.
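A minimal sketch of that severity-based handler dispatch; the Java rendering, the names and the %s-style format strings are mine, not the actual DLL:

import java.util.ArrayList;
import java.util.List;

interface LogHandler {
    // a handler could write to a file, database, event monitor, email, SMS, stderr...
    void handle(String message);
}

enum Severity { DEBUG, INFO, WARNING, ERROR }

class Logger {
    private static final List<LogHandler> handlers = new ArrayList<>();
    private static Severity threshold = Severity.WARNING; // set by sys admin configuration

    static void addHandler(LogHandler h) { handlers.add(h); }

    static void warning(String format, Object... args) {
        log(Severity.WARNING, format, args);
    }

    private static void log(Severity sev, String format, Object... args) {
        if (sev.compareTo(threshold) < 0) return;           // severity/category filter first
        String message = java.time.Instant.now() + " " + sev + " " + String.format(format, args);
        for (LogHandler h : handlers) h.handle(message);    // then dispatch to the handlers
    }
}

Usage then mirrors the call above: Logger.warning("%s is a moron because %s", jvdl, designException);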
This has now been forbidden by the console police. According to them, every possible entry point needs to bootstrap the Logger with a std I/O reference wrapped in a TextWriter stream, regardless of whether the Logger will use it or not, or whether logging is used at all. That's a fantastic design, especially for web apps that have thousands of entry points. Reminds me of my Java days on Tomcat & Log4J. Fifteen years ago that was passable, or maybe today when you get paid per line of code. Thanks but no thanks.
@morbiuswilters said:
I'm not arguing against using std I/O, I'm arguing against using std I/O in a shared library. If a library needs access to something like stdout, then the linking executable should be required to provide a stream (or equivalent). That way the executable can decide where the library writes, just as the user should be able to decide where the executable writes.
Confession: I use std I/O in shared libraries. Beat me up and explain how to improve this.
The system consists of about 50 DLLs and six entry points (EXE or otherwise). It can be deployed for a variety of functional applications in a variety of UIs.
The DLLs are organized in a matrix. On one axis in technical layers: data access, number crunching, statistical analysis, services, UI rendering, configuration, logging, etc. On another axis in functional layers: generic utilities, business logic, modeling, customizations, plug-ins, etc. The majority of the DLLs can run in any UI (or no UI at all). Only the DLLs dedicated to rendering UI are specific to that UI.
The system can be deployed in six environments: as a web app, as a Silverlight applet, as a WinForms app, as a service, as a test app and as a CLI app. The entry points are very slim: they only load the main business logic DLL, based on configuration, and then run the main UI DLL applicable to the environment. Everything else is done in shared libraries.
Std I/O is used in two ways.
1. One DLL is dedicated to logging. It has logging categories and severities. Not satisfied with the standard tools in the market, it is a re-invented wheel that can log to a variety of devices: files, databases, MS event viewer, HP OpenView, email, SMS, stderr or stdout. The destination depends on the category and severity of the logged message and the IT environment. This is configured by the sys admin. It bootstraps autonomously and by design it has more authority than the EXE as to where and how it logs. This reflects the requirement that a sys admin (an IT person) has more authority over logging than an app admin (usually a business person).
A logger that would require a stream parameter, as you suggest, makes for a less capable and more complex system. Logging would be practically unconfigurable. Logging to non-streams (email, SMS, event monitors) would be impossible. The EXE would need to pass a stream parameter to every method in every class in every DLL, and so forth down the chain, because any method might some day want to do some logging.
2. I'm not a big proponent of unit testing and piss on tools like NUnit, but I do apply testing to numeric algorithms, some of which are internal. So some DLLs come with embedded unit tests. The test methods use stdout to indicate what they test, the input and the output. Each such DLL has one public test entry point that runs all the internal tests. To test a particular unit, one sets up data in a DB, uses the test app to run the public test of the DLL that contains the unit and pipes the output to a grepper that fishes out the results relevant to the unit. This can be done in development and in production. Std I/O makes this very easy, both to write the code and to run the tests.
@morbiuswilters said:
@JvdL said: Do you think it a bug because a shared library doesn't know where std I/O exactly goes to? Do you believe an EXE process (with or without UI) knows that? Run one of your console applications with an >> output.txt argument. That will clear up another misconception.
I think that only strengthens the point being made. OSes let the user redirect output of applications; that's because it's the user's system, the application is a guest. By the same token, libraries should not presume they own the std* streams. If you had an application that always wrote to the same, hard-coded file on disk without any way for the user to control that, you'd probably think that was a broken program (I guess, I dunno.. you people seem to think every behavior is justified). And if a library simply wrote to an arbitrary file of its choosing, instead of taking a stream (or at the very least, filename) argument, you'd think "What a piece of shit!"
Why is stdout (or Console in this case) any different? Maybe you're writing a CLI app that uses stdout to communicate with a process; if your shared library is writing to stdout for no good reason, you could easily fuck things up, undermining the reusability of your library.
I agree with all of that, but fail to see why this makes using std I/O better or worse when done in a DLL rather than in an EXE, or why it should be forbidden, which are the two things I took issue with.
Using std I/O is simple and crude and there are always better ways to accomplish it. That said, std I/O is the norm in Unix derivatives that run half the internet and about a billion smartphones, so it can't be all that bad. Personally, I apply std I/O (i.e. Console class) in development tools for personal and team use. Mostly, to transform files, to debug and to make release builds. Such tools use std I/O for a good reason. Doing something for no good reason is a WTF, period, not limited to std I/O.
@blakeyrat said:
@JvdL said:A console application is the simplest standalone executable. Unlike what some posters believe, a console application does not have a UI.
Yes it does. If it didn't have a UI, it wouldn't open a window when you double-clicked it.
@JvdL said:@The Rider said:[...]the only good solution is not to use Console [...] but to pass in the stdout, stderr and/or stdin [...]
RTFM. The Console class is the std I/O.
This changes... what exactly?
@JvdL said:EDIT: This does not imply that writing to std I/O is a sensible thing to do. But forbidding it isn't either.
Except that if you're writing to Console in a shared library, that's a bug.
@ShatteredArm said:
Sure, writing to the Console is almost always completely useless, but why can't a dll be intended exclusively for use in Console applications?
In that case why bother with a DLL in the first place?
A DLL is a shared library. A shared library doesn't know or care what code is calling it. It doesn't know or care whether the calling code has a console UI or not.
As far as I'm concerned, the only good solution to this issue is not to use Console directly from inside DLL code, but to pass in the stdout, stderr and/or stdin streams as parameters to the corresponding DLL methods that need to write stuff to some output device [...].
EDIT: This does not imply that writing to std I/O is a sensible thing to do. But forbidding it isn't either.
@Cassidy said:
Are there any performance implications with building a view that uses views (that uses views, etc)...?
In SQL Server, yes. I've seen queries that [have implications]
Not true. SQL Server merges the submitted query with the view definition. [...]
Bullshit. [Generic counter example]. SQL is not modular.
I'm calling bullshit on your "bullshit". SQL Server views are inlined. [..] I want to see code and benchmarks. [...]
[Code and benchmarks of specific counter example]
So you're saying that "Bullshit... SQL is not modular" is the same as "It works 99% of the time, but has some edge cases where it doesn't work"?
I called bullshit on the blanket statement "SQL Server merges the submitted query with the view definition" or "SQL Server views are inlined" because they are false.
And I call bullshit on blind faith in patterns that only work 99% of the time.
@havokk said:
@JvdL said: Bullshit. For a simple query like select * from someview union select something, replacing the select * from someview by the verbatim definition of that view can already speed up performance enormously. Obviously, when it is a complicated view in a big database (billions of records) [...]
I'm calling bullshit on your "bullshit". SQL Server views are inlined. The definition code for a view is inlined into the query at the Resolve stage, before it gets to the Optimise/Compile stage. Querying a view is no more or less optimal than querying the tables the view is accessing.
From http://technet.microsoft.com/en-us/library/cc917715.aspx : "When a view is referenced in the FROM clause of another query, this metadata is retrieved from the system catalog and expanded in place of the view's reference." This is not the case for a union (which as a whole doesn't have a FROM clause). Also, there are various largely undocumented exceptions where inlining does not occur; see the section "Exceptions" in http://sqlblog.com/blogs/merrill_aldrich/archive/2010/02/11/busting-a-persistent-myth-views-are-executed-before-enclosing-queries.aspx
@havokk said:
Your anecdote about 20 hours to 1 second? I want to see code and benchmarks because I don't believe the view was the issue. There was something else going on. My guess, based on situations I have seen, is that whoever rewrote it changed the order of outer joins.
The anecdote details are under NDA. A similar example is this: http://msmvps.com/blogs/greglow/archive/2006/04/02/88853.aspx . They are both examples of rewriting and answer the question "can using views have a bad impact on performance?" with "Yes, sometimes."
Code of the view and of the union (slightly anonymized):
create view someview as
with deltas as (
select
model = rtrim(f.model),
f.resource_type,
resource = rtrim(f.resource),
f.quantity,
f.priority,
override = 1
from deltatable f
), originals as (
select
model = rtrim(f.model),
f.resource_type,
resource = rtrim(f.resource),
quantity = f.quantity * coalesce(p.quantity,1),
priority = null,
override = 0
from someotherview f
left join keytable p on f.uom = 'xx' and p.keycolumn = f.resource
where f.quantity > 0
and f.model_type = 'yy'
and f.model not in (select distinct model from deltas)
), formulas as (
select * from deltas union all select * from originals
), families as (
select f.model
from formulas f
join keytable p on p.keycolumn = f.model
join keytable r on r.keycolumn = f.resource
where f.resource_type = 'c'
group by f.model
having count(*) > 1
)
select
f.*,
is_family = case when fam.model is null then 0 else 1 end
from formulas f
left join families fam on fam.model = f.model
go
select * from someview union select model,resource_type,resource,quantity,null,0,null from someotherview
This takes 36 seconds to execute in SQL Studio. Replacing "someview" by its verbatim definition, it takes less than 1 second.
@havokk said:
Disclaimer: I am talking about nonmaterialised views. Materialised views have an effect on performance (that's their purpose). As an aside, if you are using SQL Server 2008 then you should look at replacing materialised views with filtered indexes.
Disclaimer: I am talking about Microsoft's RDBMS product, SQL Server. If you are talking about another vendor's product then please indicate (and stop using the term "SQL Server").
I am talking about normal views in SQL Server 2008 R2.
@Jaime said:
@JvdL said:
@Cassidy said: Are there any performance implications with building a view that uses views (that uses views, etc)...? I'd be curious to know the experiences of those more in the know
In SQL Server, yes. I've seen queries that took 20 hours using views and went down to under one second when broken down into direct table access.
Not true. SQL Server merges the submitted query with the view definition (and any view definitions that the view itself may depend on) to make a single query plan. In order for direct table access to be faster than using a view, it has to be doing something different.
That gets worse with every new release of SQL Server.
Bullshit. For a simple query like select * from someview union select something, replacing the select * from someview by the verbatim definition of that view can already speed up performance enormously. Obviously, when it is a complicated view in a big database (billions of records)
Also, if you have 10 views each referencing 10 tables out of a total of 20 and rewrite this into a query that references each table only once, it can make a big difference. Believe me: the example of 20 hours to 1 second actually happened in a large company that had to shut down one day every month to do a financial rollover because of this. It didn't help that whoever made those views was not exactly a SQL expert. It only took a few hours to rewrite this.
All of this is because SQL Server query plan optimization is, well, not optimal. It can't figure out which indexes to use when the joins become convoluted.
SQL is not modular.
@Cassidy said:
Are there any performance implications with building a view that uses views (that uses views, etc)...? I'd be curious to know the experiences of those more in the know
In SQL Server, yes. I've seen queries that took 20 hours using views and went down to under one second when broken down into direct table access.
That gets worse with every new release of SQL Server.
@PJH said:
@Someone You Know said:
@JvdL said: Liverpudlian soap factory of a system
Is this an English expression? It's not one I've come across.
It's the anonymization of a well-known multinational company with premises across the Mersey from Liverpool, in a town named after their 19th century killer app: a smoothly lathering soap bar.
The original sentence should be read as: a demo (to that company) (of a system).
You have to be clever to develop software but anyone who can sell the kind of crap displayed in this forum is a sheer genius. I have had the privilege of working with a sales guy who was divine.
It was the early nineties. We had a demo for a Liverpudlian soap factory of a system that ran on NeXT. It had recently made the transition to Intel hardware but did not run on any laptop. It sported graphics that were best appreciated on a big screen.
The best you could hope for back then was an overhead projector so we carried a desktop tower and 19’’ CRT monitor on the plane to Manchester. You could do such things before homeland paranoia and full body scanners kicked in, provided there was enough room.
Upon arrival, the sales guy convinced me that you can't travel that far without trying a fine selection of local ales. So under the weight of the hardware and consumed spirits our entrance in the presentation room couldn't have made a very stable impression. Nevertheless, he managed to convince the soap boilers that our garage outfit was a seriously dedicated business partner and our software indispensable for their future success.
Running late on the way back, he concluded that for us homies, Manchester Airport was the closest thing to tasting a good curry. So we sat the tower, the monitor and ourselves down for a good dinner. After nibbling half a papadam we heard the PA shout: "Will Mr. Slick and Mr. Nerd please proceed to the boarding gate, you are delaying the flight." This didn't deter him. During a spicy vindaloo the PA got serious: they would proceed to offload our luggage. We only have carry-ons, he said, and ordered dessert.
When we finally set out for the plane the gate was closed but he charmed his way in nonetheless. On a Friday afternoon, the plane was chock-full. He opened an overhead bin and shoved in the monitor, crushing the tax-free Chanel and single malt whisky bottles of our fellow passengers. The bins weren't designed to carry such weight or volume and started bulging with a creaking sound.
The horrified flight attendants started reaching for crowd control devices but with commanding presence he convinced captain, crew and passengers that transporting this equipment was of vital importance to the world economy. So the equipment was strapped into his seat and he got a ride in the cockpit.
You gotta love sales people.
Write a genetic algorithm or use this tool which is free for up to thirty people: http://www.perfecttableplan.com/html/dinner-party-seating.html
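For the algorithmically inclined, a minimal evolutionary-search sketch of the seating idea. The guest count and random affinity matrix are invented stand-ins, and it mutates a single arrangement; a full genetic algorithm would add a population and crossover:

import java.util.Arrays;
import java.util.Random;

public class SeatingSearch {
    static final Random rnd = new Random();

    // affinity[i][j]: how much guest i likes sitting next to guest j
    static double fitness(int[] seats, double[][] affinity) {
        double f = 0;
        for (int s = 0; s < seats.length; s++) {
            int next = (s + 1) % seats.length;          // round table: last seat wraps to first
            f += affinity[seats[s]][seats[next]];
        }
        return f;
    }

    public static void main(String[] args) {
        int guests = 8;
        double[][] affinity = new double[guests][guests];
        for (int i = 0; i < guests; i++)
            for (int j = 0; j < guests; j++)
                affinity[i][j] = rnd.nextDouble();      // stand-in for real guest preferences

        int[] best = new int[guests];
        for (int i = 0; i < guests; i++) best[i] = i;   // initial arrangement

        for (int gen = 0; gen < 10_000; gen++) {
            int[] child = best.clone();
            int a = rnd.nextInt(guests), b = rnd.nextInt(guests);
            int t = child[a]; child[a] = child[b]; child[b] = t;  // mutate: swap two guests
            if (fitness(child, affinity) > fitness(best, affinity)) best = child;
        }
        System.out.println("Best seating: " + Arrays.toString(best));
    }
}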
@UNIX said:
Everything is a file
The UNIX and Linux crowd live by this dogma and apply it to the whole world.
Blakeyrat is a file called $home. If he doesn't want to be a file, he doesn't exist.
It's all Windows' fault that it doesn't accept this dogma.
After a frustrating day, here's why I hate Java.
Background: a gig from 1998-2003, a basic Java/JSP/MySQL/Javascript vehicle, requires some new features every now and then. A twelve year old Windows laptop is used for this, rigged up with Java, Eclipse, Ant, MySQL, Perl, Tomcat, Apache Commons libs plus a dozen third party libs and plugins to make these actually work.
In execution, Java is fine: apps keep running well on a Linux host, taking a hundred thousand hits a day with some heavy database work on the side, and are only taken down once or twice a year for hardware maintenance or software upgrades. Can't say the same thing for .NET, which has been my primary vehicle since 2003. Also, no major issue with the language as such, except that it's somewhat late in catching up with the rest of us.
Depending on a twelve year old box feels uncomfortable, so today I took another stab at configuring a new box, without success. This time on a Mac, but earlier attempts on a Windows box were equally unsuccessful.
You would think that a working JSP/Java/Javascript/MySQL environment comes out of the box. But it doesn’t (tried MyEclipse once but that didn’t live up to its promises at all). So I downloaded the latest versions of Java, Eclipse, Tomcat, Ant, MySQL, etc. etc, etc. Plucked unsupported tools from the laptop.
Installation is more or less easy, but then: how to make these work together? Impossible. Every seemingly simple step needs to be done in the most complicated manner. Things that don't have anything to do with software development. Things without decent documentation, if any. With Google pointing to false clues that apply only to deprecated versions. Things that can only be done by editing XML files in a raw text editor. By hard-coding paths all over the place. By typing arcane commands in a shell.
So the day has come to a close and Eclipse and Tomcat now seem to nominally work with each other. Everything’s mirrored exactly from the configuration on the working laptop. Paths resolved. Permissions granted. Data loaded. The app compiles. When deployed as a war in a stand-alone Tomcat it runs fine. But debugging from Eclipse is impossible. Hot compiles don’t work. Which makes Eclipse just about as useful as NotePad.
Now this is probably because I'm only half capable of doing this. The point is, it shouldn't be necessary to do this shit to write code.
Java is the cause of this nightmare. Its lack of basic stuff implies the necessity of twenty third-party tools, ten of them relying on deprecated stuff, the other ten no longer supported at all and none of them compatible with any other. Can’t blame Eclipse for not getting this act together.
So I don’t hate Eclipse or Netbeans, I only feel sorry for them. Java is what I hate.
We've been using GIT + GIT Extensions (for VS) + an 80 GBP/year GIT hoster (atechmedia.com) in a small (four people) team for about a year now.
In contrast to scary stories from another thread, I never typed a single line in a CLI and never read a single line of the GIT manual. It took a day to set everything up and learn to use it (including two fuck-ups for not RTFM). Works like a charm. The GIT Extensions UI looks and feels ugly but works. Will never look back at SVN/TFS again.
@Sutherlands said:
@JvdL said:
@Sutherlands said: [Not] if you get past the fat-finger that was mentioned earlier in the thread
Is it worse than failure? Not really.
It's a WTF alright.
All that null checking is unnecessary. It's OK to throw a NullPointerException
(*) Useful if the implementation would have been cleaner and less repetitive.
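A sketch of the point with a made-up class; Comparable's contract explicitly permits compareTo to throw NullPointerException for a null argument, so field-by-field null checks add nothing:

import java.util.Objects;

class Version implements Comparable<Version> {
    final int major, minor;

    Version(int major, int minor) { this.major = major; this.minor = minor; }

    @Override
    public int compareTo(Version other) {
        Objects.requireNonNull(other);  // a null argument is a caller bug: throw, don't mask it
        if (major != other.major) return Integer.compare(major, other.major);
        return Integer.compare(minor, other.minor);
    }
}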
@Sutherlands said:
Is it worse than failure? Not really. As far as I can see, it's a working implementation of Comparable.
ALTER TABLE bigass_form ADD [null = null; drop table bigass_form; -- user_id] int
We use escrow4all (Dutch) as software vendor (Dutch/Spanish) with international clients (mainly US). Not sure if they do business in the UK. They prepare the legalese for a reasonable price and have various levels of source code verification. It takes about 10 minutes of time every few months to make a source code deposit (zip up the project directory and upload it via secure FTP).
@Xyro said:
...assuming it has a perfectly adequate class scheme to start with
Multiply that by a mildly complex layout and you're back to the problem stated by dhromed.
@dhromed said:
@JvdL said:
.large { font-size: 20px }
You don't get to talk about CSS anymore.
Request denied.
Request denied.
My point was not about pixels or font sizes in particular but this: different classes can style different aspects of the layout. By combining several such aspect classes on one element, you can easily modify a particular aspect (like the strong color) without changing anything in the HTML and by changing just one line in the CSS.
@dhromed said:
@Monomelodies said: Oh, and if you don't want to specify your colours all over the place, you can always group the declarations, as in "p, a, em, strong, div { color: red; }". CSS supports that perfectly, just not in the way that you'd LIKE it to work.
That is true, but it doesn't solve the problem. It only moves it. Instead of hunting down colors when you want to modify them, you're hunting down selectors when you want to change an element.
I only go that route, to great effect, when a website has some manner of skins or color schemes. Then one separates the color definitions from the layout bits into a separate stylesheet with the name of the scheme.
<style>
.strong { color: red; }
.large { font-size: 20px }
</style>
<p class="large strong">Zero maintenance</p>
@vic said:
@Jaime said:
However, if the hard drive is considered along with the device that formats it, the system will likely gain mass
From an information theory point of view, it might in fact lose mass. Indeed, a formatted disk contains almost no information (only a few bits), and so it will have higher entropy, therefore lower internal energy, therefore lower mass.
You overlooked the minus sign. A string of identical bits (for example, a disk with all bits zero) has the lowest possible information entropy. A completely random sequence of bits has the highest possible information entropy. Reference
This is similar to physical entropy. Other than that and some mathematical analogies, information entropy cannot be used to do energy bookkeeping, so your argument is not applicable to the mass discussion anyway.
@rad131304 said:
@JvdL said: Pseudophysics
Possibly, but you have not proven this with your above argument.
It is obvious that my post was in jest and that I'm not an expert in physics but since you have taken it seriously let me back it up:
1: Do you need to add energy to a disk to magnetize it in a particular form? Answer: yes. Does it lose energy when it demagnetizes? Yes. [Reference]
2: Energy has mass (E=mc²).
1+2=3: A magnetized disk has more mass than a non-magnetized disk.
Whether a formatted disk is "more magnetized" than a non-formatted one is debatable, but I believe a better informed entropologist would confirm it.
@havokk said:
@citking said:
"Does a hardrive weigh more when it gets formatted?"I've had that question and it led into an interesting discussion of
metaphysics.
A formatted disk with every bit set to 0 has all its magnetic poles aligned. To make this alignment, you have to apply energy. Over time, the hard drive will gradually lose its magnetization until it's all "white noise": the entropy increases. In this process, the drive will release its potential magnetic energy in the form of heat. Therefore, formatting adds a bit of potential energy to the drive, and as every energy field carries a certain amount of mass (E=mc²), a formatted disk has more mass than a random disk.
That said, a formatted drive that previously contained the complete works of Shakespeare may have lost mass...
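For scale, a back-of-the-envelope figure (the one joule of stored magnetic energy is an invented round number, purely illustrative):
Δm = E/c² ≈ 1 J / (3×10⁸ m/s)² ≈ 1.1×10⁻¹⁷ kg
So even a generously magnetized disk gains far less mass than a fingerprint would add.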
Your comparison a < b returns a random value: -1, 0 or 1, evenly weighted.
Most sort algorithms rely on the fact that comparison is symmetric, transitive and of course deterministic. In other words, a < b should imply b > a; a < b < c should imply a < c; etc. This is not true of your sort function. So it is not strange that the sort returns an unexpected result. Some algorithms might even fail to terminate.
If the algorithm is optimized for quick returns, the result can be explained. Your sort function has a 1/3 chance of returning equality. So the sort algorithm will find for 1/3 of its comparisons that the elements are equal and therefore there is no need to swap or pivot. The 1/3 that compare as less also remain in place.
For example, quicksort will in each iteration swap only 1/3 of the partition, instead of 1/2 in a normal sort.
This means an element that starts in the top half has a 2/3 probability of staying in the top half. Taking the limit as this converges gives a distribution bias like the one above.
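A quick hand-rolled experiment along these lines (a home-grown quicksort, because java.util's TimSort may throw "Comparison method violates its general contract!" on such a comparator; all names are mine):

import java.util.Random;

public class RandomSortBias {
    static final Random rnd = new Random();

    static int randomCompare() { return rnd.nextInt(3) - 1; } // -1, 0, 1 evenly weighted

    static void quicksort(int[] a, int lo, int hi) {
        if (lo >= hi) return;
        int i = lo + 1;                                 // Lomuto partition, pivot at a[lo]
        for (int j = lo + 1; j <= hi; j++)
            if (randomCompare() < 0) {                  // "a[j] < pivot" decided by coin toss
                int t = a[i]; a[i] = a[j]; a[j] = t;
                i++;
            }
        int t = a[lo]; a[lo] = a[i - 1]; a[i - 1] = t;
        quicksort(a, lo, i - 2);
        quicksort(a, i, hi);
    }

    public static void main(String[] args) {
        int n = 10, runs = 100_000;
        long[] positionOfZero = new long[n];            // where element 0 lands each run
        for (int r = 0; r < runs; r++) {
            int[] a = new int[n];
            for (int k = 0; k < n; k++) a[k] = k;
            quicksort(a, 0, n - 1);
            for (int k = 0; k < n; k++) if (a[k] == 0) positionOfZero[k]++;
        }
        for (int k = 0; k < n; k++)                     // a uniform shuffle would print 0.100 everywhere
            System.out.printf("pos %d: %.3f%n", k, (double) positionOfZero[k] / runs);
    }
}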
@bjolling said:
"isa" and "hasa" are both easily represented in a DB schema. [...] First hit on google: http://mosesofegypt.net/post/Inheritance-and-Associations-with-Entity-Framework-Part-1.aspx
@JvdL said:
Did you read that? It is a clear demonstration that you cannot accurately represent an isa in a database [...]
@bjolling said:
Not thoroughly enough apparently. I only read it until I saw him demonstrating the ability. I'm not denying you can create bad models in any tool and putting different entities in the same table is a bad idea. Now take his model:
[snip]
Done properly, this would generate 4 tables:
- Person => PersonID, FirstName, LastName, PersonCategory (which is not needed)
- Person_Instructor => PersonID, HireDate
- Person_Student => PersonID, EnrollmentDate
- Person_Administrator => PersonID,AdminDate
The "PersonID" appears in all 4 tables so you can JOIN.
Wrong again: this database model allows one person record to be related to an instructor as well as a student. That may be an accurate representation of reality: a teaching assistant tends to be an instructor as well as a student. But it is not an isa relationship. In your OO model, instructor and student are two classes inheriting person, and unless you use multiple inheritance, students and instructors are distinct objects and can therefore not be the same person.
@bjolling said:
This database structure might not be perfect but it is good enough for my line of business application.
If I got paid every time I heard this I would be a millionaire. No wait. I am getting paid to clean up shit like this and I am a millionaire. Please continue.
@bjolling said:
So how would you have modeled this in the DB in such a way that I would not have been able to use OO fully in my code
There is no short answer to this question but I will give a few hints
@bjolling said:
Seeing your ad hominem attack I can only assume you're the one who's trolling. "isa" and "hasa" are both easily represented in a DB schema. I know because I have done it. Why don't you read up on EF4.0 a bit. Maybe Hibernate cannot do it but that is hardly my fault
First hit on google
http://mosesofegypt.net/post/Inheritance-and-Associations-with-Entity-Framework-Part-1.aspx
Did you read that? It is a clear demonstration that you cannot accurately represent an isa in a database other than by construing bad data models from a DB perspective. Which is precisely my argument: using ORM you either create havoc in the DB or you create shallow objects in OO.
This Moses guy believes he did a good job by stuffing students and employees in one table, with one set of columns for one type and another set of columns for the other. A constraint like "every employee must have a hire date" that a payroll system would require is out of the question. Not to mention that in the real world, he'd never get past QA for creating a redundant DB for his applet. Way to go!
@bjolling said:
[...] OO is very easy to map onto a database. A "Person" table represents a "Person" object. A "Person_Employee" table represents an Employee object inheriting from Person. [...]
Are you trolling or were you sleeping when they explained the difference between isa and hasa?
@dhromed said:
@Salami said:
Make a list of all 2 letter combinations that are impossible in any language. QW, YT, PD, etc.
Ytterbium. (any language)
Opdonderen (Dutch)
QW might be correct, though, but I think I have sufficiently demonstrated the infeasibility of your cunning plan.
Qwerty is an English word, according to Merriam-Webster
@morbiuswilters said:
I read the title and closed the page in disgust. [...] That's not a citation. Knowing Atwood, it's a poorly-written opinion piece on a technology he discovered over the weekend and doesn't yet understand.
Actually, Atwood's blog entry is the five minute management summary of a more thorough piece by Ted Neward that he links to. It has the same poorly chosen title and starts with a lengthy and needless opinion piece on Vietnam. According to the author, that tragedy was partly due to successive governments of both colors flip-flopping on the issue.
True or not, the analogy is that flip-flopping on OO or DB leads you to missing the benefits of one or the other, or doing double work, and usually all of that. And that, putting politics aside, is something I agree with.
More citations:
@bstorer said:
@JvdL said:
NeonSnake, what's your problem? Object-relational mapping is a fundamentally wrong concept. I would gladly get rid of it and have done so in every project where I had enough weight.I'm curious whether you're an idiot. What problem do you have with ORM, and with what do you replace it?
[-1 for quoting wiki]: "ORM is a technique for converting data between incompatible type systems in RDBMS and OO programming languages".
The operative phrase is "incompatible type systems". When designing an OO application of any complexity, you will leverage OO, something which is hard to represent in table structures. If you separate data and functionality too much, it may fit in a table but you will lose that leverage. Conversely, you may want to build an OO database on top of a legacy RDB. With ORM you end up with shallow objects reflecting the PKs of your database. The problem is that either way you end up with a system that is the lowest common denominator of the type systems, with neither the ACIDity of the RDBMS nor the flexibility of OO. Now "fundamentally wrong" doesn't mean it can't work in simple 1980's style forms applications, but who wants to use those.
Starting with EOF on NeXT as early as 1994, I was an early and happy adopter of ORM - it shielded my beautiful objects from those ugly tables. Moved to TopLink on Java in 1998 and later had brief stints with Hibernate. The latter after having already reached the conclusion: when in Rome do as the Romans, and when in a database, speak SQL. Manipulate the data where it belongs and only transfer to and use in the application what you need. Which is often a dedicated single purpose object representing an aggregated data view. ORM will merely introduce many superfluous lines of code (*) compared to getting that straight from anonymous JDBC, ADODB or .NET data objects. And ORM never obviated the need to implement the data constraints in the DB: sooner rather than later, other third party applications will share that data.
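A sketch of that "dedicated single purpose object" approach over plain JDBC; the table, columns and class names are invented for illustration:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

// dedicated, read-only view object: exactly the aggregate the application needs, nothing more
record RegionRevenue(String region, double total) {}

class RevenueReport {
    static List<RegionRevenue> load(Connection con) throws SQLException {
        // the aggregation happens where the data lives, in SQL
        String sql = "SELECT region, SUM(amount) AS total FROM orders GROUP BY region";
        try (PreparedStatement ps = con.prepareStatement(sql);
             ResultSet rs = ps.executeQuery()) {
            List<RegionRevenue> out = new ArrayList<>();
            while (rs.next())
                out.add(new RegionRevenue(rs.getString("region"), rs.getDouble("total")));
            return out;
        }
    }
}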
Refactoring ORM in various projects - including implementing business logic in SQL - has improved runtime performance and developer output considerably. It does require more than a passing knowledge of SQL though.
(*) XML declarations generated by a graphical OR mapper are, in the end, lines of code, with their maintenance and all.
@morbiuswilters said:
citation needed
NeonSnake, what's your problem? Object-relational mapping is a fundamentally wrong concept. I would gladly get rid of it and have done so in every project where I had enough weight.
Probably it's your screen grabber but the Linux antialiasing looks much better to me in all respects.
Also nice to let us whois your IP address
@Mithious said:
Surely one of the major advantages of the virtual delete is that if you do accidentally delete something important you can get it back again
Not in this scenario:
1: Create order, add order line 1 and order line2.
2: Intentionally delete order line 2.
3: Accidentally delete order and its cascaded order line 1.
4: Undo the delete of the order and cascade to order line 1 and order line 2
5: Another customer goes to the competition.
Because of such complexities, I believe the owners of this database have never used their audit trail to undelete something.
@Mithious said:
All database access goes through a common library that is 'aware' of the database structure
Famous last words. Twenty years from now the database is alive and kicking but the library is dead. Do the cascades in triggers.
@Auction_God said:
Simple: Create views (Matrialized or "normal") that reflect the undeleted rows. Use those in all your reporting queries.
Your suggestion has indeed been implemented. For every table, they have a vw_table that does this. The view for orders is like the whole SELECT statement above, but without the WHERE something clause! This view traverses almost the entire database, which has about 50 million records total.
A basic query like SELECT stuff FROM vw_orders JOIN vw_customers ON ... WHERE customer_zip = 'AB 12345' only returns a handful of rows but has horrendous performance (minutes). No surprise these neat vw_tables are largely left unused. The alternative that works directly on the tables completes in milliseconds.
Fortunately I don't have to work with this database except extract some data, do some calculations and write stuff back in a side application.
In a project, some smart ass adopted the principle that records should not be physically deleted but leave an audit trail instead. Each table has a "deleted" field, which is a time stamp. When the field is NULL, the record is real, but otherwise it is considered deleted. Unfortunately, they didn't bother to implement cascades. The database is full of records that refer to others that are virtually deleted; by inference they must be considered deleted as well, which means they should not show up in queries.
Every query on that database has to take this into account. So instead of simply
SELECT stuff FROM orders WHERE something
It goes like this:
SELECT stuff FROM orders o
JOIN customers c on ...
JOIN products p on ...
JOIN warehouses w on ...
JOIN shipments s on ..
JOIN distributors d on ...
JOIN etcetera e on ...
WHERE something
AND o.deleted is null and c.deleted is null and p.deleted is null and w.deleted is null and s.deleted is null and d.deleted is null and e.deleted is null
There are thousands of such queries working their way up the reference tree. Obviously, this is error prone and many of them miss an important reference, leading to completely unreliable reports.
The suggestion to at least cascade the deletes throughout the database every once in a while wasn't appreciated, because they're afraid they might lose something important.
CS screwed me over twice. Apart from the broken link, my intended reply was: here's why you should not bother to validate in JS, because 999 times out of 1000 the JS validation will be incomplete, and you're bound to one day piss off a customer who was just about to make a million dollar purchase on your web site.
@Zecc said:
Yeah, and why bother with JavaScript validation when you have to validate server-side anyway?
EDIT: Ok, fine. This is probably not a case where we can benefit from early validation because we are already server-side. But maybe, just maybe, we can save some bandwidth by not trying to send messages to "obviously invalid" addresses.
@asuffield said:
The whole concept is braindamaged anyway.
If you want to validate that an email address is correct, send it a mail with a validation link in it, and tell the user to go follow it. If you aren't going to bother, then why waste time on partial tests that still don't tell you whether it's the right email address? Either you care about having this person's address (in which case you need to validate it properly), or you don't (in which case you shouldn't be bothering).
@dhromed said:
Ok, so you're a low-level character, and need matching low-level armor.
Your armor should match who you're playing against, not how big you are yourself.
Case example. I run a low-level one-man software development shop. Making a decent living but very small fry for my customers. The legal entity is a Spanish SL. The software is sold through a Netherlands BV partially owned by the SL. The project implementations are done by a Colorado Inc (S-Corporation) of which the Dutch BV is the sole proprietor. This is not a paranoid web of protections but the inevitable red-tape consequence of a handful of individual partners operating globally.
One of our customers insists on including a third party patent liability clause in the contract, making the Dutch BV liable for any infringement, like they do with their other software suppliers (IBM and SAP). This is a supply chain optimization project: there are millions of patents in that area; impossible to know them all and as software patents go, some are really trivial.
Over the course of several years, the customer can save millions using our software (at least, that's what our sales guys said). If it turns out a patent was infringed, the patent holder can go to the customer to get a share of those millions, and the customer will claim it from the BV, which will go bankrupt.
But guess who wrote that software? Good faith or ignorance is no defense for patent infringement. We're all happy campers right now, but when a lot of money is at stake, the customer or my partners or the patent holders will look at me for indemnification. Limited liabilities won't protect me there. In fact, the SL is protected because it is only a capital share holder, but personally I may be liable.
Bottom line: as Alex said, be damn sure to cover your ass in contracts, regardless of your legal structure. In our case, we're trying to cap the patent liability - it's still being negotiated.