When is it the right time to rewrite code from scratch?



  • Goddamned, why are we talking about... ugh another potentially interesting thread muted.



  • How are you going to type check Haskell function calls in your C program?



  • What, strings and integers don't exist in Haskell?



  • @Captain said:

    Haskell function calls

    The library you are calling into states what it wants and double checks it. Though from what you are saying it sounds like Haskell can't due to its "superior" type system.



  • The library you are calling into states what it wants and double checks it.

    That's called "dynamic typing." It's fundamentally different from static typing. The whole point of static typing is that you can avoid the dynamic typing bottlenecks of having to check each value.


  • FoxDev

    @Captain said:

    How are you going to type check Haskell function calls in your C program?

    How are you going to type check C++ function calls in your C program?
    How are you going to type check C# function calls in your C program?
    How are you going to type check VB function calls in your C program?
    How are you going to type check Java function calls in your C program?
    How are you going to type check Python function calls in your C program?
    How are you going to type check Ruby function calls in your C program?
    How are you going to type check JavaScript function calls in your C program?
    How are you going to type check Lua function calls in your C program?
    How are you going to type check PHP function calls in your C program?

    Exactly the same way you type check C function calls in your C program.
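
    For the curious, here's a minimal, hedged sketch of how that works in practice with GHC's FFI (the module and function names below are made up). A `foreign export` gives the function a plain C-level type, and GHC emits a `Mathlib_stub.h` header with the matching prototype, so the C compiler checks calls against it like any other C declaration.

    ```haskell
    {-# LANGUAGE ForeignFunctionInterface #-}
    -- Mathlib.hs (hypothetical module name)
    module Mathlib where

    import Foreign.C.Types (CInt)

    hs_add :: CInt -> CInt -> CInt
    hs_add x y = x + y

    -- GHC also generates Mathlib_stub.h, containing a C prototype for hs_add.
    -- A C caller #includes that header (and calls hs_init() first), and the
    -- C compiler type checks the call exactly as it would a call to a C function.
    foreign export ccall hs_add :: CInt -> CInt -> CInt
    ```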


  • FoxDev

    @Captain said:

    The whole point of static typing is that you can avoid the dynamic typing bottlenecks of having to check each value.

    Oh.

    You're one of those people.



  • HeyHaskellPleaseSendSignal(3) will probably work.



  • @RaceProUK said:

    How are you going to type check C++ function calls in your C program?
    How are you going to type check C# function calls in your C program?
    How are you going to type check VB function calls in your C program?
    How are you going to type check Java function calls in your C program?
    How are you going to type check Python function calls in your C program?
    How are you going to type check Ruby function calls in your C program?
    How are you going to type check JavaScript function calls in your C program?
    How are you going to type check Lua function calls in your C program?
    How are you going to type check PHP function calls in your C program?

    uhhhh... you don't?

    Define "interop".



  • Inside of your Haskell library, that is fine, but the access points used by others need to be able to support dynamic typing as well. My point is that you've been saying that it can't, thus it is a piece of shit.


  • Java Dev

    I.e., you ignore any aspects of the type system that aren't supported by C, like any object lifetime controls.



  • @Magus said:

    HeyHaskellPleaseSendSignal(3) will probably work.

    What's the return type? IO ()?



  • Who cares? That's not what you're sending to Haskell. You're just sending it an integer. If that's impossible, Haskell does not exist.



  • @Magus said:

    Common Lisp. Oh, you don't count CL as an OO language, despite having one of the most powerful type systems ever invented? This is yet another orthogonal concern.

    CL is a weirdo (in a good way!) and I'm never sure how to describe it... but no, I wouldn't call it "an OO language" despite CLOS.



  • @Captain said:

    The whole point of static typing is that you can avoid the dynamic typing bottlenecks of having to check each value.

    I think 95% of the point of static typing (especially when considering Haskell) is that some properties are proved automatically by the compiler and are guaranteed to hold across all executions... eliminating dynamic overhead is a side benefit.

    Of course, that doesn't change the fundamental picture of "dynamic typing defeats much of the purpose of using Haskell."
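
    Since this keeps coming up, here's a tiny, hedged Haskell sketch (not from the thread) of what "proved once by the compiler, holds on every execution" means: the type of safeHead makes the empty-list case unrepresentable, so no per-call run-time check is needed and no run-time failure is possible.

    ```haskell
    import Data.List.NonEmpty (NonEmpty ((:|)))
    import qualified Data.List.NonEmpty as NE

    -- Total by construction: a NonEmpty value always has a first element,
    -- so the compiler has already ruled out the "head of empty list" error.
    safeHead :: NonEmpty a -> a
    safeHead = NE.head

    main :: IO ()
    main = do
      print (safeHead (1 :| [2, 3]))   -- prints 1
      -- print (safeHead [])           -- rejected at compile time: [] is not a NonEmpty
    ```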



  • I think 95% of the point of static typing (especially when considering Haskell) is that some properties are proved automatically by the compiler and are guaranteed to hold across all executions... eliminating dynamic overhead is a side benefit.

    Yes, that's fair enough. As a practical matter, I see the two as equivalent.

    In principle, one could have a dynamic, but strong type system. (As far as I know, nobody has done this. Is C# strong and dynamic?) In this case, the runtime would have to prove properties at run-time. The point being -- the same theorems could be proved at run-time. But doing that doesn't eliminate complexity -- it just moves it to the runtime system, at the cost of slowing down execution.
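
    As a hedged aside (this isn't claiming any whole language works this way): Haskell's own Data.Dynamic, from base, shows what strong-but-dynamic checking looks like when bolted onto a static language. Every Dynamic carries a run-time type representation, and extraction is checked per value, which is exactly the run-time cost being discussed.

    ```haskell
    import Data.Dynamic (Dynamic, toDyn, fromDynamic)

    -- A heterogeneous bag of values, each tagged with its run-time type.
    payloads :: [Dynamic]
    payloads = [toDyn (42 :: Int), toDyn "hello"]

    main :: IO ()
    main =
      -- Each extraction is checked at run time: the Int comes back as Just 42,
      -- the String is refused as Nothing rather than misinterpreted.
      print (map (fromDynamic :: Dynamic -> Maybe Int) payloads)
    ```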


  • FoxDev

    @Captain said:

    Is C# strong and dynamic?

    Yes; the compiler includes a lot of metadata about dynamic call sites.

    …well, OK, to be fair it's statically-typed with dynamic typing added on; it's not a true dynamic language.



  • @Captain said:

    In principle, one could have a dynamic, but strong type system. (As far as I know, nobody has done this. Is C# strong and dynamic?) In this case, the runtime would have to prove properties at run-time. The point being -- the same theorems could be proved at run-time. But doing that doesn't eliminate complexity -- it just moves it to the runtime system, at the cost of slowing down execution.

    Python and the Lisps are the closest I've heard of -- although neither of them has the theorem-proving ability of Haskell. (Nor does C# for that matter...)



  • @powerlord said:

    I would love nothing more than to do a complete rewrite on the ~~web application~~ forum software I'm currently ~~working~~ commenting on (using raw Java Servlets and JSP, manually writing JDBC database calls, etc...) but I'm not actually allowed to do that.

    FTFM



  • @Captain said:

    In principle, one could have a dynamic, but strong type system. (As far as I know, nobody has done this. Is C# strong and dynamic?) In this case, the runtime would have to prove properties at run-time. The point being -- the same theorems could be proved at run-time. But doing that doesn't eliminate complexity -- it just moves it to the runtime system, at the cost of slowing down execution.

    It also makes it... well, I was going to say "useless" for helping correctness, but my opinion of JavaScript and other weakly typed languages betrays that opinion. But the point is to get that as a proof that things can't go wrong in certain ways, because detection only helps insofar as you are able to, and actually do, include code to recover from violations. Which I trust more than -- but not much more than -- the ability to avoid them in the first place.

    (Now, this is only somewhat a value judgement. I am too big of a Python fan and user to be able to honestly try to wear the "static typing fan" hat. But I do think that there is a world of difference between static and dynamic typing; that's my main point. And to me, like I said, almost all of the attraction to statically-typed languages is the static safety.)



  • @Bulb said:

    I am used to it.

    I am just saying it would be great if there was something that would make it easier. And to be honest I have not seen many attempts at trying to do something about it generally.

    True, we have to tweak and adapt. And I know many people who would rewrite horrible code into good code if given half a chance.

    Business isn't generally keen to do so when it comes to their precious "IT Systems". I have seen it many times over: an inherited code base so bad that maintaining it almost seems more taxing than rewriting it.

    Business does not care about the most efficient code. Business cares about return on investment (and a pretty UI). I have seen this both as a consultant and an in-house developer. The problem is that money has to be spent (real or otherwise), and a ton of it in the case of a rewrite. Essentially, almost the cost of the project (if we're going for a full rewrite). Even if you sweeten the deal with a few extra features, business is often too myopic to see the larger picture of "less support required, more time for new features, better performance, increased support for upgrades (if you're working with products)" etc.

    This has been the bane of my professional existence for a very long time. And the sad thing is, I don't see how this ever improves unless things change dramatically in the business world...

    I recently inherited an overly complex system, and every time I dig through a different portion of code, I keep wondering why the original developers always took the most complex (read "least maintainable") route. The problem is that the system went live quite recently, so I can kiss any prospect of a rewrite goodbye for the foreseeable future...

    Okay, early morning rant over. Soon I hope to post something fun / constructive / less rant-like 😛



  • @AgentDenton said:

    This has been the bane of my professional existence for a very long time. And the sad thing is, I don't see how this ever improves unless things change dramatically in the business world...

    The problem is that most of the time, the business world is actually correct. Scheduling a rewrite, and to a large extent even a larger refactoring, is an undertaking with high cost, high risk and uncertain return. And business does not like risk and uncertainty.

    The way you as a programmer can, and should, deal with it is that when you need to make a change and come across an ugly bit that is getting in your way, then you include the effort to replace that bit with something saner in the estimate of the task.

    So from the business's point of view, you are working on the feature or bugfix they want. And you explain that it is taking longer because there is a bug deep in the logic with many adverse effects on what you need to achieve. And it won't be a lie.

    It takes some decency and judgement to select an appropriate amount of things to rewrite so that you still make progress on the business-visible things and at the same time continuously improve maintainability as per the boy scout rule. But it's the most practical approach in most situations.



  • @Bulb said:

    The way you as a programmer can, and should, deal with it is that when you need to make a change and come across an ugly bit that is getting in your way, then you include the effort to replace that bit with something saner in the estimate of the task.

    True, and this is how I approach most of my estimations and additions. But sometimes a problem is so fundamentally low-level in the application's architecture that the only solution is to rewrite certain core portions of it, which has a domino effect on everything further up the chain.

    I do like the programmer boy scout rule, though 😄



  • @RaceProUK said:

    a language that doesn't use IEEE754 could have difficulties on the CLR.

    Or one that uses it properly (there are none that I am aware of, but the point remains).



  • @Buddy said:

    Or one that uses it properly (there are none that I am aware of, but the point remains).

    Yeah, does the CLR support dynamic changes to the rounding mode, or the full use of the IEEE754 mandated FP traps?


  • FoxDev

    Not that I'm aware of; it's not really designed for ultra-high-accuracy floating point stuff. And let's face it, if you need that level of accuracy, then you should be using something designed for that sort of accuracy.

    But what about financial stuff? Well, that's what Decimal is for.
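
    A hedged Haskell analogue of the "use a decimal type for money" point (nothing CLR-specific): binary floating point can't represent 0.1 or 0.2 exactly, while a fixed-point decimal type keeps exact cents.

    ```haskell
    import Data.Fixed (Centi)   -- fixed-point with two decimal places, from base

    main :: IO ()
    main = do
      print (0.1 + 0.2 :: Double)   -- 0.30000000000000004
      print (0.10 + 0.20 :: Centi)  -- 0.30, exactly
    ```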



  • No, but it does support launching DirectX, which by default changes the rounding mode for the thread to single-precision round-to-nearest, and provides no way to change it back.
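
    To make concrete why silently dropping to single precision is a problem, here's a small, hedged illustration (plain Haskell, nothing DirectX- or CLR-specific): the same naive accumulation drifts far more in Float than in Double.

    ```haskell
    import Data.List (foldl')

    -- Naively accumulate n copies of x with a strict left fold.
    naiveSum :: Num a => Int -> a -> a
    naiveSum n x = foldl' (+) 0 (replicate n x)

    main :: IO ()
    main = do
      print (naiveSum 1000000 (0.1 :: Float))   -- drifts visibly away from 100000
      print (naiveSum 1000000 (0.1 :: Double))  -- within a few millionths of 100000
    ```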



  • @RaceProUK said:

    Not that I'm aware of; it's not really designed for ultra-high-accuracy floating point stuff. And let's face it, if you need that level of accuracy, then you should be using something designed for that sort of accuracy.

    Here's the rub: lack of support for directed roundings and FP traps (as well as extended-double, even if emulated in software) hurts because it means that these capabilities are going to become scarcer as time goes on and more and more software gets forced onto standardized platforms by business decisions completely decoupled from technical realities. Furthermore, subjecting naive but well-meaning programmers to needless numerical misadventure is bad, mmkay?

    See http://www.cs.berkeley.edu/~wkahan/JAVAhurt.pdf for Java's version of this problem.

    @RaceProUK said:

    But what about financial stuff? Well, that's what Decimal is for.

    Agreed here -- IEEE 754-2008 incorporates support for Densely Packed Decimal floating point partly for this reason.

    @Buddy said:

    No, but it does support launching DirectX, which by default changes the rounding mode for the thread to single-precision round-to-nearest, and provides no way to change it back.

    bludgeons the DirectX devs with a numerical analysis textbook


  • BINNED

    @powerlord said:

    So, when exactly is there more value in rewriting something instead of attempting to modify what's already there?

    You know the latter can result in the former, right? You can modify the hell out of the code but never rewrite the entire thing. This ensures you will not lose functionality but gradually increase the quality of the code. Of course, this assumes the original code is object oriented and modular enough; otherwise, yes, it is time to change that ancient relic.



  • The point is to never get too far from shipping code.

    If it takes five years to rewrite from scratch, that's five years of no revenue.

    If you do it in six-month pieces, sure, it takes ten years, but that's ten years of revenue.


  • Garbage Person

    Impossible. I have to post more. We just moved to a new (to us, not to the company) building and I have a bunch of idiocy surrounding that to share.


  • Discourse touched me in a no-no place

    @Weng said:

    We just moved to a new (to us, not to the company) building and I have a bunch of idiocy surrounding that to share.

    We (company as a whole) have the head office moving to a new (to the company) building in a couple of months.

    I'm already aware of one potential idiotic proposal - put Development and QA together at one end of the (long, thin office plan*) ~~building~~ floor, and the Test Lab, which both are the main users of...

    At the fucking other!

    Edit: Scratch that - I've just remembered another idiotic proposal - put the company's on-site server-rack & patch-panels et alia (which the sysadmins control, and are currently in their own locked room) in the Test Lab.


    * Think:

    But more like double width and quintuple (if not more) length.

