Because not enough of the people here have their Lips on my weenie.


  • BINNED

    @Kian said:

    Does Lisp and friends offer me the ability to set the layout in memory of a structure, and arrange a bunch contiguously in memory? My understanding is that it doesn't, and the basic native data structure, the linked list, is awful in the systems I work with.

    Your knowledge of Lisp is based on a 70s time pod. If you're really interested in anything other than proving that you're not missing out on anything by not using it, why don't you read Practical Common Lisp (link below) and come back when you're done?



  • @antiquarian said:

    If you're really interested in anything other than proving that you're not missing out on anything by not using it, why don't you read Practical Common Lisp (link below) and come back when you're done?

    See, I can answer that question in C or C++ with a simple "yes" and a code snippet as an example:

    struct One {
      int i;
      float f;
    };
    // In memory, i goes before f. Since they both (probably) have the same alignment,
    // I know there's no padding and if I put 10 in a row,
    // the size of the whole block is 10*(sizeof(int)+sizeof(float)).
    struct Two {
      float f;
      int i;
    };
    // In this struct, the elements are flipped: f goes first, i goes second.
    // The struct will have the same size.
    

    If the same answer for Lisp requires that I read a whole book, then it's probably not that easy to accomplish the same task.

    Also, you keep using absolute statements. Like if I don't want to use something it must be because I can't or because I think it's worthless. I do understand it's valuable, and I would like it if I could get new language features without waiting for the compiler team to finish them. But getting that kind of flexibility into the language comes at a cost.
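    The claims in that snippet can be checked mechanically. A minimal sketch, assuming a typical platform where int and float are both 4 bytes with the same alignment:

    ```c
    #include <stdio.h>
    #include <stddef.h>

    struct One { int i; float f; };
    struct Two { float f; int i; };

    int main(void) {
        /* i is laid out before f in One; the order is flipped in Two. */
        printf("One: offsetof i = %zu, offsetof f = %zu\n",
               offsetof(struct One, i), offsetof(struct One, f));
        printf("Two: offsetof f = %zu, offsetof i = %zu\n",
               offsetof(struct Two, f), offsetof(struct Two, i));

        /* With matching alignment there's no padding, so both structs have
           the same size, and ten in a row occupy one contiguous block. */
        struct One block[10];
        printf("sizeof(One) = %zu, ten in a row = %zu\n",
               sizeof(struct One), sizeof block);
        return 0;
    }
    ```

    On an exotic platform with different alignments for int and float, the padding assumption (and the offsets printed) could differ, which is why the comments hedge with "probably".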


  • BINNED

    @Kian said:

    If the same answer for Lisp requires that I read a whole book, then it's probably not that easy to accomplish the same task.

    Well, are you in fact interested in Common Lisp or do you just want to prove that you're not missing out on anything? There's no point in looking through the book for a snippet if it's the latter.



  • I'm curious. I've gone over tutorials for it, but none of them mention anything about the representation in memory of the program. Although my interest level is decreasing the longer this conversation goes.

    Seriously, just a no/yes would be enough. Although, if someone familiar with the language needs to look in a book for an example of how to do it, it's probably not straightforward to do.


  • BINNED

    @Kian said:

    Although, if someone familiar with the language needs to look in a book for an example of how to do it, it's probably not straightforward to do.

    I'm familiar with the language but I code in SQL for a living and haven't done any programming at home in years, so I'm rusty. I do remember something in the book about memory mapping, but I never actually tried it. Common Lisp does have arrays and structures; the linked list hasn't been the only option for a long time. One of the programs in the book reads mp3 files, so I'm sure what you're looking for is possible.



  • @Kian said:

    [lisp]'s just not very practical.

    Does Lisp and friends offer me the ability to set the layout in memory of a structure, and arrange a bunch contiguously in memory? My understanding is that it doesn't, and the basic native data structure, the linked list, is awful in the systems I work with.

    OK. So it's impractical for your use case. Is it impractical for 50% of programmers? Even 5%? Is Clojure impractical (50%+ of use cases) or are you just talking about ANSI Common Lisp?

    @Kian said:

    If they're great for library maintainers, good for them, but most people aren't library maintainers.

    It allows library writers to better serve regular users of the language. Which is good for regular users. I remember the same debate about things like higher kinds in Scala. Devs building business software aren't going to use them very often, but they are better off for having them there. This parallels what blakey said about not caring about lambdas per se, but about what they enable. Same for setting memory layout.



  • @Kian said:

    What if someone decides to write their own macro to do this? Or if two different libraries implement some other functionality that the standard doesn't?

    What if someone writes a function string_copy(const char *src, char *dest)?



  • @Kian said:

    Does Lisp and friends offer me the ability to set the layout in memory of a structure, and arrange a bunch contiguously in memory?

    Here's a thought experiment: is it possible to write a TCP (or UDP) implementation in Lisp?


  • BINNED

    @tar said:

    Here's a thought experiment: is it possible to write a TCP (or UDP) implementation in Lisp?

    It appears the answer is yes.



    There are even web servers and such. It's rather surprising, sometimes, since all of the examples are on websites with baby blue backgrounds, table layouts, bad fonts, and other early-90s web design choices. The HyperSpec doesn't even have transparency on its logo.

    And yet apparently they do have things that work, I guess. I've always wanted to put in the effort and learn enough Lisp that I get it and can go on my way a better programmer, but all the websites about it seem to be designed for terminal web browsers.


  • BINNED

    @Magus said:

    And yet apparently they do have things that work I guess. I've always wanted to put in the effort and learn enough lisp that i get it and can go on my way better at programming, but all websites about it seem to be designed for terminal web browsers.

    The GUI situation is messy unless you're either coding a web application or using one of the commercial Common Lisps.



  • @tar said:

    Here's a thought experiment: is it possible to write a TCP (or UDP) implementation in Lisp?

    Yes, but you can accomplish those tasks (and reading an mp3 too, for example) by reading and writing to and from "streams" (file, memory, doesn't matter). It's no more complex than string manipulation. You can do it in any language that's not completely stupid.

    The question is aimed more at the code that is running in order to achieve that. There's the task you want to accomplish, and the code that is running to accomplish that task. Lisp makes it easy to define what you want to do, but not how you want to do it. In C-like languages, how and what are intertwined. It makes describing the what harder, but it gives you more power to describe the how.

    Functional programmers will be quick to point out that the how should be left to machines, and that programmer productivity is improved by letting programmers focus on the what. But if you are making a game like Civ, for example, having control over the how can be the difference between a late-game turn taking seconds or minutes.

    Or for a webserver, between a request taking 1 ms and 0.1 ms. Which can mean the difference between one maxed out server and ten maxed out servers.
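    The "how" being argued for is concrete. A minimal sketch, with a hypothetical Unit struct invented for illustration: with layout control, five hundred units sit in one contiguous allocation that the CPU can stream through, rather than a pointer-chased list.

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical game unit; the fields are invented for illustration. */
    struct Unit { int hp; int x, y; };

    int main(void) {
        enum { N = 500 };
        /* One allocation, units packed back to back: traversal is a linear
           walk through memory, which is what caches are built for. */
        struct Unit *units = malloc(N * sizeof *units);
        if (!units) return 1;
        for (int i = 0; i < N; i++)
            units[i] = (struct Unit){ .hp = 100, .x = i, .y = 0 };

        long total_hp = 0;
        for (int i = 0; i < N; i++)
            total_hp += units[i].hp;   /* no pointer chasing per element */

        printf("total hp: %ld\n", total_hp);   /* 500 * 100 = 50000 */
        free(units);
        return 0;
    }
    ```

    A linked-list version does the same work but dereferences a next pointer per element, so each step can be a cache miss; the contiguous version is the "how" that C-family languages let you pin down.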



  • @Bort said:

    more.

    And if you were using a language with Lisp macros, many of your favorite language's features could have been half-implemented as a library instead of waiting 10 years for the compiler team to get around to it. In Clojure, core.typed (the type system) and core.async (like C#'s async/await) are libraries, because you can do that in Clojure.


    Flexibility is not necessary a good thing in a language. For example, I love about English that it lets you use ‘there’, ‘their’ and ‘they're’ interchangeably, but I'm sure they're are some stick-in-the-muds that would prefer if their was a compiler team there could count on to reject that feature.

    @Bort said:

    make reference to the Blub parable

    You mean like Mikael “I've never use visual studio and I have a hard time imagining what features it could offer” Svahnberg?


  • BINNED

    @Kian said:

    Lisp makes it easy to define what you want to do, but not how you want to do it. In C-like languages, how and what are intertwined. It makes describing the what harder, but it gives you more power to describe the how.

    And now you're back to convincing yourself and others that you're not missing out on anything. You'll be saying Common Lisp doesn't have loops next (hint: it does).



  • @antiquarian said:

    And now you're back to convincing yourself and others that you're not missing out on anything. You'll be saying Common Lisp doesn't have loops next (hint: it does).
    I'm not saying it doesn't give you that ability. I'm ASKING if it does, and how easy it is to use it. In C and C++, it's one of the first things you learn. I have yet to see a Lisp tutorial that goes into details of how the interpreter running your code works, or how to get it to do what you want. And you told me to read a book to find out.

    I did read the MIT book that uses Scheme ("Structure and Interpretation of Computer Programs", good thing I have it bookmarked) and it taught you how to build your own interpreter on top of the actual interpreter, so you can scheme while you scheme. Meant to play with how function parameters are evaluated and other such things. But even though it lets you write an interpreter in a few lines, you're no closer to knowing what the actual computer is doing. There's guidance to know how not to do needless operations over and over, memoization, etc, but not what's in registers, cache, instruction cache and such.



  • @Kian said:

    There's guidance to know how not to do needless operations over and over, memoization, etc, but not what's in registers, cache, instruction cache and such.

    I personally consider that a good thing. I don't want my languages to be C.



  • @Magus said:

    I personally consider that a good thing. I don't want my languages to be C.

    That's great. And I don't want my players spending five minutes waiting for five hundred units to decide their moves. That means I need to tell the computer how to do it, and that currently means basically C++. C is icky.



  • Five hundred things take that long? What kind of horrible code do you write?



  • I don't particularly care about Lisp, but there's a reason why memory management is one of the first things you learn in C and C++. It's because those are "low level languages", and they operate by poking and peeking at values in memory. So you definitely need to know how to manage memory to do anything non-trivial in them.

    In Lisp, the situation is different. Let's talk about Scheme instead, for a minute. It is in the Lisp family, and is extremely simple. The first thing you learn when you're learning Scheme is how to push and pop values onto a list--how to treat a list as a stack. The first intermediate-level thing you learn is how to traverse over a list and apply a function to each element.

    In other words, Lisp and C have different primitives, and doing direct memory access in Lisp is not easy. It doesn't matter that it's the first (intermediate) thing you learn in C, because it isn't necessary for doing intermediate-level things in Lisp.

    Similarly, you can twist C's arm into defining a map function that takes an array and a function pointer and applies the function to each element of the array. In Lisp, you would define that as

    (defun mapcar* (function &rest args)
      "Apply FUNCTION to successive cars of all ARGS.
    Return the list of results."
      ;; If no list is exhausted,
      (if (not (memq nil args))
          ;; apply function to CARs.
          (cons (apply function (mapcar 'car args))
                (apply 'mapcar* function
                       ;; Recurse for rest of elements.
                       (mapcar 'cdr args)))))
    

    I don't even like this. Haskell makes both Lisp and C look like poop here.

    map f (x:xs) = f x : map f xs
    map f []     = []
    

    If you want to understand how a Lisp interprets your code, read "Structure and Interpretation of Computer Programs". It focuses on Scheme, but it's pretty great. :hanzo:'d
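    The C arm-twisting mentioned above (an array plus a function pointer) might look like this minimal sketch:

    ```c
    #include <stdio.h>
    #include <stddef.h>

    /* Apply f to each element of an int array, in place. */
    static void map_int(int *xs, size_t n, int (*f)(int)) {
        for (size_t i = 0; i < n; i++)
            xs[i] = f(xs[i]);
    }

    static int square(int x) { return x * x; }

    int main(void) {
        int xs[] = { 1, 2, 3, 4 };
        map_int(xs, sizeof xs / sizeof xs[0], square);
        for (size_t i = 0; i < 4; i++)
            printf("%d ", xs[i]);   /* prints: 1 4 9 16 */
        printf("\n");
        return 0;
    }
    ```

    Note it mutates in place and only works for int; a version that allocates a fresh result array would be closer to Lisp's mapcar, and a generic one needs void pointers or macros, which is exactly the arm-twisting.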



  • +1 for Sick Pea. Well worth reading.


  • Notification Spam Recipient

    @Buddy said:

    I'm sure they're are some stick-in-the-muds that would prefer if their was a compiler team there could count on to reject that feature.

    I was this close to figuring out what you meant. But then I Error'd out. @CodingHorrorBot


  • 🔀

    @‍Tsaukpaetra Is Doing It Wrong™



  • @Buddy said:

    half-implemented

    That was one of the well-known problems in lisp land - it's so easy to implement some things that everyone implemented their own incompatible, incomplete version that served their specific needs. I remember reading that back in the day, everyone had their own OOP system (because they were so easy to write).

    That might be less of a problem in today's software environment. In Clojure, there are the main contributed libraries like core.async, core.logic, core.typed that are community developed and centrally approved. So most projects (every one that I've seen) use the standard ones.

    Perhaps you could get a similar effect just with an open-sourced compiler. People could make forks with their feature, but then you'd have 1000 versions of the language... namespaced macros sound better than that... hmmm...

    @Buddy said:

    You mean like Mikael “I've never use visual studio and I have a hard time imagining what features it could offer” Svahnberg?

    Everyone has this problem to some degree. I never got into Smalltalk because I couldn't figure out how to use the (Squeak?) IDE and just gave up. Too different from what I was familiar with. I could draw rectangles but that was it.

    I believe there's good stuff to be learned there. Just haven't gotten around to it yet.

    There's a paradox here. Do you like jazz music (or whatever thing)? Are you familiar with it? How do you know if you like it if you're not familiar? How do you know if it's worth getting to know if you don't already know it? Other people's opinions? Do those other people know what they're talking about?



  • @Bort said:

    I never got into Smalltalk because I couldn't figure out how to use the (Squeak?) IDE and

    Squeak is a bit weird IIRC. VisualWorks is a bit easier to get started with. (I wrote a Game of Life with VW over a weekend once, although I did have a PDF from somewhere which helped me out...)



  • @Bort said:

    That was one of the well-known problems in lisp land - it's so easy to implement some things that everyone implemented their own incompatible, incomplete version that served their specific needs. I remember reading that back in the day, everyone had their own OOP system (because they were so easy to write).

    So it is, so it is. I remember having a conversation with Paul Graham about why not including an OOP framework in Arc was a bad idea; what I told him (which I'm sure he'd heard from many, many others) was that the problem was that any Lisp worth its salt would make it trivial to implement an OO framework, and instead of banning OOP from his brainchild, he'd end up with a few thousand incompatible OO frameworks clogging up the library space. This isn't speculation: nearly every Scheme implementation has its own special-snowflake OO framework, and none of them work alike or interoperate to any sane degree. Scheme can get away with that, because it is a teaching language and meant to show you exactly that sort of thing; Arc was intended to be industrial-strength, and that sort of thing would be disastrous for it. As it happens, Arc ended up sort of grounding out as he got busy running Y-Combinator, so it hardly matters now. While I don't intend Thelema to be much at all like Clojure (though I do mean to crib some of the better parts), I have to agree that they Got It Right™ in leveraging the Java and .Net infrastructure. Thelema fills a different niche than Clojure, so I don't see them competing much, assuming I ever get anywhere with it.

    Here's the thing: I am trying hard not to be a SLW. I'm not out to convert you, per se. I know very well that Lisp is really not for everyone. Programmers are weird in general, but Lispers are weird by IT standards, in much the same way Smalltalkers, Forthists, and diehard assembly programmers are.

    Thing is, there are things to learn from all of those languages, important things that make it easier to grasp fundamental issues in other languages. As ESR said, learning Lisp is a lot like Latin: it may not be immediately practical, but knowing even a little of it makes a lot of other things fall into place. Lisp is a sort of anti-Assembly language - whereas assembly makes the details clear in an overwhelming manner, Lisp (especially Scheme, which pares the process of programming down virtually to an atomic minimum) makes the abstractions clear. For most coders, it's worth understanding less for its immediate use than for its explanatory power.

    I have my reasons for working in a Lisp, but I don't expect others to, any more than I would expect them to work in Python or Smalltalk if they don't like them. There's a lot of personal feelings in language choice, and that bond can be as intense as religion in its own way. And for some languages, including Lisp, that bond has definite numinous overtones: in my .sig on OS Dev, I have a line that reads: Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others. It even has the XKCD link in it just like that. So yeah, I know my love of Lisp is pretty irrational. But IME, most other programmers (including pretty much every SLW) are at least as irrational about their preferences, and a lot less self-aware about it.



  • @blakeyrat said:

    It makes set-based operations in otherwise-procedural code actually look and behave like set-based operations.

    And that's the key word here. Functional elements work as elements, but making your whole application fully functional top to bottom will easily become a nightmare.

    @antiquarian said:

    OK, how would you implement using in C# with extension methods, assuming it didn't already exist?

    One WTFy solution, stat!

    using System;
    using System.Linq;
    using System.Linq.Expressions;
    using System.Collections.Generic;
    
    
    public class UsingProvider
    {
    	
    	private readonly List<IDisposable> _resources = new List<IDisposable>();
    	
    	public UsingProvider Using<T>(Func<T> construction, out T result) where T : IDisposable
    	{
    		var resource = construction();
    		_resources.Add(resource);
    		result = resource;
    		return this;
    	}
    	
    	public void Run(Action action)
    	{
    		try
    		{
    			action();
    		}
    		finally
    		{
    			foreach (var res in _resources) res.Dispose();
    			_resources.Clear();
    		}
    	}
    }
    
    public class TestType : IDisposable
    {
    	private int x;
    	public TestType(int x) { this.x = x; }		
    	public void DoStuff(int i, bool throwEx = false) { if (!throwEx) Console.WriteLine("DoStuff #" + x + " : " + i); else throw new Exception("Oops!"); }
    	public void Dispose() { Console.WriteLine("Disposing #" + x); }
    }
    
    public class Program
    {
    	public static void Main()
    	{
    		TestType t1, t2;
    		(new UsingProvider())
    			.Using(() => new TestType(42), out t1)
    			.Using(() => new TestType(666), out t2)
    			.Run(() => 
    				 {
    					 t2.DoStuff(-5);
    					 t1.DoStuff(7, true);
    				 });
    	}
    }
    


  • @Maciejasjmj said:

    And that's the key word here. Functional elements work as elements, but making your whole application fully functional top to bottom will easily become a nightmare.

    While the fans of Haskell and ML may disagree in principle, Lisp coders (and to varying degrees OCaml and Python coders) don't; while modern Lisps tend to emphasize functional programming, few if any Lisps are pure-functional, and the classical Lisp approach was basically procedural. So it becomes less a matter of principle and more one of degree. They emphasize functional programming, especially to the students just starting out, but that's because from a design analysis standpoint it has a lot of advantages, and generally speaking if a design is going to become ugly if you stick to strict FP, FP will go out the window. True, the language itself has a lot of characteristics that tend to push things towards FP, but as I said earlier, any Lisp that has both mutation and closures at all can have OO whenever you want it (interoperability is another matter, unfortunately).



  • @Captain said:

    It's because those are "low level languages", and they operate by poking and peeking at values in memory.

    I've grown to dislike the term "low level language" mostly because it's not very descriptive of what actually is going on. I mean, you could write an x86 assembly interpreter in lisp (would actually be pretty easy) and run it on a different architecture. So how "low level" is the assembly then?

    A better description, I feel, is that assembly, C, C++ and the like share a similar language model, and that model maps better to certain kinds of hardware, to the point that assembly can be run almost directly by some hardware, while C and C++ can be more easily converted to something the hardware can run directly. So it's not a matter of language "level", but of choice of abstraction and how well those abstractions map to hardware. For example, in C and C++, the sizes of primitive types like char, int, etc., are chosen by each implementation as "whatever best fits the architecture the compiler is targeting and has at least this size". As a result, expressing what you want that hardware to do is comparatively easy.
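    Those implementation-chosen sizes are easy to inspect. What this sketch prints depends on the compiler and target, which is exactly the point:

    ```c
    #include <stdio.h>

    int main(void) {
        /* The standard only guarantees minimum ranges and an ordering
           (char <= short <= int <= long); each implementation picks sizes
           that fit its target architecture. */
        printf("char:  %zu\n", sizeof(char));
        printf("short: %zu\n", sizeof(short));
        printf("int:   %zu\n", sizeof(int));
        printf("long:  %zu\n", sizeof(long));
        return 0;
    }
    ```

    On a typical 64-bit Linux box this prints 1, 2, 4, 8; on 64-bit Windows, long is 4; on some DSPs even char is wider than 8 bits.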

    The choices that languages like Lisp made have little to do with the underlying hardware. As a result, expressing what you want the hardware to do is more difficult. On the other hand, expressing intent is easier.

    I should point out, I've read "Sick Pea" (linked to it above), enjoyed it, completed some of the exercises, and generally see it as a positive experience. It's just that as cool as the ideas in the language are, they don't fit the problems I want to solve. This doesn't mean, as @antiquarian seems to think, that I don't value the language or its features.



  • While the fans of Haskell and ML may disagree in principle.

    We don't. Haskell is purely functional, and its type system makes implementing strongly typed imperative languages straightforward.

    The problems with most languages are

    1. Imperative by default, so it's not easy to distinguish between values that are the results of imperative operations and those that are purely values.

    2. Weak abstraction. Object orientation (more specifically, classes) is a bad way to organize code. The drive to object orientation is a consequence of the weak type system.

      innocentLookingVariable = launchTheMissiles() && return 1;

    Has the same type as

    pureValue = 1;
    

    In a strongly typed language,

    innocentLookingVariable + 1
    

    Is a type error.

    Haskell is a great imperative language, even though it is purely functional.



  • I hate C macros for the same reason I hate Lisp: too many parentheses.



  • I think Reddit started out built on Lisp, if I'm not mixing things up in my head; then they switched to Python.



  • @blakeyrat said:

    But the only product I've ever used that made silly cow jokes was the open source APT, which had "Super Cow Powers" or whatever the fuck it says when you run it.

    ~$apt-get moo
                 (__) 
                 (oo) 
           /------\/ 
          / |    ||   
         *  /\---/\ 
            ~~   ~~   
    ..."Have you mooed today?"...

  • BINNED

    @Kian said:

    and that currently means basically C++. C is icky.

    But Lisp macros, if anything, look better than C++ templates (no trolling, just observation). The same argument can be made: yes, TMP is a hack job, but only library writers should use it, to implement Boost maybe, and then all sane people should use that instead.

    @Kian said:

    Does Lisp and friends offer me the ability to set the layout in memory of a structure, and arrange a bunch contiguously in memory?
    @antiquarian said:
    why don't you read Practical Common Lisp (link below) and come back when you're done?

    That was a fair question; why not write a few lines so we learn? The answer is like missionaries pointing you at an entire bible when you ask them a single simple question! Life is short; we only put in the effort after we see something and like it.



  • @Bort said:

    Perhaps you could get a similar effect just with an open-sourced compiler. People could make forks with their feature, but then you'd have 1000 versions of the language... namespaced macros sound better than that... hmmm...

    That does sound better, but it doesn't sound enough better to be worth the effort. Like I said, reducing the friction associated with syntax change just doesn't seem like a worthwhile thing to me. Even just between C# and Java, I've been appreciating Java's slower rate of change compared to C#'s relentless syntax bloat (LINQ syntax, particularly, was an abortion; you can see SO answers even just 2 years after its release saying “just use the extension methods already!”).

    @Bort said:

    haven't gotten around to it yet.

    There's a paradox here. Do you like jazz music (or whatever thing)? Are you familiar with it? How do you know if you like it if you're not familiar? How do you know if it's worth getting to know if you don't already know it? Other people's opinions? Do those other people know what they're talking about

    Right, sounds like a multi-armed bandit problem. IIRC, that's a fun one because no matter how sophisticated your algorithm, you're not gonna get much better than epsilon-greedy, which is literally just: keep using the best one, but with some probability epsilon choose another at random and adjust its expected value based on the result. I'm not sure that applies here anyway, because how much you like a language doesn't matter much to the bottom line. Rather, choose whichever language best suits your problem domain; that is, it's better if the programmer is more of a blank slate.

    And I know the counter to that is that Lisp itself could be the blank slate: the programmer could just learn Lisp, and then any domain-specific syntax could be added as a library. But my experience with DSLs written in dynamic languages is that there are always rough edges and leaky abstractions that mean you end up learning way more about the language than you ever wanted; all that weird shit that, it was promised upthread, only library writers would need to know about. My hypothesis is that this is because homoiconic languages are not the ideal tool for specifying grammar; BNF derivatives are the ideal tool for specifying grammar.
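    For the curious, epsilon-greedy as described above really is only a few lines. A sketch, with made-up constant payoffs standing in for real feedback:

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    #define ARMS 3
    #define EPS  0.1

    static double est[ARMS];    /* running estimate of each arm's value */
    static double pulls[ARMS];  /* how many times each arm was pulled   */

    static int best_arm(void) {
        int b = 0;
        for (int i = 1; i < ARMS; i++)
            if (est[i] > est[b]) b = i;
        return b;
    }

    int main(void) {
        /* Invented payoffs standing in for "how well that choice worked". */
        const double payoff[ARMS] = { 0.2, 0.5, 0.8 };
        srand(42);
        for (int t = 0; t < 1000; t++) {
            /* Usually exploit the current best arm; with probability EPS,
               explore a random one instead. */
            int a = ((double)rand() / RAND_MAX < EPS) ? rand() % ARMS
                                                      : best_arm();
            pulls[a] += 1;
            est[a] += (payoff[a] - est[a]) / pulls[a];  /* incremental mean */
        }
        printf("settled on arm %d\n", best_arm());
        return 0;
    }
    ```

    With enough steps, the occasional exploration finds the high-payoff arm and exploitation locks onto it; the exact trajectory depends on the random seed.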



  • Do you mean

    var innocentLookingVariable = () => { launchTheMissiles(); return 1; };
    
    innocentLookingVariable() + 1;
    

    or else why should 1 + 1 be a type error? If you're trying to say that Ruby is a bad language because you can have method calls without the clarifying (), you don't have to tell me twice.

    Also, can you please learn a normal-people language so we can understand wtf you're talking about when you post code snippets?



  • :frystare:

    Yeah, what you posted should be a type error too. The point of my original code is that normal people languages can't distinguish between things that are different and should be treated differently: to wit, a value and what is effectively a procedure that returns a value. You, as a programmer, can't treat

    innocent = someFunctionThatGivesAwayYourBankPassword() && 1;
    

    as a literal int, so it should not be typed as one. In OO terms, it violates the Liskov Substitution Principle. Numbers don't launch missiles; that's part of their contract.



  • Oh right, because the method returns int plus some change to the global scope. Why should that affect the type of the expression, though? If you're calling something that returns int, you probably want to get at the int it's returning. Don't see what advantage your way has over method-scoped annotations.


  • Discourse touched me in a no-no place

    @antiquarian said:

    Example, please.

    Pretty much all the scripting languages can do it. Here's how in Tcl (slightly simplified because I'm being lazy):

    proc with-open-file {streamVar filename body} {
        upvar 1 $streamVar stream
        set stream [set f [open $filename]]
        try {
            return [uplevel 1 $body]
        } finally {
            close $f
        }
    }
    

    With that, I can do this:

    with-open-file abc "~/examples/somefile.txt" {
        set line [gets $abc]
        puts "the first line is $line"
    }
    

    It's got a lot of similarities (and I could make it have more if I was really all that bothered) but it's not called a macro. It's just a Tcl command, implemented by a procedure. It's just that Tcl's procedures have quite an awareness of their stack context, allowing them to do neat stuff like referring to variables in their caller or running code in their caller's context. Pretty much anything that you'd do with a Lisp macro would be done in Tcl with a procedure, but then again, Tcl doesn't need to really distinguish between compile-time and run-time processing.

    I'm pretty sure that Ruby and Perl can do similar things, and I wouldn't be at all surprised at Python being as capable as well (though I'm slightly less certain there; the language is a little bit more attached to special forms).



  • @dse said:

    That was a fair question, why not write few lines so we learn? The answer is like missionaries pointing you to read an entire bible when asking them a single simple question! life is short we try but after we see something and like it.

    I am actually going to agree with you here; at the very least, the missionaries would be likely to quote a specific chapter and verse (or group of them) to illustrate their point. The example in the book in question is at chapters 24, 25 and 29, mostly, but that's still not a very good answer, especially since those chapters mostly deal with reading and writing structures in files, not the layout of structures in memory, unless @antiquarian is so kind as to fill in the details on this.

    To @antiquarian's defense, suggesting that you read the book isn't as uninformative as it sounds. Even if we gave you the code, the answer isn't likely to jump out at you unless you know enough Common Lisp to understand it. To make an analogy, it would be like explaining manual memory management in C to someone who hasn't covered structs and pointers yet. While the core language structures in Lisp are simpler, more of the things you'd need to know are buried in the libraries, or in idioms and practices commonly used by Lispers, so it would still take a bit of explaining to get you to the point where the explanation makes sense.

    You would be right to point out that the Common Lisp structures and (especially) classes given don't necessarily have to lay the data out in a specific order and size from the given description, and it would take a bit of digging into the language spec on my part to determine just what it says as a matter of standards. I don't personally know CL well enough to say off the top of my head if that is universally the case or not, and to be honest, I doubt it. It is probably implementation dependent. What's more, the author is doing some sleight of hand in using his own libraries, and a casual reading isn't going to convince you that he isn't using something that is specific to his code. I will have to read the book more closely myself looking for the specific answers.

    Part of the problem is that the question isn't one that comes up much in most Lisps, simply because it is an implementation detail. It doesn't come up in most Lisp dialects for the same reason it doesn't come up in Python or Java: that's not the level of abstraction they are usually used at. In the case of Common Lisp, I am pretty sure defstruct guarantees some aspects of the memory layout, but defclass doesn't.

    Also, some of the types that are fixed-size in C/C++ are variable-sized in Common Lisp, most notably numerical values. Almost every Lisp dialect supports bignums, and whether a given value is stored as a 32-bit integer or a variable-sized bigint is going to depend on the implementation (and in some cases, on the current value of the variable). This is comparable to the situation in Python and Ruby, so it shouldn't be entirely unfamiliar. If memory serves, there are ways in Common Lisp to declare the size of a particular integer or float value, but they are only used for, well, precisely the cases where you need to specify the size of the value. Most Lispers would run across such a case once in a blue moon, unless they are writing a library that deals with things like socket protocols.

    In the book, the examples are all about the inner structures of MP3 files, especially the ID3 tags. Whether that fits what you are asking about or not, I'm not sure; I'm pretty sure it wouldn't. If you give me a specific use case, I'll see if I can write examples of it in both Common Lisp and R6RS Scheme (earlier versions of the Scheme standard didn't address that sort of thing at all, but then the language wasn't really meant for that kind of purpose; most implementations had some way to do it, but except where a SRFI covered it, they were usually incompatible with each other). Since this is sort of like asking a Python programmer to do the same sort of thing without using a C extension, this may be a bit iffy.

    There isn't any specific reason why a Lisp language couldn't support this; Zetalisp did, but it was specific to the Symbolics LispMs (which had hardware support for a lot of Lisp features) and was never standardized. I fully intend for Thelema to do so, though as I've said before, that's nowhere near a working implementation yet. It's more that it simply hasn't been a priority for the designers of most Lisp languages.



  • @Kian said:

    Isn't the big selling point of macros that you can change the syntax of the language, not just do text substitution? So the great benefit it gives you is that you can laboriously build a more expressive language on top of your more limited one? If you just use a language with those features baked in, you save yourself the trouble of building it.

    A hammer is not a house, even if you can use a hammer to build a house. And while it may be more "useful" than a house, because it can be used to build a house or a boat while a house is just a house, if all I want is a house the ability to also make a boat is useless. Also, if I want a house but don't know how to build one, all a hammer gives me is a shitty house.

    Lisp macros are a hammer. They're not useful in and of themselves, they just make the thing you actually want to do easier. The problem is that to use them you need to know both the thing you actually want to do, and how to use macros to build a language to describe how to do the thing you want to do. Or you can use a language already intended to do that thing.

    OK, you've got me there. Lispers, including me, tend to really oversell macros. They are one of the more distinctive aspects of the language, and they really highlight the places Lisp fans tend to see as its major strengths, but no, they aren't something your typical Lisp program should lean heavily on. They are a lot like C++ templates in this (and many other) regards: a powerful but easily abused tool that 90% of the time should be in the background in a supporting role rather than front and center.

    Macros aren't really even a defining characteristic of Lisp, though it is hard to imagine a non-toy Lisp dialect that doesn't have them. SICP doesn't cover them at all, but then again, SICP deliberately uses a very, very minimal subset of Scheme in order to make its points, and at the time the second edition was written, the Scheme macro system was being completely reworked. It isn't even necessary to have them built into the language: as the author of this article points out, any Lisp that has a function to parse text input into a list of symbols and variables (usually provided gratis through the (read) function) can implement a basic macro system in Lisp itself, since Lisp is homoiconic.

    Even more than macros, homoiconicity is the secret sauce of Lisp. If you have a way of inputting a string from a file or a console, and can parse it into a Lisp list, you have enough to make a macro system, a Lisp compiler, or a Lisp interpreter REPL, without having to change the implementation of the underlying language. But homoiconicity is sort of abstract, and its value is hard for newcomers to pin down, so Lispers tend (much to the annoyance of others) to blather on about macros and meta-circular evaluators and other things related to it without really coming out and explaining the real significance.
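    To make that concrete, here is a minimal sketch in C++ (since this thread keeps comparing against C/C++) of what a (read)-style parser buys you: once source text is parsed into an ordinary data structure, "macro"-style rewriting is just data manipulation. The Node type and all names here are invented for illustration; in a real Lisp, read and print come for free, and the parsed structure is the same list type the rest of the program uses.

    ```cpp
    #include <cctype>
    #include <iostream>
    #include <string>
    #include <vector>

    // A node is either an atom (symbol/number kept as text) or a list of nodes.
    struct Node {
        std::string atom;            // non-empty for atoms
        std::vector<Node> children;  // used for lists
        bool isAtom() const { return !atom.empty(); }
    };

    // Recursive-descent reader: turns "(+ 1 (* 2 3))" into a Node tree.
    Node read(const std::string& src, size_t& pos) {
        while (pos < src.size() && std::isspace((unsigned char)src[pos])) ++pos;
        if (src[pos] == '(') {
            ++pos;  // consume '('
            Node list;
            while (pos < src.size() && src[pos] != ')') {
                list.children.push_back(read(src, pos));
                while (pos < src.size() && std::isspace((unsigned char)src[pos])) ++pos;
            }
            ++pos;  // consume ')'
            return list;
        }
        Node atom;
        while (pos < src.size() && !std::isspace((unsigned char)src[pos])
               && src[pos] != '(' && src[pos] != ')')
            atom.atom += src[pos++];
        return atom;
    }

    // Printer: the inverse of read, so data round-trips back to source text.
    std::string print(const Node& n) {
        if (n.isAtom()) return n.atom;
        std::string out = "(";
        for (size_t i = 0; i < n.children.size(); ++i) {
            if (i) out += ' ';
            out += print(n.children[i]);
        }
        return out + ")";
    }

    int main() {
        size_t pos = 0;
        Node tree = read("(+ 1 (* 2 3))", pos);
        // The "code" is now ordinary data we can inspect or rewrite...
        tree.children[0].atom = "-";       // a crude "macro": swap + for -
        std::cout << print(tree) << "\n";  // prints (- 1 (* 2 3))
    }
    ```

    The point of the exercise: everything a macro system needs (parse, rewrite, print/evaluate) falls out of having code and data share one representation.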

    I've been as guilty of this as anyone, though in the case of this particular OP, my goal was more to explain why I wanted to use s-expressions (the Lisp list format) for my assembler, which is sort of working backwards towards the goal (actually, I originally wrote it mainly as a joke in order to derail yet another iteration of the "C vs. Assembly" flame war on OSdev, but that's another matter altogether).


  • Discourse touched me in a no-no place

    @ScholRLEA said:

    Lisp is homoiconic.

    It's strictly only fully homoiconic on platforms like a LispMachine (who uses them nowadays?). Everyone else does something different, and then it looks a lot more like what other languages do.


  • BINNED

    @Maciejasjmj said:

    And that's the key word here. Functional elements work as elements, but making your whole application fully functional top to bottom will easily become a nightmare.

    The Haskell thread is :arrows:.

    Or are you referring to Scheme? Common Lisp is not a functional programming language. It is a multi-paradigm language with support for functional, procedural, and object-oriented programming.

    @Kian said:

    I should point out, I've read "Sick Pea" (linked to it above), enjoyed it, completed some of the exercises, and generally see it as a positive experience. It's just that as cool as the ideas in the language are, they don't fit the problems I want to solve. This doesn't mean, as @antiquarian seems to think, that I don't value the language or its features.

    Which language? SICP uses Scheme. Most people writing production code are either using Clojure or Common Lisp. Clojure is a functional language. Common Lisp is not.

    @fbmac said:

    I think reddit started out being built on Lisp, if I'm not mixing stuff up in my head, and then they switched to Python

    That is correct, and a whole different set of :wtf:s worthy of its own thread.

    @dse said:

    That was a fair question, why not write few lines so we learn?

    Because it's already been done. If you had followed the link I posted, you would have seen that it goes to a book that's available to read for free online and has plenty of code samples.

    @Buddy said:

    my experience with DSLs written in dynamical languages is always that there seem to be rough edges and leaky abstractions that mean you end up learning way more about the language than you ever wanted; all that weird shit that it was promised upthread only library writers would need to know about.

    That wasn't my experience. Leaky abstractions result from poor design, not the fact that you've made a DSL. The bigger problem that I had was having to keep the different types in my head. I guess I need static typing.

    @ScholRLEA said:

    I am actually going to agree with you here; at the very least, the missionaries would be likely to quote a specific chapter and verse (or a group of them) to illustrate their point. The example in the book in question is mostly in chapters 24, 25, and 29, but that's still not a very good answer, especially since those chapters mostly deal with reading and writing structures in files, not the layout of structures in memory, unless @antiquarian is so kind as to fill in the details on this.

    I could, but I think the question itself has an invalid assumption behind it: that you have to worry about how structures are laid out in memory to get code to perform reasonably fast. That may have been true of the time-pod Lisp that @Kian is talking about, but definitely not true of Common Lisp.



  • @antiquarian said:

    I could, but I think the question itself has an invalid assumption behind it: that you have to worry about how structures are laid out in memory to get code to perform reasonably fast. That may have been true of the time-pod Lisp that @Kian is talking about, but definitely not true of Common Lisp.

    Hmm, I hadn't even considered that @Kian might be referring to efficiency, actually. I assumed that the question was about the ability to work with fixed data structures in memory, such as (for example) the GDT (the global descriptor table used for segmentation on the x86 architecture), or memory-mapped device registers. These are big deals in C and assembly, because that's the sort of thing C was meant to work with, and some of them are only accessible with assembly language.

    For the most part, this is a non-issue in... well, every language that isn't C or assembly. It's still nice to be able to do so, and (for example) C++, Objective-C, D, and Ada have facilities for it (in the case of C++ and Objective-C, it is mostly because they were built on C rather than out of any specific need for it), but unless you are writing device drivers or an OS kernel, it simply doesn't come up.

    Since I do intend to write an OS in Thelema, I mean to have the ability to do so, but it will have to be specified explicitly in the cases where it is needed. Even C doesn't actually guarantee the absolute sizes and positions of the fields without some sort of annotation such as the pack pragma (otherwise the compiler may insert padding between fields when data alignment matters on the specific processor family), so this doesn't seem too arduous a requirement.
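    For what it's worth, that padding point can be shown directly. The sketch below uses #pragma pack, which GCC, Clang, and MSVC all accept but which is not required by the C or C++ standards; the exact size of the unpacked struct is implementation-dependent, so only the packed values are certain.

    ```cpp
    #include <cstddef>   // offsetof
    #include <cstdint>
    #include <iostream>

    // Without annotations, the compiler may pad between fields for alignment.
    struct Padded {
        char tag;             // 1 byte
        std::uint32_t value;  // 4 bytes, typically aligned to a 4-byte boundary
    };

    // With packing suppressed (compiler-specific, not standard C++),
    // the fields are laid out back to back.
    #pragma pack(push, 1)
    struct Packed {
        char tag;
        std::uint32_t value;
    };
    #pragma pack(pop)

    int main() {
        std::cout << sizeof(Padded) << "\n";               // typically 8 (3 padding bytes)
        std::cout << offsetof(Padded, value) << "\n";      // typically 4
        std::cout << sizeof(Packed) << "\n";               // 5
        std::cout << offsetof(Packed, value) << "\n";      // 1
    }
    ```

    This is exactly the kind of annotation a Lisp (or any higher-level language) would need to offer for the device-register and on-disk-format cases discussed above.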



  • Yes, you got my point now.

    You probably want to get at the int that gets returned by the launchMissiles && return 1 block. Haskell handles this by giving the (semantically equivalent) block the type IO Int. Now you can see that innocent might not be a plain number: it's a thing that can perform arbitrary actions, which ultimately result in a number. Haskell makes it easy to get at the resulting value, but every use of the result is in the context of an IO computation, since it depends on an IO computation.

    Now consider your job as a programmer, reading code. You would "skip over" all the string literals, and literal numbers, because you know that the evaluation of a literal can't affect the semantics of the program. And if you're willing to assume that the string building class won't launch the missiles, you can skip over string building code, in the same way.

    If you can distinguish between effectual and pure values, you don't have to assume. You can skip over all of the pure values, because they don't cause any effects that can change the semantics of the program or break contracts.



    OK, but what I'm saying is that innocent shouldn't (and doesn't, in C# or Java) be of type int in the first place. It's a Func<int>, which means that you have to use the method call operator, (), to get an int out of it in the first place (in Java, you would actually have to call a method on the variable itself, such as innocent.apply(), or innocent.get(), or whatever). If you've been using OO for any length of time, you probably know not to trust any method call too much, especially not a lambda that doesn't take any arguments (it pretty much has to affect global state; otherwise you'd be using an actual literal instead of calling a producer).

    What I'm saying is that it seems more useful to modify the function's type to reflect its effects, something like [IO] Func<int> rather than Func<[IO] int> (an int that does IO? Wtf?) or Func<Tuple<IO,int>>, which would be used like innocent().Item2 + 1, which isn't any better.
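    The distinction being argued here can be sketched in C++, where std::function<int()> plays roughly the role of C#'s Func<int>: the type itself tells the reader that extracting the int requires a call, and the call site's () is the visible warning sign that effects may happen. The names innocent and missilesLaunched are invented for illustration.

    ```cpp
    #include <functional>
    #include <iostream>

    // A plain int is just a value: evaluating it cannot have effects.
    const int literal = 42;

    // A std::function<int()> is a producer: calling it may touch global
    // state before yielding its int.
    int missilesLaunched = 0;

    std::function<int()> innocent = []() {
        ++missilesLaunched;  // side effect hidden behind the call
        return 1;
    };

    int main() {
        int a = literal + 1;     // no call operator: provably effect-free
        int b = innocent() + 1;  // the () marks where effects can occur
        std::cout << a << " " << b << " " << missilesLaunched << "\n";  // prints 43 2 1
    }
    ```

    The trade-off versus Haskell's IO Int is that nothing stops a C++ reviewer from being wrong about a zero-argument callable being pure; the type system marks the call, not the effect.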



  • @dkf said:

    It's strictly only fully homoiconic on platforms like a LispMachine (who uses them nowadays?). Everyone else does something different, and then it looks a lot more like what other languages do.

    Not sure what you mean by that...


  • BINNED

    Thanks for the comprehensive answer! I have many bookmarks for learning Haskell (it is influenced by Lisp, so I guess I will end up reading more about that too).

    @ScholRLEA said:

    Since this is sort of like asking a Python programmer to do the same sort of thing without using a C extension, this may be a bit iffy.

    In Python (even if it is not CPython) there are memoryviews (the buffer protocol) that help with that sort of stuff; there is also, of course, numpy, but that is a library and not part of the language.



  • @antiquarian said:

    That wasn't my experience. Leaky abstractions result from poor design, not the fact that you've made a DSL.

    Right, but what I'm saying is that if you're writing your dsl using a lexer and compiler compiler, the workflow encourages a better design than something implemented using macros. And I'm pretty sure there's a tie-in with strong typing here: in BNF, every production must necessarily have a static type, whereas if you're passing code around as data, it becomes very easy just to have a peek at the content of the variable to decide what type of thing to treat it as.


  • Fake News

    @ScholRLEA said:

    Arc ended up sort of grounding out

    I'm glad to see you took a neutral position on this.


  • BINNED

    @lolwhat said:

    I'm glad to see you took a neutral position on this.

    :iseewhatyoudidthere.mp3:

    @Buddy said:

    Right, but what I'm saying is that if you're writing your dsl using a lexer and compiler compiler, the workflow encourages a better design than something implemented using macros.

    Well, there is a Lisp with static typing (typed Racket). There's also Liskell, but that project seems to have been abandoned.

