How not to check the validity of an email address



  • @Arnavion said:

    What you quoted is not a reason - it is a justification of the current system.

    Also known as a reason.

    @Arnavion said:

    There is no reason that cannot be inverted.

    Yes there is. As was already said, many programs rely on this behaviour and changing it could be a security risk.
    I'm not sure what world you live in, but here on Earth you can't just change decades-old behaviour because it suits you.



  • @Salamander said:

    @Arnavion said:

    What you quoted is not a reason - it is a justification of the current system.

    Also known as a reason.

    No. Arguing to keep the current system because changing a certain behavior of the system will break it is circular reasoning. It is not a reason to not replace the system with a new one.

    @Salamander said:

    @Arnavion said:

    There is no reason that cannot be inverted.

    Yes there is. As was already said, many programs rely on this behaviour and changing it could be a security risk.
    I'm not sure what world you live in, but here on Earth you can't just change decades-old behaviour because it suits you.

    Please quote where I said I want the current system to be changed instead of a new one being made. That's right, I didn't. Keep your whispering shoulder aliens in check.


  • ♿ (Parody)

    @Arnavion said:

    Please quote where I said I want the current system to be changed instead of a new one being made. That's right, I didn't. Keep your whispering shoulder aliens in check.

    I don't get it. You're both filthy Excel deniers. Excel could handle all of this and it would be trivial. It's very powerful, and if you could just get over your primitive preconceived notions and get on board the Excel train, the world would be a better place.



  • @Arnavion said:

    there should be one TLD (say, .local) for local hostnames and everything else becomes an internet hostname.

    There's a lot of advice out there saying that using unofficial TLDs on your LAN is just asking for trouble, which is of course true. Unfortunately, using properly registered domain names can also invite trouble. I run a couple of small AD networks in a school: they were curric.local and admin.local when I walked into this job and I've had no compelling reason to change them. Making sure everything still works after a domain name change is work I'd much rather not have to do, and I was pleased to be able to avoid it a couple of years back when the school district changed its public domain naming scheme.

    So I agree with you in principle, but .local is no longer a good one to pick. As I found out last year at the cost of some wasted time, .local has been glommed by Apple, and most modern browsers and various other devices will attempt to use mDNS instead of DNS to resolve domain names within it (hint: if you're trying to make WPAD available on your LAN, don't give your proxy server a domain name ending in .local).

    The .local TLD was at one time a Microsoft recommendation and setup default. It's by far the most common invalid TLD seen by the DNS root servers (last time I looked, they actually got more requests for .local than for .org) so there are clearly a hell of a lot of AD networks still set up that way. I would have thought Apple would cause less trouble by picking something else for their zeroconf stuff (.auto?) but that ship has sailed.

    It would be good to see the IETF officially reserve .lan for use on private networks or at least have ICANN guarantee that the root servers would always return NXDOMAIN when queried for it.
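
    In the meantime you can at least stop your own network's .lan queries from leaking upstream. If your LAN resolver happens to be dnsmasq, a couple of lines will do it (just a sketch, adjust to taste):

    # dnsmasq.conf: treat .lan as a private zone.
    # Answer .lan queries locally and never forward them upstream:
    local=/lan/
    # Hand out "lan" as the network's domain via DHCP, and append it
    # to plain names from /etc/hosts:
    domain=lan
    expand-hosts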



  • @flabdablet said:

    So I agree with you in principle, but .local is no longer a good one to pick.

    Oh, I didn't suggest .local because of any existing meaning it has. I just picked a word at random.

    @flabdablet said:

    It's by far the most common invalid TLD seen by the DNS root servers (last time I looked, they actually got more requests for .local than for .org) so there are clearly a hell of a lot of AD networks still set up that way.

    That is quite funny.



  • @Arnavion said:

    I just picked a word at random.

    .plarf



  • @Salamander said:

    I'm not sure what world you live in, but here on Earth you can't just change decades-old behaviour because it suits you

    What if we want to change it because it suits everyone?



  • @boomzilla said:

    @Arnavion said:
    Please quote where I said I want the current system to be changed instead of a new one being made. That's right, I didn't. Keep your whispering shoulder aliens in check.

    I don't get it. You're both filthy Excel deniers. Excel could handle all of this and it would be trivial. It's very powerful, and if you could just get over your primitive preconceived notions and get on board the Excel train, the world would be a better place.

    Clearly the solution to the TLD question is to re-implement all DNS servers using Excel.



  • @Arnavion said:

    @joe.edwards said:
    Actually, one link deeper into El_Heffe's link gives the reason.

    Yes, I am aware of how hostname resolution works. What you quoted is not a reason - it is a justification of the current system. There is no reason that cannot be inverted. Rather than make 50 special TLDs so that non-TLD-qualified hostnames are local, there should be one TLD (say, .local) for local hostnames and everything else becomes an internet hostname.

    Maybe it should be like that. But it isn't. And we can't change it.



  • @Arnavion said:

    Please quote where I said I want the current system to be changed instead of a new one being made. That's right, I didn't.

    You said:

    @Arnavion said:

    There is no reason that cannot be inverted.

    Inventing a new system would be meaningless unless you actually used the new system. In order to do that you would have to CHANGE from the current system to the newly invented system. So yes, you did say that.

    @Arnavion said:

    Keep your whispering shoulder aliens in check.

    What are you, Blakeyrat's twin brother?



  • @El_Heffe said:

    Inventing a new system would be meaningless unless you actually used the new system. In order to do that you would have to CHANGE from the current system to the newly invented system. So yes, you did say that.

    Are you trying to say you see no difference between changing from a system and changing a system? By that logic, we could never have IPv6, fiber internet, digital TV and post that isn't written on paper and isn't delivered by human beings.



  • @Arnavion said:

    @El_Heffe said:
    Inventing a new system would be meaningless unless you actually used the new system. In order to do that you would have to CHANGE from the current system to the newly invented system. So yes, you did say that.

    Are you trying to say you see no difference between changing from a system and changing a system? By that logic, we could never have IPv6, fiber internet, digital TV and post that isn't written on paper and isn't delivered by human beings.

    Except that none of those things conflict with preexisting things. IPv6 addresses do not remove IPv4 addresses. Fiber internet doesn't remove MilwaukeePC. Digital TV is on a different set of frequencies than analog TV. Email existing doesn't prevent you from sending a letter in the mail.

    Domains without dots are already used for local hosts. You can't suddenly change things like that. Everything will break.



  • @Ben L. said:

    Except that none of those things conflict with preexisting things. IPv6 addresses do not remove IPv4 addresses. Fiber internet doesn't remove MilwaukeePC. Digital TV is on a different set of frequencies than analog TV. Email existing doesn't prevent you from sending a letter in the mail.

    And a hypothetical DNS 2.0 won't have to remove the existing DNS 1.0 either. It's like you people are being dense on purpose.



  • @Arnavion said:

    @Ben L. said:
    Except that none of those things conflict with preexisting things. IPv6 addresses do not remove IPv4 addresses. Fiber internet doesn't remove MilwaukeePC. Digital TV is on a different set of frequencies than analog TV. Email existing doesn't prevent you from sending a letter in the mail.

    And a hypothetical DNS 2.0 won't have to remove the existing DNS 1.0 either. It's like you people are being dense on purpose.

    I'm just waiting for the right moment to bring up jQuery 2.0 and their fantastic solution to support older* browsers:

    
    <!--[if lt IE 9]>
        <script src="jquery-1.9.1.js"></script>
    <![endif]-->
    <!--[if gte IE 9]><!-->
        <script src="jquery-2.0.0b2.js"></script>
    <!--<![endif]-->

    *by "older" they mean IE7 + IE8; and those still represent a bigger market share than Firefox or Chrome (actually almost the same share as Firefox PLUS Chrome).



  • @Ronald said:

    @Arnavion said:
    @Ben L. said:
    Except that none of those things conflict with preexisting things. IPv6 addresses do not remove IPv4 addresses. Fiber internet doesn't remove MilwaukeePC. Digital TV is on a different set of frequencies than analog TV. Email existing doesn't prevent you from sending a letter in the mail.

    And a hypothetical DNS 2.0 won't have to remove the existing DNS 1.0 either. It's like you people are being dense on purpose.

    I'm just waiting for the right moment to bring up jQuery 2.0 and their fantastic solution to support older* browsers:

    <!--[if lt IE 9]>
        <script src="jquery-1.9.1.js"></script>
    <![endif]-->
    <!--[if gte IE 9]><!-->
        <script src="jquery-2.0.0b2.js"></script>
    <!--<![endif]-->

    *by "older" they mean IE7 + IE8; and those still represent a bigger market share than Firefox or Chrome (actually almost the same share as Firefox PLUS Chrome).

    It still confuses me how outdated auto-updating software can have a bigger market share than... well, anything.



  • @dhromed said:

    @Arnavion said:

    I just picked a word at random.

    .plarf


    Do all your words start with pl?



  • @Arnavion said:

    a hypothetical DNS 2.0 won't have to remove the existing DNS 1.0 either.

    We actually already have a nascent DNS replacement in the form of Magnet links and DHT.
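
    For anyone who hasn't tried to build one by hand: a magnet link is just a content hash plus optional metadata. A toy Python sketch follows; note that a real BitTorrent infohash is the SHA-1 of the bencoded "info" dictionary from a .torrent file, not of arbitrary bytes as shown here.

    import hashlib
    import urllib.parse

    # Toy illustration only; see the caveat above about real infohashes.
    infohash = hashlib.sha1(b"...bencoded info dict...").hexdigest()
    name = urllib.parse.quote("Some File Name")
    print(f"magnet:?xt=urn:btih:{infohash}&dn={name}")
    # magnet:?xt=urn:btih:<40 hex digits>&dn=Some%20File%20Name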



  • @Ben L. said:

    It still confuses me how outdated auto-updating software can have a bigger market share than... well, anything.

    The first thing I do when I get a new Windows laptop is turn off Windows updates. They always end up rebooting in your face when you have to pack your bag quickly and leave the hotel before the pimp comes by to see what is taking so long.


  • Discourse touched me in a no-no place

    @flabdablet said:

    We actually already have a nascent DNS replacement in the form of Magnet links and DHT.

    I think I'll leave off writing those magnet URIs by hand; they lack a little something in the humanly-synthesizable department.

    OTOH, the notion of content addressability is really quite interesting; I wonder what it would look like if you combined, say, git with distributed hash table systems. Well, apart from annoying blakey even more, of course. Are P2P systems reliable enough to be used as DVCSes? What on earth would that even mean?



  • @Arnavion said:

    And a hypothetical DNS 2.0 won't have to remove the existing DNS 1.0 either. It's like you people are being dense on purpose.

    So how would a program know whether to resolve somedomainwithoutdots through DNS 1.0 as a local domain, or through DNS 2.0 as a global one?


  • Discourse touched me in a no-no place

    @El_Heffe said:

    @toon said:

    If domain names had to have at least one dot in them, that would make sense.

    According to ICANN regulations, you can't have a domain name without a dot.

    Then someone at ICANN needs to have a word with God. (Yes, I know it redirects.)



  • @PJH said:

    Then someone at ICANN needs to have a word with God. (Yes, I know it redirects.)

    That link doesn't seem to work in any browser (using Windows 8).



  • @dkf said:

    I think I'll leave off writing those magnet URIs by hand; they lack a little something in the humanly-synthesizable department.

    That's fast becoming irrelevant. Search is now the most common starting point for human interaction with the Web, so human-readable links scarcely matter at all. If you can search for it and it turns up, that's enough for most people; if you can bookmark it, you've even covered the priesthood.

    @dkf said:

    the notion of content addressability is really quite interesting

    I've long thought so. I think it would be fun to play with a scheme where everything bigger than 4KiB is a Merkle tree with 512-bit SHA-3 hashes linking 4KiB pages and it totally doesn't matter where any of the pieces are stored.
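
    A minimal sketch of that scheme, assuming SHA3-512 from Python's hashlib and the page size above (everything else here is made up):

    import hashlib

    PAGE = 4096              # 4 KiB content pages
    DIGEST = 64              # 512-bit SHA-3 digest = 64 bytes
    FANOUT = PAGE // DIGEST  # 64 child digests fit in one interior page

    def page_hash(page: bytes) -> bytes:
        return hashlib.sha3_512(page).digest()

    def merkle_root(data: bytes) -> bytes:
        assert data, "empty content not handled in this sketch"
        # Leaf level: split the content into 4 KiB pages, hash each one.
        level = [page_hash(data[i:i + PAGE]) for i in range(0, len(data), PAGE)]
        # Interior levels: pack up to 64 child digests into a page,
        # hash that page, and repeat until one root digest remains.
        while len(level) > 1:
            level = [page_hash(b"".join(level[i:i + FANOUT]))
                     for i in range(0, len(level), FANOUT)]
        return level[0]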



  • @ender said:

    @Arnavion said:
    And a hypothetical DNS 2.0 won't have to remove the existing DNS 1.0 either. It's like you people are being dense on purpose.

    So how would a program know whether to resolve somedomainwithoutdots through DNS 1.0 as a local domain, or through DNS 2.0 as a global one?

    I take it you're talking about interpreting user input, not doing the actual resolution (since for the actual resolution you just need a new resolve2() API). Considering an IPv6 string differentiates itself from IPv4 via the use of colons, perhaps all DNS 2.0 hostnames can start with a colon.
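
    If it worked that way, dispatching user input might look something like this (a sketch; resolve2() is hypothetical, as above):

    import socket

    def resolve2(name: str) -> str:
        # Hypothetical DNS 2.0 resolver; no such API actually exists.
        raise NotImplementedError

    def resolve(user_input: str) -> str:
        if user_input.startswith(":"):
            # A leading colon marks a DNS 2.0 global name, per the proposal.
            return resolve2(user_input[1:])
        # Everything else keeps its current meaning, so dotless names
        # still resolve as local hosts through the classic API.
        return socket.gethostbyname(user_input)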



  • @dkf said:

    Are P2P systems reliable enough to be used as DVCSes?

    @flabdablet said:

    I've long thought so. I think it would be fun to play with a scheme where everything bigger than 4KiB is a Merkle tree with 512-bit SHA-3 hashes linking 4KiB pages and it totally doesn't matter where any of the pieces are stored.

    The notion of a publically (*) distributed internet always loses steam because it makes resolution and/or fetching data from the server (depending on what part you distributed) as slow as the weakest link, which is the guy on dial-up or satellite who always happens to have the 4K piece you're looking for. People spend vast amounts of money on building high-speed internet infrastructure, so it's a shame to waste it, even for the benefit of a globally distributed DNS table.

    (*) I might be using the word "publically" wrong. I meant distributed between such peers as consumers, as opposed to CDNs like CloudFlare or Akamai.



  • @Arnavion said:

    @ender said:
    @Arnavion said:
    And a hypothetical DNS 2.0 won't have to remove the existing DNS 1.0 either. It's like you people are being dense on purpose.

    So how would a program know whether to resolve somedomainwithoutdots through DNS 1.0 as a local domain, or through DNS 2.0 as a global one?

    I take it you're talking about interpreting user input, not doing the actual resolution (since for the actual resolution you just need a new resolve2() API). Considering an IPv6 string differentiates itself from IPv4 via the use of colons, perhaps all DNS 2.0 hostnames can start with a colon.

    That'll end well.

    "I went to http://google but it didn't work"
    "No, you want http://:google"

    Plus the problem of what the hell listening on :http means with your DNS.



  • @Arnavion said:

    it makes resolution and/or fetching data from the server (depending on what part you distributed) as slow as the weakest link, which is the guy on dial-up or satellite who always happens to have the 4K piece you're looking for. People spend vast amounts of money on building high-speed internet infrastructure, so it's a shame to waste it, even for the benefit of a globally distributed DNS table.

    If everything is made out of content-addressable 4K pages, then not only does it not matter where any given 4K page is stored, it's also quite likely that multiple sources for any given 4K page will exist. Anybody interested in keeping particular content highly available is perfectly free to cache all the 4K pages comprising that content on any high-availability server(s) they control. If this scheme were in widespread use, availability of any given chunk of data should be no worse than it is on the WWW as presently organized and may occasionally be better.

    Fully content-addressable data broken into small pages like this could also interact with multicasting and anycasting in some quite interesting ways. It's fun to think through the properties of a network layer that uses universal 512-bit content addresses directly for packet switching, where the closest a network address ever gets to mapping to a physical device location is choosing which router port(s) a given 4K packet will get shipped out on when it arrives.
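
    To make the "doesn't matter where the pieces are stored" part concrete, here's a toy in-memory content-addressed store (purely illustrative):

    import hashlib

    # Pages are requested by digest alone, so it makes no difference
    # which node actually holds a given page.
    store: dict[bytes, bytes] = {}

    def put(page: bytes) -> bytes:
        key = hashlib.sha3_512(page).digest()
        store[key] = page
        return key

    def get(key: bytes) -> bytes:
        page = store[key]
        # A fetched page is self-verifying: rehash and compare.
        assert hashlib.sha3_512(page).digest() == key
        return page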



  • @Ben L. said:

    Do all your words start with pl?

    Yeah. It's like one of those things where you ask people for a random number out of [1-10] and you'll mostly get 3 or 7.



  • @flabdablet said:

    Anybody interested in keeping particular content highly available is perfectly free to cache all the 4K pages comprising that content on any high-availability server(s) they control.

    So suppose I have a site arnavion.com that I'm interested in keeping "highly available". Right now, I have a single server that hosts the site, and every client hits it to get its contents. You suggest I do what I'm doing right now, i.e., keep a public server available that contains the content of arnavion.com, and advertise to the network the content and its Merkle hash. However, other peers are free to have copies of some of the pieces of that content as well. How do I guarantee that everyone who wants to access arnavion.com only gets the pieces from my high-bandwidth, low-latency server as opposed to the guy on satellite internet? That was my point.

    @flabdablet said:

    If this scheme were in widespread use, availability of any given chunk of data should be no worse than it is on the WWW as presently organized and may occasionally be better.

    Unless I'm misunderstanding your scheme, you've increased availability via redundancy but made the average experience worse.


  • Discourse touched me in a no-no place

    @anonymous235 said:

    @PJH said:
    Then someone at ICANN needs to have a word with God. (Yes, I know it redirects.)

    That link doesn't seem to work in any browser (using Windows 8).

    Ah - could be my Firefox automatically adding .com on the end...



  • @PJH said:

    @anonymous235 said:

    @PJH said:

    Then someone at ICANN needs to have a word with God. (Yes, I know it redirects.)

    That link doesn't seem to work in any browser (using Windows 8).

    Ah - could be my Firefox automatically adding .com on the end...

    Yesterday, clicking on the link (Firefox / Win 7) took me to http://va.com; today I get an error message that Firefox can't find http://va/



  • @PJH said:

    Ah - could be my Firefox automatically adding .com on the end...

    Yes, I'm pretty sure it does that. Even links like http://flickr/photos/81499140@N03/9692436380/ work (Opera 12 actually lets you customize which TLDs to try first). So it can't be that much of a security risk if a major browser already does it. And admittedly I haven't read the specification, but it sounds like Google already thought out the details.

    Doesn't matter; in 3 years, after the "TLDs for everyone" craze has finally happened, URLs will be such a clusterfuck that hopefully everyone will want a new standard.

    (update: that link doesn't work if you just click it, but it does if you open it in a new tab or enter it in the URL bar)


  • Discourse touched me in a no-no place

    @Arnavion said:

    How do I guarantee that everyone who wants to access arnavion.com only gets the pieces from my high-bandwidth, low-latency server as opposed to the guy on satellite internet?

    Guarantee statistically or absolutely? Statistically, you just pay a specialist high-quality service to be guaranteed peers for particular pages, much as you currently might pay a CDN to sit in front of your site. Absolute guarantees are for pussies (or people with lots of bandwidth). And if the chunk is already close to the particular consumer (i.e., they've already got it on their side of the satellite internet) then there's nothing wrong with them using that copy. You just need a reasonable way to decide what is a “close” source from an infrastructure perspective, but you always want that with P2P.



  • @Arnavion said:

    How do I guarantee that everyone who wants to access arnavion.com only gets the pieces from my high-bandwidth, low-latency server as opposed to the guy on satellite internet?

    You don't, and shouldn't have to; the network should respond as quickly as possible to a request for any given set of 4K pages. It should be designed in such a way that pages usually end up cached near (where distance is measured in milliseconds) hosts where requests for them come from. As long as your high-bandwidth, low-latency server is available to feed all the caches that end up relaying page requests between it and clients, the mere fact that some peer cache on the far side of a satellite link from your server also has some pages of your content should not make it a preferred source for those - unless, of course, the client requesting them is also on the far side of that link.

    Requests are themselves just 4KiB content pages that happen to be interior Merkle tree nodes; 4KiB / 512 bits gives each node up to 64X fan-out. If the network is built on multicast segments, caches or clients waiting for their turn to issue a request for popular content might well be able to satisfy that request without ever needing to actually issue it: sniff the passing traffic, hash each passing page, and compare the hashes to those they're about to request.
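
    The fan-out arithmetic in code form (the node layout is my assumption, not a spec):

    DIGEST = 64            # 512-bit digest = 64 bytes
    PAGE = 4096            # 4 KiB page
    print(PAGE // DIGEST)  # 64-way fan-out per interior node

    def children(interior_page: bytes) -> list[bytes]:
        # An interior node is just child digests laid end to end, which
        # is why a request page can be cached and relayed like content.
        assert len(interior_page) % DIGEST == 0
        return [interior_page[i:i + DIGEST]
                for i in range(0, len(interior_page), DIGEST)]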


  • Considered Harmful

    @dkf said:

    @Arnavion said:
    How do I guarantee that everyone who wants to access arnavion.com only gets the pieces from my high-bandwidth, low-latency server as opposed to the guy on satellite internet?

    Guarantee statistically or absolutely? Statistically, you just pay a specialist high-quality service to be guaranteed peers for particular pages, much as you currently might pay a CDN to sit in front of your site. Absolute guarantees are for pussies (or people with lots of bandwidth). And if the chunk is already close to the particular consumer (i.e., they've already got it on their side of the satellite internet) then there's nothing wrong with them using that copy. You just need a reasonable way to decide what is a “close” source from an infrastructure perspective, but you always want that with P2P.

    OK so anyone can be a mirror for any site. What's to stop a competitor from mirroring my site with my company name and logo replaced with huge-throbbing-shlong.jpg? If they have better hardware than me, do they become de facto preferred mirror?



  • @flabdablet said:

    @Arnavion said:
    it makes resolution and/or fetching data from the server (depending on what part you distributed) as slow as the weakest link, which is the guy on dial-up or satellite who always happens to have the 4K piece you're looking for. People spend vast amounts of money on building high-speed internet infrastructure, so it's a shame to waste it, even for the benefit of a globally distributed DNS table.

    If everything is made out of content-addressable 4K pages, then not only does it not matter where any given 4K page is stored, it's also quite likely that multiple sources for any given 4K page will exist. Anybody interested in keeping particular content highly available is perfectly free to cache all the 4K pages comprising that content on any high-availability server(s) they control. If this scheme were in widespread use, availability of any given chunk of data should be no worse than it is on the WWW as presently organized and may occasionally be better.

    Fully content-addressable data broken into small pages like this could also interact with multicasting and anycasting in some quite interesting ways. It's fun to think through the properties of a network layer that uses universal 512-bit content addresses directly for packet switching, where the closest a network address ever gets to mapping to a physical device location is choosing which router port(s) a given 4K packet will get shipped out on when it arrives.

    Reading your post I see three possible explanations:

    1. You wrote this bullshit using SciGen
    2. You forgot to take your pills
    3. You are so used to making stupid shit up that you can't tell anymore whether you are making sense or not

    To people reading this forum who don't know shit about networking: don't worry, he doesn't either. We're talking about the moron who blocked Google because he wanted to prevent fifth-graders from downloading porn on the school network.



  • @joe.edwards said:

    OK so anyone can be a mirror for any site. What's to stop a competitor from mirroring my site with my company name and logo replaced with huge-throbbing-shlong.jpg? If they have better hardware than me, do they become de facto preferred mirror?

    Maybe the computers on the network could vote?



  • @joe.edwards said:

    What's to stop a competitor from mirroring my site with my company name and logo replaced with huge-throbbing-shlong.jpg? If they have better hardware than me, do they become de facto preferred mirror?

    That's the point of the Merkle tree.


  • Discourse touched me in a no-no place

    @joe.edwards said:

    OK so anyone can be a mirror for any site. What's to stop a competitor from mirroring my site with my company name and logo replaced with huge-throbbing-shlong.jpg? If they have better hardware than me, do they become de facto preferred mirror?

    Which would require them to have a way to reliably construct an alternate page chunk that has the same cryptographic hash. That's quite difficult, significantly more so than the current shenanigans that can be done with DNS poisoning. (If they can break the crypto hash to order, they should focus on more profitable activities, like stealing lots of money by monkeying around with financial transactions.)

    Plus even a small change on your part will force the attacker to recompute everything from scratch, so the attacker would be at a real disadvantage. You'd still probably want to keep some parts of the page going over a real live connection or you'd be stuck with serving exactly the same page content to everyone, but that usually doesn't need to be very large.

    There are also some industries that would welcome someone injecting page changes; they'd be able to argue that it was the attacker who was liable for the stock image's licensing fees. (“huge-throbbing-shlong.jpg is not available in your region due to the content owner not authorising expansion overseas.”)
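
    To illustrate the first point: any one-byte substitution yields an unrelated digest, so slipping a modified page past a known hash means mounting a second-preimage attack on the hash function (digests shown truncated):

    import hashlib

    a = hashlib.sha3_512(b"our company logo").hexdigest()
    b = hashlib.sha3_512(b"our company log0").hexdigest()
    print(a[:16], b[:16], a == b)  # two unrelated prefixes, False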



  • @joe.edwards said:

    OK so anyone can be a mirror for any site. What's to stop a competitor from mirroring my site with my company name and logo replaced with huge-throbbing-shlong.jpg? If they have better hardware than me, do they become de facto preferred mirror?

    Fair question.

    If I grab a chunk of content from you, and replace a few selected 4K pages of it with my own stuff, a Merkle tree constructed from the result will share many pages with your own tree, so my stuff will benefit from most of the same caches that yours does (including your own expensive high availability root cache, if you run one). So as a sleazy scamming spammer, I don't need much of my own dedicated bandwidth to make my knock-offs very nearly as highly available as your originals.

    The root page for my knock-off tree (and therefore the hash required to request the root page) would obviously be different, though, so anybody starting with your original root hash would still get your original unadulterated content regardless of which mirrors are involved.

    Making it possible for a client to distinguish a spliced-together knock-off from an original should be pretty easy: add a concept of signed content, where the last 64 bytes in each Merkle tree interior page is an Ed25519 signature for the rest of it. I can't generate new hash pages that verify against your signature, so I can't make stuff that looks like it came from you without making it identical to what actually does.
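
    A sketch of that signing scheme using the pyca/cryptography package (the page layout is made up here; the 64-byte signature size is a property of Ed25519):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey,
    )

    SIG = 64  # an Ed25519 signature is exactly 64 bytes

    def sign_interior_page(child_hashes: bytes, key: Ed25519PrivateKey) -> bytes:
        # The publisher appends a signature over the child digests as
        # the last 64 bytes of the interior page.
        return child_hashes + key.sign(child_hashes)

    def is_genuine(page: bytes, pub: Ed25519PublicKey) -> bool:
        body, sig = page[:-SIG], page[-SIG:]
        try:
            pub.verify(sig, body)
            return True
        except InvalidSignature:
            return False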



  • @flabdablet said:

    It's by far the most common invalid TLD seen by the DNS root servers (last time I looked, they actually got more requests for .local than for .org)

    Okay, that one needed some verification: http://stats.l.root-servers.org/cgi-bin/dsc-grapher.pl?plot=qtype_vs_all_tld&server=L-root

    As of today the top TLDs queried at the root servers are, in order, "com", "domain", "net", "local", "home", ".", "org", "localdomain", "localhost", "belkin", "lan", "arpa", "cn", and "internal".   Honourable mentions go to "router", "dlink", and "shtml" just for showing up.

    I'm impressed that "domain" is being queried over three thousand times a second, overcoming its handicap of not being a valid TLD. Bad news for fans of "org", which seems unlikely to make the playoffs this year.



  • @DCRoss said:

    Bad news for fans of "org", which seems unlikely to make the playoffs this year.
    At least they're beating the Russians (ru).



  • @joe.edwards said:

    @toon said:
    @Faxmachinen said:

    @boomzilla said:

    TRWTF is using 4 asterisks to separate the addresses. There are much more secure numbers of asterisks, plus using more gives you better future proofing and optimization opportunities.

    Four is not even a prime number. To do it properly you need to salt it and encode it in base64 or some other cryptographically secure method. Some people really don't take security seriously.

    ...don't forget XML; you'll need a SaltBridgeFactoryBuilder, too.

    You guys are clueless about cryptography. The only truly secure method of encryption is a one-time pad. I suggest using 42 or some other hard-to-guess value as the pad.

    Overkill... I mean, they're just email addresses, after all. Just ROT13 them bad boys and call it good.
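
    (Python even ships the requisite enterprise-grade crypto:)

    import codecs

    print(codecs.encode("user@example.com", "rot_13"))  # hfre@rknzcyr.pbz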


