I'm pretty sure browsers will always detect my Surface as a device with an accurate pointing device, which is not how I use it most of the time, so I doubt the API is reliable. Also, according to caniuse.com, Firefox doesn't support it.
I don't know if they still have those screens though - those pages got discontinued last year.
They probably use a smartphone app now. (One of my friends is a traffic controller at ProRail, and if I'm over at his place and there's some question about whether the trains run on time, he checks his phone. I've never looked, so it could be he just uses the NS app, but I get the impression he has access to more info than that provides.)
Are there issues that could be fixed by disabling the DHCP server setting in the routers and connecting the modem to their WAN ports instead?
Either/or; not both.
If you connect your routers' WAN ports to your modem, then you'll typically be adding another layer of NAT at the router; even if you're not doing that, you'll be creating a subnet for each router's clients that will be separate from the subnet between the router and the modem.
DHCP is a broadcast-based protocol, and broadcasts don't cross routers. So if you reconfigure your routers as above and turn off their own DHCP, clients that connect to those routers won't automatically get IP addresses assigned to them; the upstream DHCP server inside the modem will be visible only to the WAN ports on the routers.
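A small sketch of the subnet logic above, using Python's `ipaddress` module. The addresses are hypothetical examples; the point is that a client behind a double-NAT router lands in a different subnet than the modem's DHCP pool, so the modem's broadcasts can never reach it, while a client of a plain WAP shares the modem's subnet.

```python
import ipaddress

modem_lan = ipaddress.ip_network("192.168.1.0/24")   # modem's own DHCP pool
router_lan = ipaddress.ip_network("192.168.2.0/24")  # a wifi router's NATted pool

# Double-NAT: a client behind the router is NOT in the modem's subnet,
# so DHCP broadcasts from the modem never reach it.
client = ipaddress.ip_address("192.168.2.50")
print(client in modem_lan)   # False

# WAP setup: everything shares the modem's subnet, so broadcasts reach everyone.
wap_client = ipaddress.ip_address("192.168.1.50")
print(wap_client in modem_lan)  # True
```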
Assuming the only reason you have these routers is to provide wifi service on your LAN - that is, assuming that what you actually want to create is a single subnet with all your (and your neighbor's) Ethernet and wifi devices visible to each other and able to access the Internet using the modem as their gateway - then what you need to do is turn off the wifi routers' routing brains and set them up as straightforward wireless access points (WAPs). This is the simplest configuration that could possibly work, and it means that the only routing device whose settings you then have to worry about will be the one built into the modem. Also, all connected clients (including the WAPs, unless you configure them with static IP addresses, which is probably a good idea) will then be in the same subnet, able to read and write broadcast traffic on that subnet, and will therefore be able to get IP addresses from the modem's DHCP server.
Better routers will have explicit settings that let you do this, after applying which you will find that the WAP's WAN port works just like another LAN port. If you have shit-grade wifi routers, your best bet is simply to turn off their own DHCP and wire them to the modem via one of their LAN ports.
If you actually want you and your neighbor not to be able to see each other's devices on the same LAN subnet, that's when you'd use the configuration with a separate wifi router for each household, each connected to the modem via the wifi router's WAN port and all three devices running their own DHCP servers. The routers would then pick up IP addresses for their WAN sides from the modem's DHCP server, and clients in each building would get IP addresses from the DHCP server inside that household's wifi router.
If you're going that way, you'd want to connect nothing to the modem but WAN ports from wifi routers.
@Tsaukpaetra I'm pretty sure I have a zip file at home with the necessary working instructions. It was a while ago so I can't remember for certain but I might've found some instructions initially that were hard to understand / didn't work.
I'll have a look see. I might've posted the bad instructions here. If so I'll change it...
@Tsaukpaetra Maybe they couldn't be bothered putting in separate code paths, and just said "eh, if you buy a full license we'll set it to MAX_SHORTINT days and if it's a problem for you after that you can call customer service."
I just love how when the client side detects that shit has happened at the connection level that the user can't do anything about, it throws the fact in the user's face and doesn't even attempt to reestablish the communications link. It's awesome usability, right there!
It's not an outright negative correlation, but the wise don't expect scientific code to be good. Most of it is custom one-off stuff with a single author-user, where the crappiness doesn't matter very much; the awfulness hits when it moves to the next level of usability (i.e., use by a second person!). And there are exceptions; some scientists produce really good code.
This is my entire fucking job. Ugh. I taught one scientist about version control and the idea of 'libraries to do stuff'. There's a thread where I bitch about it somewhere.
If you let the user do something without even attempting to warn them that it's something they shouldn't be doing, then when it breaks, that's not the user's fault; it's yours.
Is it just me, or are the censor-blurred sections completely readable?
If you squint.
I never understood this. How does reducing the amount of input improve clarity?
A smaller aperture increases depth of field. There are, of course, limits and confounders to the effect, but it comes down to this: looking through a smaller pinhole increases sharpness at the expense of brightness. The aperture filters out light rays outside some collimation boundary, so a smaller aperture means stronger filtering and better-collimated light.
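You can see the effect in numbers with the standard hyperfocal-distance approximation from the thin-lens model (the figures below are illustrative, not from the original discussion):

```python
def hyperfocal_mm(focal_mm, f_number, coc_mm=0.03):
    """Hyperfocal distance: everything from roughly H/2 to infinity
    is 'acceptably sharp' (coc = circle of confusion, a full-frame-ish value)."""
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

# Stopping a 50mm lens down from f/2 to f/16 shrinks the hyperfocal distance
# from ~42 m to ~5 m, i.e. far more of the scene ends up in focus.
for n in (2, 16):
    print(f"f/{n}: hyperfocal distance is about {hyperfocal_mm(50, n) / 1000:.1f} m")
```

Less light gets through at f/16, of course, which is the brightness trade-off mentioned above.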
It's a good thing that no school ever has been named after a saint.
Ah, the ideal world…
I've definitely heard of schools named something like Firstname Lastname Elementary, so even if you drop the person's St. title and just name the school after the person, some school names will still have spaces in them.
I meant religious names.
A bunch of them were sainted for doing legitimately good stuff, though. Stuff worthy of having schools named after them.
To explain a bit more thoroughly, PHP has a setting that sets a "soft" limit of memory that scripts are allowed to consume.
In current versions the default is 128M, meaning that for every script that runs (= one page being served, typically), that script could theoretically consume 128M of memory, and trying to allocate above that results in a fatal error. This is still subject to the system physically having enough memory to give out to the script; you can't just say "let my script consume 100GB" on a machine that doesn't have it - the alloc request will fail and you get the real "Out of memory" error.
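A toy model of how such a soft cap behaves (this is a hypothetical allocator for illustration; PHP's real enforcement lives inside the engine's memory manager):

```python
class Allocator:
    """Toy per-script memory cap, like PHP's memory_limit setting."""

    def __init__(self, limit_bytes):
        self.limit = limit_bytes
        self.used = 0

    def alloc(self, n):
        # PHP raises a fatal "Allowed memory size of X bytes exhausted" here.
        if self.used + n > self.limit:
            raise MemoryError(f"soft limit of {self.limit} bytes exceeded")
        self.used += n

a = Allocator(128 * 1024 * 1024)   # the 128M default
a.alloc(100 * 1024 * 1024)         # fine, under the cap
try:
    a.alloc(50 * 1024 * 1024)      # would push past the cap: fatal error
except MemoryError as e:
    print("fatal:", e)
```

The cap is "soft" in the sense that it's a policy check, separate from the OS actually running out of RAM, which is the real out-of-memory case described above.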
So, at some point in the lifetime of that script, either at config time or during runtime, the limit was upped to 2G, so that the script could allocate anything up to that but not beyond.
It's really just a safety measure to prevent runaway scripts eating all the memory. But 2G is a stupid high number, even for a platform that flat out asks you to up the numbers from the defaults - the default is 128M, last I checked Magento they insisted on 256M minimum, 512M recommended... And this is the bottom limit to serve any page on the platform.