This seems like the kind of warning you might need to give Patrick Star....
<patrick's voice> Oh wow, good thing I haven't eaten them yet! I'd have never known! </patrick's voice>
@mott555 said in Pressure to upgrade to Windows 10 ratchets up. AGAIN.:
I am really starting to hate Windows 10. I've had two ruined weekends in a row. The first weekend, I woke up early and had a few hours to get some stuff done before my friends showed up. I powered up my PC and was suddenly facing 6 hours of Windows Updates, complete with the Windows install interface that made it look like a fresh install!!! I did catch up on some reading, but dangit, I had plans!
Then, the next weekend I was traveling and brought my laptop. This laptop gets powered on maybe once per 6 weeks. I started working on one of my Node.js apps, and had absolutely terrible performance like I'd never seen before. After an hour or two of debugging, I couldn't find anything wrong with my code, when I finally discovered the Windows Update Service was eating 95% of my CPU (this laptop is a dual-core 1.2 GHz "i7"). I could not find out how to pause this, and ended up getting no real work done that day either.
What worked for me was to disable the BITS service. Your results may vary, but I've seen that fix it before.
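For anyone who wants to try the same thing, this is roughly what it looks like from an elevated (administrator) command prompt. Note that disabling BITS also stalls Windows Update and anything else that uses background transfers until you re-enable it:

    net stop bits
    sc config bits start= disabled

And to undo it later:

    sc config bits start= demand
    net start bits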
I didn't know what it was either. And originally the explanations I found of it were really obtuse.
@blakeyrat said in Programming Confessions Thread:
Here's another one: I've had the concept of "dependency injection" explained to me about half a dozen times by half a dozen people, and I still don't really understand what it is, or why I would want to use it in my own code. I especially don't understand IoC (Inversion Of Control) containers.
Basically, your higher-level classes communicate with lower-level concrete classes through an abstraction: an interface. Instead of a higher-level class creating instances of the concrete classes it depends on, even if it talks to them through that interface (as in the code below), you hand the class an object that implements the interface.
class Watcher
{
    private INotifyAction act = null;

    public void Notify(string message)
    {
        if (act == null)
        {
            // here the class itself maps the abstraction to a concrete type
            act = new EventLogWriter(); // implements INotifyAction
        }
        act.ActOnNotification(message);
    }
}
You can give the dependency object to the higher-level object in one of three ways (a fuller sketch of the first follows the list):
Through the constructor:
Watcher (INotifyAction action)
Through a method:
myWatcher.Notify("This is the message", new EventLogWriter());
or through a property:
myWatcher.actor = new EventLogWriter();
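Put together, a minimal sketch of the constructor flavor, using the same INotifyAction and EventLogWriter as above (NullNotifyAction is a made-up name for a do-nothing test stub):

    class Watcher
    {
        private readonly INotifyAction act;

        public Watcher(INotifyAction action)
        {
            // the dependency is handed in; Watcher never news up a concrete class
            act = action;
        }

        public void Notify(string message)
        {
            act.ActOnNotification(message); // Watcher only ever sees the interface
        }
    }

    // production wiring
    var watcher = new Watcher(new EventLogWriter());

    // test wiring: a stub that satisfies the interface but does nothing
    class NullNotifyAction : INotifyAction
    {
        public void ActOnNotification(string message) { /* intentionally blank */ }
    }

    var testWatcher = new Watcher(new NullNotifyAction());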
The benefit to doing this is that the Watcher class doesn't have to know anything about how the INotifyAction implementers do their stuff or what methods they implement; all it needs is that INotifyAction interface to talk to them. And the INotifyAction doesn't actually have to do anything if you're not at that part of the project. If you're writing a class that depends on other classes, you neither need to know nor care about the details of the dependencies; you only need to know the interface. You can even sub in a mock object that does nothing when any of the interface methods are called and implement them later. Since they're decoupled, they can be tested independently without worrying about bugs in the dependent code (the mock object you've subbed in can't have bugs, because it doesn't actually do anything; the method bodies are all blank or return a preset value).
This makes the software more flexible to change and maintenance.
IoC (Inversion of Control) containers are used when your dependency objects themselves rely on an abstract interface and dependency injection. The biggest challenge is writing an entire non-trivial application this way, which can be done using DI frameworks like (in the case of Java) Spring, NanoContainer, or PicoContainer, or (in the case of C#) Spring.NET, Castle Windsor, or StructureMap. Apparently Ninject is another good framework, according to Stack Overflow.
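For what a container actually buys you, here's a minimal sketch with Ninject (Bind/Get are Ninject's real API; Watcher, INotifyAction, and EventLogWriter are the types from the example above):

    using Ninject;

    var kernel = new StandardKernel();

    // the mapping from abstraction to concrete type lives in exactly one place
    kernel.Bind<INotifyAction>().To<EventLogWriter>();

    // the container inspects Watcher's constructor, sees it wants an
    // INotifyAction, and supplies the bound EventLogWriter automatically
    var watcher = kernel.Get<Watcher>();

The point is that no class anywhere calls new EventLogWriter(); the container resolves the whole dependency graph from the bindings.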
@dkf said in Check for all zeros:
@ben_lubar said in Check for all zeros:
Also, in Go, nil pointers only give you trouble when you dereference them, not when you call a method on them.
So it always uses compile-time-resolved method dispatch?
It sounds to me more like it does what Objective-C does with method dispatch: you can send messages to nil objects all day long and nothing will happen, but if you try to dereference a nil pointer, then you'll segfault. Objective-C (and likely Go, I don't know for certain as I've not studied it) is runtime (late) bound, as if all its methods were C++ methods declared "virtual" (obviously there are differences, but that's the general gist of it).
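For what it's worth, Go's answer to the dispatch question is a bit different from the Objective-C model: method calls on concrete types are resolved at compile time (only interface method calls dispatch at runtime), and calling a method on a nil pointer actually runs the method body; it only panics if that body dereferences the receiver. A minimal sketch:

    package main

    import "fmt"

    type T struct{ n int }

    // Safe on a nil receiver: the body never dereferences t.
    func (t *T) IsNil() bool { return t == nil }

    // Panics on a nil receiver: t.n dereferences t.
    func (t *T) Value() int { return t.n }

    func main() {
        var t *T               // nil pointer
        fmt.Println(t.IsNil()) // fine, prints "true"
        // fmt.Println(t.Value()) // would panic: nil pointer dereference
    }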
@djls45 said in Check for all zeros:
OP converted to NodeBB/Markdown:
When doing a code review for a library developed by an offshore company I came across the following parsing function.
Its purpose is to take a string containing a datetime from the mainframe and convert it to a valid .NET datetime object. If the mainframe doesn't have a date, it can either leave the field empty or fill it with zeros.
public static DateTime? GetDateTime(this string value, DateTimeFormat format)
{
    if (string.IsNullOrEmpty(value) || string.IsNullOrEmpty(value.Trim()))
    {
        return new DateTime?();
    }

    //check for all zeros
    string zero = value.Trim();
    int length = zero.Length;
    string check = new StringBuilder().Append('0', length).ToString();
    if (zero == check)
    {
        return new DateTime?();
    }
    //snip
    //...
}
So the caller provided the format and the DateTime was nullable. Did the function ever return null or did it just return a nullable object? Does it always return a new DateTime?() if it can't parse the string or only for empty and zero filled strings?
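Worth noting on the "did it return null or a nullable object" question: for value types in C# those are the same thing. new DateTime?() is just a Nullable<DateTime> with no value, indistinguishable from null:

    DateTime? a = new DateTime?(); // HasValue == false
    DateTime? b = null;            // compiles to exactly the same thing

    Console.WriteLine(a == null);   // True
    Console.WriteLine(a.HasValue);  // False
    Console.WriteLine(a.Equals(b)); // True; there is no runtime distinction

So "returning null" and "returning a nullable object" are the same outcome here; the open question is only what the snipped portion does with unparseable non-empty strings.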
@lordofduct said in Anonymous ex-Microsoft coder on what it's like to work on Windows 10:
Thing is, at the time I had also just gotten into linux hardcore. I'd installed it for the first time only a few years prior, but this was when I started actually using it as a primary OS. I'd dual boot, use linux for the most part, and could boot into windows for video games. I even had a virtual machine on the linux side to boot up windows directly from linux off the second hard drive with a virtual wrapper around it, so I could run flash from linux without wine (I was big into actionscript 3 back then... paid the bills).
Using vista... made me think of linux a lot.
Everyone cried about the whole vista constantly asking for administrator rights (UMC I think it was called?).
And my linux communities ragged on it the most.
And I was like, "but wait... I have to sudo this and sudo that all the fucking time too! In a gui system I'm prompted for root/admin access when doing all sorts of crap daily. Why aren't we bitching about that???"
Aside from that, I don't remember exactly what people hated about Vista. It pestered you a lot for admin rights; that's about it. It was annoying to support because it was such a huge workflow change for plebe users that support people were annoyed at having to explain to Jody in accounting to just click the fucking OK button, after she'd been trained to NEVER click any popup window because it's probably a "virus" (popup ads, circa late-90s internet browsers sucking balls).
As @Scarlet_Manuka said, the hardware requirements were advertised lower than what the system needed to run smoothly. UAC (you were close: User Account Control) didn't just insist on popping up an "are you sure" for everything "administrator", but also for a lot of tasks that were really unlikely to screw up your computer. And oftentimes it popped up multiple times for the same "task" (because it was composed of various sub-tasks). This "are you sure" model let you know when something administrative happened, and in that sense it was a good thing, but it was unclear to the user, and the user was trained to just hit OK and move on ignoring it... except they couldn't.

You see, when UAC popped up, it didn't just pop up a box like gksudo or kdesu; it darkened the screen and demanded you address the box, even if it had to do the darkening as slowly as possible in software rendering. This ended up with you giving the computer a command, the computer thinking about it for 30 seconds to a minute, then darkening the screen (at which point you couldn't do anything), and then, another 15-30 seconds later, popping up an "are you sure" which you HAD to address with the mouse, rather than being able to skip it knowing you intended to do what you intended to do. At which point the box would disappear and your desktop would re-render in about 15-30 seconds. That's if you had the average 2006 computer and ran Vista on it. That even happened to me running Vista on a 2007 computer, new from the store, that I got for graduation from my parents and grandmother with help from my older brother (who worked the tech bench at Circuit City). Additionally, you couldn't hit Enter to dismiss it and continue; you had to use the mouse to click the "Allow" button. UAC requires an entire "context switch" from the user.
Contrast this with what Linux does when you need to do something that requires root privileges: you click a link to a program that requires root, the shortcut already has "gksudo" or "kdesu" prefixing the command, and a box appears asking for your password. There's no screen darkening, you can address it whenever you want, you can reach it with alt+tab, and hitting Enter in the box is as good as hitting OK. So when it appears, you type in your password, which you already have memorized and can type rather quickly because you use it to log in all the time, hit Enter, the window disappears, the program that needed root privileges shows up, and you're on your way, and you NEVER HAVE TO ACKNOWLEDGE AN ARE YOU SURE DIALOG from the OS for that instance of the program again. There's no "context switch" in what the user is doing. If you used the search function in the various "start menu" lookalikes for Linux's desktops, the shortcut is what was found, and the menu was smart enough to launch that (along with the gksudo/kdesu prompt) rather than try to run the program in user mode and fail. So you could keep your hands on the keyboard if you meant to.
In software development, most of the common "this requires root on the command line but shouldn't in a desktop gui" commands, if not all of them, have a D-Bus interface which doesn't require sudo privileges to use (for example, shutting down or rebooting the computer). The things that genuinely need root obviously won't work unless you're root, at which point you can quite easily have your program examine whether it's being run in a root context or a limited-user context, and even have the program re-run itself as root, prompting for the password using "gksudo" or "kdesu". Windows admittedly does this a bit better now: you can add an entry to the app manifest so the program always elevates however the OS is set to do that (whether quietly or with a UAC box), but the linux solution is still simpler.
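For reference, this is the manifest fragment in question; it goes in the application's app.manifest, and requireAdministrator is the always-elevate option (asInvoker and highestAvailable are the other levels):

    <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
      <security>
        <requestedPrivileges>
          <requestedExecutionLevel level="requireAdministrator" uiAccess="false" />
        </requestedPrivileges>
      </security>
    </trustInfo>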
Finally on the command line, bash has a shortcut for running the last command in the context of another command:
!!
This command is substituted for the text of the last command issued in that terminal session. So if you forgot to "sudo" the last command on the command line, retrying as root is as simple as:
$ sudo !!
In windows, your command prompt session MUST START as an administrator command prompt; if it doesn't, it HAS TO BE relaunched. Windows does not let you "sudo", nor does it allow you to elevate privileges dynamically. The whole thing has to start from an administrative context, even if it makes zero sense for it to do so except when that one task is performed. There's no way to easily prompt the user for permission to do something in a way that the OS will respect and allow you to perform the task; or at least, in all the times I've gone searching for it to incorporate into my code, I've never found it. Every single solution I have found is basically either to change the manifest so that the program runs in an admin context every time, or to have the program detect whether it has been run as admin and, if not, relaunch itself as admin. Maybe it's more secure that way somehow, but it sure is darn annoying that it's not documented anywhere I've been able to find. Or if there IS a way, that THAT isn't documented anywhere either.
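For completeness, the detect-and-relaunch dance described above looks roughly like this in C# (standard .NET APIs; handling for the user clicking "No" on the UAC box is omitted):

    using System;
    using System.Diagnostics;
    using System.Security.Principal;

    static class Elevation
    {
        public static bool IsAdministrator()
        {
            using (var identity = WindowsIdentity.GetCurrent())
            {
                return new WindowsPrincipal(identity)
                    .IsInRole(WindowsBuiltInRole.Administrator);
            }
        }

        public static void RelaunchAsAdmin()
        {
            var psi = new ProcessStartInfo
            {
                FileName = Process.GetCurrentProcess().MainModule.FileName,
                UseShellExecute = true, // required for the "runas" verb
                Verb = "runas"          // this is what triggers the UAC prompt
            };
            Process.Start(psi); // throws Win32Exception if the user declines
            Environment.Exit(0);
        }
    }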
TLDR: There are vast differences between "UAC" and "having to sudo" that make the former very painful and the latter pleasant and yet secure. Sometimes Linux just does UX better.
TRWTF is that @weng gave a different Hungarian-notation suffix to the municipalities in each state. I had to s///g all instances of "B County" and "Municipality D" with "County A" and "Municipality C" respectively to keep track and wrap my mind around this.
@bb36e Did you notice the line under "state of the market" that says "spoiled kids who know their value"?
So knowing that your work is valuable and attempting to set the terms of your work and the price of your labor is "spoiled"?
REALLY!?!?!?
So, if I walked into McDonalds and demanded a hamburger for free because I bought a bunch of other hamburgers before, and then staged a boycott if they didn't comply.... that would morally be ok?
SMH