Help Bites



  • @hungrier That is indeed what I am assuming, but I haven't found any clear indication of it. For what it's worth, the card is an NVidia Quadro K2100M.

    The problem when searching for "multiple monitors" is that most people want to use 2x 1920x1200 or similar, and it's entirely possible that by going up to 2560x1600 I'm overdoing it compared to what the card supports, but I'd like to be sure it's the reason...



  • @remi said in Help Bites:

    NVidia Quadro K2100M

    has a maximum resolution of

    3840×2160

    all of your monitors must fit within that rectangle

    with one monitor at 2560x1600 that leaves you with the following possible areas to lay out the monitors:

    • 1280x2160 (Landscape only)
    • 2160x1280 (Portrait only)
    • 3840x560 (Landscape only)
    • 560x3840 (Portrait only)

    so 1920x1080 just ain't gonna fit into that pixel buffer. Even though there are enough pixels left over in that buffer area, they're in the wrong shape, so it won't work. :-(
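
    The arithmetic behind that list is just subtracting the first monitor from the 3840x2160 rectangle; here is that check as a quick sketch (the helper function is purely illustrative, not anything the card or driver exposes):

```python
FB_W, FB_H = 3840, 2160   # framebuffer limit of the card
M1_W, M1_H = 2560, 1600   # first monitor, already placed

def second_monitor_fits(w, h):
    """True if a w x h second monitor fits beside, or above/below, the first."""
    beside = M1_W + w <= FB_W and h <= FB_H    # side by side
    stacked = M1_H + h <= FB_H and w <= FB_W   # one above the other
    return beside or stacked

print(second_monitor_fits(1280, 800))   # True: the best landscape option
print(second_monitor_fits(1920, 1080))  # False: too wide beside, too tall below
```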



  • @Vixen said in Help Bites:

    so 1920x1080 just ain't gonna fit into that pixel buffer. Even though there are enough pixels left over in that buffer area, they're in the wrong shape, so it won't work.

    In landscape orientation, that is. If you put it into portrait orientation (1080x1920) it would fit.



  • Do ruby subroutines need to be defined "before" they're used in the source code, or can they go last in the file?


  • Banned

    @Captain a very quick google search by someone who doesn't even know how to Hello World in Ruby suggests: everything is late-bound, you can put everything in any order and it still works.
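
    A tiny sketch of what "late-bound" means in practice (made-up method names; the lookup happens when the call runs, not when the file is parsed):

```ruby
# 'first' refers to 'second', which is only defined further down the file;
# that's fine, because 'second' is looked up when 'first' is actually called.
def first
  second + 1
end

def second
  41
end

puts first  # prints 42
```

    The one ordering that does matter is top-level execution: a bare `first` on line 1, before the `def`s have run, would raise a NameError.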



  • @Gąska I've done production work in Ruby before, but it was a long time and many forgotten languages ago.

    For the record, I tried to ggl but couldn't think of the right keywords (apparently).


  • Banned

    @Captain I meant it more like "take everything I say with a huge grain of salt".



  • @Vixen said in Help Bites:

    @remi said in Help Bites:

    NVidia Quadro K2100M

    has a maximum resolution of

    3840×2160

    Ah, I thought it might be something like that, but couldn't find the information anywhere. Many thanks for that!

    That number matches the one I found in HP specs that I mentioned initially, but since in the specs it was in the context of one specific type of output, I wasn't sure whether it applied to all combinations of outputs!

    with one monitor at 2560x1600 that leaves you with the following possible areas to lay out the monitors:

    • 1280x2160 (Landscape only)

    Since the screen itself is laid out in landscape, 1280 is the max horizontal size I will get (and do indeed get). I guess that the 800 vertically comes from simply matching the screen aspect ratio (1280 x (1600/2560) = 800), so yup, 1280x800 is the best I can get.

    • 3840x560 (Landscape only)

    How do you get that? ... Oh, by laying the second screen above (or below) the first one, not next to it. Somehow that layout never came to my mind. But in any case I would end up with one dimension maxing out at 560, which will look even worse than my current setup (1280x800).

    So I guess that proves that I cannot get anything better than what I currently have. Which is in a sense good news, in that I know that I have reached the maximum and there is no point in tweaking it one way or another, it'll never go higher.

    Although now that I know that, maybe I could try to lower the resolution on one screen to try and get the same on both, maybe it would look better than having two screens at two wildly different resolutions? A max of 3840 makes, uh, 2x 1920x1200, I'll have to try and see how it looks.

    Thanks again for your help!

    (also and not totally unrelated, the purchase order for my new laptop has been validated, so I'm just hoping it won't be too long before it's delivered!)



  • @remi said in Help Bites:

    Although now that I know that, maybe I could try to lower the resolution on one screen to try and get the same on both, maybe it would look better than having two screens at two wildly different resolutions? A max of 3840 makes, uh, 2x 1920x1200, I'll have to try and see how it looks.

    And the verdict is, I can't even do that. Which goes back to something I said before and that I had already forgotten: even when the DVI screen is the only one plugged in, I can't get anything above 1280x800 on that screen. That part is really weird, I really can't understand why. Everything I can read tells me that there shouldn't be any issue with DVI preventing me from going higher (maybe not the full 2560x1600, but at least 1920x1200 or so).

    The only thing I haven't tried is to reboot the laptop after having the DVI screen (only) plugged in, but since I've done various hibernate/suspend cycles and other rounds of plugging in and out of the docking station, I can't really see how that would change anything (and my laptop takes about 10-15 min for a full reboot + login, which is one reason why I want to change it...).

    Anyway, since the best I might get is 2x 1920x1200, and that 1920x1200 looks quite ugly on one screen (when I can do 2560x1600 on it), it's unlikely that I would like 1920x1200 on both screens better than what I now have. Plus, new laptop coming soon etc.

    Anyway anyway, no one cares about my ranting about screen layouts (and my first-world problem of having 2 screens capable of 2560x1600!), so I'm gonna shut up and actually... do some work (:eek:).


  • Notification Spam Recipient

    @remi said in Help Bites:

    Anyway anyway, no one cares about my ranting about screen layouts (and my first-world problem of having 2 screens capable of 2560x1600!), so I'm gonna shut up and actually... do some work (:eek:).

    Some people would likely jeer at my single 4k monitor running at 96dpi (which is a lie, thanks Windows for the mental gymnastics whenever I need to think about that).



  • @remi said in Help Bites:

    The screen plugged through DisplayPort has no issue displaying at 2560x1600, but the screen on DVI only goes up to 1280x800, which is tiny (especially when blown up on a 30'' screen!). It's the same whether I plug in only one of the screens or both.

    I poked around and found this AnandTech review of an HP monitor that did 2560x1600 on dual-link DVI but 1280x800 on single-link. Any chance your monitors do something similar?



  • @Parody said in Help Bites:

    @remi said in Help Bites:

    The screen plugged through DisplayPort has no issue displaying at 2560x1600, but the screen on DVI only goes up to 1280x800, which is tiny (especially when blown up on a 30'' screen!). It's the same whether I plug in only one of the screens or both.

    I poked around and found this AnandTech review of an HP monitor that did 2560x1600 on dual-link DVI but 1280x800 on single-link. Any chance your monitors do something similar?

    Umm... I don't think so, but I'm not sure. Because before plugging my laptop, those exact same two screens were plugged into a desktop, one screen with DVI and the other with, uh, DisplayPort I think, and they both showed 2560x1600. So the screen itself is capable of getting a 2560x1600 input from DVI. However I have no idea whether the desktop/laptop are sending dual/single-link on the DVI, so maybe this is indeed the issue.

    (but since I'm limited by the overall output of the video card, that specific point is of little interest -- even if I managed to get 2560x1600 through DVI, I'd still be too limited on the sum of both)



  • Does anyone know what's safe to fix an apparently broken GPT Linux partition?

    I have a computer with a disk that doesn't work anymore (well the whole computer is broken down for as-yet unknown reasons, but even just plugging that disk into another machine, it's not working). Although I'm not quite sure as to how that disk is supposed to be treated as the machine had been, at one point in its life, using 4 disks in RAID but I think that it's no longer the case (would be easy to check if I could boot the machine, but refer to above...). So anyway, I'm assuming that one disk should work by itself.

    As far as I can tell, the disk itself is OK (I don't see any obvious error with e.g. smartctl -a), although of course I might have missed something. A raw copy of the disk with dd succeeded without any error either.

    Once plugged into a different machine, the disk reports (with e.g. gparted or fdisk) a single partition of type ee, which corresponds to GPT. So far so good, this is apparently how tools that can't read GPT should report it (although I'm surprised that gparted isn't able to read it, but maybe this is due to the errors?). gdisk can read the GPT and reports a single Linux partition, which is also what I would expect.

    But I can't mount the disk or the partition: /dev/sdX isn't a mountable FS, which is normal, but there is no /dev/sdX1, which is bad news -- and of course trying to mount that fails. I'm assuming that a modern Linux system should be able to Just Work with a valid GPT disk?

    And probably more useful, gdisk reports an error when reading the partition table. I haven't copied it (I forgot...) but it's something along the lines of "the main partition table is corrupted as something something points to somewhere after the end of the disk, I'm using the backup partition table which seems OK".

    So I'm assuming there is an error on the disk and that's why everything else fails. Hopefully the disk is not toast and it's just a single error, which means that if I can work around it I'll be able to recover its content. Now the question: can I trust gdisk to fix that error if I follow its suggestions (i.e. rewrite the main partition table from the backup)?

    Also, how can I access that Linux partition on the disk? Do I even need to fix the partition table itself, or should I be able to read it directly, and how? Is there for example a tool that could read the raw disk (or its image) and just extract a single partition as another image?
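
    For that last question: if the start sector and length of the partition can be read out of the surviving table, plain dd can carve it out of the image. A sketch with made-up numbers (the real ones would come from the gdisk listing; the first dd just fabricates a stand-in image so the commands are self-contained):

```shell
# Stand-in for the image you already made with dd from the real disk.
dd if=/dev/zero of=disk.img bs=512 count=206848

# Suppose the table says the partition starts at sector 2048 and is
# 204800 sectors long (512-byte sectors). Carve it out of the image:
dd if=disk.img of=part1.img bs=512 skip=2048 count=204800

# part1.img can then be checked or loop-mounted read-only, e.g.:
#   mount -o loop,ro part1.img /mnt/rescue
```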



  • @remi said in Help Bites:

    "the main partition table is corrupted as something something points to somewhere after the end of the disk, I'm using the backup partition table which seems OK".

    that would explain the lack of /dev/sdX1... and the specific error message would seem to indicate that the disk was part of a RAID 0 configuration (which would be why the end of the partition is after the end of the disk; there's supposed to be another disk there).

    you could try having gdisk write the partition table it's found using the backup partition table; that should cause Linux to find the GPT and create /dev/sdX1 to be mounted.... but if you do that and the backup is wrong because RAID..... you could still end up with gibberish. Best to use dd to save the original partition table first, so you can restore it afterwards if that makes the problem worse instead of better.
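
    Concretely: on a GPT disk the protective MBR plus the primary header and table live in the first 34 (512-byte) sectors, so saving them is one dd. Sketch on an image file (on the real disk, if= would be the device node):

```shell
# Stand-in for the disk; with real hardware this line goes away
# and if= below points at /dev/sdX instead.
dd if=/dev/zero of=disk.img bs=512 count=100

# Back up the protective MBR + primary GPT (first 34 sectors).
dd if=disk.img of=gpt-primary.bak bs=512 count=34

# If a repair makes things worse, put the original table back with:
#   dd if=gpt-primary.bak of=disk.img bs=512 count=34 conv=notrunc
```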



  • @Vixen said in Help Bites:

    @remi said in Help Bites:

    "the main partition table is corrupted as something something points to somewhere after the end of the disk, I'm using the backup partition table which seems OK".

    that would explain the lack of /dev/sdX1... and the specific error message would seem to indicate that the disk was part of a RAID 0 configuration (which would be why the end of the partition is after the end of the disk; there's supposed to be another disk there).

    That suddenly makes a whole lot of sense... especially with what I said of having had a RAID setup at one point in the past...

    OK, so, assuming there is still an active RAID array (or should I say, an active redundant array of RAID disks? 😉), now I have a different (set of) question(s). (I don't know much about RAID...)

    There are 3 other disks in that machine (well, 4 if you count the system disk, but that one is clearly identified and separate), so if there was a RAID, it was with (some of) those disks. Now I've plugged all 4 disks into another computer, and apart from the one mentioned in the previous post, the 3 other disks all report (with gdisk) having a GPT, which itself does not contain any partition at all. OK, that might be normal for disks other than the first one of the RAID. Maybe. Hopefully. Is there any way I can check if those disks are part of a RAID, apart from actually rebuilding the array (see below)?

    None of the disks report any error (with smartctl -a), so it does not look like the disks are physically dead. So far so good (the fact that the whole computer is currently not-even-reaching-POST means that the problem is anywhere but in the disks, and that the disks themselves should be OK when plugged into a different machine, so that's why I should be able to read them straightaway. Right?).

    But then I tried rebuilding the array with the disks, i.e. mdadm --assemble /dev/md0 /dev/sd[XYZ], and it fails saying there is no array. To make things worse I have no idea how many disks were actually used in the array, but with 4 disks there are not that many combinations (ab, abc, abcd, bc, bcd etc.) and I tried all of them (I think?) and I always got the same message. No RAID. Does the order of disks matter? Well of course it does in theory, but does it matter when calling mdadm? I might not have tried all possible permutations...

    So at that point, I assumed that there was actually no RAID on those disks and switched back to trying to mount a single one, but maybe I was wrong. But then how do I get the RAID to work again? (also I assumed that the RAID was built with mdadm but TBH for all I know it might be a hardware thing... although they're all plugged randomly on the MB, not like in some specific ports, but maybe that doesn't mean anything)

    best to use DD to save the original partition table so you can restore it afterwards if that makes the problem worse instead of better.

    Yeah, I'm not planning on writing anything on the disks themselves until I'm pretty sure of what I'm doing, I'll be working on images as much as possible.



  • @remi said in Help Bites:

    RAID. Does the order of disks matter

    Yes for hardware RAID controllers, no for mdadm, as that embeds metadata in the disks themselves that tells mdadm how the array is structured and how to configure itself. That's why you can just do a mdadm --assemble /dev/md0 /dev/disks/blah[0-5] and have it work....

    @remi said in Help Bites:

    I assumed that the RAID was built with mdadm but TBH for all I know it might be a hardware thing...

    At this point signs say that this was some sort of hardware thing.... which is fun.... because without the original controller and configuration it is possible that it is impossible (or at least wildly impractical) to recover the RAID configuration to get the disks mounted and the data recovered.

    You do have backups, yes?



  • @Vixen said in Help Bites:

    At this point signs say that this was some sort of hardware thing.... which is fun.... because without the original controller and configuration it is possible that it is impossible (or at least wildly impractical) to recover the RAID configuration to get the disks mounted and the data recovered.

    Ugh. At that point my best guess is to find out what is wrong in the rest of the computer and fix it. Assuming it's not the MB itself, because if I change that then I'll lose the original hardware RAID controller (since all disks were plugged into the MB directly, I assume that the hardware controller is built-in -- I haven't seen any other extension card or similar). Double ugh.

    I was also thinking about plugging the system disk to dig into config files to check the mdadm config, but that's useless if it's a hardware thing.

    I still have one little shred of hope: the disks likely were in RAID 1 (if anything because the disks are 500 GB each and the owner remembers that the disk as visible to the user was 500 GB -- and they vaguely remember using only 2 of the disks for that, not all 4, so that all adds up). In that case simply rebuilding the partition table should be able to make the first disk readable, and that single disk should contain all the information. So I'll try editing the partition table on the copy I made and see if that works.

    You do have backups, yes?

    Well, technically it's not my machine, so I can truthfully answer that I have backups. But as to whether the owner of the computer I'm fiddling with has backups of the disks I'm playing with, well... I think that they relied on the disks being RAID 1 to kind-of make a sort-of not-really-but-almost backup. Which of course proves to be a complete folly with a hardware RAID controller... :homer_slowly_backing_up_into_the_hedge:



  • @remi said in Help Bites:

    I think that they relied on the disks being RAID 1 to kind-of make a sort-of not-really-but-almost backup.

    Then they have learned the first lesson of RAID the hard way.

    1. Do not act incautiously when confronting little bald wrinkly smiling men!1

    Wait, sorry. Wrong rule one.

    1. RAID is not a backup.

    and they have also learned the fifth rule of RAID

    1. RAID is not a backup. We mean it.

    and the seventeenth rule of RAID

    1. Trusting a RAID array to be its own backup will come back to bite you. If you're lucky it will only take a hand so you can get one of those cool hook things, if you're unlucky it will be a resume generating event.

    and the ninety seventh rule of RAID too

    1. DO NOT FORGET RULE ONE

    in summary, best of luck to you! if we ever meet at the pub i'll shout you a round.



  • @Vixen Should I mention that "they" are someone very close to me, who's now counting on me to fix up the problem?

    I don't think this story will have a happy ending...

    (well at least they're now saying that they'll buy external disks to make backups, so at least they'll have learnt something... maybe...)



  • @remi said in Help Bites:

    @Vixen Should I mention that "they" are someone very close to me, who's now counting on me to fix up the problem?

    it would only affect the number of rounds i'll shout you at the bar, i'm afraid.... up to 5 now.

    @remi said in Help Bites:

    I don't think this story will have a happy ending...

    Don't give up hope yet. If it is a RAID1 array, repairing the partition table will fix it.... otherwise you'll be in the realm of computer forensics.... which is a deep and rather expensive rabbit hole to go down.

    @remi said in Help Bites:

    (well at least they're now saying that they'll buy external disks to make backups, so at least they'll have learnt something... maybe...)

    It's a painful lesson... but sometimes pain is the only teacher that works. Be sure to mention to them that they should have at least one backup offsite to protect against acts of god (like a lightning strike to the electrics of your house which fries the computer and the local backup disks).

    and of course backups not tested aren't backups either.....



  • @Vixen said in Help Bites:

    if it is a RAID1 array repairing the partition table will fix it

    If it's RAID1, you can probably mount it by specifying the FS type, ex

    mount /dev/sdc1 /mnt/tmp -t ext4

    This page could also be helpful
    https://blog.sleeplessbeastie.eu/2012/05/08/how-to-mount-software-raid1-member-using-mdadm/



  • @TimeBandit said in Help Bites:

    @Vixen said in Help Bites:

    if it is a RAID1 array repairing the partition table will fix it

    If it's RAID1, you can probably mount it by specifying the FS type, ex

    mount /dev/sdc1 /mnt/tmp -t ext4

    This page could also be helpful
    https://blog.sleeplessbeastie.eu/2012/05/08/how-to-mount-software-raid1-member-using-mdadm/

    that would work if the primary partition table wasn't corrupt or wrong (there's only /dev/sdX, not /dev/sdX1, to mount)


  • Notification Spam Recipient

    @remi said in Help Bites:

    And probably more useful, gdisk reports an error when reading the partition table. I haven't copied it (I forgot...) but it's something along the lines of "the main partition table is corrupted as something something points to somewhere after the end of the disk, I'm using the backup partition table which seems OK".

    This smells to me like the disk in question was originally GPT, then the RAID was created in hardware without fully or partially zeroing out the array. Now that you have them separated, it can see the original backup GPT, and since that one makes more sense than the primary one, it's using that instead.

    So, steps to recovery:

    1. Determine the most likely topology of the disks.
    2. Attempt to recreate said topology minimally
    3. ....
    4. Profit?

    Based on future posts I'll assume your minimum number of disks is 2, so we need to find which two to use. Further assuming the user just "used the defaults", this should be either the first-port drive and second-port drive, or the first-port drive and third-port drive.

    We'll assume the drive you've been talking about is the first disk, so in theory we just need to find the second one. To do so, try gparted and see which one talks about a missing primary GPT and corrupted backup GPT.

    Once we've identified our golden disks, it's time to visit the SATA controller's Option ROM and set them up. Exercise for the reader, but the anticipated config will be a RAID 0.

    If all goes well, it should Just Work.

    Oh, and the reader is working off of a clone, right?

    Best of luck!




  • @remi said in Help Bites:

    But then I tried rebuilding the array with the disks i.e. mdadm --assemble /dev/md0 /dev/sd[XYZ] and it fails saying there is no array.

    Could it be a shudder fake-RAID? The kind you bring up using a controller integrated into the motherboard, dmraid, and lots of dark magic? (Actually, man dmraid says it might be able to just parse the RAID metadata and work without the help of the hardware controller. Does dmraid detect any fake-RAID metadata on your hard drives?)

    Also, have you tried running testdisk in partition scan mode on the images of the hard drives you have got? It might take a while (so start the procedure while trying to do something else) and yield no results (if it was, indeed, a fake-RAID or something), but it might also help recover the partition table (in case it has been botched somehow).



    @Vixen @Tsaukpaetra Thanks for your advice. This story does have a happy ending after all! After fixing the partition table, I was ultimately able to read the contents of the disk and everything seems to still be there (as far as we could ascertain with a quick check). The disk was about 10% used, which is probably a good thing, as this reduces the likelihood that whatever happened to botch the disk botched a part that actually contained data.

    I think that in the end the RAID was a RAID herring. Out of the 4 disks, there was only one that showed anything inside the GPT, which always seemed weird to me, but I thought that maybe that was normal and only the first disk of the RAID would show a "normal" partition. In all likelihood, that's just that there was no RAID and there were 3 unused disks in the machine. Go figure. :mlp_shrug:

    On that single disk, the partition inside the GPT was indeed longer than the disk, but if that was because it was a RAID thing (as hypothesised up-thread) then I would expect that partition to be something close to twice (or four times, depending on the number of disks in the RAID) as large as the physical disk (give or take a bit), but it was actually just a tiny bit larger. Something like "disk is 999 001 000 sectors long, partition is 999 003 000 sectors long". The difference was suspiciously close to 2048, which is the first sector used by the partition (the part before being apparently reserved for the GPT itself?), but it wasn't exactly that. I have no idea how the disk could get in that state: no bad sector on it (as far as simple checking tools could tell me), a partition that contains valid data, but with a partition length that is wrong by a tiny margin. That does not seem like something you could get with a random failure, so...? 😕

    The GPT error was not "primary table is botched, I'm using backup" but the other way round: "primary is OK but backup is toast, because it's located after the physical end of the disk". Which makes sense with the partition-too-long issue.

    Anyway, on the copy I'd made (with dd), I tried fixing the GPT (with gdisk) but it kept telling me that it couldn't actually write the new backup to the end because it was overlapping a partition; gdisk cannot resize a partition, and parted refused to do it because of the botched GPT. However I managed in gdisk to delete the partition and recreate it with a slightly shorter size, and at that point, after much fiddling, I managed to use losetup to point at the actual partition and mount it (after a couple of rounds of fsck and resize2fs to fix the FS). Success!

    At some point during all that, I found out about testdisk, which would actually have solved all my problems from the start in less than 5 min. After testing it on the image (which I had already tweaked several times at that point), I tried it on the original (untouched) disk directly, and it correctly found that there was a single partition and how to fix the partition table to make it readable (I haven't yet committed that to the disk; since I could recover everything from the image, I'm keeping the disk untouched until the very end!), and it could browse through all the files and correctly recover them.

    So that seems a very good and simple-to-use tool for this kind of issue, I'll have to remember that!

    Hopefully that's the end of the story. I've now got an image that I can mount, the raw disk (still unfixed but I now know how to fix it) and a copy of the files on another disk. I'll pass the disk to the user and wait until they give me the green light before erasing the image or touching the disk. Belt and braces and another belt.

    Thanks for your help! I would probably have gotten there by myself in the end, but you helped me get there faster and with more confidence.



  • @aitap said in Help Bites:

    Also, have you tried running testdisk in partition scan mode on the images of the hard drives you have got?

    :hanzo:'d while I was writing the next post. I did not know about testdisk before yesterday. I tried it and it was indeed able to fix things for me, so I've learnt a new thing (which I'll have forgotten by the next time I need it, but...). Thanks anyway!



  • Been a long time since I had to use Python.

    Have to subclass a class, but the number of arguments on its __init__ is somewhat ridiculous and I'm lazy and I'd like to confirm I'm not doing it wrong before I go and do it to a bunch of classes and their methods.

    Is there a better way than just:

    def __init__(self, ... 15 arguments):
      self.static_weights = kwargs.get('static_weights')
      super(HammingFeedForward, self).__init__(... 15 arguments)
    

    ❓



  • @Captain can you just do self, *args, **kwargs as the signature? Then grab the one you want off of kwargs and pass the rest through using the same syntax?



  • @Benjamin-Hall I don't know but I'll give it a try.



  • Wow apparently the tensorflow folks at google decided to filter out the keywords I can do kwargs.get on. Thx guys.



  • @Captain said in Help Bites:

    Wow apparently the tensorflow folks at google decided to filter out the keywords I can do kwargs.get on. Thx guys.

    kwargs is just a dictionary, so you should be able to address it normally.



  • @Benjamin-Hall It's their template method that is doing the call to get. I suppose it's somewhat obvious now that I could probably filter the "bad" keyword out before I call super. (This is the sort of nonsense monads handle "once")

    Thanks for your help.



  • Does this pattern make sense? Or can I just pop/modify kwargs?

    self.kwargz = kwargs
    
    if bool(self.kwargz):
      self.static_weights = self.kwargz.pop("static_weights", None)
    
    super(HammingFeedForward, self).__init__(self, *args, self.kwargz)


  • @Captain said in Help Bites:

    Does this pattern make sense? Or can I just pop/modify kwargs?

    self.kwargz = kwargs
    
    if bool(self.kwargz):
      self.static_weights = self.kwargz.pop("static_weights", None)
    
    super(HammingFeedForward, self).__init__(self, *args, self.kwargz)
    

    you can do the pop on kwargs directly. no need to save it to self unless you want to refer to it later.

    as for the pattern...... don't like it but i've had to do similar when overriding certain braindead classes.......



  • @Vixen yeah. Python lets you modify mutable inputs at will, so just do it straight to the input.


  • 🚽 Regular

    @Captain said in Help Bites:

    super(HammingFeedForward, self).__init__(self, *args, self.kwargz)

    Aren't you missing ** in front of self.kwargz there?



  • @Zecc If I understand your code, yes I think so. This is the other half of the pattern I was missing. I kept getting a "you have too many positional arguments" error from Tensorflow's type checker.

    So if I understand you, I could theoretically do:

    def __init__(self, *args, **kwargs):
    
      self.static_weights = kwargs.pop("static_weights", None)
    
    super(MyClass, self).__init__(*args, **kwargs)
    

    ❓



  • @Captain yup.


  • 🚽 Regular

    @Captain said in Help Bites:

    So if I understand you, I could theoretically do

    Yes, ** will unpack your argument dictionary into keyword arguments, as per https://docs.python.org/3/tutorial/controlflow.html#unpacking-argument-lists


  • 🚽 Regular

    @Captain said in Help Bites:

    I kept getting a "you have too many positional arguments" error from Tensorflow's type checker.

    Because you were passing kwargz as a positional argument, as you have probably figured out.
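
    Putting the thread's conclusion together, the working pattern looks like this (the Base class here is a stand-in for illustration, not the real Tensorflow parent):

```python
class Base:
    """Stand-in for a parent class with a busy __init__."""
    def __init__(self, units, activation="relu"):
        self.units = units
        self.activation = activation

class HammingFeedForward(Base):
    def __init__(self, *args, **kwargs):
        # Pop the extra keyword so the parent never sees it,
        # then forward everything else unchanged.
        self.static_weights = kwargs.pop("static_weights", None)
        super(HammingFeedForward, self).__init__(*args, **kwargs)

layer = HammingFeedForward(64, activation="tanh", static_weights=[1, 0, 1])
print(layer.units, layer.static_weights)  # 64 [1, 0, 1]
```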



  • A tiny update on my disks story: it almost went disastrously wrong yesterday evening 😖

    Because of the number of disks I was playing with at the same time, my main computer was looking like this:
    [image attachment: c2ddf86c-4d24-48d1-b15d-c8a6160aabb6-image.png]

    And guess what, at one point the power supply to one disk came loose. I did not think and almost reflexively pushed the cable back in. With the power on. Yeah, that was stupid, I know. My hand was just faster than my brain.

    The whole computer went... poof. Oops. OK, it reboots by itself, good. OK, it can still boot and read my system disk. OK, OK, it's not going to be a catastrophe. Um, what's that? All these errors? On my backup disk (one of the two plugged through USB on the picture)? OK, no problem, I got it, "stand back, I know testdisk!" Uh, no, it doesn't see any partition at all, not even a partition table, on that disk. Oops.

    To make a long story short, in the end after a big round of turning off everything, unplugging everything and replugging carefully, everything came back up normally. But I would rather have done without that...

    Machine is now closed up again in its normal state, the faulty disk copied onto one of the empty disks, which has been tested as working (plus the data is copied on the backup disk, plus I still have the original faulty disk). At least now I should not be risking my main machine again... 🤞


  • Notification Spam Recipient

    @remi said in Help Bites:

    pushed the cable back in. With the power on. Yeah, that was stupid, I know.

    👀 :seye: I swear I have never personally done this constantly for about two weeks while waiting for replacement disks no-sirrey...



  • @remi said in Help Bites:

    I did not think and almost reflexively pushed the cable back in.

    A few years ago I bought a shiny new terabyte SATA HDD in addition to my older 320G 5400RPM IDE HDD. Being all excited for the speed and storage volume, I was less than careful while installing it, and the system didn't boot up.

    I had lots of time to think about why the MBR on the old HDD was suddenly corrupted, but instead I just reinstalled the bootloader, repaired the partition table and rebooted.

    Guess what, I had somehow managed to damage the IDE cable so that it botched a few bits on transmission, so now the bootloader code and the partition table actually were corrupted (by virtue of me overwriting them using a borked IDE cable) and I still didn't have a bootable system.

    Thank $deity I didn't decide to check the filesystem. That would have been fun to repair after I got the replacement cable.



  • @Tsaukpaetra Can confirm, SATA is hot-pluggable.



  • Quick docker question:

    Suppose I have a Docker image I want to plop in and use on a system. BUT, I need to change some of the command line options that get passed in to the "final" command.

    What do I need to do to accomplish that? Does it depend on what the container author exposes?

    For concreteness, the last step in the dockerfile is

     CMD ["bash", "-c", "source /etc/bash.bashrc && jupyter notebook --notebook-dir=/tf --ip 0.0.0.0 --no-browser --allow-root"]
    

    and I'd like to pass in an additional command line option to the jupyter call.



  • @Captain said in Help Bites:

    @Tsaukpaetra Can confirm, SATA is hot-pluggable.

    only if your motherboard has support for it and has it enabled (it's enabled by default for eSATA connections, but not for internal SATA connections.... probably because if you're setting up something that would be hotplugged and it's on an internal connection, you'd better know what the fleem you are doing).



  • @Captain said in Help Bites:

    Quick docker question:

    Suppose I have a Docker image I want to plop in and use on a system. BUT, I need to change some of the command line options that get passed in to the "final" command.

    What do I need to do to accomplish that? Does it depend on what the container author exposes?

    For concreteness, the last step in the dockerfile is

     CMD ["bash", "-c", "source /etc/bash.bashrc && jupyter notebook --notebook-dir=/tf --ip 0.0.0.0 --no-browser --allow-root"]
    

    and I'd like to pass in an additional command line option to the jupyter call.

    is your change static?

    write a new docker file that is just

    FROM jupyter/jypeter
    
    CMD ["bash", "-c", "source /etc/bash.bashrc && jupyter notebook --notebook-dir=/tf --ip 0.0.0.0 --no-browser --allow-root --my-extra flags"]
    

    and you should be good to go.



  • @Vixen It's not static. :-(

    I'm having Vagrant generate an auth token for Jupyter. (Jupyter sets up a random token and shows it to you on the console, but Vagrant's output seems to block it)

    I THOUGHT passing in a command line option would be the easiest way, but then it seemed to turn into a cluster of problems.



  • @Captain said in Help Bites:

    @Vixen It's not static. :-(

    I'm having Vagrant generate an auth token for Jupyter. (Jupyter sets up a random token and shows it to you on the console, but Vagrant's output seems to block it)

    I THOUGHT passing in a command line option would be the easiest way, but then it seemed to turn into a cluster of problems.

    This might work if you can get that authtoken into a text file?

    or you could possibly play around with putting the token into an environment variable instead of a persisted file on the filesystem. that should in theory work too...

    FROM jupyter/jypeter
    
    ADD authtoken.txt /authtoken.txt
    
    CMD ["bash", "-c", "source /etc/bash.bashrc && jupyter notebook --notebook-dir=/tf --ip 0.0.0.0 --no-browser --allow-root --NotebookApp.token=$(cat /authtoken.txt)"]
    
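
    The environment-variable variant could look like this (a sketch, not a tested setup: `NOTEBOOK_TOKEN` is a variable name chosen here for illustration; the flag is the classic notebook's `--NotebookApp.token`):

```dockerfile
FROM jupyter/jypeter

# No token baked into the image; supply it when the container starts:
#   docker run -e NOTEBOOK_TOKEN=<vagrant-generated-token> <image>
# bash expands $NOTEBOOK_TOKEN at run time inside the -c string.
CMD ["bash", "-c", "source /etc/bash.bashrc && jupyter notebook --notebook-dir=/tf --ip 0.0.0.0 --no-browser --allow-root --NotebookApp.token=$NOTEBOOK_TOKEN"]
```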
