Finding (and getting) a career-change job



  • Continuing the discussion from Finding (and getting) a beginner/entry level programming job:

    @Yamikuronue said:

    I strongly suggest contracting. Everyone who works at my company came in via contracting first, as it's a great way to test out a candidate on a time-limited basis and a great way to get a feel for the kind of company you want to work for.

    Starting a new topic to avoid derailing @Dreikin's.

    After 15+ years of hardware (chip) verification, I'm finding it difficult to get a job because I lack hands-on experience with the latest verification framework. Someone I was talking to yesterday suggested trying to transfer the verification skills to SQA. Is this feasible? Any tips for getting potential employers to consider transferable skills? They don't seem to want to consider real experience with the previous chip verification framework, which is 90% identical to the newer one, so how can I expect them to consider a skill set that's much less similar? Are there any SQA tools that I absolutely must know to have any shot?



  • Can I add to this thread as well? I have some pressing questions similar to yours.



  • Sure, I guess. Maybe you'll get more answers than I have so far.


  • I survived the hour long Uno hand

    @HardwareGeek said:

    Someone I was talking to yesterday suggested trying to transfer the verification skills to SQA. Is this feasible?

    I don't know, what sort of things do you do to verify hardware chips?

    @HardwareGeek said:

    Are there any SQA tools that I absolutely must know to have any shot?

    If you're looking for web, WebDriver's a good one to know. Enterprise people tend to use shitty enterprise tools, but having used one you'll pretty much understand them all, and you'll understand the first one within a few days anyway, so I wouldn't sweat that too much.

    Understanding how and when and why to test is the key.

    @HardwareGeek said:

    Any tips for getting potential employers to consider transferable skills?

    The state of SQA is pretty poor in a lot of areas. Around here, if you can speak coherently about testing and testing tools, you've got a leg up on half the candidate pool.



  • @Yamikuronue said:

    what sort of things do you do to verify hardware chips?

    My profile has a pretty good, although brief, description of what I do. The "hardware" is actually a software model of the chip's logic, either the entire chip or one of the subsystems that will be hooked together to make a complete chip, generally written in either of two HDLs.

    One of the things I do is write a test harness (testbench in hardware parlance) to exercise this piece of software, in either the same HDL, or these days in a language called SystemVerilog, which is a superset of one of those HDLs, Verilog. SystemVerilog adds a bunch of features primarily aimed at verification, although some of them are useful for logic design as well. Foremost is a whole OOP structure. (Verilog models logic modules that have input and output ports and communicate with each other by connecting "wires" between the ports. SystemVerilog adds classes that communicate by calling methods, and "virtual interfaces" to bridge the gap between those two worlds.) An important feature that it adds (within the OOP paradigm) is constrained random number generation (more on that later).
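
    To make the "two worlds" point concrete, here's a minimal sketch (the interface and class names are invented, not from any real design) of a class wiggling module-world pins through a virtual interface:

        // Hypothetical pin-level interface; a real one would carry a whole bus.
        interface simple_if(input logic clk);
          logic       valid;
          logic [7:0] data;
        endinterface

        // A class lives in the OOP world, but can drive module-world
        // pins through a virtual interface handle.
        class simple_driver;
          virtual simple_if vif;

          function new(virtual simple_if vif);
            this.vif = vif;
          endfunction

          // Pin-level stimulus: one byte, valid for one clock.
          task drive_byte(bit [7:0] b);
            @(posedge vif.clk);
            vif.valid <= 1'b1;
            vif.data  <= b;
            @(posedge vif.clk);
            vif.valid <= 1'b0;
          endtask
        endclass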

    There are a few frameworks to make writing testbenches easier. The framework of choice these days is UVM. It provides useful classes (that you have to extend with the logic for your specific design), such as:

    - sequences and sequencers: stimulus for the DUT;
    - drivers: convert transaction-level stimulus (e.g., "write n bytes starting at address a") into hardware ("pin-level") stimulus (e.g., "assert bus_request; wait for bus_grant; drive a onto bus_address[] and BUS_WRITE onto bus_ctrl[]; wait one clock; drive the first bus width's worth of data onto bus_data[]; etc.");
    - monitors: the reverse of drivers — "here's n bytes of data that were read from address a" or "DUT responded to the attempt to write with BUS_ERROR on bus_status[]";
    - scoreboards: compare the actual response from the DUT with the testbench's expected response;
    - agents: hierarchical wrappers that bundle the other bits into reusable components;
    - the configuration database: "for this test, 'foo_driver' should be passive;" "Hi, my name is 'foo_driver'. Am I active or passive?"
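
    For a flavor of what "extending those classes" looks like, here's a bare-bones driver sketch. The bus_txn transaction and bus_if interface are made up for illustration, but the factory registration, config-database lookup, and sequencer handshake are the standard UVM idioms:

        // Bare-bones UVM driver sketch; bus_txn and bus_if are invented
        // for illustration, and real code would live in a package.
        import uvm_pkg::*;
        `include "uvm_macros.svh"

        interface bus_if(input logic clk);
          logic        bus_request, bus_grant;
          logic [31:0] bus_address;
        endinterface

        class bus_txn extends uvm_sequence_item;  // transaction-level stimulus
          `uvm_object_utils(bus_txn)
          rand bit [31:0] addr;
          rand bit [7:0]  data[];
          function new(string name = "bus_txn");
            super.new(name);
          endfunction
        endclass

        class bus_driver extends uvm_driver #(bus_txn);
          `uvm_component_utils(bus_driver)  // register with the UVM factory

          virtual bus_if vif;  // pin-level connection to the DUT

          function new(string name, uvm_component parent);
            super.new(name, parent);
          endfunction

          function void build_phase(uvm_phase phase);
            super.build_phase(phase);
            // Ask the configuration database which interface we drive.
            if (!uvm_config_db #(virtual bus_if)::get(this, "", "vif", vif))
              `uvm_fatal("NOVIF", "no bus_if in the config db")
          endfunction

          task run_phase(uvm_phase phase);
            forever begin
              seq_item_port.get_next_item(req);  // transaction from the sequencer
              // ...convert req into pin-level activity on vif here...
              seq_item_port.item_done();
            end
          endtask
        endclass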

    There is an older framework called OVM, on which UVM is based. They are so similar that in UVM 1.0, you could take an OVM testbench, do a couple of global search-and-replaces, and be 95% of the way to having a working UVM testbench.

    Once the model and testbench are (more or less) done, I write tests. Simple tests might be something like "Send 100 packets of data from source s to destination d. Each packet should have length l. I don't care what the actual data is, but there must be l bytes. (s, d, l and the data are randomized for each packet.) 10% of the packets should have l < 10, 20% should have LEN_MAX * .9 <= l <= LEN_MAX, and 1% should have l > LEN_MAX. After calculating the CRC for the data, flip one bit in 2% of the packets." More complex tests might say that if a certain field in the packet has a certain (random) value, other fields must be assigned values with one probability distribution; if it has a different value, the other fields must be generated differently. (That's where SystemVerilog's built-in constrained random number generator is really handy. The constraint solver takes all the rules and generates a set of random values that meet all the rules, or if you gave it conflicting rules, tells you it's insoluble.) The testbench checks that all the packets arrive at the correct destinations, except the ones with errors; for those the DUT should send a NAK packet back to the source, or whatever the spec says it's supposed to do in case of an error.
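
    As a rough sketch, those length rules map almost directly onto SystemVerilog dist constraints (LEN_MAX, the field names, and the exact weights here are invented for illustration):

        // Illustrative packet class; not from any real spec.
        class packet;
          localparam int LEN_MAX = 1500;

          rand bit [7:0]    src, dst;
          rand int unsigned len;
          rand bit [7:0]    data[];
          rand bit          corrupt_crc;

          // ":/" spreads the weight over the whole range, so these read
          // as percentages of packets, not of individual length values.
          constraint len_c {
            len dist {
              [0 : 9]                  :/ 10,  // 10% short packets
              [LEN_MAX*9/10 : LEN_MAX] :/ 20,  // 20% near maximum length
              [LEN_MAX+1 : LEN_MAX+64] :/ 1,   // 1% illegally long
              [10 : LEN_MAX*9/10 - 1]  :/ 69   // everything else
            };
          }

          constraint size_c { data.size() == len; }  // exactly len bytes, any values
          constraint err_c  { corrupt_crc dist {1 :/ 2, 0 :/ 98}; }  // 2% CRC errors
        endclass

    Calling randomize() on this returns 1 on success and 0 if the constraints conflict; that return value is the solver telling you the rules are insoluble.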

    The model, testbench and test get compiled and linked with a proprietary run-time/debugger (from one of three vendors; there are others, but nobody uses them) into an executable that can be run and debugged. If the test fails, debug it to figure out whether the error is in the DUT, the testbench or the test. If a new test passes, it's a really good idea to check that it's actually doing what you intended it to do.

    Because of the randomized nature of the testing, coverage is vitally important. We use both code coverage (Have you executed every line of code? Have you taken every conditional branch? Have you exercised every state and state transition in state machines?) and feature coverage. Feature coverage is implemented by either the logic designer (I wish) or the verification engineer, and consists of coverpoints for "interesting" events. Have you generated stimulus that includes a CRC error? Have you generated stimulus that exercises features X and Y in modes A, B and C? (Depending on the logic, code coverage may not give you this. It should tell you that you've exercised both features and all three modes, but may not tell you that you've exercised all the combinations.) Because silicon bugs can't (easily) be fixed in the field (although sometimes they can be worked around with software updates), coverage requirements are high, such as 99.8% coverage with NO failing tests.
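
    Feature coverage in SystemVerilog is written as covergroups. Here's a minimal sketch of the "features X and Y in modes A, B and C" example, with invented signal names:

        // Invented signals for illustration; crc_error, mode, and
        // feature_sel aren't from any real design.
        module cov_sketch(input logic clk, crc_error,
                          input logic [1:0] mode, feature_sel);

          covergroup pkt_cov @(posedge clk);
            cp_crc  : coverpoint crc_error;  // generated at least one bad CRC?
            cp_mode : coverpoint mode { bins a = {0}; bins b = {1}; bins c = {2}; }
            cp_feat : coverpoint feature_sel;
            // The cross is the point: code coverage can show every feature
            // and every mode hit separately without ever hitting every
            // feature/mode combination.
            feat_x_mode : cross cp_feat, cp_mode;
          endcovergroup

          pkt_cov cov = new();
        endmodule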


  • I survived the hour long Uno hand

    Yup, you'll fit right in in SQA. That last paragraph especially could have been word-for-word out of the mouth of a software tester, right up until the part about "silicon bugs" (though desktop software testers lament the low patch rate).



  • @Yamikuronue said:

    Yup, you'll fit right in in SQA.

    Great. Any tips for getting my foot in the door?


  • I survived the hour long Uno hand

    If I were interviewing you and you said that, I'd be a yes :) There's always a need for skilled QA people, pretty much any place that develops software.


  • Discourse touched me in a no-no place

    @Yamikuronue said:

    If I were interviewing you and you said that, I'd be a yes 😄 There's always a need for skilled QA people, pretty much any place that develops software.

    Well, that's that, then. So the next question is, does your company offer relocation assistance? 😄



  • @HardwareGeek said:

    My profile has a pretty good, although brief, description of what I do. The "hardware" is actually a software model of the chip's logic, either the entire chip or one of the subsystems that will be hooked together to make a complete chip, generally written in either of two HDLs.

    ...

    Wow! That sounds super specialized and obscure.

    These kinds of careers can be lucrative, but also dangerous if the niche closes or you lose touch with the current standards, as seems to have happened to you.

    @Yamikuronue thinks you could switch to QA, but do you want to? Seems like a shame to throw away all this arcane specialized knowledge you spent years building up.



  • @cartman82 said:

    lose touch with the current standards

    Yes, that is a danger if you work for one company for a while. There is a big investment of time and effort in developing testbenches, tests, and an overall flow for a particular set of tooling, and companies are understandably reluctant to discard that in favor of a new framework. If they're late adopting a particular framework, they'll likely be even later adopting the next one. There's also the possibility of working for a really big company, big enough to develop its own tools in-house, and losing touch with anything industry-standard. BTDT, too.

    @cartman82 said:

    arcane specialized knowledge you spent years building up

    Some of it, yeah: the specifics of the languages and frameworks used for writing testbenches and tests. A lot of it is generic: read a spec, identify the behavior that needs to be tested, figure out how to test it and what the expected result should be for a given input or series of inputs, write a test, run it, and debug why it fails. Some of the debugging is a bit different; most of it tends to be post-run analysis — looking at a history of what happened during the test with a sort of "virtual oscilloscope" rather than setting breakpoints in a debugger, although debuggers and breakpoints are more of a thing when debugging the SystemVerilog parts of the system.

    What I wrote earlier about the frameworks and stuff is knowledge acquired over the course of just a couple of years. The general stuff about how and why to test is more of a mindset that has been acquired over a lot of years, but that mindset is equally applicable to QA in almost any field.

    Also, although that description sounds pretty specialized and arcane, there's also a lot of stuff that's more generic. There are still occasions where testing is done by writing C programs (or some other system programming language, but I've never seen anybody in the hardware world use anything other than C, except assembly in very rare circumstances where exactly n clocks must happen between two events to hit some edge case) to run on (a software model of) an embedded processor to exercise other parts of the hardware. There's usually a lot of scripting in Perl or Python or whatever to tie pieces of the flow together.

    I guess I'm saying that there's some arcane specialized knowledge, but not as much as it might seem on the surface, and I wouldn't be too reluctant to set it aside for a new challenge, especially if it broadens my career opportunities.


  • Grade A Premium Asshole

    @HardwareGeek said:

    My profile has a pretty good, although brief, description of what I do. The "hardware" is actually a software model of the chip's logic, either the entire chip or one of the subsystems that will be hooked together to make a complete chip, generally written in either of two HDLs.

    -Lots and lots of words-

    @HardwareGeek said:

    Because silicon bugs can't (easily) be updated in the field (although sometimes they can be worked around with software updates), coverage requirements are high, such as 99.8% coverage with NO failing tests.

    Holy fuck, I would consider a career change also. That sounds mind-numbingly boring.



  • @Polygeekery said:

    That sounds mind-numbingly boring.

    I disagree; I find it challenging and rewarding. That might be because I don't do what @HardwareGeek does 100% of the time, but testing is nowhere near the most boring part of my job.



  • Thanks to a TDWTFer, I have my first phone interview for an SQA position tomorrow morning. Since I don't have real SQA experience, I'm guessing it'll be heavy on the "soft-skill" questions and how my experience is transferable. Any other suggestions on what to expect?


  • ♿ (Parody)

    @HardwareGeek said:

    tomorrow morning

    I think that was today. It's now afternoon there...how'd it go?



  • OK. It was the HR/internal recruiter, getting a feel for whether my resume is worth sending to the hiring manager. (It is.) The next step is up to the hiring manager; I should find out whether he wants to move forward by the end of the week, or the beginning of next week at the latest.

    Java. :| I dug out one of my old personal projects to refresh my memory, and I'm starting to learn about JMeter, which they use. Meanwhile I'm trying not to let my memory of the technical details of my current career rust any further than it already has, for those leads that are trickling in. And keep up with TDWTF. And occasionally play games with my son. And there aren't enough hours in the day.


  • I survived the hour long Uno hand

    Yeah, Java's big in the test automation world because the Selenium folks prefer the Java bindings with Maven.


  • Discourse touched me in a no-no place

    Also Java is the best ⭐🎣



  • @Yamikuronue said:

    Java's big in the test automation world

    Not just testing. From what I gathered, their product is written in Java. Oh well, it could be worse; at least it's not Go. :)



  • @HardwareGeek said:

    Oh well, it could be worse

    WAY worse. C# is better, and Java does some incredibly stupid shit, but it's not WTF bad, it just has some quirks (as opposed to PHP, which is WTF bad).

