Assessing performance--it's not just for programs



  • I've been thinking a lot about how I decide if I'm doing a good job at X (whatever I happen to be doing). Part of this is that I struggle with what appears to be imposter syndrome; part of it is pulling the mule tactic against my current management's push for what seems to be change for the sake of change (chasing trends).

    How do you professionally decide if a change (in methodology, policies, procedures, etc) is needed? How do you assess the effects of that change? What have you found works best?

    I eventually want to organize my thoughts so I can present them to my fellow faculty, and feedback from widely varying disciplines will help (I think).



  • @benjamin-hall said in Assessing performance--it's not just for programs:

    I've been thinking a lot about how I decide if I'm doing a good job at X (whatever I happen to be doing). Part of this is that I struggle with what appears to be imposter syndrome; part of it is pulling the mule tactic against my current management's push for what seems to be change for the sake of change (chasing trends).

    How do you professionally decide if a change (in methodology, policies, procedures, etc) is needed? How do you assess the effects of that change? What have you found works best?

    I eventually want to organize my thoughts so I can present them to my fellow faculty, and feedback from widely varying disciplines will help (I think).

    Short answer: ROI

    Decades ago I worked at an electronics engineering firm. The VP would call each person in once a year specifically to talk about "We have paid you, what have you earned us?". Despite how it sounds, it really was a great experience, especially because of the openness to making changes to improve that, from both parties' perspectives.



  • @thecpuwizard said in Assessing performance--it's not just for programs:

    Short answer: ROI
    Decades ago I worked at an electronics engineering firm. The VP would call each person in once a year specifically to talk about "We have paid you, what have you earned us?". Despite how it sounds, it really was a great experience, especially because of the openness to making changes to improve that, from both parties' perspectives.

    I couldn't disagree more.

    Reducing the assessment to monetary (or equivalent) terms risks chronic short-termism and low-risk quick gains to the detriment of long-term planning or any sort of high-risk blue-sky (likely to fail, but potentially hugely beneficial) projects.

    It favours selfish, individualistic sales-bods over low-key team players (who can't easily point to their individual contribution, but may be a critical part of overall success).

    I don't know the answer to @Benjamin-Hall's question though - it's something that also bothers me. Probably something based on feedback/assessment by your peers. In general, most people have a fair idea of who around them is competent and pulling their weight and, unless you work with a bunch of psychopaths, your co-workers will be happy to give credit where it's due. Your colleagues are also best placed to give feedback on proposed changes (obviously everyone has their own axe to grind and turf to defend - so the value of some responses might be suspect).



  • @japonicus said in Assessing performance--it's not just for programs:

    Reducing the assessment to monetary (or equivalent) terms risks chronic short-termism and low-risk quick gains to the detriment of long-term planning or any sort of high-risk blue-sky (likely to fail, but potentially hugely beneficial) projects.

    Yes, it does potentially introduce that risk, but it is far from certain (speaking from direct experience).

    In the end, the only thing that matters is ROI for (almost) all things, with the understanding that the terms may not always be financial/monetary.



  • @japonicus said in Assessing performance--it's not just for programs:

    @thecpuwizard said in Assessing performance--it's not just for programs:

    Short answer: ROI
    Decades ago I worked at an electronics engineering firm. The VP would call each person in once a year specifically to talk about "We have paid you, what have you earned us?". Despite how it sounds, it really was a great experience, especially because of the openness to making changes to improve that, from both parties' perspectives.

    I couldn't disagree more.

    Reducing the assessment to monetary (or equivalent) terms risks chronic short-termism and low-risk quick gains to the detriment of long-term planning or any sort of high-risk blue-sky (likely to fail, but potentially hugely beneficial) projects.

    It favours selfish, individualistic sales-bods over low-key team players (who can't easily point to their individual contribution, but may be a critical part of overall success).

    I don't know the answer to @Benjamin-Hall's question though - it's something that also bothers me. Probably something based on feedback/assessment by your peers. In general, most people have a fair idea of who around them is competent and pulling their weight and, unless you work with a bunch of psychopaths, your co-workers will be happy to give credit where it's due. Your colleagues are also best placed to give feedback on proposed changes (obviously everyone has their own axe to grind and turf to defend - so the value of some responses might be suspect).

    I'm in a school, so strict ROI is really hard to assess. I'm also wary of judging organizational and policy changes (curriculum, focus of teaching, teaching methods) based on subjective criteria. I know personally there's lots of inertia--I like this method because I'm used to using it, so I'll find a justification somehow.

    I'd like to be able to define metrics that I can use to judge my own performance with decent granularity. Is this new style of test working? Should I switch away from lecturing to "active learning" (the new buzzword)? Is this homework assignment effective?

    I've tried calculating correlations between how well the students score on various areas of the class (labs, homework, etc) and how they score on formal assessments (tests and quizzes). As of last year, there was absolutely no correlation whatsoever. But I'm not sure that there should be correlation.
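
    To be concrete, here's roughly the kind of check I mean, as a quick sketch (the CSV layout and the hw_/test_ column names are invented; any per-student gradebook export would do):

        # Rough sketch of the gradebook correlation check described above.
        # "gradebook.csv" and the hw_/test_ column names are hypothetical.
        import pandas as pd
        from scipy.stats import pearsonr

        grades = pd.read_csv("gradebook.csv")  # one row per student

        # Average each student's scores within a category (0-100 scale assumed)
        homework = grades.filter(like="hw_").mean(axis=1)
        tests = grades.filter(like="test_").mean(axis=1)

        r, p = pearsonr(homework, tests)
        print(f"Pearson r = {r:.2f} (p = {p:.3f}, n = {len(grades)})")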

    I guess it goes to defining what the purpose of each piece is and trying to figure out how to measure if it's suitable for its defined purpose. Why must this be so hard? Ugh.



  • @benjamin-hall said in Assessing performance--it's not just for programs:

    I'd like to be able to define metrics that I can use to judge my own performance with decent granularity. Is this new style of test working? Should I switch away from lecturing to "active learning" (the new buzzword)? Is this homework assignment effective?
    I've tried calculating correlations between how well the students score on various areas of the class (labs, homework, etc) and how they score on formal assessments (tests and quizzes). As of last year, there was absolutely no correlation whatsoever. But I'm not sure that there should be correlation.

    I doubt your sample sizes are large enough and there isn't much scope (technically or morally) for control groups, so rigorous evidence-based analysis is bound to be fraught (not that I don't applaud you for trying...)



  • @japonicus said in Assessing performance--it's not just for programs:

    @benjamin-hall said in Assessing performance--it's not just for programs:

    I'd like to be able to define metrics that I can use to judge my own performance with decent granularity. Is this new style of test working? Should I switch away from lecturing to "active learning" (the new buzzword)? Is this homework assignment effective?
    I've tried calculating correlations between how well the students score on various areas of the class (labs, homework, etc) and how they score on formal assessments (tests and quizzes). As of last year, there was absolutely no correlation whatsoever. But I'm not sure that there should be correlation.

    I doubt your sample sizes are large enough and there isn't much scope (technically or morally) for control groups, so rigorous evidence-based analysis is bound to be fraught (not that I don't applaud you for trying...)

    Definitely not big enough for publication, but for getting a sense of things it can help...if I knew what the correlations (or other data) should look like. The tests and quizzes are relatively constant from year to year, so relative changes might be important. I did the same analysis against other teachers' gradebooks (with their permission) and found that some teachers had very strong correlations--the homework predicted very strongly how the tests would go (which was what I was expecting).

    But all of this depends on knowing what I'm looking for--distinguishing signal from noise. How I wish people came with specification documents...



  • @japonicus said in Assessing performance--it's not just for programs:

    team players (who can't easily point to their individual contribution, but may be a critical part of overall success)

    A bit OT, but this is what always frustrates me about all the advice for describing accomplishments on one's resume/CV. "I did thing and saved/generated $27 gazillion for the company." "I was a small part of a team that made a product, and bleep if I'll ever know how much revenue that product generated, much less how much my contribution to it was worth."



  • @hardwaregeek said in Assessing performance--it's not just for programs:

    @japonicus said in Assessing performance--it's not just for programs:

    team players (who can't easily point to their individual contribution, but may be a critical part of overall success)

    A bit OT, but this is what always frustrates me about all the advice for describing accomplishments on one's resume/CV. "I did thing and saved/generated $27 gazillion for the company." "I was a small part of a team that made a product, and bleep if I'll ever know how much revenue that product generated, much less how much my contribution to it was worth."

    I agree and that's a big part of my struggle with this concept.

    An idea from a friend on Facebook was to do value-added assessments: give a test, then the same test again later, and examine the differences. Sort of red-green refactoring applied to teaching. I may try something like that in miniature--give a low-stakes (just enough that they take it seriously) assessment, then reuse parts of it on the actual assessment and examine the changes.
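
    In sketch form, that comparison might look something like this (the file names and layout are invented; one row per student, one column per question, scores on a common scale):

        # Hypothetical sketch of the pre/post "value added" idea: give a
        # low-stakes pre-test, reuse items on the real assessment, then
        # look at per-student gains on the shared items.
        import pandas as pd

        pre = pd.read_csv("pretest.csv", index_col="student")    # invented file names
        post = pd.read_csv("posttest.csv", index_col="student")

        # Only compare the questions that appear on both assessments.
        shared = pre.columns.intersection(post.columns)
        gain = post[shared].mean(axis=1) - pre[shared].mean(axis=1)

        print(gain.describe())    # distribution of gains across the class
        print(gain.nsmallest(5))  # students who may need follow-up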


  • 🚽 Regular

    @Benjamin-Hall Let me ask this: are these changes the result of a shifting of responsibilities in upper management? Did someone recently get tenure or become a dean (or some other position of power)?

    I ask because in my experience new managers, whether they were hired from outside or promoted from within, tend to change policy to establish their authority and hope that their new policy has good results--which in your case would be better grades or, in the long term, students who have good careers so they can donate. While you can try to argue that the results may not be beneficial, the former motivation is still very strong. They don't want to be the manager who just kept the status quo. They want to make a name for themselves so that they're recognized for doing something, and they're willing to risk doing something bad.

    It's a tad silly, especially if the results turn out to be negative (or even just the same as they were before), but they're trying to do something that will put "did X which resulted in Y" on their resume at the very least. The greatest achievement they're looking for is to have a new wing of the main library dedicated to them.



  • @the_quiet_one said in Assessing performance--it's not just for programs:

    @Benjamin-Hall Let me ask this: are these changes the result of a shifting of responsibilities in upper management? Did someone recently get tenure or become a dean (or some other position of power)?

    I ask because in my experience new managers, whether they were hired from outside or promoted from within, tend to change policy to establish their authority and hope that their new policy has good results--which in your case would be better grades or, in the long term, students who have good careers so they can donate. While you can try to argue that the results may not be beneficial, the former motivation is still very strong. They don't want to be the manager who just kept the status quo. They want to make a name for themselves so that they're recognized for doing something, and they're willing to risk doing something bad.

    It's a tad silly, especially if the results turn out to be negative (or even just the same as they were before), but they're trying to do something that will put "did X which resulted in Y" on their resume at the very least. The greatest achievement they're looking for is to have a new wing of the main library dedicated to them.

    I'm at an independent (private, non-religious) school. There's been no turnover of admin, but I'm pretty sure that most of the changes are driven by admins going to conferences and hearing about the "new hotness," as well as admissions trying to make the school look progressive and new. There is some honest "let's try that," but very little is data-driven.



  • @thecpuwizard said in Assessing performance--it's not just for programs:

    Short answer: ROI

    What's the ROI of learning to play the piano? I mean, there is some, but how do you value it?



  • @japonicus said in Assessing performance--it's not just for programs:

    It favours selfish, individualistic sales-bods over low-key team players (who can't easily point to their individual contribution, but may be a critical part of overall success).

    That's true. A lot of teams have the person who may not be quite as productive as everybody else, but they're well worth keeping around:

    • Maybe their attitude is great, and they're the only one who's doing the legwork in keeping the company softball team going
    • Maybe they're the one person who's perfectly happy doing The Boring Task, thus freeing everybody else up to be more productive
    • Maybe they don't write a lot of output, but they have great ideas. Or they're skilled in an area (say, clear communication) that most team members aren't

    Yeah, I agree ROI might be an element of assessing performance, but it can't be the be-all and end-all.


  • 🚽 Regular

    @benjamin-hall said in Assessing performance--it's not just for programs:

    @the_quiet_one said in Assessing performance--it's not just for programs:

    @Benjamin-Hall Let me ask this: are these changes the result of a shifting of responsibilities in upper management? Did someone recently get tenure or become a dean (or some other position of power)?

    I ask because in my experience new managers, whether they were hired from outside or promoted from within, tend to change policy to establish their authority and hope that their new policy has good results--which in your case would be better grades or, in the long term, students who have good careers so they can donate. While you can try to argue that the results may not be beneficial, the former motivation is still very strong. They don't want to be the manager who just kept the status quo. They want to make a name for themselves so that they're recognized for doing something, and they're willing to risk doing something bad.

    It's a tad silly, especially if the results turn out to be negative (or even just the same as they were before), but they're trying to do something that will put "did X which resulted in Y" on their resume at the very least. The greatest achievement they're looking for is to have a new wing of the main library dedicated to them.

    I'm at an independent (private, non-religious) school. There's been no turnover of admin, but I'm pretty sure that most of the changes are driven by admins going to conferences and hearing about the "new hotness," as well as admissions trying to make the school look progressive and new. There is some honest "let's try that," but very little is data-driven.

    You might have a better chance of getting them to see the light, then. Those stupid conferences are poison, though. I had a previous employer with an exec who went to a friggen Tony Robbins seminar. The changes that came out of that directly resulted in at least two employees quitting.



  • @japonicus said in Assessing performance--it's not just for programs:

    @thecpuwizard said in Assessing performance--it's not just for programs:

    Short answer: ROI
    Decades ago I worked at an electronics engineering firm. The VP would call each person in once a year specifically to talk about "We have paid you, what have you earned us?". Despite how it sounds, it really was a great experience, especially because of the openness to making changes to improve that, from both parties' perspectives.

    I couldn't disagree more.

    Reducing the assessment to monetary (or equivalent) terms risks chronic short-termism and low-risk quick gains to the detriment of long-term planning or any sort of high-risk blue-sky (likely to fail, but potentially hugely beneficial) projects.

    "I earned you a more robust software package that reduced bugs in features and resulted in less money spent on support"



  • @xaade said in Assessing performance--it's not just for programs:

    "I earned you a more robust software package that reduced bugs in features and resulted in less money spent on support"

    Yes, but not an easy sell to a senior manager.

    It gets even harder when it's "I prevented the major hack that in five years' time would have cost the company 100 million in damages and lost reputation."

    Or, "I spent years working on a project that ultimately failed (through no fault of anyone on the team)."



  • @blakeyrat said in Assessing performance--it's not just for programs:

    What's the ROI of learning to play the piano? I mean, there is some, but how do you value it?

    Well, if you are planning to become a professional pianist, it's relatively easy — whatever you earn or realistically expect to earn from playing it. If you want to learn just because you enjoy it, yeah, it's almost impossible to put a monetary value on that.



  • @japonicus said in Assessing performance--it's not just for programs:

    @xaade said in Assessing performance--it's not just for programs:

    "I earned you a more robust software package that reduced bugs in features and resulted in less money spent on support"

    Yes, but not an easy sell to a senior manager.

    It gets even harder when it's "I prevented the major hack that in five years' time would have cost the company 100 million in damages and lost reputation."

    Or, "I spent years working on a project that ultimately failed (through no fault of anyone on the team)."

    Well, it depends on the purpose of the meeting.

    If it's just there to sell continued employment to your manager, then you're screwed.
    If it's there to see if you can improve, then it can be an angle for that purpose.



  • @blakeyrat said in Assessing performance--it's not just for programs:

    Maybe their attitude is great
    Maybe they're the one person who's perfectly happy doing The Boring Task, thus freeing everybody else up to be more productive
    Maybe they don't write a lot of output, but they have great ideas.

    Pretty much myself.


  • ♿ (Parody)

    @benjamin-hall said in Assessing performance--it's not just for programs:

    I did the same analysis against other teachers' gradebooks (with their permission) and found that some teachers had very strong correlations--the homework predicted very strongly how the tests would go (which was what I was expecting).

    Some potential confounding factors, off the top of my head:

    • Some kids won't take the homework very seriously.
    • Some will still be struggling with the material during the homework phase.
    • Some will ~~copy their homework answers from other students~~ work in groups.


  • @hardwaregeek said in Assessing performance--it's not just for programs:

    it's almost impossible to put a monetary value on that.

    Depends.

    Did you have a heart attack before, and your doctor said it was because of stress?
    You could save your life.



  • @boomzilla said in Assessing performance--it's not just for programs:

    @benjamin-hall said in Assessing performance--it's not just for programs:

    I did the same analysis against other teachers' gradebooks (with their permission) and found that some teachers had very strong correlations--the homework predicted very strongly how the tests would go (which was what I was expecting).

    Some potential confounding factors, off the top of my head:

    • Some kids won't take the homework very seriously.
    • Some will still be struggling with the material during the homework phase.
    • Some will ~~copy their homework answers from other students~~ work in groups.

    Right. I had been giving homework graded on completion, so kids weren't taking it seriously at all and just slapping down random crap. I moved to a combination of online auto-graded work and some in-class work that I could check, and the students felt it was more effective. Was it? Dunno. I think so.



  • @benjamin-hall said in Assessing performance--it's not just for programs:

    I moved to a combination of online auto-graded work and some in-class work that I could check, and the students felt it was more effective.

    I'm surprised that there was much merit in work of a kind that could be auto-graded. I'm too old to have encountered that at school, but it was just starting to become a thing while I was at university, where it seemed as though auto-graded tasks tested only the limited content that could be distilled down to true-or-false 'facts': highly amenable to cramming, with very little requirement for understanding.

    Knowing that a computer will do the grading absolutely invites disengagement from the task (to be blunt, why should I as a student bother if the teacher isn't? But then, I've always been obstreperous).

    There's always the open question as to whether a process imparts or tests knowledge recall or understanding (which are obviously not the same thing) - all the more fraught when the final exams might well test only things that can and will promptly be forgotten by most of the class once they leave the exam hall. My experience of school left me fairly jaded.

    I'm not sure that I'd expect much correlation between homework and exam performance (with outliers falling at both extremes).



  • @benjamin-hall said in Assessing performance--it's not just for programs:

    imposter syndrome

    That's the name of the achievement I got for killing myself in a video game recently.



  • @japonicus said in Assessing performance--it's not just for programs:

    Or, "I spent years working on a project that ultimately failed (through no fault of anyone on the team)."

    Oftentimes, that's because C-:phb: decided to change company direction.



  • @japonicus The things I'm doing with the online systems are usually more numerical--"solve this problem" type things rather than just true/false or multiple choice. Those are only for things where the process is clear--plug it into the equation or simple fact analysis. They're also able to retry them until they have the correct answer. The only "grade" for those is that they can submit their email address for points if all the questions are correct. Later, they can go back and clear their answers and retry for practice.
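
    Conceptually, the numeric checking amounts to something like the sketch below (the 1% tolerance and the helper function are my own illustration, not the actual system):

        # Sketch of tolerance-based numeric grading with unlimited retries.
        # The 1% relative tolerance is an assumption for illustration.
        import math

        def check_answer(submitted: float, expected: float) -> bool:
            """Accept any answer within 1% of the expected value."""
            return math.isclose(submitted, expected, rel_tol=0.01)

        # e.g. molar mass of water, 18.02 g/mol:
        print(check_answer(18.0, 18.02))   # True  -- close enough
        print(check_answer(18.5, 18.02))   # False -- try again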

    For things where the process is important I give problem sets in class that they have to check with me and iterate on until they're all correct (focusing on method and process).

    The older way of homework was either

    • homework given once per chunk of material, graded in detail but not given back (due to time constraints on my part) until the assessment for that chunk. This was just a take-home, open-notes portion of the assessment. Not much use.
    • frequent problem sets graded on completion and returned quickly. These were not taken seriously at all, nor were the students practicing correctly, which led to bad habits.

    One thing I just tried today was giving them the online practice problems in class before I taught the material, but not giving points right then. I walked around and gave gentle nudges but let them figure out the material themselves. I've found that student engagement and retention are much better if they can receive quick feedback when they're doing it right. "Try it" is my motto there. It turned one of the more painful topics of the year (molar mass) into more of a self-challenge. The practice was due later, and they had time to finish it then.

    In neither case do they have "homework" unless they fail to complete the task in class. That way they're less disengaged and can spend the time at home rewriting notes or doing other useful things, not BSing their way through homework that no one is ever going to read.



  • @hardwaregeek said in Assessing performance--it's not just for programs:

    If you want to learn just because you enjoy it, yeah, it's almost impossible to put a monetary value on that.

    That is why I have said from the beginning that ROI is not specifically about money.

    If I enjoy both playing the piano and playing golf, then I can determine the pleasure I will (hopefully) get from doing each of them this evening. If it is raining out, I am willing to bet the (emotional) ROI for spending time with the piano will be higher than for the golf course. There is also the consideration of how much effort I need to put into each activity in order to get that level of enjoyment.



