Hi, everyone:
This week I want to talk a little bit about giving routine feedback to team members; or you can skip to the roundup.
I mentioned last time that I find it clarifying to think about team members’ performance in terms of expectations. That’s more obvious when talking about longer-term goal setting or performance reviews - here are our expectations for the next quarter/year, and then those expectations were met, or not - but I think it’s especially useful when thinking about more immediate feedback on individual tasks.
Let’s talk first about team members meeting or exceeding your expectations when performing a task. As a rule we don’t spend enough time thinking about and handling this case. Meeting your expectations is something most of your team members will do most of the time. And yet it still absolutely merits being routinely called out; it is important for you to routinely give positive feedback for meeting or exceeding expectations.
One objection sometimes heard to the above: “Isn’t meeting expectations just the job? The default? Why take the time to call that out?” Heavens; can you imagine if for the next month everyone around you - peers, team members, your manager - met all your expectations on everything? Or exceeded them? What would that feel like? If that month would be significantly better than how most of your months go, isn’t it worth trying to nudge things towards that goal?
A second objection sometimes heard: “But they obviously know what the expectation was; they met it. Confirming that is a waste of everyone’s time.” Oh my, no. Both parts of that can be incorrect.
The first sentence isn’t necessarily true; they don’t necessarily know which expectations they met, or which ones were particularly important here. They did something, and presumably they feel like they did a good job, but they don’t necessarily know why you think it’s a good job - what was particularly important to you about how they performed the work. Consider a small task like a routine code review: it could have been the review’s thoroughness, its timeliness, its consideration for the level of the reviewee, a combination of the three, or something else entirely that was important to you. Or how about something more multilayered, like a presentation - what was it about the presentation that met an important expectation? The brevity, the clarity, making a convincing case?
The second sentence definitely isn’t correct. As we saw in an HBR article in #64, there’s basically no plausible number of times you can tell people things like “good job” that is too much. It’s easy for you and it makes a real difference to the other person.
Whether they know they met an expectation or not, if we want them to continue working in such a way as to meet that expectation in the future, it’s on us to communicate that. And we need to communicate the expectation with enough specificity that they know what they did and have a sense of how to do it again.
Because that’s the important thing about communicating expectations - it’s about pointing coworkers in the right direction for the future.
The considerations are exactly the same for times when someone’s performance on a task doesn’t meet your expectations. Your team member, or peer - or even, honestly, your boss - generally wants to meet your expectations. (How much they want to meet your expectations will of course vary depending on their relationship with you.) They don’t necessarily know if they met every expectation on a task. They know it took them two weeks to do the code review, but they don’t necessarily know that two weeks was too long in this case. They know that their presentation had twenty slides on background and four slides on proposed next steps, but they don’t necessarily know that this was the wrong balance for this talk, or why. You have context they might not have. Sometimes you’ve explicitly communicated the expectations they didn’t meet ahead of time, but it’s common that you haven’t. Either way, it’s worth communicating about.
The goal when expectations aren’t met is exactly the same: communicating them is about pointing people in the right direction for the future. And it’s your duty to do that pointing, that nudging. Otherwise, you are choosing to let them continue to fail to meet that expectation.
How would you like it if your boss let you fail to meet one of their expectations again and again, but simply decided to withhold that from you because they couldn’t be bothered to have the conversation? If you wouldn’t like that very much - and most of us wouldn’t - then what’s the justification for treating your team member that way?
“Oh, it’s just a small thing” - that makes it worse, not better. Talk with them! The biggest part of the job of leading a team is keeping everyone pointed in the same direction, and it is much easier to give people small nudges about small things early than to wait until something big happens.
Note that everything we’ve talked about so far is just as relevant for peers as it is for people who report to you one way or another. Teams are groups of people who are accountable to each other. Mutual accountability is what distinguishes a team from a bunch of officemates. It is perfectly fair to have expectations of your peers, and for them to have expectations of you; that’s how teams work. What changes a bit with relative role is how you request expectations be met in the future.
There are a few widely used models for giving feedback. What’s important is that they are:
The simplest model is the Situation-Behaviour-Impact model, popularized by Google in their training materials. That might look like:
“Last sprint, when you agreed to review Sunita’s pull request by Wednesday, the review wasn’t finished until the following Tuesday; that blocked her progress for half the sprint”. Or
“When you presented the proposed plan at the project kickoff meeting, the material you presented had a really good balance of just enough relevant context and the case for the plan. That helped ensure the discussion afterwards was well informed and not sidetracked into irrelevant details.”
The Manager-Tools model is a bit more direct and simple; they’ve tested it a few different ways in a large number of organizations, and apparently this is the version that’s worked best (which I don’t have any trouble believing). In this model they get rid of the Situation part entirely. It shouldn’t be necessary to remind people of the situation - if it’s that far in the past, the manager has waited too long to give the feedback - and it necessarily focusses attention on the past instead of the future. Instead, they bookend behaviour and impact with two questions. The first is something along the lines of “May I give you some feedback?”; and, assuming the answer is yes, the behaviour and impact follow, ending with “Could you do that differently next time?” or “Thanks!” (and, implicitly, “Could you keep doing that, please?”).
So that would look like:
“May I give you some feedback? [If yes:] When you don’t follow through on code reviews by the time you said you would, other developers have their plans for the rest of the sprint disrupted and we lose time. Could you do that differently next time?”
or
“May I give you some feedback? [If yes:] When your presentations have the right background material without anything extraneous, and a good overview of next steps, that makes the whole meeting and following discussion more productive. Thanks!”
On the other hand, Raw Signal Group’s training on feedback is quite explicit that feedback is about expectations; they encourage managers to begin with “My expectation is”, which has the advantage of owning the expectation explicitly.
One other difference between the models is that in the Manager-Tools model, there’s no follow-up discussion. That’s the more likely outcome when giving frequent, small pieces of feedback; the reasons and context are fairly clear. On the other hand, with SBI or Raw Signal Group’s model, discussion afterwards is more common.
Personally, I’m skeptical that either “most of the time” or “hardly ever” is the right frequency for having conversations after giving feedback. If it’s “most of the time”, I’d worry that feedback is usually big or surprising, which suggests feedback isn’t being given often enough; if it’s “hardly ever”, I’d worry that the manager isn’t having their expectations recalibrated often enough.
That’s more than enough on giving routine feedback for one newsletter; next issue I’ll recap and extend this discussion a bit, and start talking about goal setting.
Five Questions that Will Help You Strengthen as a Decision-Maker - Art Petty
Tweet: “Every time I make myself write out…” - Cindy Alvarez
As managers, making decisions is a pretty big part of the job - priorities for the team, criteria for the next new hire, etc. It pays to spend a little time structuring our thoughts around decisions, whether they are decisions we’re making completely by ourselves or decisions the team is making together.
Petty has five questions he suggests we use to guide ourselves or to guide discussions when a group is informing a decision:
These are good questions - they also prompted me to dig up a reference I filed away a while ago and traced back to Alvarez’s tweet, linked above, which takes a different tack but might be a little crisper, depending on the context:
We are doing …..
Because we see the problem of …..
We know it’s a problem because …..
If we don’t fix it, we’ll see …..
We’ll know we’ve fixed it when we get …..
Virtual “Storming”: How to Work through Tensions with New Teams - Nobl Academy
A lot of heist movies have really clear depictions of Tuckman’s four stages of group development. Forming is the initial “the team is brought together” sequence, where a group of individuals comes together for a common (nefarious) purpose. After the initial honeymoon phase, when the hard work begins, comes the Storming phase - the individually brilliant but mismatched group has conflicts as they try to figure out how to work together. In the Norming phase, default solutions and workarounds to those conflicts start to emerge as “the way we get things done in this team”. And then there’s the Performing phase, where those norms and the group’s collection of individual skills really start to shine, and the team starts to deliver in a serious way on some high-performance larceny.
The key here is that the team has to go through the storming phase for the norms established in the next phase to be effective. And storming is harder to do constructively in distributed teams.
This blog post walks through some approaches for getting through the storming phase constructively with distributed teams.
Handling the Emotional Weight of 1:1s - Lara Hogan
There’s been a lot going on this year, for us and for our team members. For some, this final stretch - when the end is still distant, but in sight - may be the hardest.
Our team members can tell us a lot in their one-on-ones, either directly or indirectly. It isn’t always easy for us to hear, especially if we ourselves are running on fumes. Our team members’ relationship with us isn’t symmetric; they can share their feelings with us in a way that it’s often not appropriate for us to reciprocate, and they have one manager to share with while we have multiple team members to hear from.
We need to have empathy for our team members without it completely exhausting us. So we need mechanisms to establish some boundaries around that empathy.
Hogan has the following suggestions:
Batch-processing groups of reading materials (articles or book chapters) - Raul Pacheco-Vega
A few strategies to “stay on top of the literature” (more like, “catching up with the literature”) - Raul Pacheco-Vega
In research computing we often need to get up to speed in a new area relatively quickly to support a project, and then stay somewhat current in that area for the duration of the project. Honestly, that’s my favourite part of working in research computing, but it’s still a lot of work.
More than that, for people new to reading the ancient and obscure genre of writing called “the journal paper”, it’s just mystifying. A trainee or staff member who tries reading a paper in a field new to them as they would read a blog post or book is going to find it a pretty demoralizing process.
Pacheco-Vega’s first article walks through a live-tweeted example of his approach - assembling a small initial group of key papers (five, in this case) and creating his note-taking structure. He uses a spreadsheet, with columns for the key information he tracks about each paper.
Then he reads the papers, highlighting the main idea and its consequences in one colour and supporting ideas of decreasing importance in the other colours. He keeps the paper with his notes and highlights, and updates the spreadsheet. As he reads, he may find other papers, which he then triages - by looking at the abstract, introduction, and conclusion (I’d add: the figures) - and if a paper seems relevant, it goes into the spreadsheet for processing.
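If you wanted to start that kind of reading log without firing up a spreadsheet program, a few lines of Python would do. Here’s a minimal sketch - the column names are stand-ins of my own invention, not Pacheco-Vega’s actual columns:

```python
import csv
import os

# Hypothetical reading-log columns - placeholders of my own
# invention, not Pacheco-Vega's actual spreadsheet columns.
FIELDS = ["citation", "main_idea", "supporting_ideas", "status"]

def add_paper(logfile, citation):
    """A paper has passed triage: queue it in the log for batch reading."""
    is_new = not os.path.exists(logfile)
    with open(logfile, "a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(FIELDS)  # write the header row on first use
        writer.writerow([citation, "", "", "to-read"])

add_paper("reading-log.csv", "Author (2021), 'An interesting paper'")
```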
In the second article he describes his strategy of catching up with the literature - keeping track of potentially interesting papers, ruthlessly triaging, and either reading one a day or ‘batch processing’ as above.
I’m a huge fan of reading several related papers at once - it helps me see connections between them much more easily.
Good Slides Reduce Complexity - Tom Critchlow
Critchlow talks about slides for presentations. Those of us who came up in research are used to presentations whose primary purpose is informative: we were teaching material in class as a TA, or telling other people about our research. There may have been a bit of convincing people in those talks - convincing students that the material matters, or convincing skeptical researchers that your approach is solid - but even there, the convincing is done by providing additional information: motivation for the relevance of the material, or background that supports your analysis approach.
As you advance your career in research computing, presentation purposes change. You are no longer merely informing - your manager, or your research stakeholders, could just read an email if you wanted to inform them of something. Instead, you are assembling materials to propose and recommend a course of action. At the end of the presentation, you want the assembled group to make a decision - ideally, but not necessarily, your recommended decision.
Critchlow’s post has some important points on organizing such a presentation:
(PS - if you are giving such a presentation for a decision you care about, the group presentation should not be the first time the attendees are seeing the material. Manager Tools has a great podcast episode on prewiring, and we talked about it in the “change management of stopping doing things” writeup in #58.)
Which color scale to use when visualizing data - Lisa Charlotte, Datawrapper
Manim - Animation engine for explanatory math videos - manim project
These two posts focus on a quite different form of communication - visual communication of data or math concepts.
Charlotte’s post is a deep discussion of plotting data using colour. It is a comprehensive four-part overview of the kinds of colour scale to use, and why, when plotting data. The first part, linked above, gives an overview you’ve likely seen before - categorical scales, sequential vs diverging continuous scales, and highlighting or de-emphasizing data with colour. The following three parts go into much greater detail, with terrific illustrations of the concepts.
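If you want to play with the sequential-vs-diverging distinction yourself, here’s a minimal matplotlib sketch (the colormap names “viridis” and “RdBu” are matplotlib’s, not Datawrapper’s, and the data is made up):

```python
import numpy as np
import matplotlib.pyplot as plt

data = np.random.randn(20, 20)  # synthetic values centred on zero

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))

# Sequential scale: appropriate for magnitudes running from low to high.
im1 = ax1.imshow(np.abs(data), cmap="viridis")
ax1.set_title("Sequential (magnitude)")
fig.colorbar(im1, ax=ax1)

# Diverging scale: appropriate when the sign of a deviation matters;
# vmin/vmax pin the neutral midpoint of the scale to zero.
im2 = ax2.imshow(data, cmap="RdBu", vmin=-3, vmax=3)
ax2.set_title("Diverging (above/below zero)")
fig.colorbar(im2, ax=ax2)

plt.tight_layout()
plt.show()
```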
manim is a community fork and extension of the mathematical animation package developed for the well-known 3Blue1Brown channel of mathematical videos.
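To give a sense of what working with it is like, a hello-world scene in the community edition looks something like this (along the lines of the square-to-circle example in their quickstart):

```python
from manim import Scene, Square, Circle, Create, Transform

class SquareToCircle(Scene):
    def construct(self):
        square = Square()
        circle = Circle()
        self.play(Create(square))             # animate drawing the square
        self.play(Transform(square, circle))  # morph the square into a circle
```

Rendering it to video is then something like `manim -pql scene.py SquareToCircle` from the command line.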
The SPACE of Developer Productivity - Nicole Forsgren, Margaret-Anne Storey, Chandra Maddila, Thomas Zimmermann, Brian Houck, and Jenna Butler, ACM Queue
We’ve covered the challenges of measuring developer productivity - particularly individual developer productivity - several times. Forsgren et al. walk us through the recent literature on the subject, disabusing us of some common myths and encouraging us instead, as managers of developers, to keep an eye on the SPACE dimensions of how well our team is doing: Satisfaction and well-being, Performance, Activity, Communication and collaboration, and Efficiency and flow.
The paper includes a table of examples of what metrics for each of those dimensions might look like.
Research Software Engineering with Python - Damien Irving, Kate Hertweck, Luke Johnston, Joel Ostblom, Charlotte Wickham, and Greg Wilson
A couple of years in the making (with a few other books in the series on the way), this ebook is online and soon to be published. It’s aimed at (for instance) researchers or junior staff who have successfully solved their problem with some programming in (say) Python, but want to take the next step - putting it in version control with Git, automating some things with Make and the shell, adding some configuration and error handling, creating a package, and bringing in external collaborators.
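As a flavour of the kind of step involved - this is my own toy example, not the book’s - here’s the move from a hard-coded script to one with a small command-line interface and some basic error handling:

```python
# A toy example (mine, not the book's) of adding a command-line
# interface and basic error handling to a once-hard-coded script.
import argparse
import sys

def word_count(path):
    """Count whitespace-separated words in a text file."""
    with open(path) as f:
        return sum(len(line.split()) for line in f)

def main():
    parser = argparse.ArgumentParser(description="Count words in a file.")
    parser.add_argument("infile", help="text file to count words in")
    args = parser.parse_args()
    try:
        print(word_count(args.infile))
    except OSError as err:
        sys.exit(f"error: {err}")  # fail cleanly on a missing/unreadable file

if __name__ == "__main__":
    main()
```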
The material would be useful standalone for trainees or junior staff (or maybe just staff new to Python), as well as for use in courses.
Resurrecting Fortran - Ondřej Čertík
This could have gone in Product Management/Engaging with Research Communities just as well as in software development. Because while I think Čertík’s title overstates things a bit - Fortran has been chugging along quite happily, thankyouverymuch - “Revitalizing Fortran” would be entirely fair. And it was all enabled by doing the hard but relatively straightforward work of online community building.
For decades, Fortran’s only real community has been the Fortran Standards Committee, which does great work but which, like all ANSI or ISO groups, is highly (if unintentionally) opaque and inaccessible. Čertík describes the community-building work of joining the committee, engaging other champions, starting a website and a semi-official incubator, and what’s emerged.
And what has emerged has been pretty remarkable in such a short time. A nascent Fortran standard library! A package manager!
15th IEEE International Conference on Networking, Architecture, and Storage (NAS 2021) - 24-26 Oct, Riverside CA USA, Papers due 9 May
Topics of this meeting which overlap with newsletter readership include:
It turns out that font standards are so complex that someone has implemented a game in a TrueType font.
A package for plots implemented entirely in CSS that works by styling tables.
City of London Police warn against illegal, dangerous website sci-hub.se. So, you know, don’t download any illegal, dangerous, or stolen papers from sci-hub.se or else London’s cybercrime unit will be very disappointed that you downloaded those papers - which could be dangerous or illegal! - from sci-hub.se.
A numerical analysis cheat sheet for software developers.
How to read ARM64 assembly language.
Sandia’s open-access quantum computer is now generally available. I’m not sure where things are going with quantum computing, but a lot of organizations are willing to spend nontrivial amounts of money to start playing with it.
A very detailed walkthrough of how C++ resolves a function call.
A menagerie of difficult people to deal with on software projects. Tag yourself, I’m the Professor/Peacemaker/Optimist.
A look at the different mechanisms for data (change) durability, and from that a proposal of four laws of durability.