Hi, everyone!
“How are you making sure that junior staff get access to mentoring when everyone is working from home?” was the most popular question on our first go at Ask Managers Anything.
I’ll give my answer, and please send in yours (with indications as to whether I can use your name or not). I’ll collect them and include them in the next roundup - help our entire community learn from what you’re doing.
Here, we’re wrestling with this for two of our more junior team members and I don’t have a great answer. We’ve been defining tasks at a much more fine-grained level (though still not prescribing how to do them), doing more code review and giving more feedback, and trying to have more conversations, but I’m not sure that’s enough. One of our team members (hi!) has looked into remote pair-programming tools, but nothing looks particularly good.
So I’ll hand it over to you - what are you doing on your team to get your more junior staff the mentoring they’d normally get in person?
Send in your responses (just hit reply, it only goes to me) and let me know if I can use your name and position when summarizing the responses. And submit any other questions you have at Ask Managers Anything.
Also, some of you may have noticed the URL - I’m slowly moving the newsletter to its own domain. More on this in the next week or so.
And now, on to the roundup:
Your interviews shouldn’t be spoilable - Rafe Colburn
Could not be more in agreement on this - if answers to your interview questions can be spoiled/leaked, they might not be great interview questions.
At some point I want to do a longer post on hiring for research computing teams, but the points in Colburn’s article are exactly right. The purpose of an interview is to draw out and identify any mismatches between the candidate and the role. If you’re relying on memorizable things in the interview - trick questions where someone may have seen the puzzle before, or simple leetcode-type algorithmic exercises like reversing a linked list - you’re basing your hiring decision on whether or not the candidate happened to memorize the right thing. Unless you run a reversing-a-linked-list-as-a-service startup, this is likely not super informative as to how well they’ll perform the job.
On the other hand, having them talk about relevant problems they’ve already solved and how they decided on the necessary tradeoffs gives you some insight into how they’d work through a related problem at your job. Digging deep into their technical knowledge - like the old “what happens when you type https://company.com into the browser” question, just going deeper at each step - is something you can’t study for except by learning exactly the knowledge you want them to have (shades of qualifying exams).
If you follow this advice, then as per Julia Evans’s piece covered two weeks ago, you can maintain and incrementally improve a document outlining what to expect from the interview, and share it with candidates.
Everyone’s Sad and Getting Sadder - Cassie McDaniel
Just a reminder that just because everyone’s settled into a new routine over the past months doesn’t mean everyone’s ok. Uncertainty (verging on chaos in places), danger, and the loss of what used to be normal life are weighing heavily on a lot of our team members, especially as we hear that (e.g.) Google is planning to keep people home until summer 2021. Keep it in mind, and try to be kind.
Models for conflict resolution – choose the right one for you - Andy Skipper, CTO Craft
Leading Through Conflict - Scott D. Hanton, Lab Manager
These two articles cover how to deal with conflict in the technical or laboratory workplace, each with several resources to follow up on.
(Disagreements are not themselves conflict! Technical people in any field, on any team, should be cheerfully and respectfully disagreeing with each other all over the place.)
Both start, sensibly, with the fact that conflict comes from somewhere - there’s some underlying issue driving it, which may be quite removed from the topics or events the conflict seems to be about. As managers, part of our job is not just to ensure the current conflict gets resolved but to reduce the chances of the same underlying issue causing future conflicts.
Skipper’s article concludes with several broad models of conflict resolution with descriptions and pointers to find out more, and some general reminders. Hanton’s article goes into more specifics about broad approaches - stay calm and focused on the big picture and overarching goals so as to avoid being dragged into the conflict, stay away from opinions and try to learn from the two competing vantage points, and in so learning try to find (and create) areas of common agreement that can be built upon. Some examples are provided.
One thing I like about Hanton’s article is that it points out the good that can come out of conflict - by getting the underlying issue resolved, by bringing forward multiple vantage points, and by clearing the air, potentially strengthening relationships in the end. As long as conflicts don’t fester and become toxic, some disagreement and tension is normal and even helpful.
One person’s approach to writing - Amy Tabb
Writing is important for your career, whether it’s papers, posts for your team’s website, talks, or what have you. For many of us, the biggest issue is sitting at a blank screen and getting some words out first:
My philosophy is that editing is easier than generating text.
If that’s your stumbling block, too, this is a useful read. Tabb recommends just getting words onto the page first. Committing to writing something at a given time (and not worrying about quality) is one way to do it - I’ve started using 750 Words, essentially paying a website to be disappointed in me if I don’t do some writing each day. Tabb mentions other approaches - dictating into your phone or computer to get ideas out, dumping text from other sources (emails, Slack messages, etc.) into a Google Doc to get started, and so on. Then it’s “just” a matter of iteratively, relentlessly editing, possibly collaboratively. I’m with Tabb here - for me, editing and improving is way easier than getting some words out in the first place.
A Better Cheatsheet - Hillel Wayne
A really nice demonstration of introducing a cheatsheet gradually in teaching material - having sections appear over time when the material is covered. Sort of the opposite of a faded worked example. Has anyone tried something like this in their training materials before?
A framework to assess the quality and impact of bioinformatics training across ELIXIR - Kim T. Gurwitz et al.
Speaking of training - a lot of us run online training programmes, but fewer of us have systematic ways of following up and quantifying how the training affected trainees’ success in their research. In this paper, the ELIXIR project - an EU-wide set of life sciences resources - reports on the quality and impact of its training courses, using four years of self-reported survey data from almost three thousand trainees.
The post-instruction survey information is interesting, but more interesting are the followup results from six months and longer afterwards, giving self-reported measures of how the training affected the researchers’ ability to do their work. Getting people to respond is a challenge - they had an 11% response rate - but given how simple these sorts of automation steps are with CMSes or mail-management software, I’m surprised it isn’t done more often. None of this replaces educational assessment (e.g. pre- and post-quizzes) of the training material itself, which would be very course-specific; nor is it fine-grained enough to be immediately actionable (although open text fields could give useful qualitative feedback). But this sort of long-term, researcher-reported data about impact would be very useful when trying to secure funding for training programmes.
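As a sketch of how little automation that kind of followup takes - everything here (the CSV layout, the addresses, the survey URL) is hypothetical, and a real version would track who has already been surveyed:

```python
import csv
import smtplib
from datetime import date, timedelta
from email.message import EmailMessage

SURVEY_URL = "https://example.org/impact-survey"  # hypothetical
cutoff = date.today() - timedelta(days=182)       # roughly six months ago

# trainees.csv columns (hypothetical): email, course, completed (ISO date)
with open("trainees.csv") as f:
    due = [row for row in csv.DictReader(f)
           if date.fromisoformat(row["completed"]) <= cutoff]

with smtplib.SMTP("localhost") as smtp:           # assumes a local mail relay
    for row in due:
        msg = EmailMessage()
        msg["From"] = "training@example.org"
        msg["To"] = row["email"]
        msg["Subject"] = f"Six months on: how has {row['course']} affected your work?"
        msg.set_content("Two minutes of feedback, please: " + SURVEY_URL)
        smtp.send_message(msg)
```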
Breaking the mould: What next for the non-traditional academic - Philip Quinlan
Life sciences has an especially strong community of digital research professionals who aren’t academics per se but play an increasingly large role in research; besides the large number of lab techs, think of bioinformaticians (for whom it’s not rare - less common than it should be, maybe, but not rare - to play a leadership role in research projects), or data or systems architects, or data curators.
As Quinlan writes: “Whilst this community is undoubtedly a non-traditional academic, that does not mean they do not undertake research. This breaks the long cast mould, that academics in Universities do research and others provide supporting services.”
Health Data Research UK (HDR UK) has started a Slack community for these “Digital Research Technologists”. It’s a shared Slack channel; if you have a paid or pro Slack account you can add it to your own Slack workspace.
I’m somewhat optimistic about this effort. There are a growing number of “Research Software Engineering” communities springing up, which is a positive development, especially for training; but the challenges of supporting research powered by digital research infrastructure (software, curated data resources, software systems, compute systems, storage systems...) are cross-cutting, and focussing exclusively on one piece of that digital support leaves gaps.
On the other hand, the early definition of a DRT as one who “undertakes research” might not be the right one either - in a multidisciplinary project that requires a lot of digital infrastructure support one way or another, the dividing line between those who undertake research and those who don’t is pretty fuzzy.
Still, I like this initiative to define a larger community. It lines up with what I think is important - e.g. this newsletter is about research computing teams quite broadly.
A new funding model for open source software - Colin McDonnell
While open-source sponsorship models such as GitHub’s Sponsors program have had some success, requiring users to decide which individual efforts to support, and how much for each, seems to dampen uptake. McDonnell suggests sponsorship pools, so that one could, with less effort, sponsor a themed pool of OSS projects for say $100 a year.
It’s fun to think about sponsorship pools for (say) computational biology or academic data science packages (although how the proceeds would be divided up would be... controversial, I suspect). This, or individual sponsorship, could work for research computing, but we’d need to change incentives or norms for it to be an ongoing effort rather than one-off donations by a few generous individuals. Having grant proposals require some tiny set-aside for donations to the open-source computational packages used - even (especially!) for non-computational research - would be a game changer, but I don’t see any movement in that direction.
Testing And Scale - Daniel Bell
This is a short read on the difference in the need for testing at the initial, exploratory phase of coding (where detailed testing is brittle and slows you down) as opposed to the stage of development where the code is being used for real things (where a lack of detailed testing makes the codebase brittle, because it can’t be easily and safely modified). This is particularly relevant to research software development, where I’ve argued a maturity model is a useful way to look at software. (Incidentally, there’s an upcoming SORSE talk on maturity in social sciences research data services that I’m excited for.)
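To make the contrast concrete, here’s a minimal sketch - the function and both tests are invented for illustration, runnable under pytest. Early on, a coarse smoke test keeps you honest without pinning down internals; once people rely on the code, detailed tests locking in known answers are exactly what make it safe to change:

```python
import numpy as np

def center_of_mass(masses, positions):
    """Toy stand-in for a piece of research code under development."""
    masses = np.asarray(masses, dtype=float)
    positions = np.asarray(positions, dtype=float)
    return (masses[:, None] * positions).sum(axis=0) / masses.sum()

# Exploratory phase: a coarse smoke test. It only checks that the code
# runs and returns something sensible, so it survives heavy refactoring.
def test_runs_and_is_finite():
    result = center_of_mass([1.0, 2.0], [[0.0, 0.0], [1.0, 1.0]])
    assert result.shape == (2,)
    assert np.all(np.isfinite(result))

# "Used for real things" phase: a detailed test pinning exact behaviour -
# brittle during exploration, but what makes later changes safe.
def test_known_value():
    result = center_of_mass([1.0, 1.0], [[0.0, 0.0], [2.0, 0.0]])
    np.testing.assert_allclose(result, [1.0, 0.0])
```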
Renaming the default branch from master - GitHub
Highlights from Git 2.28 - Taylor Blau
GitHub is seeing a lot of people moving their default branches away from the problematic-at-best name master, mostly to main (well, to a lot of other names too, but main is the most common). GitHub has set up redirects for renamed branches to support the transition, will let you set the default branch name for new repositories later this summer, and will likely make main the default eventually. Git 2.28 itself now supports setting the default branch name for new repositories, via the new init.defaultBranch configuration option.
You don’t need SRE. What you need is SRE. - Sanjeev Sharma, The DevOps Adoption Playbook
Setting SLOs: a step-by-step guide - Cindy Quach, Google
“But, You are Not Google” - Sanjeev Sharma
When Google’s SRE book came out, many tech companies misreacted in one direction - overly diligently following every step Google took in implementing Site Reliability Engineering, even though doing so makes very little sense unless you’re nearly Google-sized. In research computing, we tended to misreact in the other direction - being exquisitely aware that we aren’t Google, and have no intention of becoming Google, we simply felt there wasn’t anything for us to learn there.
SRE is first and foremost about culture. Do you have a culture that is reliability-focussed? What does reliability mean to you?
But I think having some clarity about service level objectives for how we support our researchers - whether that’s uptime, availability, response time, etc. - as well as error budgets, greatly increases our ability to communicate with our researchers, lets them have realistic expectations, and tells us where we most need to spend our time. Sharma’s article focusses on what that would mean in practice, and Quach’s article walks us through setting SLOs (even if, for most of our services, it’s much more straightforward than the cases Quach tackles).
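As a concrete taste of the arithmetic involved, here’s a minimal sketch - the 99.5% availability target and 30-day window are made-up numbers, not a recommendation:

```python
# Error-budget arithmetic for an availability SLO. The SLO target and
# window below are illustrative, not recommendations.

def error_budget_minutes(slo_fraction, window_days):
    """Minutes of allowed downtime in the window under the SLO."""
    window_minutes = window_days * 24 * 60
    return (1.0 - slo_fraction) * window_minutes

def budget_remaining(slo_fraction, window_days, downtime_minutes):
    """Fraction of the window's error budget still unspent."""
    return 1.0 - downtime_minutes / error_budget_minutes(slo_fraction, window_days)

# A 99.5% SLO over 30 days allows 0.005 * 43200 = 216 minutes of downtime;
print(error_budget_minutes(0.995, 30))   # 216.0
# after a 90-minute outage, about 58% of that budget remains.
print(budget_remaining(0.995, 30, 90))   # 0.5833...
```

The useful part isn’t the arithmetic, of course - it’s that a number like “216 minutes a month” is something you and your researchers can actually discuss and plan around.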
The Mythical DevOps Engineer - Alessandro Diaferia
This one hits a little close to home, because I just completed one search (and helped with another) where internally we were talking about looking for “a DevOps person”. That’s sloppy thinking, and unhelpful, and this article walks you through why: DevOps is a culture and approach we’re trying to inculcate, not a person or a team. What I should have been saying was that we were looking for someone to take on a lot of needed automation and deployment tasks, to help bridge the gap between the operations and development teams, and to help bring the development team up to speed on those approaches. That’s a much clearer set of responsibilities, and it would have made the search a lot easier for both sides.
“How could they be so stupid?” - Lorin Hochstein
Creating Good Processes - Stay SaaSy
So a lot of the Twitter hack of a couple of weeks ago came down, amongst other things, to credentials being shared over Slack. And while it’s tempting (and not entirely wrong) to dismiss this as basically user error - something that just shouldn’t have been done; fire those people and the problem is fixed - that’s not really a solution.
Bad practices typically result from workarounds to awkward processes. Think of desire paths - people want to get from A to B, and if the processes/walkways won’t take them there, well, they’ll find a direct route. There’s quite a literature on workarounds in electronic health records systems, for instance (over 7,000 entries in this list).
As the Stay SaaSy blog points out, good processes make it easier to do the right thing than the wrong thing. If there are bad practices that keep coming up amongst users, maybe that’s a suggestion that a walkway needs to be built along that desire path.
Research Data Alliance 16th Plenary CFP - due 4 August
Short notice on this one, sorry all. This is a call for sessions for the 16th Plenary.
13th IEEE/ACM International Conference on Utility and Cloud Computing - Papers due Aug 31
Call for papers on topics including Big Data Science, Infrastructure and Platforms, Applications, Trends and Challenges, and Visualization.
OpenACC Summit GPU Bootcamp - 3-4 Sept 2020 CET, Virtual, Applications due 17 Aug
Application required; apply individually or as part of a team.
GPU Bootcamp is an exciting and unique way for scientists and researchers to learn the skills needed to start quickly accelerating codes on GPUs. Held as a virtual event in September, this two-day event will introduce you to available GPU libraries, programming models, and platforms, where you will learn the basics of GPU programming through extensive hands-on collaboration based on real-life applications using the OpenACC programming model.
A nice description of recurrent neural networks without using neural networks - working straight from linear algebra and optimization.
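In that spirit, here’s a minimal sketch of the vanilla RNN recurrence in plain numpy - the dimensions and random weights are arbitrary placeholders, and there’s no training loop:

```python
import numpy as np

rng = np.random.default_rng(0)

# Inputs of size d_in, hidden state of size d_h; all values arbitrary.
d_in, d_h, n_steps = 4, 8, 10
W_x = rng.normal(scale=0.1, size=(d_h, d_in))  # input-to-hidden weights
W_h = rng.normal(scale=0.1, size=(d_h, d_h))   # hidden-to-hidden weights
b = np.zeros(d_h)

# The whole "network" is just this recurrence applied along the sequence:
#   h_t = tanh(W_x @ x_t + W_h @ h_{t-1} + b)
h = np.zeros(d_h)
for t in range(n_steps):
    x_t = rng.normal(size=d_in)  # stand-in for the t-th input vector
    h = np.tanh(W_x @ x_t + W_h @ h + b)

print(h)  # final hidden state: a summary of the whole input sequence
```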
GRUB2 bootloader buffer overrun vulnerability. Pretty much only a problem if you didn’t trust the people configuring your systems’ boot loaders and were relying on vendor signatures to make sure they were doing the right thing. Note that RHEL/CentOS patches for this are currently broken.
A couple of recent articles on scaling relational databases, the first focussing on queries, the second on maintenance and storage.
Nice recent work on zfp for storing floating point arrays in compressed format. When I was in grad school I refused to believe this sort of thing was even possible.
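If you want a quick taste, zfp has Python bindings (zfpy); this sketch uses its fixed-accuracy mode, with an error tolerance chosen purely for illustration:

```python
import numpy as np
import zfpy  # pip install zfpy - Python bindings for zfp

arr = np.random.default_rng(0).normal(size=(256, 256))

# Fixed-accuracy mode: compression with a guaranteed per-element
# absolute error bound (the 1e-4 here is just an example value).
compressed = zfpy.compress_numpy(arr, tolerance=1e-4)
recovered = zfpy.decompress_numpy(compressed)

print(f"compression ratio: {arr.nbytes / len(compressed):.1f}x")
print(f"max abs error:     {np.abs(arr - recovered).max():.2e}")
```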
basho - include JavaScript expressions (math, JSON handling) in bash scripts.
One of the things I really don’t like about Jupyter notebooks for exploratory development is that, unlike (say) RStudio, they don’t give the user an offramp to software development tools - unit tests, refactoring, version control - for maturing the code past the exploration phase. Microsoft’s VS Code team is doing good and important work merging notebooks with their IDE’s capabilities.
This online LaTeX editor and diagram maker, Mathcha, looks really cool - has anyone used it?
Absurdly abusing AWS’s Route 53 as a globally-distributed key-value store.
Wait, on Mac you can install fonts with Homebrew?