#182 - 2 June 2024

Scientific judgement is part of our job. Plus: Parking lots; Stop hiding in the comfort of your expertise; No wrong doors; Sales is research; Mentoring plans mandatory for NSF funding; NIHR RSS funding; Communities sustain software; Seqera containers

In follow-up conversations from our series on the research impact flywheel, the issue that keeps coming up for people is discomfort with making choices about what kinds of research to pursue.

One “rebuttal” I keep hearing is that the team’s job is to support all research and researchers equally.

That (a) is very much not what our teams' jobs are; (b) wouldn't be possible even if it were; and (c) if our job really were just to provide some generic service, without applying any judgement as to who we serve or how, our teams' functions would be outsourced.

First, it is not our teams' jobs to just mindlessly churn out generic widgets for (or provide widget access to) anyone who asks. We are hired for our judgement. In fact, we're hired explicitly for our (rare! precious!) combinations of scientific and technical judgement. We apply that judgement in making decisions about who to support, how much effort to put into each engagement, and what to go after next.

If we ask our VPR, or scientific advisory board, or whoever is ultimately charged with providing scientific input into the running of our teams, they will very much NOT say that we should prioritize every possible activity exactly equally. Our institution has needs, and priorities, and areas where modest effort — effort of the kind that only we can provide — can have outsized impact. Finding those areas, where we can have the highest research impact possible? THAT is our job.

Our job is not to run computers, or write software, or analyze data. Those are things we do as part of our job. But our job, the reason our teams exist, is to advance research as far as possible given the constraints we face. And that means making judgement calls.

Second, it is absolutely not possible to provide research support services without favouring one kind of research over another. How would that even work? How exactly would one propose running a team in such a way that it supported, equally and without favour, the needs of the CS, Physics, Public Health, Surgery, and Microbiology departments, as well as Literature, History, Theatre, and Visual Arts?

Every decision we make, no matter how seemingly purely technical, favours one kind of research over another. When we decide what the RAM to CPU ratio will be for our compute nodes, that favours one kind of work over another. When we choose one programming language over another as the default for our new tooling, that favours one set of research communities over another. When we decide on data infrastructure, that favours one kind of data needs over another. When we decide on what to hire for next, that favours supporting one kind of research over another.

Not only is there nothing wrong with that, it's hard to imagine how it could possibly be any other way. But we have to be conscious of the choices we're making and the tradeoffs we're accepting as a result.

Finally, if our teams really were just generic widget providers that didn't need to - in fact, weren't allowed to - apply any kind of scientific judgement about which projects to take on, there wouldn't be any point in hiring people familiar with research, and parts of our functions could be safely outsourced. We'd just be a generic HR or IT or other administrative function, which could be kept partly in house but increasingly handed to external providers, because it isn't especially valued.

Judgement is part of our jobs. It’s why we hire so many people from the research world. That judgement means we’re sometimes tasked with tough choices, which can be uncomfortable. But that’s the job.

And with that, on to the roundup!

Managing Teams

On the other side of town over at Manager, Ph.D., in issue #174 I talked about how Action Brings Clarity - that there’s only so much pondering we can do before we need to start doing things to truly learn about the problems we’re wrestling with.

In the roundup I covered:

  • Good praise-to-criticism ratios are high,
  • The importance of emotional signposting and giving other kinds of context,
  • How to announce team departures,
  • The goal of a strategy is to change a team's behaviour, and
  • Getting and maintaining buy-in.

Technical Leadership

Three Kinds of Parking Lots to Finish More Work and Reduce Decision Load - Johanna Rothman

A parking lot is an invaluable way of taking ideas and possibilities and agenda items off the table for now, so we can focus on the immediate priorities and topics at hand. It’s a way of avoiding saying “no” by instead saying “not now”.

There are a million things we could be doing or thinking about - enough that we'll just spin our wheels indefinitely if we try to consider them all simultaneously. Rothman talks about parking lots in three different contexts - the work of a team, possible product ideas, and portfolios of project options.

Everything will be a little clearer in the future, after we've taken the actions we're working on now; by moving things off our to-do lists and bumping them to the parking lot, we can reconsider them later with more context.


Stop Hiding In the Comfort of Your Expertise - Maarten Dalmijn

This is, I think, the biggest issue I see highly technical leaders of research support teams wrestle with.

The temptation is to consider every issue we face as a technical issue of the kind we’re familiar with. If we can just write some software or build some infrastructure, maybe that will fix everything!

But sadly, as Dalmijn points out, most of the time it’s a people issue, and we need to start building relationships and learning about other areas of work if we want to tackle those problems head-on.


Product Management and Working with Research Communities

No Wrong Doors - Will Larson

In the UK, US, and Canada (and likely elsewhere too - those are the governments I follow most closely), the national "digital services" have been using "digital transformation" as a way to renovate the delivery of government services, making changes much deeper than just putting PDF forms into HTML. There's been a lot of really useful service design work that I think we can usefully learn from.

In other areas of public service, especially dealing with vulnerable populations, there’s an increasing number of groups implementing “no wrong door” approaches. Rather than telling people “sorry, wrong department”, the person getting the request helps the person through the process even if that means working with other departments. I think that’s something we can learn from, too, and so does Larson.

It pains me to say this, but the default for our teams (especially larger systems teams) is to become VERY bureaucratic, with forms to be filled out and “that’s not our department”, making us sound like any IT department or local DMV. This tendency exists for any overworked service organization, by the way — fighting that sclerosis takes active effort.

And it’s worth fending off, because that approach slows research and frankly makes us less appealing to work with - it’s a really crummy first impression.

As Larson points out, you can start “rolling out” No Wrong Doors approaches one person at a time, just making a point of starting a three-way conversation with the person you think is the right person to answer the question. This helps the researcher (and remember, our goal is to advance research in our community), it makes us look better, and it slowly helps build connections with other teams. (Which is good, because they’re colleagues, not competition, in the struggle to advance research - #142).


Sales Is Research - Kevin Yien

When I talk about talking with researchers to identify their needs, or to learn what other services you could be providing, those are a form of sales conversation.

In academia, we kind of recoil from that, but we needn’t. Sales conversations, as Yien tells us, are just a form of research. We’re finding out what works and what doesn’t, what is needed and what is optional. We’re not trying to manipulate anyone, or bend their will — we’re trying to see how we can help.


The Broader Research Support Ecosystem

Want NSF funding? You’ll need to submit a grad student mentoring plan - Katie Langin, Science

Starting this week, principal investigators (PIs) seeking funding from the U.S. National Science Foundation (NSF) will be required to include a plan describing how they will mentor the graduate students and postdoctoral researchers involved in the project as part of their application.

Good.

The most important outcome of the vast, vast majority of research projects is the on-the-job training the grad students and postdocs receive while performing the project (second place would be any collaborations developed or maintained via the work of the project). Focusing on those trainees and their development and career growth should be a priority.

(My hot take: the entire grant proposal process could usefully be re-focussed on trainee development, with the research aims described in the context of that development. Having trainees do uninteresting or derivative work isn’t good for their career growth.)

Regardless, for US research-supporting teams: this is probably a pretty good time to start putting together some text about how your team helps train early-career researchers and shows them the ropes of computational research. Then you can share it with your researchers and their departments, helping them deal with a new reporting requirement while demonstrating your awareness of the funding landscape and highlighting the supports you offer.


NIHR unveils £100m service to help health researchers - Emily Twinch
NIHR Research Support Service - NIHR

And on the UK health research support side:

Somehow I missed this from last year - in the UK, NIHR has revamped how it provides pre- and post-award advice and support (advice from statisticians, health economists, social and behavioural scientists, clinical trialists, and the like) for those applying to NIHR for grants, or for other funding for similar work. There are now a smaller number of hubs (eight) across the UK, each of which often consists of many smaller teams spread across multiple institutions. So I guess the hubs are kind of like concierge front desks to expertise across many teams.

Do any readers have any experience with how this has worked? I’m seeing more of this “regional hub” model, with or without the “concierge front desk” approach, and would love to know how it’s going.


Research Software Development

There’s no such thing as sustainable research software - Katz, Barker, Hong, Turk, Carver, Cohoon, Howison

So you, reader, will already understand that software is not “sustainable”. There’s no sustainability linter you can run over the code to highlight possible sustainability issues, no test suite you can run to check for sustainability regressions. Sustainability is not an inherent property of a piece of software.

Same with a computing system, or a curated database, or…

Instead, these efforts are sustained, or not, by people or organizations who pay for it to be sustained.

Those people or organizations do this sustaining because they (or a community they support) need that software or other effort to do their jobs. Because there are users who advocate for sustaining the effort, or sustain it themselves.

In other words, sustaining is something a community does, and to the extent that “sustainability” in this sense is a thing that exists at all, one enhances an effort’s “sustainability” by nurturing and supporting that community, and by making it easy for them to effectively advocate for continued sustenance.

Even so, the need the community has for that effort is going to wax and wane over time, as will the sustaining. Eventually, at some point, the community will move on or dissolve entirely, and the sustaining will come to an end.

So a tool starts as a prototype, or as something bespoke for one problem, grows in technological readiness (#91) to become RCD development, not just research (#119), and over time gathers a community which, with luck, will sustain the effort for as long as the community exists and needs it.

It may not find or build such a community - in startup speak it may not find “product-market fit”, and fade away (as with Sochat’s article on updated software in #172). That’s disappointing for the individuals involved, but it’s very much the nature of research - not every idea or effort pans out.

This understanding of sustaining is starting to gain wider acceptance. In this article, some RSE heavy-hitters describe how they’re starting to think about software and sustaining.


OSS Licensing for Researchers and Educators - George Washington University Open Source Program Office

GWU has a nice short three-lesson course for researchers and educators about using licensed software and choosing licenses for their own work.


Research Computing Systems

Announcing: Seqera Containers for the bioinformatics community - Brendan Bouffler, Paolo Di Tommaso, and Phil Ewels

AWS and Seqera (the Nextflow folks) have jointly put out a service that serves container images with some features specifically for research.

There are reproducible container URIs, a large library of supported Conda and PyPI and Spack packages, and the containers are quickly generated on the fly.

(I guess they pre-build the appropriate layers, so you can just pull an image without building from a Dockerfile? I’m just guessing here.)

It works with Docker or Singularity, or anything else that consumes their images, and there are security scans and SBOM manifests. There are builds for both x86 and Arm.
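To make that concrete, here’s a minimal sketch of what using such a service might look like from the command line. The image URI below is hypothetical - in practice you’d request an image for a given set of packages from the Seqera Containers service and get back a pinned, reproducible URI to pull:

```
# Hypothetical image URI for illustration only; the real, pinned URI comes back
# from a Seqera Containers request for your chosen Conda/PyPI/Spack packages
IMAGE="community.wave.seqera.io/library/samtools:1.20--0123456789abcdef"

# Pull and run the image with Docker
docker pull "$IMAGE"
docker run --rm "$IMAGE" samtools --version

# Or consume the same image with Singularity/Apptainer
singularity pull samtools.sif "docker://$IMAGE"
singularity exec samtools.sif samtools --version
```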

This seems really cool, and is the most interesting bit of scientific software packaging I’ve seen recently.


Random

I’ve posted some controversial things here before, but check this take out: some good things about autoconf.

Copying a file from a thirty-year-old, only-kinda-works laptop.

Python applications that install themselves - PyApp.

Some thoughts on how the ways we communicate and share ideas and work - shaped by an era of small data and mostly analytical proofs - might shift in the age of lots of data and neural nets - Universal Modelers. Not sure I agree with everything there, but it’s thought-provoking - always useful to be reminded that the way we do things is a choice, not a law of nature.

Sure, LLVM and JVM are neat, but what if your programming language compiled to bash, like Amber?

Terminal programs don’t have to be boring - Terminal TextEffects.

Running asynchronous software book clubs.

Data-to-paper via LLM, in a reproducible and human-verifiable way, with data-to-paper.


That’s it…

And that’s it for another week. If any of the above was interesting or helpful, feel free to share it wherever you think it’d be useful! And let me know what you thought, or if you have anything you’d like to share about the newsletter or stewarding and leading our teams. Just email me, or reply to this newsletter if you get it in your inbox.

Have a great weekend, and good luck in the coming week with your research computing team,

Jonathan

About This Newsletter

Research computing - the intertwined streams of software development, systems, data management and analysis - is much more than technology. It’s teams, it’s communities, it’s product management - it’s people. It’s also one of the most important ways we can be supporting science, scholarship, and R&D today.

So research computing teams are too important to research to be managed poorly. But no one teaches us how to be effective managers and leaders in academia. We have an advantage, though - working in research collaborations has taught us the advanced management skills, but not the basics.

This newsletter focusses on providing new and experienced research computing and data managers with the tools they need to be good managers without the stress, and to help their teams achieve great results and grow their careers.


Jobs Leading Research Computing Teams

This week’s new-listing highlights are below in the email edition; the full listing of 183 jobs is, as ever, available on the job board.