Where lab funds go

As you can tell from the above graph, the people in the lab (including me) are by far its most costly resource, accounting for the majority of all lab expenditures. Thus, while there are other important reasons, there's always this very "bottom line" reason for wanting to minimize how much personnel time and effort is wasted by confusion and mismanagement!

Some Expected Yields

Here is some real-world data describing the yields we can expect from some of these routine lab procedures and services.

The above plot shows how much total plasmid DNA we get from the miniprep kit we use in the lab.
The plot above shows the expected total yields of DNA based on the extraction type / method.
And this is the pretty wide range of reads we've gotten from submitting plasmids to plasmidsaurus.
The above graph shows how many (raw) reads we've gotten from Azenta / Genewiz Amplicon-EZ.

Oh, and this is a good one:

How well my determination of flask "confluency" actually correlated with cell counts. I mean, sure, there must be some error imparted by the actual measurement of the cells when counting, but I think we all know it's mostly that my estimate really isn't precisely informative.

Identity matrix of indices used in the lab

We'll be doing a lot of multiplex amplicon-based Illumina sequencing, which means we'll eventually have a lot of different indices (I think some people refer to these as barcodes) used to multiplex the samples. I'm doing everything as 10 nt indices, so theoretically there are 4^10, or slightly over one million, unique nucleotide combinations that could be made with an index of that length. I don't intend to have anywhere close to 1 million different primers, so I think we're pretty safe.

That said, I'd like to ensure our indices are a sufficient distance away from each other so that erroneous reads don't result in one index being switched for another. Anh has come up with a way to make sure our randomly generated indices don't overlap with previous indices, but it's still useful for me to keep track and make sure things are running smoothly. Thus, I generated an identity matrix of all of the indices we have in the lab right now.

In a sense, the diagonal is a perfect match, and serves as a good positive control for the ability to see what close matches look like. By eye, the closest matches between any two unique indices seem to be 70% identity, which I can live with.
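As a sketch of how such a matrix can be computed, here's a minimal version of the idea. The index names and sequences below are made up for illustration; they are not our actual lab indices.

```python
import numpy as np

# Hypothetical example indices (made up; ours are 10 nt as described above)
indices = {
    "idx01": "ACGTTGCAAT",
    "idx02": "TGCAATCGGA",
    "idx03": "ACGTAGCAAT",
}

def percent_identity(a, b):
    """Fraction of positions at which two equal-length sequences match."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

names = list(indices)
mat = np.array([[percent_identity(indices[i], indices[j]) for j in names]
                for i in names])

# The diagonal is all 1.0 (each index vs. itself, the built-in positive
# control); off-diagonal values flag any worryingly similar pairs.
```

With real data, any off-diagonal cell approaching ~0.8-0.9 identity would be worth swapping out before ordering primers.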

Vacuum Concentration

I hate the high cost of research lab materials / equipment, especially when the underlying principles are pretty simple and mundane. For example, I’ve used blue LEDs and light-filtering sunglasses to visualize DNA with SYBR Safe. And I’ve used a mirrorless digital camera paired with a Python script to visualize Western blots.

Well, this time around I was thinking about vacuum concentration. Many of the lab spaces I've been around have had speed-vacs accessible, though I've never really used them since I rarely need to lyophilize or concentrate aqueous materials. But the other day, we had some DNA that was 1.5 to 2-fold less concentrated than we needed for submission to a company, and I was reluctant to ethanol precipitate or column-concentrate the sample at the risk of losing some of the total yield. Thus, I became curious about taking advantage of vacuum concentration.

The lab already has built-in vacuum lines, so I just needed a vessel to serve as a vacuum chamber. I bought this 2-quart chamber from Amazon for $40, and started measuring what rates of evaporation I'd see if I left 200uL of ddH2O in an open 1.5mL tube out on the bench, versus in the vacuum chamber.

Vacuums are measured in "inches of mercury", ranging from 0″ Hg, which is atmospheric pressure, to 29.92″ Hg, which is a perfect vacuum (no air left). As you can see, the built-in vacuum lines at work top out at ~ 21″ Hg: somewhat devoid of air, yes, but far from a perfect vacuum. I even did a test where I put a beeping lab timer into it, and while the vacuum chamber did make it a lot quieter, it was far from completely silent, unlike what the vacuum chamber exhibit at the Great Lakes Science Center achieves (here's the Peeps version). But what does it do for vacuum concentrating liquid? Here's a graph of the results, when performed at room temperature.

So the same sample in the vacuum is clearly evaporating much faster. I can make a linear model of the relationship between time and amount of sample lost (which is the line in the above plot), and it looks like the water is evaporating at about 1% (or 2 uL) per hour under atmospheric conditions (on the bench), while it's evaporating at about 2% (or 4 uL) per hour in the vacuum chamber. Thus, leaving the liquid in the vacuum chamber for 24 hours resulted in half the volume, or presumably, a 2-fold concentration of the original sample.
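The linear model above can be sketched roughly like this. The time points and volumes are illustrative stand-ins matching the rates described, not the actual measurements.

```python
import numpy as np

# Illustrative stand-in data: 200 uL starting volume, measured over a day.
hours = np.array([0.0, 4.0, 8.0, 12.0, 24.0])
vol_bench = np.array([200.0, 192.0, 184.0, 176.0, 152.0])   # ~2 uL/hr lost
vol_vacuum = np.array([200.0, 184.0, 168.0, 152.0, 104.0])  # ~4 uL/hr lost

# A degree-1 polyfit returns (slope, intercept) for volume vs. time
slope_bench, intercept_bench = np.polyfit(hours, vol_bench, 1)
slope_vacuum, intercept_vacuum = np.polyfit(hours, vol_vacuum, 1)

# slope_bench is ~ -2 uL/hr (1%/hr); slope_vacuum is ~ -4 uL/hr (2%/hr),
# i.e. roughly half the starting volume gone after 24 hours in the chamber.
```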

Clearly, this is not a speedvac. If I understand it correctly, speedvacs also increase the temperature to speed up the evaporation process. I could presumably recreate that by putting a heating block under the vacuum chamber, but I haven't gotten around to trying that yet. There also is no centrifuge. While I could probably modify and fit one of my Lego minicentrifuges inside, the speed of evaporation at room temp has been slow enough that everything has stayed at the bottom of the tube anyway, so it's not really a worry so far. At some point, I'll also perform a number of comparisons at 4*C (since the vacuum chamber is so small, I can just put it in my double-deli lab fridge), which may make more sense for slowly concentrating more sensitive samples.

Overall, for a $40 strategy to achieve faster evaporation, this doesn't seem too bad. In the future, if we need to concentrate a DNA sample 2-fold or so, maybe it's worth just leaving it in the vacuum chamber overnight. Furthermore, the control sample is kind of interesting to consider, as it defines how fast samples left uncapped on the bench will evaporate (I suppose I'll try this with capped samples at some point as well, which will presumably evaporate a little more slowly). Same goes for samples kept in the fridge, which also evaporate at a slow but definable rate. After all, "everything is quantifiable".

1/25/2023 Update: In explaining this as a potential option, I used the word "slow-vac", which is a good name for this. Time to trademark it! Though other people were onto this name a while back, so maybe they already did (obviously they didn't).

COVID testing at CWRU

As a PI, I feel it's important to know how safe my employees are when coming to campus to work during the pandemic. While CWRU was rather slow in getting on-campus testing set up, they did set up a surveillance testing program and a public website to post the results, which has largely been reassuring. I've been keeping track of the results every week for the last few months and will continue to do so for the foreseeable future. This is what things currently look like:

As of writing this (the first week of February), the absolute numbers of infected students / faculty / staff in a given week are firmly in the double digits, but thankfully the test percent positivity has been at or under 1%, unlike November & December. Now that the students are back for the new semester, we will see how the pattern may change, but at least the pandemic has felt largely under control here, at least in the broader context of the conflagration of viral spread we’ve been seeing in this country over the past year.

Firmware flaw in recent Stirling SU780XLE -80C freezers

[This post is a follow-up to my previous post on this subject]

Wow, I never thought I’d learn so much about a freezer company, but here we are. I took a deep dive on this issue with the Stirling SU780XLE ULT freezers. It’s still second-hand (through company reps and people on social media) and I don’t know if I believe everything about the explanations I’ve received (for example, I count roughly 8 instances of freezer firmware getting stuck through various contacts, and I vaguely remember a company rep saying this has happened <= 10 times), but this is my understanding of the situation:

The issue is indeed a firmware problem, and it affects all units produced between ~ Aug 2019 and ~ Sep / Oct 2020. Aug 2019 is when they switched one of their key electronic components to a BeagleBone (apparently a circuit board akin to a Raspberry Pi). Part of its job is to relay messages from one part of the circuitry to another. The firmware they wrote for it had a flaw where, in certain circumstances that the company still does not understand, one part of the relay no longer works, and the other part of the relay just keeps piling up commands that go unexecuted. So that's the initial issue. There is also supposed to be "watchdog" code that recognizes these types of instances, but this was not working either. Thus, the freezer becomes stuck in the last state it was in before the relay broke. If it was in a "run the engine to cool down the freezer" mode, then it would have been stuck in a state that kept things cold. If it was in a "stay on but don't do anything b/c it's cool enough" mode, then it would have been stuck in a state where it didn't cool the freezer at all. This is the state my freezer was stuck in**.

[** I'm actually not 100% convinced on this. My freezer stopped logging temperatures / door openings, etc at the end of August. If I look at the number of freezer hours, it says ~8,000 hrs (consistent with Oct'19 through Aug'20) rather than the ~10,000 hrs for Oct'19 to Nov'20. It is definitely within the realm of possibility that my Stirling has been a zombie for the last 70+ days, and either slowly reached 5*C over time or had a second event over the last weekend that triggered the thaw in its susceptible state.]

It sounded like they had seen numerous freezers get stuck in the former mode, which was less devastating since it didn't result in the freezer thawing and product loss. They had seen one freezer get stuck in the catastrophic mode before mine, back on Aug 20th. They brought it back to their workspace, and couldn't recreate the failure. They could artificially break the relay to reproduce the condition, allowing them to create additional firmware that actually triggers the "watchdog" (and other failsafes) to reset the system when it senses that things have gone wrong, even though they still don't know what the original cause of the issue is. The reason the freezers produced after Sep / Oct 2020 are unaffected is that these were already programmed with the new firmware. The firmware I had when it encountered the problem was 1.2.2, while it became 1.2.7 after it got updated.

Freezers made / distributed(?) within the last month were pre-programmed with the updated firmware, and are supposedly not susceptible to the GUI freezes. Apparently they’re having trouble updating the firmware in the units b/c the update requires a special 4-pin programming unit that is in short supply due to the pandemic.

I won’t get into the details of my experience with Stirling (it apparently even includes a local rep who contracted COVID). They completely dropped the ball in responding, and they know that (and I’m sure they regret it). What will remain a major stain on this situation is that THEY HAVE KNOWN ABOUT THIS FLAW FOR MONTHS AND DID NOT WARN ANY OF THEIR CUSTOMERS. I received an email ~ 8 days ago saying they were going to schedule firmware updates to “improve engine performance at warmer set points, enhance inverter performance and augment existing functionality to autonomously monitor and maintain freezer operation”. Other customers with susceptible units did not even receive this vague and rather misleading email. My guess is that they chose to try to maintain an untarnished public perception of their company over the well-being of the samples stored by their customers. My suspicion is that their decisions may have been exacerbated by the current demand for -80*C freezers for the SARS-CoV-2 mRNA vaccine cold chain distribution (Stirling has a major deal with UPS, for example), though there is no way I will ever confirm that.

After my catastrophic experience, they bungled their response, and only jumped into action after I tweeted about my experience. I really wanted to like this company, as they are local and not one of the science supply mega-companies (eg. ThermoFisher). My fledgling lab is still out almost $3k in commercial reagents, and many of my non-commercial reagents and samples were compromised. They did make a special effort to update my firmware today and answer my questions, but I still can't help but feel like a victim of poor manufacturing and service. All of the effort I've put in over the last few days was to get some answers and help others avoid the same situation I was put in.

I’ll post any updates to this page if I learn any more, but I’m now satisfied with my understanding of what happened. Now back to some actual science.

Stirling -80C Freezer Failure

I'm getting really tired of wasting time and brain-power on this, but unlike buying regular consumer goods (like the items on Amazon with hundreds to thousands of reviews), buying and dealing with research equipment is subject to really small sample sizes, so the more information that's out there, the better. Thus, I'll keep this page as a running log of my experience with Stirling's XLE Ultra Low Temperature (aka. -80*C) freezer.

TL;DR -> My 1 year-old freezer failed in the most catastrophic way: the firmware froze and displayed -80*C while the contents slowly thawed, and it had reached 5*C by the time I noticed it wasn't working. No alarms, since the firmware had crashed and was frozen (again, displaying -80*C the whole time). While I've had no issues with their mechanics, I suspect their firmware is potentially critically flawed.

Part 1) Discovering that the freezer had failed: I purchased a Stirling Ultracold SU780XLE a little over a year ago (~ October 2019), shortly after I started up my lab at CWRU. I've been in labs that had poor experiences with the ThermoFisher TSU series freezers, and the reviews for the Stirling seemed pretty good on twitter. Furthermore, CWRU has a rebate program with Stirling due to their energy efficiency, and probably also because they are local (they are based in Ohio).

I went into the lab last Sunday evening (Nov 8) to do some work. I went to retrieve something from the Stirling -80*C, and saw that the usual ice on the front of the inner doors was gone. I opened up the inner doors and looked at the shelves, and there was water pooled on every shelf. I looked at some of the most recently preserved cryovials of cells we had temporarily stored on one of the shelves, and they were all liquid. Things had clearly thawed inside the freezer. I closed the outer door and looked at the screen at the top, and it was displaying -80*C. The screen is actually a touchscreen, so I tried to flip through its settings, but it was completely unresponsive to my touch. It became pretty clear to me in that moment that the freezer firmware had crashed with the screen displaying -80*C. Ooof.

The picture I took of the frozen screen, timestamped Sun, Nov 8, 7:25pm.

I pulled the freezer out from the wall, found the on/off switch, and switched it to OFF. The first time, I actually flipped the switch back to ON too soon, as the screen never reset. I'm guessing there must be some short-term battery / capacitor that allows the freezer to keep running through momentary interruptions in power. So I then set it to OFF, waited for the screen to go blank, and then set it back to ON. After booting up, the screen displayed 5*C. So there we go. It had indeed been stuck on that screen, and rebooting the firmware got it to show the real temperature again. Which is a VERY BAD real temperature.

The picture I took of the screen after resetting the freezer, timestamped Sun, Nov 8, 7:28pm.

I immediately emailed Stirling (email timestamped Sun, Nov 8, 7:37 PM). I received a response from a customer service representative Mon, Nov 9, 8:01 AM saying “I’m sorry to hear that you are having issues.” and that they were referring me to the service dept. Got an email from the Stirling service department Mon Nov 9, 8:39 AM asking for more information and a picture of the device’s service screen. I replied to this email with all requested information Mon, Nov 9, 10:43 AM. I got an email telling me I was “Incident-7576” on Mon, Nov 9, 11:00 AM. Complete radio silence from them as of writing this section of this post, which is ~ 72 hours later (Thurs, Nov 12, ~ 11:00 AM), even after I sent them a pretty strongly worded email yesterday at 6:00 AM. I’ll follow up on my continued experience interacting with the company in section 3 of this post.

Otherwise, the mechanics of the freezer seemed to be fine. It took me about an hour to mop up all of the water and look through my boxes to see what had thawed (which was everything except the 15ml conicals, which seemed to have enough mass to not have fully thawed). I was still very aggravated and in a bit of shock at having to deal with this, but still went about my work. Two hours later, the freezer was back down to -30*C. The next morning, it was back at -80*C. So the reset was clearly sufficient to make the freezer operational* again. ( *since it presumably still encodes the same firmware glitch which caused the problem in the first place).

Part 2) Taking stock of my lost items and forming my interpretation of what happened: Over the next couple of days, I had a chance to take stock of everything I had lost during the thaw. Being a new lab (and thus with a ~ 1 year old freezer), we didn't have a ton of items in there, but they were not inconsequential. The commercial reagents were largely competent bacterial cells, amounting to ~ $2,110 of lost material. There were also ~ $720 worth of chemicals, which after a freeze-thaw cycle are of somewhat questionable potency, and will likely need to be purchased again before use in a publication. There were also dozens of cryovials of cell lines made in house, as well as a few cryovials of cells, dozens of tubes of patient serum, and viral stocks for SARS-CoV-2 research either given by other labs or provided by BEI Resources, which will need to be replaced as we have no backups. While there is no monetary value associated with these reagents, the amount of work-time used in creating them and now replacing them is a major loss.

As a scientist, I think it's natural for me to try to synthesize all the information I have to piece together what happened. There was no power loss (it was a sunny weekend without any storms, and no other equipment in the lab showed any aberrant behavior). Nobody had gone into the freezer for any extended amount of time, especially since it was over the weekend. The last time I had gone into it was Friday afternoon, when it seemed fine. That said, it is very well possible it had already crashed at that time. I don't think I can visually tell the difference between a freezer at -80*C, -40*C, or maybe even -10*C. Frozen looks frozen. In the absence of any alarms or temperature readings provided by the freezer itself, the only visual clue was going to be water from the thawed ice in the freezer, and by that point it was going to be too late.

To see if I could figure out when the freezer may have crashed / failed, I tried going back into the freezer log. This is all the information I could glean from the freezer:

So, uh, that history feature wasn't all that informative, but there were still a couple of points I could glean from looking at it.
1) It goes from -80*C in the data points directly preceding the event to > 0*C when I restarted it. So it completely stopped logging during the event. This is entirely consistent with the software having crashed, and explains why it was still showing -80*C on the screen while it had thawed.
2) Uhhhh. I can’t actually figure out what day and time it failed b/c it had apparently logged its most recent operation as August 26th. Clearly it wasn’t August 26th when it had failed, since August 26th was 72 days before Fri, Nov 6, which was the last time I had looked in the freezer before the event, when it was clearly still completely frozen. Weirdly, I didn’t have to tell it what day it was after I reset it, so it must have had an internal clock that knew it was Nov 8th upon the reset. So here’s another indication of there being something glitchy with their firmware.

Ironically, I had a separate low-temperature thermometer plugged into it (a TraceableLIVE® ULT Thermometer, Item#: LABC3-6510), which really isn't a bad thermometer, but it eats up batteries and I had run out of disposable AAAs (I don't think it takes a wall plug, which it should, so that it only needs batteries during power outages!), so I was waiting for some rechargeable AAAs to come in from Amazon. TBH, they had already come in a week or two earlier, but the freezer had been operating perfectly fine until this, so charging the batteries and getting the secondary thermometer up and running again wasn't high on my to-do list. In hindsight, a very naive and critical mistake!

Part 3) Stirling’s response to this:

Thurs, Nov 12, 11:00 AM: So far, it's been pretty nonexistent. I wrote them an email yesterday (Nov 11) saying 1) everything I've seen tells me this was a catastrophic failure of the freezer itself, so are you going to take responsibility for it? And 2) I'm still quite worried about the freezer's operation, since the glitch that caused this has not been addressed. I've yet to get any non-automated response from them past the most recent email on Nov 9, 11 AM.

Thurs, Nov 12, ~ 5:00 PM: Tweeting about my experience seemed to have escalated things, as I got two phone calls. The first was from the technician handling my case ("Incident-7576"), who asked if anyone had been in touch with me about scheduling the fix on the previous Monday and Tuesday. I said no, this was the first response I had gotten. I also pointed out that I had emailed him yesterday with some questions. Apparently he had not seen the email. So, a rather poorly managed customer and technical service response.

As soon as I got off the phone, the VP of Global Services called me (this is where I think the tweets likely made a difference). Provided apologies (as expected), but I also got to ask for answers to my specific questions. Here are things I learned:
1) "We're not responsible for sample loss". So they won't cover anything you lose if the freezer fails and thaws, even if it happens in the most spectacularly bad way, completely due to flaws in freezer design or production that torpedo its operation.
2) The mechanics are covered for 7 yrs, but the material and labor warranty is only for 2 yrs. This includes things like “door handles and electronics”, with electronics clearly being the most relevant item here. They offered to extend this warranty to 3 yrs. I don’t think I’m unreasonable to feel like that is a pretty weak gesture based on the freezer failing the way it did.
3) I’ve had people tell me I should ask for a refund to get it replaced. Well, they don’t do that.
4) Apparently there are three parts to their firmware. One of them is called the “Beagle Bone”, which they said is responsible for making the real-time connection between the freezer settings and the parts. Quick google search suggests it’s something like this.

The saga continues. Let’s see what the technicians tomorrow say.

Fri, Nov 13th: Causing a stir on twitter apparently kicked things into action. I also put my detective hat on and I think I figured out what was going on. Too much to bury way down here, so I made a new post.

Chemiluminescent images with standard cameras

I work with proteins, so I've done Western blots throughout my career. Originally that meant using film and developers, and later, imagers. Imagers are way better than having to deal with film, so as soon as I knew I was going to start up a lab, I started looking at various imagers and quoting them out. Even the most basic imagers with chemiluminescent capabilities were quoted in the $24-27k range. But then it dawned on me..... are these imagers nothing but kind of old cameras with a light-proof chassis and dedicated acquisition and analysis software? During my stint in Seattle, I dabbled in taking long-exposure photography of stars in my parents' back yard. Perhaps I could do something similar for taking images of blots?

I had bought an Olympus E-PM2 16.1MP mirrorless camera for $320 back in 2014. While I used it a decent amount at first, I eventually stopped using it as often, as I started using my smartphone for quicker snaps while using Anna's Nikon DSLR with an old telephoto lens for more long-distance pictures. So, with the E-PM2 now not doing much at home, I figured I'd bring it in and try it with this. I cut a hole in the top of a cardboard box that I could stick the camera into. I dug up the intervalometer I had used for those long-exposure photos of the sky. Nidhi had been doing some western blots recently, and had kept her initial attempts in the fridge, which was good since I could just grab one of those membranes instead of running and transferring a gel just for this. I incubated it in some anti-beta-actin HRP antibody, washed it, and exposed it.


Above is something like a 5-minute exposure. My cardboard box wasn't perfectly light-tight around the sides, so there's a decent amount of light bleeding in. I had the blot lifted up within the box on a metal pedestal (some heat-blocks that weren't being used), so the blot itself is actually pretty free from being affected by the bleed-over light. Notably, the beta-actin bands are blue! Which makes sense: if you've ever mixed bleach with luminol, you've seen a flash of blue light. Furthermore, if you google "hrp luminol nm", you'll see that the reaction should emit 425nm light (which is in the indigo / violet range). Notably, this is a difference between my regular-use Olympus camera, which is a color camera, and the cameras you'd normally encounter on equipment like fluorescence imagers, which are normally black-and-white.

I had actually been playing around a bit with image analysis in Python over the last week or so (to potentially boot up an automated image analysis pipeline). That work reminded me that color images are a mixture of red-green-blue. Thus, I figured I could isolate the actual signal I cared about (the chemiluminescent bands) from the rest of the image by keeping signal in the blue channel but not the others. So I wrote a short Python script using the scikit-image, matplotlib, and numpy libraries, and ran code to isolate only the blue channel, convert it to greyscale, and invert it so the bands would appear dark against a white background.
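The channel-isolation step can be sketched like this. This isn't my actual script (which used scikit-image and matplotlib for reading and plotting); it uses a synthetic stand-in array in place of the real blot photo, with plain numpy for the core operations.

```python
import numpy as np

# Synthetic stand-in for the color photo of the blot: a dark RGB image
# (height x width x 3) with one bright band in the blue channel only.
img = np.zeros((50, 100, 3), dtype=np.uint8)
img[20:30, 10:90, 2] = 200   # the chemiluminescent band emits blue light

blue = img[:, :, 2]          # keep only the blue channel (now greyscale)
inverted = 255 - blue        # invert so bands appear dark on a white field
```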

To be honest, the above picture isn't the first ~ 5-minute exposure I mentioned and showed earlier. Knowing this seemed to be working, I started playing around with another aspect that I thought should be possible: combining the values from multiple exposures to make an ensemble composition. The reason being that a single long exposure might saturate the detector, making you lose quantitation at the darkest parts of the band. I figured, why couldn't one just take a bunch of shorter exposures and add them up in silico? So I took five one-minute exposures. The above image is the inverted first image (with an exposure of one minute).

And the above image here is what it looks like if I make an ensemble plot from 5 separate 1-minute exposures. With it now effectively being a “longer exposure” (due to the combining of data in silico), the signal over the background has been improved, with no risk of over-saturating any of the detectors.
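The in-silico summation amounts to a per-pixel sum across frames, which can be sketched like this (with simulated noisy frames standing in for the real exposures):

```python
import numpy as np

# Five simulated 1-minute exposures of the same (made-up) band; each frame
# has Poisson background noise plus a dim band that never saturates.
rng = np.random.default_rng(0)
exposures = []
for _ in range(5):
    frame = rng.poisson(5, size=(50, 100)).astype(float)  # background noise
    frame[20:30, 10:90] += 40.0                           # band signal
    exposures.append(frame)

# Summing in silico acts like one long exposure: signal adds linearly,
# while no single short frame ever risked saturating the sensor.
ensemble = np.sum(exposures, axis=0)
```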

So while I'm sure there are many suboptimal parts of what I did (for example, a color camera may be less sensitive for chemiluminescent signals), it still seemed to work pretty well. And it was essentially free, since I already had all of the equipment sitting around unused (and it would have cost < $400 if I had to buy it just for this). It also gave me a chance to look under the hood a bit, practice some Python-based image analysis, and prove to myself that I was right.

Miniprep efficiency

The SARS-CoV-2 pandemic-caused research ramp-down period was a weird time for me / the lab. I sent Sarah to work from home for 10 or so weeks, meaning I had to do the lab work myself if I wanted to make any progress on the existing grant-work, or on any of the SARS-CoV-2 research I was trying to boot up. This resulted in some VERY long weeks over the last few months, as I was really trying to do everything at that point. Cognizant of this, I even started timing myself doing some of the more routine / mundane tasks, to see if I could maximize my efficiency. Perhaps the most consistent / predictable of the tasks were minipreps. In particular, I was curious whether doing more minipreps simultaneously saved me time in the long run.

So the short answer was yes. 24 is a very comfortable / logical number for me (it just fills up my mini-centrifuge, and it divides evenly into three complete 8-tube PCR strips for Sanger later on), and I consistently processed those in about an hour. Doing fewer was somewhat less efficient, though sometimes you have to do that if you're in a rush to get some particular clone of recombinant DNA plasmid. Then again, doing more than 24, while somewhat exhausting, does save me some time overall. Thus, I found that to be a worthwhile strategy to plan for during that period.

That said, I’m very glad to have Sarah back in the lab helping me with some of the wet-lab work again. Not only does it save me time, but also saves me focus; I’ve gotten pretty good at multi-tasking, but I still do hit a limit in terms of the number of DIFFERENT things I can do / think about at the same time.

Plasmid Lineages

Recombinant DNA work is integral to what we’re doing here, so I’ve become extremely organized with keeping track of the constructs we are building. This includes having a record of how sequences from two constructs were stitched together to create a new construct. Here’s a network map showing how one or more different plasmid sequences were combined to create each new construct.

[The series of letters and numbers prefixed with G (for Gibson) are unique identifiers I started giving new constructs when it became clear partway through my postdoc that I was going to need a better way of tracking everything I was building. Those prefixed with A are constructs obtained through addgene. Those prefixed with R are important constructs I had built before this tracking system, where I had to start giving them identifiers retroactively.]
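The actual script and data live on the lab GitHub, but as a minimal sketch of the idea: each plasmid is a node, each edge points from a parent sequence to the construct built from it, and a library like networkx can assemble and query the lineage graph. The identifiers below are made up, following the naming scheme described above.

```python
import networkx as nx

# Made-up identifiers following the naming scheme above: A = addgene,
# R = retroactively-ID'd, G = Gibson builds. Each edge is parent -> child.
lineage = [
    ("A0001", "G0101"),   # addgene backbone used in a new build
    ("R0002", "G0101"),   # pre-system construct stitched into the same build
    ("G0101", "G0150"),   # that build later used as a parent itself
]

G = nx.DiGraph(lineage)

# Constructs with two or more incoming edges were stitched together
# from multiple parent sequences:
multi_parent = [n for n in G.nodes if G.in_degree(n) >= 2]
```

From here, `nx.draw` (or graphviz export) produces the kind of network map shown above.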

Edit 9/1/2020: Even if some of my code / script-writing is kind of haggard, I figure I’ll still publicly post them in case it’s useful for trainees. Thus, you can find the script + data files to recreate the above plot at this page of the lab GitHub.