Our application, titled “Recombinant DNA Technologies for Multiplex Genetic Assays in Human Cells,” was funded by the National Institutes of Health’s National Institute of General Medical Sciences. This five-year grant, with $250,000 in direct costs per year, will support our continued efforts pairing landing pad-based cell engineering with multiplex assays to unlock new aspects of protein and cell biology, as well as improve our understanding of human genetics. Goals include creating generalizable, multiplex methods for functional complementation, fluorescent transcriptional reporters, and large-scale cDNA screening. Thank you NIH NIGMS for supporting us with this wonderful funding mechanism!
Estimating coverage for NNK SSM transformations
We were recently doing some small-scale, Gibson-based NNK site-saturation mutagenesis PCR reactions. In this scheme, we transform each position independently, so the number of transformants (i.e. colonies) on a given plate should be directly related to the likelihood that every variant we want to see is there at least once.
In fact, three parameters factor into how good the variant coverage is at a given position: 1) nucleotide biases in the synthesis of the NNK degenerate region of the primer, 2) the number of transformants, and 3) the fraction of total transformants that are actually variants, rather than undesired molecules such as carryover of the WT plasmid used as the template.
For any given experiment, you’re not going to know what the nucleotide bias is like until you actually Illumina sequence your library… but at that point, you’ll already know the variant coverage of your library, so there’s no need to estimate it anymore. On the other hand, if you know the nucleotide biases you observed for similar libraries, you can do this estimation long before you get around to Illumina sequencing. Based on previous libraries, I have a pretty good idea of what the biases in machine-mixed NNK primers from IDT are like. For simplicity’s sake, I’m using 40% G, 20% C, 20% A, and 20% T as a rough estimate of the nucleotide bias I saw in the most biased NNK libraries.
The other two parameters are very much experiment-specific, and can be determined shortly after generating the library. The number of transformants can be determined by counting colonies from the transformation, and the amount of template contamination can be roughly determined by Sanger sequencing a handful of colonies from those plates. Thus, I chose a few reasonable values for each: colony counts ranging from the very small (10 and 20) to quite large (400 and 1000), and template contamination percentages from almost impossibly low (0%) through much more likely (10 or 20%) all the way to possibly prohibitively high (50% and 75%). I then simulated the entire process, bootstrapping 20 times to get a sense of the average output, and made a plot showing what variant coverages you get depending on the combinations of those observed parameters. This is what the plot looks like:
So there you go. In a reasonable situation where you have, let’s say, 10 or 20% template contamination, you’d really be hoping to see at least 200 colonies, and hopefully around 400, at which point you can really pat yourself on the back. If things went awry with the DpnI step, for example, and between a quarter and a half of your colonies were template, then you’d minimally want 400 or so colonies, and shouldn’t feel too safe until you’ve got a fair bit more than that. Though that’s only to make sure you have at least one copy of every variant at that position; if your library is half template, chances are you’ll run into a bunch of other problems down the line.
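For the curious, here’s a minimal sketch of the kind of simulation described above (not the exact script I used; it assumes the 40/20/20/20 bias at the N positions, an unbiased 50/50 G/T mix at the K position, and scores coverage at the codon level across all 32 NNK codons):

import random

# Biased nucleotide frequencies for the "N" positions (rough estimate from
# the most biased machine-mixed NNK primers I've seen from IDT)
N_BASES, N_WEIGHTS = "GCAT", [0.4, 0.2, 0.2, 0.2]
K_BASES, K_WEIGHTS = "GT", [0.5, 0.5]  # assuming an unbiased K position

ALL_NNK_CODONS = {a + b + k for a in "ACGT" for b in "ACGT" for k in "GT"}  # 32 codons

def mean_coverage(n_colonies, template_fraction, n_boot=20):
    """Mean fraction of the 32 NNK codons seen at least once, over n_boot runs."""
    total = 0.0
    for _ in range(n_boot):
        seen = set()
        for _ in range(n_colonies):
            if random.random() < template_fraction:
                continue  # this colony is WT template carryover, not a variant
            seen.add(random.choices(N_BASES, N_WEIGHTS)[0]
                     + random.choices(N_BASES, N_WEIGHTS)[0]
                     + random.choices(K_BASES, K_WEIGHTS)[0])
        total += len(seen) / len(ALL_NNK_CODONS)
    return total / n_boot

for colonies in (10, 20, 100, 200, 400, 1000):
    for contam in (0.0, 0.1, 0.2, 0.5, 0.75):
        print(colonies, contam, round(mean_coverage(colonies, contam), 2))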
Vacuum Concentration
I hate the high cost of research lab materials / equipment, especially when the underlying principles are pretty simple and mundane. For example, I’ve used blue LEDs and light-filtering sunglasses to visualize DNA with SYBR Safe. And I’ve used a mirrorless digital camera paired with a Python script to visualize Western blots.
Well, this time around I was thinking about vacuum concentration. Many of the lab spaces I’ve been around have had speed-vacs accessible, though I’ve never really used them, since I don’t often need to lyophilize or concentrate aqueous materials. But the other day, we had some DNA that was 1.5- to 2-fold less concentrated than we needed for submission to a company, and I was reluctant to ethanol precipitate or column-concentrate the sample at the risk of losing some of the total yield. Thus, I became curious about taking advantage of vacuum concentration.
The lab already has built-in vacuum lines, so I just needed a vessel to serve as a vacuum chamber. I bought this 2-quart chamber from Amazon for $40, and started measuring what rates of evaporation I’d see if I left 200 uL of ddH2O in an open 1.5 mL tube out on the bench, versus in the vacuum chamber.
Vacuums are measured in “inches of mercury”, from 0″ Hg, which is atmospheric pressure, to 29.92″ Hg, which is a perfect vacuum (no air left). As you can see, the built-in vacuum lines at work top out at ~21″ Hg; somewhat devoid of air, yes, but far from a perfect vacuum. I even did a test where I put a beeping lab timer into it, and while the vacuum chamber did make it a lot quieter, it was far from completely silent, unlike the vacuum chamber exhibit at the Great Lakes Science Center (here’s the Peeps version). But what does it do for vacuum concentrating liquid? Here’s a graph of the results, when performed at room temperature.
So the same sample in the vacuum is clearly evaporating much faster. I made a linear model of the relationship between time and amount of sample lost (the lines in the above plot): the water evaporates at about 1% (or 2 uL) per hour under atmospheric conditions (on the bench), versus about 2% (or 4 uL) per hour in the vacuum chamber. Thus, leaving the liquid in the vacuum chamber for 24 hours resulted in half the volume, or presumably, a 2-fold concentration of the original sample.
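As a quick sanity check of that math (a trivial sketch; it assumes the volume loss stays constant at the fitted uL-per-hour rate):

def hours_to_concentrate(start_uL, fold, rate_uL_per_hr):
    """Hours of evaporation needed for a given fold-concentration,
    assuming a constant volume loss per hour (the linear model above)."""
    return (start_uL - start_uL / fold) / rate_uL_per_hr

# 200 uL sample, 2-fold concentration, at the ~4 uL/hr vacuum-chamber rate:
print(hours_to_concentrate(200, 2, 4))  # 25.0 hours, i.e. roughly a day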
Clearly, this is not a speed-vac. If I understand correctly, speed-vacs also increase the temperature to speed up the evaporation process. I could presumably recreate that by putting a heating block under the vacuum chamber, but I haven’t gotten around to trying that yet. There’s also no centrifuge. While I could probably modify and fit one of my Lego mini-centrifuges inside, the speed of evaporation at room temp has been slow enough that everything has stayed at the bottom of the tube anyway, so it’s not really a worry so far. At some point, I’ll also perform a number of comparisons at 4°C (since the vacuum chamber is so small, I can just put it in my double-deli lab fridge), which may make more sense for slowly concentrating more sensitive samples.
Overall, for a $40 strategy to achieve faster evaporation, this doesn’t seem too bad. In the future, if we need to concentrate a DNA sample 2-fold or so, maybe it’s worth just leaving it in the vacuum chamber overnight. Furthermore, the control sample is kind of interesting to consider, since it defines how fast samples left uncapped on the bench evaporate (I suppose I’ll try this with capped samples at some point as well, which will presumably evaporate a little more slowly). Same thing with samples kept in the fridge, which are also evaporating at a slow but definable rate. After all, “everything is quantifiable“.
1/25/2023 Update: In explaining this as a potential option, I used the word “slow-vac”, which is a good name for this. Time to trademark it! Though other people were onto this name a while back, so maybe they did (obviously they didn’t).
HEK 293T Bxb1 Landing Pad recombination protocol
Due to popular request, I’m going to put my *most current* version of the HEK 293T Landing Pad recombination protocol here for others’ benefit. Much of the credit goes to Sarah Roelle, who wrote up this current version of the protocol.
Recombination:
[This protocol is for a 24-well plate:]
Day 1:
1) Make 2 transfection mixtures per sample:
Tube 1: 23 μL Opti-MEM + 1 μL Fugene6
Tube 2 (If using cells that don’t already express the Bxb1 recombinase enzyme): the volume of DNA corresponding to 16 ng of Bxb1 plasmid + the volume corresponding to 224 ng of attB plasmid, with Opti-MEM to a final tube volume of 24 μL (see the volume-calculation sketch after Day 2).
-OR-
Tube 2 (If using cells that already express the Bxb1 recombinase enzyme): the volume of DNA corresponding to 240 ng of attB plasmid, with Opti-MEM to a final tube volume of 24 μL.
2) Mix Tube 2 into Tube 1 for each sample. Pipette up and down a couple of times to mix, then let the mixtures sit for 15-30 min while you get the cells ready (unless you trypsinized and counted the cells first, to know how many you had in case they were limiting).
3) [Meanwhile] Trypsinize and count the cells. Add 120,000 cells in a final volume of 300 μL media to each well.
4) Once at least 15 minutes have passed since mixing, add the mixtures dropwise throughout the well of cells being transfected.
Day 2:
Add at least 500 μL media to each well.
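Since the Tube 2 volumes depend on the concentrations of your plasmid preps, here’s a minimal sketch of that arithmetic (a hypothetical helper, not part of the official protocol):

def tube2_volumes(attb_ng_per_uL, bxb1_ng_per_uL=None):
    """Volumes (uL) for Tube 2 at the 24-well scale: 16 ng Bxb1 + 224 ng attB
    (or 240 ng attB alone if the cells already express Bxb1), topped up
    with Opti-MEM to 24 uL total."""
    if bxb1_ng_per_uL is None:
        bxb1_uL, attb_uL = 0.0, 240 / attb_ng_per_uL
    else:
        bxb1_uL, attb_uL = 16 / bxb1_ng_per_uL, 224 / attb_ng_per_uL
    return {"Bxb1": round(bxb1_uL, 2), "attB": round(attb_uL, 2),
            "Opti-MEM": round(24 - bxb1_uL - attb_uL, 2)}

# e.g. attB plasmid at 100 ng/uL, Bxb1 plasmid at 50 ng/uL:
print(tube2_volumes(100, 50))  # {'Bxb1': 0.32, 'attB': 2.24, 'Opti-MEM': 21.44}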
Notes about scaling up / down:
We typically pilot new plasmids at the 24-well scale, and transfect larger populations of cells in 6-well plates, increasing the number of plates / wells as needed.
Negative selection (if applicable):
Negative selection with iCasp9 is wonderful, and hopefully you are using one of those landing pad versions. I typically use a 10 nM final concentration of AP1903, although I’m fairly certain 1 nM is just as effective. Death occurs within a couple of hours, so you can come back to your plate / flask later in the day and change the media to get rid of the dying cells. I typically wait until at least 72 hours after recombination to perform this step.
Important note: If you’re doing a library-based experiment, make sure you leave some cells aside (even 100k to 500k cells will be plenty) that DO NOT go through any selection steps, since this will allow you to estimate the number of recombined cells (and thus the coverage of your library; see below for the calculation). If you’re just doing individual samples, this isn’t nearly as big of a deal, since you’ll be able to visually inspect the number of recombinants (i.e. do you see only a handful of surviving cells, or 100+ individual cells surviving?).
Positive selection (if applicable):
I tend to do the positive selection step AFTER negative selection, since the negative selection step has usually thinned the cells down enough that the positive selection will be most effective (I find that over-confluent wells tend not to do super-great with positive selection). Thus, while it can be done as soon as 72 hours post recombination, I tend to do this a week or more after recombination. You can find effective concentrations for positive selection of landing pad HEK 293T cells here.
Some additional notes:
- Due to a phenomenon of promoter-less expression (that is, expression despite no annotated promoter) from the thousands of un-recombined plasmids that remain in the cell following transfection, I typically wait ~5 to 7 days before running any cells on the flow cytometer, to let that background signal die down.
- Other transfection methods may work, but will need to be optimized. For example, I have observed transfection with Lipofectamine 3000 to result in prolonged promoter-less expression, probably due to increased transfection and greater toxicity preventing cell division and thus plasmid dilution.
- Calculating the number of recombinants: As alluded to above, it’s definitely worth estimating the number of recombinants in any library-based experiment. To do this, take your observed recombination rate from running flow on your unselected cells (say, 5% of cells were mCherry+ / recombined when you ran flow on unselected cells 7 days after transfection), and multiply that fraction by the number of cells you transfected. So if you transfected 20 million HEK 293T cells and observed 5% of cells recombined in your unselected well, the rough calculation is 2e7 cells * 0.05, amounting to roughly 1 million cells estimated to have been recombined. Of course, directly sequencing what’s in the recombined library is a more direct measurement of library coverage at the recombined-cell step, but doing this calculation is still usually worthwhile as another line of evidence.
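The same calculation in code form (a trivial sketch; the library size in the example is hypothetical):

def estimate_recombinants(cells_transfected, recombined_fraction, library_size):
    """Estimated recombined cells, and resulting fold-coverage of the library."""
    recombinants = cells_transfected * recombined_fraction
    return recombinants, recombinants / library_size

# 20 million cells transfected, 5% recombined, for a 10,000-variant library:
cells, coverage = estimate_recombinants(2e7, 0.05, 1e4)
print(cells, coverage)  # 1000000.0 recombinants, 100.0x coverage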
Conda virtual environments
If you’re going to do things with the bash command line, you’ll inevitably have to install a number of packages / dependencies. A package manager can help with this, along with virtual environments, which let you keep the versions of packages you need for different purposes relatively organized. For package managers, I have tended to like Anaconda the most. So install that. I’m not going to go through the steps here, since I already have it installed and would really have to go out of my way to describe that, but it should be pretty straightforward.
Well, I want to create and run a script using Biopython, but I don’t already have it installed on this computer, so this seems like a nice time to make a virtual environment for it. The steps were as follows:
- Create a virtual environment for biopython:
$ conda create -n biopython
- Activate the virtual environment:
$ conda activate biopython
- Next, install biopython:
$ conda install -c conda-forge biopython
That should be it. Whenever you want to deactivate the virtual environment, you can type:
$ conda deactivate
5/17/22 edit: I’ve been doing some image analysis in Python, which required installing some of the following packages:
$ conda config --env --add channels conda-forge
$ conda install numpy
$ conda install matplotlib
$ conda install scipy
$ conda install opencv
$ conda install -c anaconda scikit-image
$ conda install -c conda-forge gdal
Analyzing Illumina Fastq data
We recently got some Illumina sequencing back from GeneWiz, and I realized that this is a good opportunity to show people in the lab how to do some really basic operations on this type of sequencing data, so I’ll write those instructions here. Since essentially everybody in the lab uses a Mac as their primary computer, these instructions will be directly related to performing these steps on a Mac, though the same basic steps can likely be applied to PCs. Also, since these files are small, everything will be done locally; once the files get big enough and the analyses more complicated, we’ll start doing things on a computer cluster. Now to get to the actual info:
1. First, find the data we’ll be using for practice today. If you’re in the lab, you can go to the lab GoogleDrive into the Data/Illumina/Amplicon_EZ/30-507925014/00_fastq directory to find the files.
We won’t need to analyze everything there for this tutorial; instead, let’s focus on the “KAM-IDT-Std_R1_001.fastq.gz” and “KAM-IDT-Std_R2_001.fastq.gz” files.
2. Copy the files to a directory on your local computer. You can do the old “drag and drop” using the GUI, or you can do it in the command line like so, once you adjust the paths for your own computer:
$ cp /Volumes/GoogleDrive/My\ Drive/MatreyekLab_GoogleDrive/Data/Illumina/Amplicon_EZ/30-507925014/00_fastq/KAM-IDT-Std_R1_001.fastq.gz /Users/kmatreyek/Desktop/Illumina_data
$ cp /Volumes/GoogleDrive/My\ Drive/MatreyekLab_GoogleDrive/Data/Illumina/Amplicon_EZ/30-507925014/00_fastq/KAM-IDT-Std_R2_001.fastq.gz /Users/kmatreyek/Desktop/Illumina_data
3. Un-gzip the files. You can do this in the GUI by double-clicking the files, or you can do it in the terminal (if you’re now in the right directory) like so.
$ gzip -dk KAM-IDT-Std_R1_001.fastq.gz
$ gzip -dk KAM-IDT-Std_R2_001.fastq.gz
Optional: Take a look at your fastq files. You won’t want to open the files in their entirety, so what makes more sense is just looking at the first 4 or so lines of the file, corresponding to the first read. To do this, type:
$ head -4 KAM-IDT-Std_R1_001.fastq
And you should get an output that looks like so:
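(In case the screenshot doesn’t load: a fastq record is always four lines: a read ID starting with @, the sequence, a + separator line, and the quality string. A made-up example:)

@M00001:12:000000000-ABCDE:1:1101:15589:1339 1:N:0:1
GATCTGCACGGATCGCATGC
+
FFFFFFFFFFFFFFFFFFFF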
4. They won’t always be paired reads, but this time they are, so we’ll pair them. If you don’t already have a method for doing this, download PEAR and install it like I described here. Once you have it installed, you can type in a command like so:
$ pear -f KAM-IDT-Std_R1_001.fastq.gz -r KAM-IDT-Std_R2_001.fastq.gz -o IDT_HM
It took my desktop a couple minutes for this process to complete. You’ll get an output that looks like this.
Your directory should now have all of these files:
You can look at the first read again (now that it’s been paired) using the following line; it should look like so:
$ head -4 IDT_HM.assembled.fastq
As you can tell, the quality scores for the first read went from mostly F’s (Q-score of 37) to almost all I’s (Q-score of 40).
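(Side note: those quality characters are Phred+33-encoded ASCII, so you can verify the Q-scores yourself in Python:)

# Phred+33: Q-score = ASCII code of the quality character minus 33
for char in "FI":
    print(char, "->", ord(char) - 33)  # F -> 37, I -> 40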
5. Now that we’ve prepped the Illumina data, it’s time for the downstream analysis. This will be far more project- or experiment-specific, so these next steps won’t apply to every situation. But in this case, we made a library of Kozak variants to try to get a range of expression levels of the protein of interest. Furthermore, the template DNA used for the PCR lacked a Kozak sequence and a start codon, and these will inevitably be in the sequencing data too. So the goal of this next step is to identify which reads are template and which have the Kozak sequence, and if a read does have a Kozak and ATG introduced, to extract the Kozak sequence from it.
I went ahead and wrote a short python script that achieves this. So, grab that file (i.e. right click, and hit “Save link as…”), stick it in the same directory as the data you want to analyze, and run it with the following command.
$ python3 Extract_Kozak.py IDT_STD.assembled.fastq
The script should then create a file called “IDT_STD.assembled.tsv” that should look like this:
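In case that link ever breaks, the overall logic of the script is roughly as follows. This is a minimal sketch, not the actual file; the flanking constant sequences are placeholders you’d replace with the real sequences from your amplicon:

import sys

# Hypothetical flanking sequences: the constant region just upstream of the
# degenerate Kozak, and the start codon plus downstream bases. Replace these
# placeholders with the actual sequences from your amplicon.
UPSTREAM = "GCCGCC"    # placeholder
DOWNSTREAM = "ATGGGC"  # placeholder

fastq_path = sys.argv[1]
out_path = fastq_path.replace(".fastq", ".tsv")

with open(fastq_path) as fq, open(out_path, "w") as out:
    for i, line in enumerate(fq):
        if i % 4 != 1:  # the sequence is line 2 of each 4-line fastq record
            continue
        read = line.strip()
        up = read.find(UPSTREAM)
        down = read.find(DOWNSTREAM)
        if up == -1 or down == -1 or down <= up:
            out.write("template_or_unmatched\t\n")  # no Kozak / ATG found
        else:
            kozak = read[up + len(UPSTREAM):down]  # the extracted Kozak sequence
            out.write("kozak\t" + kozak + "\n")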
This can now be easily analyzed with whatever your favorite data analysis language is, whether it’s R or Python. Huzzah!
Dark culture media with RUBY
I like playing around with various recombinant DNA tools to see how they work and figure out if they’ll do something useful for me. Sometimes it works out amazingly, like iCasp9, which is a fantastic negative selection transgene. Other times, it’s not so clearly a success…
I recently ordered RUBY from Addgene (originally used by the depositing lab to darken plant roots) to see if I could turn my cultured cells visibly darker. I had actually messed around a little with this concept previously using tyrosinase, where it worked in making the cells darker, albeit one could only see the darker color either as a centrifuged cell pellet, or perhaps as a large overgrown “colony” on an originally sparse culture plate. Well, I shuttled RUBY into my recombination vector and selected for cells expressing it. I don’t think I even saw dark cells this time (though I didn’t look very closely, since this is just a fun side-experiment), but I did notice that these cells had much darker media than their recombined siblings, like the cells expressing fluorescent proteins in the two wells to the right of it. So I’m not exactly sure what’s happening, but the chromogenic small molecule is definitely making it out into the culture media.
I guess I’ll just file this observation away for now and see if it ever comes in useful at some point in the future! Hahaha.
Update: FYI, RUBY is *BIG*. Like ~4 kb kind of big, since it encodes multiple enzymes within a chemical pathway. So it’s definitely not a small chromoprotein kind of thing.
Dealing with paired fastq reads
10/25/2022 update:
Ran into some paired sequences that didn’t work well with fastq-join, so I went back to PEAR. Found an easy way to install it via bioconda, using the following command. So simple.
conda install -c bioconda pear
July 19, 2022 update:
OK, I was trying to reinstall PEAR on my new-ish laptop and running into errors. After struggling a bit, I decided to switch over to using fastq-join, which is way easier to install and run, as you can see below:
Based on the information on this website, and assuming you have Anaconda installed, run:
conda install -c bioconda fastq-join
After that, actually pair your files by running a command like:
fastq-join ACE2-Kozak-mini-library-pooled_R1_001.fastq ACE2-Kozak-mini-library-pooled_R2_001.fastq -o ACE2-Kozak-mini-library-pooled.fastq
Note 1: You can run fastq-join on gzipped files, like so:
fastq-join No-infection-maybe_R1_001.fastq.gz No-infection-maybe_R2_001.fastq.gz -o No-infection-maybe.fastq.gz
Note 2: If working with gzipped fastq data, it’s a little trickier to look at your compressed files. To look at the first read, for example, you need to use a command like this:
gunzip -c SARS2-MOI-0pt1-1_R1_001.fastq.gz | head -4
Here’s the original (now deprecated) text in this post:
Got some paired Illumina sequencing back today, and wanted to pair the R1 and R2 fastq files. I vaguely remembered using PEAR to do this before, so gave it a shot. Since the files are relatively small, I figured I’d just run this locally, which meant installing it (and its various dependencies) on my MacBook. Here are the steps…
1. Download “autoconf-2.65.tar.gz” from this website. Uncompress it, and then go into the now uncompressed directory, and type in “./configure && make && make install”
2. Download “automake-1.14.1.tar.gz” from this website. Uncompress it, and then go into the now uncompressed directory, and type in “./configure && make && make install”
3. Get PEAR academic from this website by entering in your info, receiving an email, and downloading “pear-src-0.9.11.tar.gz”. Uncompress it, and then go into the now uncompressed directory, and type in “./configure && make && make install”
Huzzah. It should now be installed.
Now you can go to whatever directory has your R1 and R2 files, and type in something like so:
“pear -f KAM-IDT-HM_R1_001.fastq -r KAM-IDT-HM_R2_001.fastq -o IDT_HM”
You should now have a file called “IDT_HM.assembled.fastq” that has all of the joined paired reads.
5/18/21 Update: I was working on something else that required trimming reads, and decided to use fastp to do this. The instructions to install fastp are as follows (essentially following the instructions here):
1. Download the zip file (under the “Code” button) from https://github.com/OpenGene/fastp, and unzip it.
2. Use terminal to go to the directory that was made, and run:
$ make
3. Run the following command:
$ sudo make install
… and then type in your computer password when prompted. Fastp should now be installed.
Anh wins a scholarship X2
Anh was selected as a CWRU SOURCE Provost Summer Undergraduate Research Grant (PSURG) 2021 Summer Research Scholar, which will support his time in the lab performing research over the summer of 2021. Congrats, Anh!
Landing pad plasmid maps
I still intend to post most if not all landing pad plasmids to Addgene, but it’s taken me forever to get around to it. Actually, the institution takes just as long, since apparently they have to get permissions to post any plasmid with parts I may have amplified from another source (e.g. the puromycin resistance gene). Once those are up, the plasmid sequences will obviously be publicly available. In the meantime, I figured I’d post some of the most common plasmid maps here, so other people can benefit from them (and I don’t have to send specific emails to each person who asks for them). So here are some of the most popular, published plasmids, in GenBank (.gb) format.
G542A_pLenti-Tet-coBxb1-2A-BFP_IRES_iCasp9-2A-Blast_rtTA3 (Addgene #171588)
G698C_AttB_ACE2_IRES_mCherry-H2A-P2A-PuroR (Addgene #171594)
G758A_AttB_ACE2(del)-IRES-mCherry-H2A-P2A-PuroR (Addgene #171596)
p0489_AttB_EGFP-Link-PTEN-IRES-mCherry_Bgl2.gb
p0669_AttB_sGFP-PTEN-IRES-mCherry-P2A-bGFP_Bgl2
G273A_AttB_sGFP-PTEN-IRES-mCherry-P2A-HygroR
G274A_attB-sGFP-PTEN_IRES-mCherry-P2A-PuroR
attB-mCherry (Addgene #171598)
G382C_attB-eGFP-rIRES-mCherry-2A-PuroR
G163A/B_AttB_PuroR-P2A-mCherry_Bgl2
G310A_pLenti-TetBxb1BFP-2A-coBxb1-2A-Blast_rtTA3
pLenti-TetBxb1BFP_coBxb1-rtTA3_Blast
G57A_attB_TP53-link-EGFP-IRES-mCherry