Rapidly screen synthetic antibody libraries to discover the next blockbuster therapeutic.

When sifting through a heap of antibody candidates from a library, affinity and epitope binning data are critical for prioritizing potential leads. Twist Biopharma, a division of Twist Bioscience, uses its proprietary DNA writing technology to create large, diverse oligo pools (upwards of one million oligos) to develop a range of antibody phage display libraries. These libraries are either broadly applicable to any target or focused on a specific class of tough targets, e.g. GPCRs. Screening Twist’s libraries often yields a large panel of antibodies to each target, which can be further sorted based on affinity and epitope.

Throughput, speed, resolution, and sample consumption are typically the key limiting factors for detailed kinetic characterization early in antibody discovery campaigns. Here, we show that high throughput surface plasmon resonance (SPR) can be used to rapidly generate high quality kinetic data from 384 antibodies in parallel with minimal sample consumption. Additionally, epitope binning assays can be performed on up to 384 antibodies per array, providing unprecedented throughput that allows for early assessment of a library’s epitope coverage with exquisite epitope discrimination, facilitating the identification of clones targeting unique epitopes. The ability to characterize binding kinetics, affinity, and epitope specificity on large antibody panels with minimal sample consumption at the early research stage is highly advantageous in drug discovery because it helps to accelerate library-to-lead triage.

In this webinar, Twist Biopharma shows how the Carterra LSA fits into their workflow to get an early read on affinity and epitope binning. Using these data, they show how they winnow a large panel of candidates down to a small set of top clones to pursue further.

How the Carterra LSA fits into their workflow to get an early read on affinity and epitope binning

0:00:00.8 Elizabeth Lamb: Good day, everyone. On behalf of Cambridge Healthtech Institute's global web symposia series and our sponsor, Twist Biopharma, a division of Twist Bioscience, I'd like to welcome you to High Throughput Antibody Screening Using Next Generation Synthetic Antibody Libraries Coupled with Kinetic and Binning Assays. My name is Elizabeth Lamb, and I'm the host and moderator for today's event. Now, I'd like to introduce our presenters for today. First is Dr. Aaron K. Sato, PhD, Chief Scientific Officer, Biopharma with Twist Biopharma, a division of Twist Bioscience. Our second speaker is Dr. Daniel H. Bedinger, PhD, Application Science Team Lead with Carterra. Welcome, Aaron, the presenter ball is yours.

0:00:57.4 Dr. Aaron K. Sato: Great, thanks so much, Elizabeth, for the introduction. So again, my name is Aaron Sato, I'm the CSO of the Biopharma vertical at Twist Bioscience. And today I'm gonna give you an overview of the Twist Biopharma libraries and how we use kinetic and binning assays as part of our overall workflow using the Carterra LSA system, and then I'll pass the ball to Dan, who will give you a much better overview of the system and how it generally can be used throughout the Biopharma industry. So again, I always say that the best companies out there really understand the one thing that they're really good at, and for Twist, that's actually our ability to print DNA. So shown on the right is this silicon chip platform that we have where we can actually print up to a million individual oligos up to 300 base pairs in length on this device. Basically, it's a fantastic way to make individual oligo pools, so we can make pools of oligos anywhere from 10 oligos all the way up to a million, and we can use those oligos to make all kinds of different custom DNA products.

0:01:57.7 DS: So our top custom product that we make is, of course, clonal genes, where you can order any clonal gene up to 5 kb in length and we can clone it into any vector you wish, but there are all kinds of other products that we make as well. We use oligo pools themselves for, say, gene-editing, CRISPR-Cas9 applications. We also use oligo pools for our NGS enrichment kit line to enrich for specific sequences before doing NGS. And then finally, we use the oligo pools to build high-quality DNA libraries, and again, those custom DNA libraries are actually really central to the Twist Biopharma mission, which I'll get into in a second.

How do we use oligo pools to generate and build DNA libraries?

0:02:38.5 DS: So how do we use oligo pools to generate and build DNA libraries, and in particular, antibody libraries? So if you think of an antibody variable domain, either a heavy chain or a light chain domain, it's essentially a collection of three different loops that are pieced together. You could think of basically synthesizing an oligo pool that encodes for diversity in each of those different loops, and then seamlessly PCRing them together to create a hypervariable domain. I know this is subtle, but the reason why this is, in my mind, kind of game-changing in the antibody engineering and discovery space is that in the past, when people made synthetic libraries, they typically used what's called a degenerate oligo, which is an oligo that has mixtures of nucleotides or mixtures of trinucleotides, and you basically try to use that oligo to mimic the diversity that's seen in the natural human repertoire as you build an antibody phage display library.

0:03:31.8 DS: In this case, I don't need to do that. I can actually make the explicit sequences in a pool and then just shuffle them in different contexts, within the context of a single human germline framework. So this allows me to make libraries that are much more precise and also that exactly match the natural human antibody repertoire, if I choose to. In addition, I can also remove sequences up front that might lead to developability risk, so isomerization sites, cleavage sites, deamidation, glycosylation sites, for example. And then finally, if I choose to, I can also code for specific motifs in our CDR sequences as well, and that's actually really important for our GPCR libraries, which I'm not gonna talk about today, but if you're interested, please, again, follow up with me.

0:04:18.9 DS: So again, just to reiterate the power of the Twist platform to make libraries, we use this silicon-based DNA synthesis platform to make a huge pool of oligos, and we can use those pools of oligos to make really precise antibody DNA libraries, and in particular, antibody phage display libraries, where we have really strict codon usage control and we can control the combinatorial diversity and the combinations of those different oligo pools in the construction of a library. We have tight control of the amino acid distribution in each position, and because we're just making pools of individual oligos, it's very easy to choose to modify and have different lengths of CDRs, which, using the traditional approach of degenerate oligos, is actually really difficult to do and requires multiple degenerate oligos. Of course, we can avoid restriction sites and unwanted motifs, we can use multiple germline frameworks, and then, last but not least, we can also validate the library at the end using next generation sequencing to make sure that the final library matches the design that we intended.

What is the Twist Biopharma vertical within Twist Bioscience?

0:05:24.7 DS: So what is the Twist Biopharma vertical within Twist Bioscience? We're basically an antibody discovery and optimization group that utilizes all the fantastic DNA products within Twist to help pharma and biotech discover as well as optimize antibodies. And we basically have two flavors of antibody libraries. The first are general use libraries based on Fab, single-chain Fv (scFv), or VHH scaffolds. I'm gonna talk today about the VHH libraries and how we use them to discover novel antibodies, but we also have a whole series of libraries that can be used for difficult-to-drug targets, which I'm not gonna talk about today. Our primary difficult-to-drug class of targets is GPCRs, for which we actually have two libraries. And again, we're continuing to innovate in this space to create libraries against ion channels and other things like carbohydrates as well.

0:06:18.5 DS: Thirdly, we have a whole platform for doing antibody optimization, which I will also talk about today as it relates to affinity ranking and binning. So, again, a fantastic system that we have in-house for doing affinity maturation and humanization of antibodies. And then finally, I'll also mention our new alpha product, which is our ability to do high throughput antibody production. It's part of every project we do within the Twist Biopharma vertical, but it's also a product that we're thinking about rolling out to the greater Twist customer base, and so please stay tuned for that update.

0:06:51.9 DS: So, again, thinking of ways that we are differentiated from other antibody discovery companies out there, one way that I think we're very special is that oftentimes in the phage display arena you're limited by the overall diversity of your library. For a phage library, that's typically around 10 billion. So if you wanna increase the breadth of your diversity, rather than just building a bigger and bigger library, one way is actually to build more libraries. So, my solution to this is, because Twist can build high quality libraries so quickly, why not have, at one point in the near future, a whole plate of libraries, basically 100 libraries of 10 to the 10th each, so that overall we would have a total diversity of roughly 10 to the 12th.
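As a quick back-of-the-envelope check of that combined diversity figure (assuming, as stated, roughly 100 libraries at about $10^{10}$ transformants each):

$$\underbrace{10^{2}}_{\text{libraries}} \times \underbrace{10^{10}}_{\text{clones per library}} \approx 10^{12}\ \text{clones sampled in total.}$$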

0:07:35.8 DS: So that gives us a huge breadth of different scaffolds, different diversities, and CDR loops that allows us, at the end of the day, to be successful with any target that comes our way. And so, I've coined the term a "library of libraries" for what Twist Biopharma is offering. Okay, so getting into that library of libraries, as I said, we have general use libraries that can be used for any target with different types of scaffolds. So, on the top right, we have the library that we call the Hyperimmune Library, which is a fully human Fab library. It has a very large oligo pool in the heavy chain CDR3 that encodes for over 2.5 million individual heavy chain CDR3 sequences derived from the natural human repertoire. This is a Fab library, and we've also made a common light chain version of that same library to enable you to make common light chain bispecifics.

0:08:27.2 DS: On the top left are our VHH libraries; I'm gonna talk a lot more about these later on. I'll be focusing on four different VHH single-domain libraries. On the bottom right, we have a library that we call the structural antibody library. It's a library based off of all of the known antibody crystal structures that are in the PDB. And we made the assumption that if an antibody has a crystal structure, it's usually very well-behaved and potentially very developable, so we took that as an input data set to create a fully human antibody scFv library focused on that diversity set.

0:09:04.1 DS: And then, as I said, another big area for us is our focus on difficult-to-drug targets. And again, we got started with GPCRs, where we've created two different libraries: our motif-directed library that we call GPCR 2.0, and also another library that we call GPCR 3.0, which is based off of all the known GPCR antibodies that have ever been discovered. And again, those are two fantastic libraries for directing against any difficult GPCR target that you might have. And then finally, as I mentioned, we are continuing to innovate in the space and are now working on, and have, libraries directed against ion channels as well as carbohydrates.

Introduce the idea of our high throughput IgG service

0:09:44.9 DS: I'll just, again, introduce the idea of our high throughput IgG service, our new alpha product. As I said, it's a part of every project that we do, and it's another product that I'm trying to push out within Twist. The idea is to not only offer the genes that encode specific antibodies, which, again, a lot of people use us to synthesize for them, but to also offer the ability to actually make large numbers, albeit small amounts, of antibody as well, just like we do on the DNA side for genes. And so I've worked with the team inside Twist to build a whole workflow around making clonal genes, doing minipreps, doing transient transfections in HEK293, and doing downstream purification to purify large numbers of IgGs, and we can do that at a 1 mL as well as an 8 mL scale to deliver either hundreds of micrograms or maybe even upwards of a milligram of antibody to you. So again, as I said, this is part of all the projects that we do and we wanna roll it out as a new product down the road. And we can do that for both full-length antibodies as well as VHH-Fcs, so basically any structure that has an Fc domain and would use protein A for purification.

0:11:00.0 DS: Okay, so now I'll transition into the use of our libraries and how we use affinity ranking and binning as part of our process. So again, as I said, VHH libraries are an important part of our library of libraries. I really love these domains 'cause they're small and modular, they can get into epitopes and crevices that are oftentimes inaccessible to larger IgGs, and of course, as everybody knows, they're great building blocks for bispecifics as well. And also, because they're smaller, they're really easy to make and manufacture, and so in my mind, they are a great alternative to a traditional IgG structure.

0:11:36.8 DS: So what kind of libraries do we have at Twist Biopharma to actually enable you to discover a VHH against your targets? We actually have four different libraries; the first three are shown here. We found a very large llama database that actually had over 3000 VHHs that had bound to a specific target, and so we took those input 3000 VHHs and used them to design the four libraries I'll talk about now. The first one is what we call the VHH Ratio Library. Basically, we create oligo pools that encode for the diversity seen in each of the different CDR loops from the database, so again, we created oligo pools to kind of mimic what a degenerate oligo would do, but in a very controlled way, utilizing the diversity shown in the first schematic for this library. We then put them into a consensus llama framework and finally created the final VHH Ratio Library that's shown here. And we also added in a fair amount of length diversity in CDR3 as well. For the next two libraries, we didn't do that. We basically just used the exact CDR sequences derived from those 3000 llama VHHs and shuffled them in unique contexts, either in the context of a consensus llama framework, the VHH Shuffle Library, or a humanized llama framework, which is the VHHH Shuffle Library. And so again, it's really akin to, for example, doing chain shuffling back in the day, which was innovated by Cambridge Antibody Technology.

0:13:09.3 DS: But in this case, I'm doing CDR shuffling in the context of a single framework. So that allows me to get unique specificities and binding that I might not be able to get with the original antibodies they are derived from. All these libraries are transformed at a level of 10 billion diversity, as I said before. And then finally, we created a fourth library. I don't have any data about this library today, but I like to say it's also a fantastic VHH library which I've seen a lot of great results from. Basically, we take the last library, the VHHH Shuffle Library, and we replace the llama CDR3 diversity with the hyperimmune heavy chain CDR3 diversity I talked about before, which, again, is an oligo pool of over 2.5 million heavy chain CDR3s, and so we put that into the CDR3 register of the library. So it's kind of a hybrid structure of llama diversity in CDR1 with human diversity in CDR3, and again, in a humanized llama VHH framework. And shown here is just a schematic of the different length diversities. You can see in the Ratio Library a broad range of lengths; in the middle, you see the CDR3 diversity for the H Shuffle and also the Shuffle Library, and you can see, again, the natural distribution of lengths for the llama VHHs in our database.

0:14:29.3 DS: And you can see anywhere from very short all the way to very long CDR3s. Then again, on the far right is just the natural human diversity that's in the Hyperimmune Library, which was also inserted into the last library I just talked about. And so when we pan those libraries, we basically take each of the individual libraries, and oftentimes we'll even pool them all together, and then pan them against any protein target we wish. We typically do a phage ELISA on rounds four and five, and sequence all of our clones. And then, because we sit inside of a DNA company, we can very easily reformat and use the high throughput IgG, in this case VHH-Fc, process that I talked about before to make large numbers of individual, purified VHH-Fcs. And then we run them through our Carterra LSA platform, which I'll talk about in a second, to determine the affinities of those interactions, as well as look at epitope binning of those different binders and how they interact with each other, and we also run them through a whole panel of developability assays as well.

0:15:33.5 DS: So shown here is just an example where we did this with a target called TIGIT, which is an immuno-oncology target. As I said, we typically do four or five rounds of panning; shown here is where we actually did four. And you can see for the first three libraries I talked about, we see dramatic improvement and enrichment of specific phage clones as we go to successive rounds. So again, showing that we get good enrichment of clones and that the selection is working. We then picked colonies from rounds three, four, and five from all three libraries, did phage ELISAs against each of those different rounds, and then picked all of our unique clones and sequenced them...

0:16:11.9 DS: Shown here in the table on the top is the number of unique clones from each of the different libraries from all those successive rounds of screening. We've also shown at the bottom the CDR3 length distribution from all three libraries for all of the unique clones. And what you can see is that the CDR3 length diversity is actually very different between all the different libraries, again showing that we're getting different sequences out, different types of potential binders, and so, again, having a breadth of libraries allows us to potentially find a breadth of different epitopes against this specific target, which will bear out later when we talk about epitope binning. As I said, we love the Carterra in terms of its ability to do a one-to-many type of binding analysis. In this case, since we have hundreds of antibodies, of VHH-Fcs, that allows us to immobilize them on the surface. And again, we're gonna go into much more detail about how the system works after my talk. Basically, we can put down hundreds of different proteins on the surface in a very short period of time and get kinetic data on all of them, so again, it's a first pass that allows us to rank them in terms of their overall affinities. And for this entire project, as I said, we had over 100 different VHH-Fcs, and we saw a range of affinities, anywhere from double-digit nanomolar all the way to the micromolar range.

0:17:31.4 DS: And again, we also look for specificity. So here are, again, some more clones against another target that we also panned on separately, but this slide is just showing that the clones are specific, in the sense that we wanna make sure that our TIGIT-specific clones are actually specific for TIGIT and don't bind any other targets. So again, the Carterra system is fantastic for this: once you've laid down your specific clones on the array, you can very easily test specificity, not only against irrelevant proteins, but also against closely related family members, like other species' forms of the target, for example, and look at affinity data against those other forms of the target as well. And then, because it's an SPR instrument, we get nice ka and little kd data, and that allows us to plot the clones on this iso-affinity plot to see how they bin into different affinity groups and whether they're driven primarily by off-rate or on-rate in terms of that overall big KD.
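To unpack the SPR shorthand used here: the equilibrium dissociation constant ("big KD") is the ratio of the dissociation (off) rate to the association (on) rate, which is exactly what an iso-affinity plot visualizes on log-log axes:

$$K_D = \frac{k_d}{k_a}, \qquad \log_{10} k_d = \log_{10} k_a + \log_{10} K_D$$

so clones sharing the same KD fall on a common diagonal, and a clone can reach, say, KD = 10 nM either with fast-on/fast-off kinetics ($k_a = 10^{6}\,\mathrm{M^{-1}s^{-1}}$, $k_d = 10^{-2}\,\mathrm{s^{-1}}$) or with slow-on/slow-off kinetics ($k_a = 10^{4}\,\mathrm{M^{-1}s^{-1}}$, $k_d = 10^{-4}\,\mathrm{s^{-1}}$).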

0:18:34.0 DS: And then when we look further and sort the clones into different affinity bins, and then sort them based on the library that they're from, you can see that, in general, we're seeing high affinity binding from all three libraries. So in the graph on the right, we've actually color-coded the affinity bins by the three libraries I talked about, in terms of which library the clones are derived from. And so, in general, we see a nice distribution, again, from double-digit nanomolar to micromolar affinity, and there is a slight trend that the highest affinity clones came from the VHHH Shuffle Library. And if you look on the left, we've actually ranked all the clones from highest affinity to lowest affinity, from double-digit to 100 nanomolar. You can see, in general, a nice distribution of all three libraries, but as I said, the top clones did come from the VHHH Shuffle Library.

0:19:27.5 DS: And another analysis we love to do is a phylogenetic wheel of the sequences and how they're related to one another by their primary amino acid sequence, and then we love to layer the Carterra affinity data on top of that, as well as other expressability data. So, shown in green on the outside of the wheel, we've actually plotted one over the KD, so you can see at a glance that if a clone has a really high bar, it's a high affinity clone. And so, again, you can see how the highest affinity clones are clustered into different amino acid families within all of the sequences that we discovered.

We did a full epitope binning to understand

0:20:03.3 DS: And then finally, we did a full epitope binning to understand how the clones from all three libraries fell into different communities in terms of how they compete with one another. And as you can see, and again, they're color-coded based on library, certain libraries favored specific communities within this competition binning plot. And so you can really see, again, the power of having the library of libraries: having multiple libraries at our disposal allows us to access communities that may be excluded by specific frameworks and diversities.

0:20:38.2 DS: As I mentioned, we also followed up and did competition experiments, and this is just an ELISA-based assay. But the reason why I bring it up is because it actually relates back to the community data I just showed, in the sense that the highest affinity clones, which actually competed the best in this competition assay with a known ligand of TIGIT, which is CD155, mostly came from community one. So, again, we can see how the epitope actually relates to the functional read-out from the competition assay, really showing the power of doing high throughput and large scale epitope binning and how it relates back to a functional read-out, competition in this case.

0:21:32.0 DS: And in addition to doing all the fantastic assays for affinity and epitope binning, we also run our VHHs through a whole panel of different developability assays, for purity on our LabChip system, as well as looking at overall stability using our Unchained Labs Uncle system, so we can get a good sense of overall expression, developability, and stability. And again, we couple that with all the interaction data derived from the Carterra system. Now, I'll just end with our TAO platform. So, again, as I said, our TAO platform is our ability to optimize antibodies, and what we have there is a custom software package with a very large human NGS database as part of it. We use it by inputting an antibody sequence, and the software basically looks at that specific sequence and then suggests the oligos to make, defining a mutational space derived from that human antibody repertoire, which then allows us to build a library focused on your specific lead.

0:22:31.6 DS: And so this is, again, how it works. We put your antibody, which can be derived from any source, into the software, the software suggests a whole series of clones to make, we synthesize them and create an antibody phage display library, and then from that we can, again, pan and screen it against your original target, and oftentimes we see a dramatic increase in affinity. Shown here is just one example where we increased the affinity of a PD-1 antibody by sevenfold. And again, showing you data from the Carterra system, where for hundreds of antibody variants of the original parent antibody, which is shown on the right in green, we can see a dramatic improvement in affinity, and again, this is just PD-1. We also ran the marketed, approved antibodies in this space, Pembrolizumab and Durvalumab, and we see comparable affinities to those two on-market antibodies. We've also gone on to do a full epitope bin with these antibodies; as expected, since they're affinity-matured variants derived from the same parent, they should all bind in the same bin, and, in general, that's what we see. So, again, it's just a nice test, and it's very easy to do on the Carterra, really making sure that we didn't get epitope drift during the optimization.

0:23:45.0 DS: And I'll just conclude with ways to work with Twist Biopharma. Again, we're open to licensing the libraries, in particular the VHH libraries I talked about today, as well as doing partnerships around all of those different libraries, not only for discovery work, but also for optimization. We do generate a lot of leads from all the POCs that we do around the libraries, and we're definitely open to licensing those. We also do a lot of work with our library counterparts, the library team internally, where we help virtual companies design their own libraries and then help out with all of the downstream screening of those libraries, so that's oftentimes a project that we do. And then, as I said, we're trying to kick off a new alpha product for high throughput IgG, so if anybody's interested in accessing that new alpha program, please follow up with me. Okay, and I'll pass the ball over to Dan, who will give you a fantastic overview of the Carterra LSA system in much more detail than I did, and will, I think, open up the hood a little bit and show you how it works and how to really access all the fantastic capabilities of the system.

We at Carterra really view the LSA as a disruption

0:24:50.7 Dr. Daniel H. Bedinger: We at Carterra really view the LSA as a disruption in the capacity to do antibody characterization, especially at the early stages of screening. mAbs are obviously being leveraged in discovery for their high specificity and affinity, and the binding kinetics, the affinity, and the epitopes those antibodies recognize are really the crucial parameters that inform the mechanism of action. And SPR has for a long time really been the de facto technique that people rely on for measuring those binding kinetics and affinity. So the LSA really just takes all of that existing knowledge and technique and tries to expand the capacity by roughly an order of magnitude, which should enable customers to generate more data earlier in their funnel for more of their clones. And the 384-ligand capacity of the LSA, which is how many antibodies we can immobilize on the surface of the array at a time, really enables a new scale in high throughput epitope binning studies.

0:26:03.4 DB: That enables exquisite epitope resolution. And really, the architecture of the system makes it such that the sample consumption is very minimal, it's relatively easy to set up these large experiments in a plate-based format, and things are very simple. Also, we've put a huge amount of effort into making dedicated kinetic and epitope analysis software packages that really facilitate dealing with these large data sets in an information-rich and easy-to-use kind of way. So the LSA core applications are really designed around some of the fundamentals of antibody screening workflows, those being kinetics and affinity analysis, epitope binning, epitope mapping, and quantitation. And, as I mentioned, we really put a huge amount of effort into making these visually rewarding and interesting presentations of the data in the kinetics and epitope software packages, and I'll talk a bit more about that later on. So here's a picture of the LSA. It's a benchtop SPR instrument, though it's quite large; we prefer to sell it with the table that it sits on, which holds the waste, the computer, and supplies underneath it.

0:27:25.3 DB: The real differentiating factor of the LSA, though, is these two relatively independent fluidic modules that control how the samples get to the system. There's a 96-channel mode and a single-channel mode, and both offer bi-directional flow of the analyte, which reduces sample consumption. And using the 96-channel mode, you can immobilize up to 384 antibodies in an array. So we're gonna watch a little video here that shows how the system works. This is the manifold for the 96-channel fluidic side of the system. It shows the 96 flow cells coming down and being created on the chip surface and flowing the sample back and forth. The ability to do this is probably Carterra's biggest differentiator: we can flow 96 samples at a time in a bi-directional way across the chip surface. This allows you to overcome conventional limitations on flow rate and contact time. Also, it's really a new approach to creating arrays compared to traditional microarray printing, which is deposition-based, where you're putting material onto the surface in an additive fashion, whereas we are doing this under full flow.

0:28:51.3 DB: So it goes from running buffer to sample, then back to running buffer. So you can do things that you would do on conventional biosensors, like immobilization of low-concentration proteins using electrostatic pre-concentration, or capture of crude samples using an affinity tag. Also important, and I think I can run this one more time, is that the system will return the sample to the plate after it deposits it. So from that 200 microliter volume, you not only get efficient array capture, but you can get the sample back for further analysis or reuse. Once you've created a 96 array, the system can go get more samples, dock to a different position, and print additional 96 arrays, up to four of them, creating the high density 384-spot array seen in this image. As for the other fluidic module, the 96-channel manifold can move away, and then the single-channel manifold can come over, dock, and create a single flow cell. This exposes one sample over the entire surface of the array. This can be used for activation, or for creating a capture or affinity lawn if you're gonna capture crude sample.

0:30:11.5 DB: Then once you've created your array, you can flow one sample over the entire array. So this would be, say, a concentration of antigen if you're doing kinetics, or a competitor antibody if you're doing epitope binning, and you simultaneously collect data from that one 250 microliter sample in real time, for all of the antibodies you've immobilized. So this is a schematic of how the arrays are created: the 96-channel manifold is shown on the top, with the pink vertical rectangles being individual flow cells. The blue rectangles are the inter-spot references, and then when you do four nested 96-well immobilizations or capture steps, you end up with the 384 array as shown below, essentially in the same footprint as the 96. And then you can flow one sample over the entire thing and collect all 432 data streams, 384 active and 48 references, in a single injection. So, at Carterra, we like to think it's all about the epitope. One of the main points is that if you can characterize the epitope diversity of your panel early on, it can be a surrogate for functional diversity.

0:31:35.1 DB: If you were to run, let's say, a panning and screening output, and you see very low epitope diversity, it probably means that you have low functional diversity as well in terms of mechanisms of action and broad coverage of your antigen. So it's a good check for that. Also, the epitope influences the antibody's mechanism of action: whether the antibody is an agonist, an antagonist, or forces internalization, those are all largely dependent on the epitope the antibodies recognize. Also, this epitope property is innate, it can't really be predicted by in silico methods ahead of time, nor can you really rationally re-target it by engineering, so you really have to select antibodies to the epitopes you're interested in upfront. Epitope binning can also be used to secure IP. Traditionally, a lot of claims around antibodies are based on their competition profiles with other competitors, and if you have a high resolution epitope competition map, it can be easy to find things that differentiate a clone or demonstrate novelty against prior art.

0:32:57.8 DB: And also, if you have antibodies that bind to different epitopes on the receptor, especially if you have, say, viral neutralizers that bind to multiple epitopes, then you can have those antibodies co-occupy the target, which dramatically increases their potency. So competition-based epitope binning is only one way to characterize epitopes on the LSA, but it is probably the one most commonly applied, and we have two formats for that. One we call classical binning, which is the ideal method if you're looking at monovalent antigens, where antibodies will not self-sandwich. If your antigen is a dimer, trimer, or other multimeric species, then you would use the premix approach, where you premix the antigen with the competitor antibody and inject it over the surface to look for an increase or decrease in the amount of that antigen binding. We also have applications around peptide mapping or epitope mapping, where you would use a series of overlapping biotinylated peptides, immobilize them onto the array, and see which antibodies bind to which peptides, so you can get a focused read of the epitope, assuming the antibodies can bind to a peptide.

0:34:23.4 DB: The epitope software package has a module for making this analysis really intuitive. A very similar approach can be done with full-length proteins if you make mutants, so these can be chimeras or alanine scans, for example: you immobilize your diversity of antigen mutants and then see which antibodies bind to which mutants. You can also use that to identify key residues and regions of the epitope, or of the antigen, that are important for binding. So high throughput binning on the LSA is really a big step forward. Conventional epitope binning runs scale geometrically with the number of clones, roughly with the square of the panel size, in the amount of antibody and antigen you need to run those experiments and the amount of time required. But the LSA's architecture gives you an essentially non-scaling assay in terms of the amount of antibody you need per clone, so it doesn't matter whether you're doing a 10 x 10 or a 384 x 384, the amount of antibody you need of each clone doesn't change; it could be in the 5 to 15 microgram range, depending on the assay parameters.
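To put rough numbers on that scaling argument, here is a minimal, illustrative sketch (not Carterra software; the 10 micrograms-per-clone figure is simply the midpoint of the range just mentioned) contrasting the quadratic growth in pairwise interactions with the flat per-clone antibody cost of an array-based format:

```python
def pairwise_binning_estimate(n_clones, ug_antibody_per_clone=10):
    """Back-of-the-envelope scaling for an n x n competition binning experiment.

    On an array-based format each clone needs only one ligand aliquot and one
    analyte aliquot regardless of panel size, so total antibody consumption
    grows linearly with n even though the number of pairwise interactions
    grows as n squared.
    """
    interactions = n_clones ** 2                    # every ligand vs. every analyte
    antibody_ug = n_clones * ug_antibody_per_clone  # flat cost per clone
    return interactions, antibody_ug


for n in (10, 96, 384):
    pairs, ug = pairwise_binning_estimate(n)
    print(f"{n:>3} x {n:<3} -> {pairs:>7,} interactions, ~{ug:,} ug antibody total")

# A full 384 x 384 run works out to 147,456 pairwise interactions, consistent
# with the "147,000 interactions" figure quoted in this talk, while the
# antibody needed per clone stays in the same 5-15 ug range throughout.
```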

0:35:43.5 DB: You need one volume of sample as a ligand to immobilize and one volume of sample to use as the competitor. So, typically, it's gonna take between 30 to 50 micrograms of antigen for a 96-analyte run and up to about 200 for a 384 x 384, which will give you roughly 147,000 interactions. Here's a published example of a 384 x 384 epitope binning. These data sets, as I mentioned, are large, you can have up to 147,000 interactions to look at, and it's fairly complex, so Carterra has spent a lot of time on our epitope analysis software trying to make it very intuitive and flexible and also provide some really great visualizations. This is actually a view from the software as you'd be doing an analysis. On the left-hand side, you have a sensorgram view. Here you can see the antigen injection followed by the injection of either buffer, in the dark blue, or sandwiching or competing antibodies. You can set the green normalization bar where you equilibrate, to normalize all of the samples to how much antigen bound, and then the second, orange bar is your report point that populates the heat map plot, where there's a cut-off between a red region, which is blocking, and a green region, which is sandwiching, for those injections. I chose this clone as an example because you can see that even though it has significant dissociation from the surface, you're still able to get clear sandwiching information for that clone.

0:37:29.0 DB: So the information from the sensorgram plot with these cut-offs is then displayed on this heat map plot. We have the immobilized antibodies in the Y direction and the injected antibodies in the X direction, and green means it's a sandwicher, a non-competitor, red means it's a blocker, and the black-outlined cells are self-versus-self. And then once you've generated this heat map and the software has sorted it, you can create a network plot where each antibody in your set is shown as a node, or circle. A chord, or line, connecting two nodes means that they're competitive with each other; a lack of a line means they're sandwiching. And then if they're contained within one of these colored regions, those are epitope bins, which means all of the clones in that group have the exact same competition profile in the assay.
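For a sense of what that bin assignment means computationally, here is a minimal, hypothetical sketch (not the Carterra epitope software; the clone names and competition matrix are invented for illustration) that groups clones with identical blocking profiles into bins and treats mutual blocking as the chords of the network plot:

```python
from collections import defaultdict

# Hypothetical, simplified competition matrix: rows are immobilized (ligand)
# clones, columns are injected (analyte) clones. True = the pair blocks
# (competes); False = the pair sandwiches (binds simultaneously).
clones = ["mAb-A", "mAb-B", "mAb-C", "mAb-D"]
blocks = {
    "mAb-A": {"mAb-A": True,  "mAb-B": True,  "mAb-C": False, "mAb-D": False},
    "mAb-B": {"mAb-A": True,  "mAb-B": True,  "mAb-C": False, "mAb-D": False},
    "mAb-C": {"mAb-A": False, "mAb-B": False, "mAb-C": True,  "mAb-D": True},
    "mAb-D": {"mAb-A": False, "mAb-B": False, "mAb-C": True,  "mAb-D": True},
}

# An epitope "bin" groups clones that share exactly the same competition profile.
bins = defaultdict(list)
for clone in clones:
    profile = tuple(blocks[clone][other] for other in clones)
    bins[profile].append(clone)

for i, members in enumerate(bins.values(), start=1):
    print(f"Bin {i}: {members}")

# Chords of the network plot: an edge connects two clones that block each other;
# the absence of an edge means the pair can sandwich.
edges = [(a, b) for i, a in enumerate(clones) for b in clones[i + 1:]
         if blocks[a][b] and blocks[b][a]]
print("Competition edges:", edges)
```

With this toy matrix the sketch reports two bins (A/B and C/D) and chords only within each bin, which is the same logic the heat map, network plot, and bin assignment described above encode at 384-clone scale.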

Unique about this software is that these three panels

0:38:24.1 DB: And what's really unique about this software is that these three panels are interactive. So if you were to, say, click on a chord in the network plot, it would highlight the cells of the heat map that are used to make that call, and then display the sensorgrams in the panel. So this makes really digging into the data, in a lot of detail and at fine granularity, very easy. On other platforms, you may have to go from a table in one module to a visualization in another, and then the raw data in a third. This has everything at your fingertips, both raw and normalized data.

0:39:05.2 DB: The exploration of these bins is also a bit flexible. If you end up with a network plot where only identical clones in terms of competition profile are shown in a bin, the software will also generate a dendrogram, which shows how the competition profiles differ among related clones, and you can set a cut-off and generate what we call a community plot. So this is a more generalized view where the user defines what level of resolution they want in the analysis. So, to talk briefly about how this type of analysis applies to a synthetic library: as Aaron covered really well, you can design your diversity in silico, so you have a very controlled and focused construction of your library. If you're Twist and have the ability to synthesize huge amounts of DNA, you can do that in an easy fashion and then assemble and express these large libraries, I guess now even a library of libraries. You can pan them against your target, sequence the output from that panning, and express your unique clones, at which point you can characterize them based on affinity and epitope binning.

0:40:23.1 DB: So these synthetic antibody libraries have huge diversity, and it's highly effective diversity too, because it's so controlled. But when you get sequence diversity out of a panning, that doesn't necessarily mean that you have functional diversity. It's really the high throughput epitope binning analysis that is gonna give you your clearest picture of how diverse, in terms of number of epitopes and coverage of that antigen, the output of your panning really was. Various protein antigens can have denatured epitopes that may cause an epitope bias in a selection, depending on how the pannings are run, etcetera, so it's good to be able to verify that you are getting broad epitope coverage early on. Also, when you have this carefully constructed diversity in these libraries and you do pannings, you can end up with output from related gene families or antibody sequences. Oftentimes, if you have lots of kinetic information as well as epitope information, you can go and look at those sequences and learn a lot about the contribution of the various amino acids to the binding.

0:41:44.6 DB: So moving on to a hot topic nowadays, which is the use of neutralizing antibodies to viral and pathogen targets. Carterra has multiple customers that are using the LSA to characterize anti-SARS-CoV-2 antibodies from a variety of sources. This is really an example of how multi-clone cocktails of neutralizing antibodies can be synergistic as therapeutics: the SARS-CoV-2 spike protein is very large and has the opportunity to present multiple neutralizing epitopes. And much like the antibody immune response, if you want to have a potent neutralizing therapy, you're gonna wanna take things that block the receptor interaction from different non-competitive epitopes, and when you pool those antibodies together, past experience has shown that the result can be highly synergistic. So to be able to rapidly generate one of these monoclonal antibody neutralizing cocktails, it's really important to be able to do early and rapid characterization of binding specificity, kinetics profiling, and epitope diversity, so that you can focus your more involved functional and in vivo assays on a directed set of clones. The LSA can perform all these analyses on up to 384 clones in parallel, making the use of your time and antigen very efficient.

0:43:26.1 DB: And also, I just wanna mention that Carterra is involved in the COVID consortium, so we've volunteered to help. That's run by the La Jolla Institute for Immunology. They are compiling antibodies from industry and academia and are going to have centralized processing to try to find the most efficacious ones to make one of these antibody cocktails, and Carterra is gonna be involved in that effort. This is an example of this type of analysis that was done on antibodies derived from patients who were exposed to the yellow fever vaccine. They did isolated B-cell sequence cloning from two patients over a period of time as they were immunized and looked at the epitope diversity and antibody sequence diversity, and where the neutralizing antibodies came from. And this was an interesting experiment, in that we were able to find neutralizing antibodies to a number of epitopes. Several of the epitopes, one more common in the upper left corner, and then one a little bit further to the right, had a high incidence of highly potent neutralizers, and they were distinct and non-overlapping epitopes. So merging the neutralization data and the epitope binning data really gave us a good picture of the functional space on the antigen, of where these antibodies can bind and act, but it was also highly suggestive of good potential cocktails of antibodies that would be synergistic together.

0:45:10.5 DB: So, I'm gonna dive a little bit into kinetic analysis on the LSA. This is an example of a typical antibody screening workflow where we have an anti-human Fc capture lawn created, we can then capture up to 384 antibodies either from supernatants or from diluted purified samples, and then we inject a titration series of antigen. This allows for screening up to 384 antibodies in parallel, and the LSA has 384-well plate positions, so you can automate up to a 1,152 mAb screen in this format. So this is what we think high throughput kinetics should look like. This is 384 interactions from one run, which was set up in an afternoon and run into the evening. It's a detailed kinetic characterization in that it uses eight concentrations of antigen, but because this was done in parallel, it was only really one injection of each concentration, and it used 7 micrograms of antigen, this was PD-1, so a 17-kilodalton antigen, to generate all of this data. Zooming in on the data a little bit to highlight a few points here: one is that if you have fewer than 384 antibodies, like in this case it was about 40, you can spot them multiple times, so in this case we spotted all the antibodies at 8-12 replicates, which allows you to generate actual statistics.

0:46:40.1 DB: Mean and standard deviations of on and off rates, affinity fits, which is kind of unheard of in SPR previously. It also allows you to do things like immobilize antibodies at different densities, so you can get better kinetic parameters. Also, you can see these antibodies weren't spotted right next to each other, so there's very good kinetic agreement across the surface of the array. Also, this approach enables you to do a more optimized approach to the screening where we have very little sample or time constraints on running these, so we can do broad kinetic series. At this point, it was an eight-point titration series, starting at one micromolar. You can see we were able to get really excellent kinetic descriptions over, at least, to 20,000 full dynamic range, going from high triple digit nanomolar to double digit picomolar from the same kinetic series in the same run. Also, the kinetic analysis software tries to prevent users from either reporting bad data or having to spend a huge amount of time triaging out questionable data in the analysis, so it has these automatic QC flags, so we've... Say, like to flag the good, the bad, and the ugly. So if that something is a low or non-binder, it's colored grey and the array constants will not be recorded in the table that flags things of poor fit and can also flag things that have kinetic limitations in the assay.

0:48:09.8 DB: If you did not inject a high enough concentration of antigen to accurately estimate the KD, it will flag that for you. Or if you have very stable clones with off-rates that are not well described by the amount of dissociation time you've collected, it will flag those as well. These are all very common problems in the SPR literature, where people have reported erroneous rate constants or have had to spend a lot of time curating their data to prevent these issues, but we try to automate that as much as possible. So with that, we really view the LSA as sort of a disrupting tool in antibody analytics; its unprecedented parallel throughput and minimal sample consumption allow you to generate really high quality kinetic and epitope binning analysis early in your discovery funnel. We think this kind of upstream SPR analysis enables people to shorten the timeline for these library-to-lead triaging steps, and that gives you, early in your funnel, more of the detailed characterization you'd typically get later on.
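As an illustration of the kind of automatic QC logic described a moment ago, here is a minimal, hypothetical sketch (not the Carterra kinetics software; the thresholds and argument names are invented for illustration) that flags two of the failure modes mentioned: a fitted KD above the highest injected analyte concentration, and an off-rate too slow to be resolved in the collected dissociation time:

```python
def qc_flags(kd_M, koff_per_s, top_conc_M, dissociation_s, min_decay_fraction=0.05):
    """Return illustrative QC flags for a 1:1 kinetic fit (example thresholds only)."""
    flags = []
    if kd_M > top_conc_M:
        # The titration never approached saturation, so the affinity is an extrapolation.
        flags.append("KD above highest injected concentration: affinity is extrapolated")
    if koff_per_s * dissociation_s < min_decay_fraction:
        # Less than ~5% of the signal decayed during dissociation, so the
        # off-rate (and therefore the KD) is poorly determined.
        flags.append("off-rate too slow to resolve in the collected dissociation time")
    return flags


# Example: a very stable clone measured with a 1 uM top concentration
# and 10 minutes of collected dissociation data.
print(qc_flags(kd_M=2e-6, koff_per_s=1e-5, top_conc_M=1e-6, dissociation_s=600))
```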

High throughput epitope binning

0:49:17.7 DB: And high throughput epitope binning can really reveal the epitope landscape quickly, and it gives you an exquisite resolution that the lower throughput methods just can't enable, and it allows you to select mechanistically differentiated MOAs. So I wanna thank you for listening, and stay safe. Also, if you have questions for me about the platform, there's my email address, dbedinger@carterra-bio... or info@carterra-bio.com. And we also have some more content on YouTube from recent epitope binning and kinetic webinars, so if you're interested in that, just search for Carterra on the YouTube platform and you can find that content. So thanks again for listening, and hopefully we still have some time for some questions.

0:50:07.4 EL: The question that I have here: When you decide to express clones from a panning output, how much do you rely on related sequence groups to reduce the number of clones to express and test? To rephrase, how heavily do you think you need to sample out of closely-related mAb sequence families on the first pass through screening and characterization? 

0:50:32.4 DS: Great question. So typically I work inside of a DNA company, and I have a lot of ability to make a lot of DNA, so typically, I usually make every clone that comes out of my panning, so I don't typically have to make those decisions. But if I had to, then I agree with you, looking at that sequence phylogeny tree that I showed earlier is a great way to kinda narrow down your sequences, but in general, I usually make every single unique clone that comes out of my panning.

0:51:00.2 EL: One other question: is it necessary to run replicates of samples in kinetics, and how does it change the interpretation of results for a panel? 

0:51:09.9 DS: Yeah, I always consider the different concentrations of the analyte as a great way to make sure you get a good affinity when we're doing SPR experiments. So, typically, I don't run replicates of those, but oftentimes I'll run replicates of the antibodies themselves. If we have room on the array, say we're generally running 96, I will run replicates of each of the individual antibodies, but if I'm running a larger screen where I don't have the capacity in that single run, or I don't have enough antibody itself, I'll just do a single hit to get me some initial data, and then, of course, I'll always follow that up with additional replicates to make sure I have the true affinity.

0:51:49.4 EL: Alright, and we also do have several more questions that have come in.

0:51:53.4 DS: How do you make and design libraries targeting ion channels and GPCRs? So, as I've said, we use two approaches. One is a motif-directed approach where you, for example, collect together all of the motifs that bind GPCRs and ion channels and then graft those into the heavy chain CDR3 loop. The other approach is you basically take all of the antibodies that bind that class of targets and use them as a design set to make a library focused on that class of targets. Okay, I'll answer another one. In the TAO platform, does the mouse antibody humanization involve the CDRs also being humanized, or is it just the framework? Good question. So I always say the TAO platform is fantastic for doing humanization because the mouse CDRs are actually replaced with human equivalents, so not only are you humanizing the framework, you're also humanizing the CDRs. So, at the end of the day, it actually gives you an antibody that's almost indistinguishable from a fully human antibody.

0:52:48.7 DS: Curious if your VHH library has been validated for binding intracellular targets? No, it hasn't, but I don't see why it wouldn't work; we just haven't done that. Secondly, are T-cell epitopes removed from the VHH frameworks? We use consensus llama frameworks or humanized frameworks, and we haven't... Great question, we should check them for T-cell epitopes, but again, we're just trying to use the most commonly used frameworks throughout the repertoire. You're right, though, that's probably a good one to check. Have you tried to switch frameworks of the hits from the shuffle libraries to see what happens to functionality and affinity? We haven't tried to swap them. As I've said, there are kinda two flavors of library, either a consensus llama or a humanized framework, and that does have an effect in terms of the hits we get, so it does show that the framework does play a role.

0:53:43.0 DS: Have you started any work on COVID-19? Great question. Yeah, actually, yesterday I gave a webinar for Twist where I talked about the work we've done trying to find antibodies against S1 and ACE2, and we have found some antibodies that are potentially inhibitors, so that's something we're actively working on right now.

0:54:03.5 EL: We do have time for just one question. Eliardo asks: How long does it take to acquire data for one sample over the full 384 array? 

0:54:14.6 DB: Well, it takes about four minutes to load the injection into the loop and flow it. If you're talking about the entire array prep process, it varies based on conditions, but it takes about two hours to build a typical 384-spot array, and then the injections just take minutes. Typically, if we're doing something like a kinetics or epitope binning run, the workflow is that during the day or the afternoon, you set up the array and put all your samples in the instrument and then run the assays overnight, so when you come in the next day, you have a mountain of data that you can process.

0:54:58.2 DS: It's like Christmas.

0:55:00.6 EL: Alright, thank you. We do have one other quick question for you: Are purified antibodies needed for epitope binning? 

0:55:06.8 DB: Oh, that's a good question without a really easy answer. Epitope binning assays are complicated enough that if you have purified antibodies, they will work easier and give you clearer data almost universally. There are cases, if you have an Expi293 expression system like the one that Twist uses, where those samples are often pure enough and at high enough concentration that you can use them as if they were purified, just the supernatants, and run the binning. There are some approaches you can use to do classical binning with other sample types, like mouse hybridoma antibodies, but because kinetics and concentration are important parts of that analysis where you're injecting things as analyte, you're a bit more constrained on what conditions will give you clear data. So it's a bit of a mixed answer on that. It is possible, there are definitely techniques to do it, but not all sample sources are readily amenable to that.

0:56:14.4 EL: Alright, thank you so much. We have run out of time, so I will be forwarding the other questions we weren't able to address along to our presenters, and you'll receive those answers via email. So I'd like to take this opportunity to thank Dr. Aaron Sato and Dr. Dan Bedinger for presenting today. I'd like to thank the folks at Twist for sponsoring. So mostly, thank you all so very much for coming today. It's a strange time and we know you're very busy and we're glad you chose to spend this time with us. So on behalf of Cambridge Healthtech Institute's global web symposia series, thank you all again so very much and have a great day. Bye-bye.

Speakers:
Aaron K. Sato, Ph.D., CSO, Biopharma, Twist Biopharma
Daniel H. Bedinger, Ph.D., Application Science Team Lead, Carterra