Presentation at Oxford Global Biologics UK 2022 by Andrew Goodhead, PhD

Posted by Andrew Goodhead

In biotherapeutic discovery, traditional hybridoma and phage approaches now both contend with and are complemented by NGS and synthetic strategies to meet candidate timelines measured in weeks rather than years. Despite substantial market pressures driving more sophisticated and differentiated approaches, there are stages in these workflows where turnkey solutions can be implemented to eliminate bottlenecks. In this talk, HT-SPR technology will be highlighted as a tool in biotherapeutic discovery that is deployable in any workflow.

0:00:00.1 Speaker 1: Scientific career with a Bachelor's in Zoology, studying muscular repair mechanisms in animals and immunostaining; he went on to attain his Master's degree in environmental engineering and completed his PhD in molecular microbiology. He then became a technical consultant for Sartorius, and spent a number of years at Promega specializing in cell-based assays. And you've been at Carterra for three years?

0:00:25.2 Andrew Goodhead: Correct.

0:00:25.5 Speaker 1: There you go.

0:00:26.8 AG: I'm Andrew, thanks for the introduction. Yeah, I was an SPR newbie three years ago, so I'm still learning a lot about SPR. But I'm constantly amazed by what we're doing. I'm here today to introduce Carterra to you guys and show you how our platform, the LSA, has been transforming the paradigm of drug discovery, especially in antibody discovery workflows. It was nice to follow on from Aaron, who gave you a bit of a teaser of some SPR data there with a full screen of sensorgrams. He said it nicely at the beginning: we live in an age of speed, the information age. I think science was slower to catch up with that. It's harder to get answers in science; it's slower. The general paradigm behind drug discovery has been measured in years, sometimes decades, rather than the speed at which we're used to getting information. The pandemic over the last couple of years really showed us what scientists can do when there's enough pressure and resource thrown at a problem, and last year we saw record-setting timelines for creating new vaccines and also new antibodies, which I'll talk about later.

0:01:40.2 AG: So there isn't a one-size-fits-all way of making a new biotherapeutic, because everyone runs different library strategies and campaigns. Next-generation sequencing really stepped up maybe 10 years ago and started bringing even the older technologies back into the fray. So there's no one right way to do everything. Everyone has their different methods, but at the end of the day, we're all trying to do the same thing. You want to develop a strategy to discover a new antibody or a protein in the fastest, most risk-free way possible. You want feedback as early as you can get it, and this is critical for continuous improvement. I guess what we saw last year, with all these things being developed at record pace, is that we can do it.

0:02:30.9 AG: There are 5,000 diseases out there without a cure. A lot of them affect such small numbers of people that big pharmaceutical companies, even small pharmaceutical companies, don't have the resources, or the return on investment, to even start looking at them. So if we are able to introduce a risk mitigation strategy and thereby reduce the cost of discovery for these companies, we can look at more targets. We can look at more diseases and cure more things. There's a lot of pressure out there which we need to account for in terms of intellectual property. A lot of targets are in a crowded space. We saw it with COVID antibodies; I'll talk a bit more about that later. And everyone wants to be first in class, best in class.

0:03:15.2 AG: Our platform really helps you strategize your discovery workflow earlier and mitigate these things. So what do you need when you're characterizing a therapeutic? Historically, and still to this day, people do characterization of antibodies downstream. Most screening strategies are generally low resolution, so you don't get information like the who, where, how, and what, but that's really important information. With the LSA, we're trying to help people move towards a screening paradigm where you have all that information at the beginning, rather than at the end. With that, you can make much better decisions and be much more strategic in your discovery workflow. This is rather crude, but we've had this slide in from the beginning, for three years, when we talk to people about this: it's a needle in a haystack.

0:04:15.9 AG: Aaron described it perfectly before with synthetic libraries. The numbers are absolutely insane. You can barely even comprehend the number of things out there that you could possibly screen. And why go to all that trouble of creating an amazing library when you're gonna screen it with a single-point concentration assay like ELISA, or something similar with very low resolution? So our message to everyone is: you don't need to do that anymore. You can use gold-standard characterization such as SPR, but right at the beginning of your workflow, to really inform your downstream processes. It's a changing landscape. It really is. So that's the LSA.

0:05:01.2 AG: It's about this size, actually; it's relatively large, but it has a lot going on inside it. What we're really enabling everyone to do is reduce the time for screening, by screening many more clones in substantially less time with a lot less sample. By doing this, you really do accelerate your workflow. You reduce the amount of on-instrument time, and you reduce the time to clinic at the end of the day. All of that brings the cost down for everything, which is good for everybody, especially people suffering from disease.

0:05:35.0 AG: Now I'm gonna struggle here, I think. These are a couple of videos, and I wonder if my friend at the back might click on the left-hand video for me. Here we go. So this is the inside of the LSA, and this is our print head. You can see it picking up clones from a plate. It picks up 96 at a time and bidirectionally flows them across the surface of the sensor, creating an array. It will do this four times and create a chip-based array of 384 spots. The interesting thing you saw there is that we do this using flow, which is standard SPR technology, but we use bidirectional flow. One of the limitations of normal SPR is that it's unidirectional, so you're limited by the amount of sample that you've got: it comes across the chip and goes to waste. We flow back and forth, and that's a user-defined characteristic, so we can increase the contact time.

0:06:32.3 AG: This enables us to see things that are at lower concentrations, in lower amounts, and in low-expressing clones, for example. Historically, those molecules would have been missed in a screening like this on an SPR machine. So what we're doing is really increasing the sensitivity of the assay. We've had clients that purchased an LSA simply because their negative controls actually had antibodies in them, so they were missing information. It's all about information, as early as possible and as much of it as you can get.

0:07:04.8 AG: It's a one-over-many system. So if you click on the right-hand video for me. Once you've printed your array, we move across to the single flow cell, where we run our antigen over the entire chip. So that's one over many. Again, bidirectional flow enables us to keep the sample volume really, really small: approximately 1% of what you would normally use in an SPR environment. Antigen, especially commercial-grade antigen, can be very expensive, and when you're talking the numbers we're talking, you physically probably couldn't afford it with a lot of other platforms. So yeah, it's about 200 microliters back and forth, and then it goes off to waste with dissociation. So that's what the inside of the machine looks like, but we spend hardly any time looking at the actual machine. Most of the interaction is with our software, which is a lovely, very graphical GUI. I'm talking really high numbers here, but you can set off a week-long experiment on the LSA, queue one experiment after another, and it takes five minutes. The actual amount of time that you spend interacting with the machine is very small, but you get a lot of results for it.

0:08:17.0 AG: The standard things that we do are kinetics and affinity. We are able to really throw numbers at this thing: full kinetics. So we're not talking single-point concentrations; we're talking eight concentrations for every single antibody you put on the platform, 384 at a time, and you can queue 384 by three. So it's a thousand sensorgrams in a day, full kinetics at eight different concentrations. You can then queue the kinetics to be followed by epitope characterization. Like I said, we can immobilize 384 proteins on the chip and then compare, in a competition environment, how they interact with each other, and produce epitope bins. Over three-ish to four days, you can produce 150,000 sensorgrams and produce an epitope landscape of your library. We also have the ability to do quantitation, which is important when you're moving forward, just to see how much you've got. This is a newer application on the LSA.
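
To make "full kinetics at eight concentrations" concrete, here is a minimal sketch of the kind of 1:1 Langmuir model fit that underlies kinetic screening, fit globally across an eight-point dilution series. It is illustrative only, not Carterra's analysis code; every rate constant, time, and concentration below is invented.

```python
import numpy as np
from scipy.optimize import least_squares

T_ASSOC = 300.0                     # association phase length (s), invented
CONCS = 100e-9 / 2 ** np.arange(8)  # 8-point, 2-fold dilution series (M)
t = np.linspace(0.0, 600.0, 601)    # 300 s association + 300 s dissociation

def sensorgram(t, conc, ka, kd, rmax):
    """Response (RU) of a 1:1 binding model at one analyte concentration."""
    kobs = ka * conc + kd
    req = rmax * ka * conc / kobs                 # steady-state response
    assoc = req * (1.0 - np.exp(-kobs * t))
    r_end = req * (1.0 - np.exp(-kobs * T_ASSOC))
    dissoc = r_end * np.exp(-kd * (t - T_ASSOC))
    return np.where(t <= T_ASSOC, assoc, dissoc)

def residuals(log_params, t, concs, data):
    ka, kd, rmax = 10.0 ** log_params             # fit in log space
    model = np.array([sensorgram(t, c, ka, kd, rmax) for c in concs])
    return (model - data).ravel()

# Simulate noisy sensorgrams for a 1 nM binder, then fit them back.
true_ka, true_kd, true_rmax = 1e5, 1e-4, 120.0
data = np.array([sensorgram(t, c, true_ka, true_kd, true_rmax) for c in CONCS])
data += np.random.default_rng(0).normal(0.0, 0.5, data.shape)

fit = least_squares(residuals, x0=np.log10([1e4, 1e-3, 100.0]),
                    args=(t, CONCS, data))
ka, kd, rmax = 10.0 ** fit.x
print(f"ka = {ka:.2e} 1/Ms, kd = {kd:.2e} 1/s, KD = {kd/ka:.2e} M")
```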

0:09:28.6 AG: So these are the numbers we're talking about. I mentioned before, for kinetic affinity it's 1152, so three 384-well plates. You can do that in approximately a day's work, and you've got over a thousand interactions at eight different concentrations. Epitope binning is 384 by 384; the historical paradigm in that realm is typically about 20 by 20. I've had people tell me they've done a hundred by a hundred, but it's taken them a while, and they may have slept in the lab a few nights. Again, quantitation is just a few hours for a thousand quants. So we're really shifting the paradigm of numbers in early characterization. Our software looks a bit like this: like I said, it takes a few minutes to set up an experiment, and then you're off to do something else while you wait for your results to come back.

0:10:22.9 AG: When you're producing a lot of numbers, it makes sense to have software that can really cope with those numbers. So we've put a lot of time and effort into separate analysis packages, which don't run at the same time as the machine; they're separate, one for kinetics and one for epitope binning. Both of these are very user-friendly and intuitive, and they're easy to set up. Our kinetics software has an element of AI in it, where it helps you choose the good, the bad, and the ugly sensorgrams to throw out. So, as you're increasing your numbers and your libraries, you can automate somewhat and only pick the things you really want. We can produce things like iso-affinity plots, which are sort of automated to choose certain affinities for you. So there's a lot in there to help streamline your process.
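
As an illustration of what an iso-affinity plot shows, here is a minimal sketch: each clone is placed by its fitted ka and kd on log-log axes, and diagonal guide lines of constant KD = kd/ka let you read affinity at a glance. The values are randomly generated stand-ins, not LSA output.

```python
import numpy as np
import matplotlib.pyplot as plt

# Stand-in rate constants for 384 clones; real values would come from
# kinetic fits like the one sketched above.
rng = np.random.default_rng(1)
ka = 10.0 ** rng.uniform(4, 6, 384)    # association rate (1/Ms)
kd = 10.0 ** rng.uniform(-5, -2, 384)  # dissociation rate (1/s)

fig, ax = plt.subplots()
ax.loglog(ka, kd, ".", alpha=0.6)
ka_line = np.array([1e3, 1e7])
for KD in (1e-6, 1e-8, 1e-10):
    # points sharing KD = kd/ka fall on the diagonal kd = KD * ka
    ax.loglog(ka_line, KD * ka_line, "--", lw=0.8, label=f"KD = {KD:.0e} M")
ax.set_xlabel("ka (1/Ms)")
ax.set_ylabel("kd (1/s)")
ax.legend()
plt.show()
```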

0:11:09.1 AG: Now we're pushing the numbers. Epitope binning is just a lovely way of visualizing how your antibody library interacts with itself and which epitopes it targets. When you can see your antibodies and place them in different bins, you can really strategize your selections moving downstream. So it's a complete HT-SPR drug discovery and screening set: the instrument, the software, and a number of different biosensor chips. If you're into SPR, you'll know the sort of standard stuff, and we do the same things.
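
To make the binning idea concrete, here is a minimal sketch of how pairwise competition data can become a bin assignment: antibodies are nodes, mutual blocking is an edge, and connected clusters of the resulting network are the bins you see in a network plot. The matrix, the 0.3 blocking threshold, and the bin count are invented for illustration; this is not the LSA software's algorithm.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(2)
n = 24                            # small stand-in for a 384 x 384 run
true_bin = rng.integers(0, 4, n)  # hidden ground truth, simulation only

# Simulated normalized sandwich responses: low when two clones compete
# for the same epitope, high when both can bind the antigen at once.
response = np.where(true_bin[:, None] == true_bin[None, :],
                    rng.uniform(0.0, 0.2, (n, n)),   # competitors
                    rng.uniform(0.6, 1.0, (n, n)))   # non-competitors

blocked = (response + response.T) / 2 < 0.3          # symmetrize, threshold
edges = [(i, j) for i in range(n) for j in range(i + 1, n) if blocked[i, j]]
G = nx.Graph(edges)
G.add_nodes_from(range(n))        # keep clones with no competitors visible

for b, members in enumerate(nx.connected_components(G), start=1):
    print(f"bin {b}: clones {sorted(members)}")
```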

0:11:50.4 AG: There are a lot of different ways to do things, like I said before, and lots of different companies are doing things in different ways. There's pharma, biotech, CROs, government, academia. But the LSA fits into all these pockets. Everyone wants to do more, and like I mentioned, libraries are not getting smaller; they're getting bigger and bigger. Synthetic approaches are enabling ridiculously sized libraries, but what that's helping do is cover every single base when it comes to antibody discovery. And when you've got such great libraries that you've spent such a long time building, you really need to be using the best technology to screen them.

0:12:34.7 AG: So I'm going a bit quickly here, but that was the platform. For me now, the most interesting thing is to tell you a bit about what we've been doing over the past couple of years with our collaborations and with our clients, such as Aaron, during COVID-19. I think what it's really done is highlighted, for us and for me in particular, the need for more information sooner, and what I'll show you now are some examples of that. So part way into COVID, the La Jolla Institute set up a COVID consortium. The idea behind this was to try and get as many different COVID-targeted therapeutic antibodies into the same place and compare them to each other. Now, on the face of it, 10 years ago that might have seemed an impossible task. People are very closely guarded when it comes to their IP and their molecules, but it was COVID; everyone was being very collaborative. It was nice. And the COVID consortium managed to get somewhere around 300 different companies to send them their monoclonal antibody candidates, which is a feat in itself, in my opinion.

0:13:48.0 AG: And what they did with those antibodies was detailed kinetics, detailed epitope binning, and ACE2 blockade on approximately 300 different candidates. Now, what I really like about this is, if you'd done a screening on these things with ELISA, for example, you'd get the best binders, you'd get a yes or no, and things would probably all group together. Everyone in these organizations probably chose things based on affinity, for example, so you'd expect them all to be the same. But when we did the epitope binning analysis on them, you can see on this lovely network plot that they clustered into seven different bins. And what I really like about this is that you've got these 300 companies, each with their different selection strategies. You would think that they're all targeting the same thing, but they're not. This really highlights that.

0:14:44.0 AG: So they've all gone after the same target and come out with very different molecules. It really shows that the diversity of your epitope is really important. And with this information, if you took this as your screening campaign, when you move downstream, would you just choose the RBD-2 cluster? You wouldn't; you would choose two or three from each cluster to maintain your epitope diversity as you move downstream. Aaron showed it before: they're creating really diverse synthetic libraries, but if you screen with ELISA or something with very low resolution, you immediately lose all that diversity, because you're selecting just for one thing.
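
As a toy illustration of that selection strategy, the sketch below keeps the two tightest binders from every epitope bin, instead of the tightest binders overall, so a downstream failure in one bin doesn't end the campaign. The clone names, bin labels, and affinities are hypothetical.

```python
from collections import defaultdict

# Hypothetical screening output: (clone, epitope bin, KD in nM).
clones = [("mAb-01", "RBD-2", 0.2), ("mAb-02", "RBD-2", 0.3),
          ("mAb-03", "RBD-2", 0.5), ("mAb-04", "RBD-5", 1.1),
          ("mAb-05", "RBD-5", 2.4), ("mAb-06", "NTD-1", 0.9)]

by_bin = defaultdict(list)
for name, epitope_bin, kd_nm in clones:
    by_bin[epitope_bin].append((kd_nm, name))

picks = []
for epitope_bin, members in sorted(by_bin.items()):
    # take up to the two lowest-KD (tightest) binders from each bin
    picks += [name for _, name in sorted(members)[:2]]

print(picks)  # every bin represented, epitope diversity preserved
```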

0:15:29.0 AG: If you start selecting based on epitope, which Aaron and the COVID consortium both showed, then that's a really strategic way to move forward. If one of your molecules fails downstream, you've got another molecule that's very slightly different that you could potentially move to, instead of going back to the beginning. Another thing we did was with the NIH. This is a really nice study where they took patient sera and screened against spike protein. What this really showed is that they had one set of antibodies from the patient sera that clustered in the N-terminal region, so it bound only to the N-terminal domain, and everything else was a mixture between the RBD and some other things. They saw this as a really nice way to develop and move forward, in that they could create a bispecific antibody which targeted both the N-terminal domain and the RBD region.

0:16:30.0 AG: So they showed the difference between making a cocktail of these two antibodies versus making a bispecific. And they saw a tenfold increase in potency just by putting them in a bispecific. So by looking at the epitopes of your molecules, you can really inform how you're gonna make a new molecule. This bispecific antibody was developed on the back of epitope binning work using only the LSA. And finally, I won't spend too much time on this. Aaron's done a much better job of explaining what Twist does, but as you saw, they produce these amazing libraries and then immediately go on to giving you so much information before you've even started working with your libraries. All this really helps you take a long view of your discovery process and select molecules for downstream developability which are highly diverse. Again, we're going back to risk mitigation strategies: all that money in pharma is spent on things going on in the clinic.

0:17:37.8 AG: If you can remove that bottleneck and make things a bit cheaper, then that's the best way forward. And you do that by having more information. Well, this is my final slide. I've gone a little bit too quick, but there'll be time for questions. One of the things we were quite proud of last year was when Eli Lilly and AbCellera used the LSA to break a record. I mentioned it before: we're changing the paradigm of drug discovery. Something that five years ago would have taken two years took them three months. They went from concept to clinic with their candidate molecule in 90 days. And they did all their discovery in about a week. Well, not all their discovery, but all of the characterization of their molecules, epitope binning and kinetics, in under a week: six days, in fact.

0:18:34.8 AG: For these last three or four things I've talked about, we have all the papers at our booth, so if you fancy a copy, please come along and we'll share them with you, and we can talk to you in more detail about how we do these things. This is quite a big deal, but Denisa Foster of Eli Lilly has commented that she thinks what we're doing with the LSA, giving more information sooner, is helping change the course of human health a little bit. We really need to bring the cost of drug discovery down, and you can only do that by getting more information sooner and using the technology that we've got right now. If you're still doing things that were state of the art 20 years ago, you're not doing it properly.

0:19:28.3 AG: So, some parting thoughts for you. The need to intelligently develop therapies rapidly is critical for human health. I think you'll all agree that the pandemic has highlighted how speed and high-throughput technologies can drive rapid discovery timelines that were previously unthought of. Leveraging HT-SPR with the LSA will give any biodiscovery workflow the most advanced toolbox available, give you all that information, and help you catch things earlier, before they go wrong downstream... Thank you. We have a booth in the main hall, so if you wanna come and have a bit more of an in-depth chat, or grab a paper, we'd love to speak to you. But for now, that's me. If you've got any questions, that'd be great.

[applause]

0:20:29.7 S1: Thanks, Andrew. Any questions from the floor before lunch? I mean, you didn't have to convince us about the high-throughput nature and the processing power. Did you wanna say something about the complexity of the samples that you can also use? With all that parallel processing power, can you apply it to really complicated samples, such as membrane proteins within a membrane context, or larger particles? Or is that level of throughput really just for nice soluble proteins?

0:21:04.2 Andrew: I mean, that's a good point. Putting GPCRs and ion channels into solution is a problem for anyone doing SPR. It's something we're working on. There are nice technologies out there, such as nanodiscs and poly-pro molecules, where you can take a little bit of membrane to stabilize your protein and bring it into solution. If someone has got some of that ready to go, we'd have to try it on SPR, but I know there's SPR data already out there with membrane proteins in solution. So we know it's possible.

0:21:44.4 S1: I guess, yeah. But the point I'm making is: do you lose that sensitivity if you have those membrane proteins, or if you have bigger pump complexes much further away from the sensor surface?

0:21:54.6 Andrew: Again, that will probably be an optimization thing, where you play around with the surface chemistries. Certainly, what we've given everyone the ability to do is work with lots of different types of materials, so crude samples; our tubing is really big, so nothing really gets blocked in there. You can throw just about anything at it, and then just play around with the bidirectional flow to get optimal concentrations and see more, essentially. So yeah, I guess the answer is, it's a lot more flexible in that respect, in terms of samples.

0:22:26.7 S1: Okay, thank you. Any questions? Nothing seems to be coming in from online.

0:22:32.5 Andrew: That's fine. Great...

0:22:33.1 S1: Okay, so thanks very much, Andrew.

0:22:35.7 Andrew: Thanks, guys.