The use of HT-SPR is critical to accelerating discovery and development timelines for both therapeutics and vaccines.

This presentation covers:

  • Physical properties that underpin SPR technology
  • Generation of high-quality kinetic data with SPR technology
  • High throughput (HT-SPR) technology advances with the LSA

High throughput SPR will accelerate your drug discovery.

Presented by:
Dinuka Abeydeera, PhD, Team Lead for Training and Support, Carterra


0:00:00.0 John McKinley: Hello everyone, and thank you for attending today’s webinar. My name is John McKinley and I will be your host. If you experience difficulty with audio or advancing slides, please refresh your browser window. The need for therapeutic antibodies is greater than ever, and SPR is the gold standard for characterizing antibody-antigen interactions. Now, with high throughput SPR, you can rapidly screen and characterize entire libraries of antibodies for kinetics, affinity, and epitope coverage. This kind of throughput and resolution allowed Eli Lilly and AbCellera to identify their COVID-19 therapeutic antibody and get it into the clinic in just 90 days. I’d like to introduce you to Dinuka. He’s an expert in the use of high throughput SPR and current antibody discovery workflows. I’ll hand it over to you now, Dinuka.

Fundamental principles of SPR

0:00:50.8 Dinuka: All right, well, thank you, John. Welcome, everyone, and thank you for joining us today. These are the topics I’m going to cover during my presentation. We will start off with the fundamental principles of SPR and go over how the detection technology works, then go a little deeper into the basics of kinetic analysis, what the rate constants mean and what the curves should look like, and then go over at least one example of best practices and talk about how surface density affects kinetic analysis.

0:01:31.1 Dinuka: In the second half, I will introduce high throughput SPR using the LSA platform as an example. That segment will provide an overview of the LSA itself, some details about the surface chemistries we offer, and an analysis example from a recent publication. So SPR is the acronym for Surface Plasmon Resonance technology. In this technology, we immobilize a binding partner to a surface, which we call the immobilized ligand, and then a sample is injected, which we call the analyte in solution: the other binding partner that can interact with the immobilized ligand. The actual detection technology works as diagrammed on the right-hand side of this figure. We have a laser light source that is focused in through the bottom of a prism that is coated with a gold layer, and the other side of that gold layer is exposed to the fluidics chamber, in this case called the flow cell. The light is reflected off the bottom of that gold layer and is detected by a CCD camera that collects essentially an image of that entire array surface.

0:02:51.8 Dinuka: The reflection typically yields total internal reflection of the light off of the bottom of the gold layer, so you see essentially a saturating brightness on the detector. However, there is an incident angle, known as the resonance angle, at which the photons are lost to an evanescent plasmon wave in that very thin gold layer. The angle at which this phenomenon occurs is determined by the refractive index of the solution on the other side of the gold layer, where the binding interactions are taking place. The way we set up these experiments is, we have a gold surface with a matrix on it, usually some form of a hydrogel, and then we immobilize the ligand to the surface and, under flow, expose that surface to an analyte. For example, on the Carterra LSA, we typically immobilize the antibody onto a chip surface and then flow the antigen over the array surface, and that allows monitoring of the SPR response corresponding to binding as well as dissociation in real time across all the interaction locations simultaneously.

0:04:09.3 Dinuka: Shown here is a slightly more detailed illustration. We have the light source, and the incoming light is swept across a range of incident angles, generating what is known as the SPR dip. For example, on the LSA, for every one of the 432 spots in the array, we generate a live SPR dip profile with brightness on the Y-axis and angle on the X-axis. As the system scans across these angles, it starts off at a saturating brightness, then as you approach the resonance angle the brightness starts to dip, and then it goes back up on the other side to produce the signature SPR dip profile shown in green here. The position of the dip depends on refractive index changes on the surface or in the solution just above the surface. So if you inject a high-salt solution, or glycerol for that matter, the SPR dip shifts to the right, as shown here. On the other hand, it will shift to the left if you inject a solution with a relatively low refractive index; if you inject water or dilute acid over the surface, the SPR dip shifts to the left.

We are monitoring a binding event

0:05:33.4 Dinuka: If we are monitoring a binding event, we start off with a bare chip in buffer, and then we immobilize a ligand to the surface. This causes the dips to shift to the right, because the binding of protein at the surface-fluid interface changes the effective refractive index at the surface; you are essentially adding the mass of molecules within the evanescent wave. Onto this surface, if we inject the analyte and the analyte binds to the ligand on the surface, that triggers a further shift of the SPR dips to the right.

0:06:14.8 Dinuka: As the analyte dissociates, the SPR dip shifts to the left, and further to the left as the analyte dissociates further. One thing worth noting at this point: the instrument itself records these SPR dips as a function of time, which is shown on the left-hand side, and the data acquisition software translates these shifts into binding responses in real time, producing what is known as a sensorgram, which is shown on the right-hand side. So in the next few slides I’m going to provide a schematic representation of that process: the instrument sees the SPR dips shown on the left-hand side, while what the user sees are the real-time sensorgrams, where we get a binding response in response units.

0:07:12.5 Dinuka: So the response is an arbitrary unit that technically does correlate back to a certain mass of protein binding onto the surface. If we are flowing just buffer, then the dip simply corresponds to a baseline signal down here in our sensorgram. When an interaction is taking place, the dip shifts to the right; hence, an increase in our binding response. The interaction during the sample injection could eventually reach an equilibrium, at which point your response signal would ideally plateau. As you switch over to buffer flow, the non-covalently bound molecules dissociate, causing the SPR dips to shift back to the left, and that enables the user to collect real-time dissociation data, as shown here.

0:08:16.8 Dinuka: So the next question I would like to address here is: what do we expect from a real-time kinetic analysis of a binding interaction? In other words, what is the significance of the rate constants we are recording? If we consider a simple one-to-one interaction, where A plus B reversibly forms the complex AB, we have lowercase ka, the association rate constant or on-rate, driving the formation of that complex, and the dissociation rate constant or off-rate, lowercase kd, driving the dissociation. The capital KD, or equilibrium dissociation constant, is calculated by dividing the dissociation rate constant by the association rate constant, as shown here.
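As a quick numeric sketch of that relationship (the rate constant values here are illustrative, not from the talk), KD follows directly from the two rate constants:

```python
# KD = kd / ka: units (1/s) / (1/(M*s)) = M.
ka = 1.0e5   # association rate constant ("on-rate"), 1/(M*s) -- illustrative
kd = 1.0e-4  # dissociation rate constant ("off-rate"), 1/s   -- illustrative

KD = kd / ka
print(f"KD = {KD:.1e} M")  # 1.0e-09 M, i.e. a 1 nM binder
```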

What are the typical practical implications of these parameters?

0:09:16.4 Dinuka: So what are the typical practical implications of these parameters? When we are looking at the pharmacodynamic effects of therapeutics, if you have a fast on-rate you can get a quicker onset of effect, meaning it can affect the ability to neutralize, for example, a cytokine that is circulating in your serum. Having a rapid onset can therefore make a drug more efficacious by starting to work faster. If an antibody has a slow dissociation rate, then that antibody can elicit its physiological effect even after the serum level of the drug falls below what would be an efficacious range. In other words, one could anticipate durability due to extended residence time on that particular target. Also, if our goal is to interrupt an interaction, then a durable residence time would be quite beneficial. And a high-affinity drug, which means a low value for KD, typically gives you pharmacological action at a lower serum concentration of that particular drug, because you have higher occupancy at low concentrations.

0:10:40.2 Dinuka: So let us talk in a bit more detailed terms. This would be an example of a typical concentration series, say a three-fold dilution series of an antigen, where the concentrations are higher as you go up in these curves. The slope of the association is determined by the on-rate times a ligand capacity term known as the Rmax parameter, or how much protein can bind to the surface, times the concentration of your analyte, minus the off-rate times the amount of complex that has already formed. This makes the shape of the association curve a bit more complicated, because there are three parameters affecting the shape of the curve: the concentration of the analyte, the on-rate, and the dissociation rate constant, as shown here.
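That dependence can be sketched with the standard 1:1 rate equation, dR/dt = ka·C·(Rmax − R) − kd·R, whose association-phase solution is R(t) = Req·(1 − e^(−kobs·t)) with kobs = ka·C + kd. All parameter values below are illustrative assumptions, not instrument data:

```python
import math

# Association-phase response of a simple 1:1 interaction:
#   R(t) = Req * (1 - exp(-kobs*t)),  kobs = ka*C + kd,
#   Req  = C * Rmax / (C + kd/ka)
def association(t, C, ka=1e5, kd=1e-3, Rmax=100.0):
    kobs = ka * C + kd
    Req = C * Rmax / (C + kd / ka)
    return Req * (1.0 - math.exp(-kobs * t))

# Three-fold dilution series from a 300 nM top concentration (illustrative)
series = [300e-9 / 3**i for i in range(5)]
for C in series:
    print(f"{C * 1e9:6.1f} nM -> R(300 s) = {association(300.0, C):6.2f} RU")
```

Note how the lower concentrations give both a smaller response and a slower approach to plateau, which is exactly the three-parameter coupling described above.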

0:11:41.2 Dinuka: If you inject long enough, or your kinetics are rapid enough, that the interaction can come to an equilibrium where you see no net association or dissociation, then you are at steady state. There is only one injection in this set of sensorgrams that has achieved equilibrium; the others, as you can see, are still climbing. It is important to distinguish between equilibrium and saturation at this point. Every concentration injection will eventually reach equilibrium if you inject long enough, but only a very few concentrations will ever achieve saturation, and the time to equilibrium is largely dependent on the off-rate as a result.
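A small sketch of that point, using the pseudo-first-order observed rate kobs = ka·C + kd: the time to reach, say, 99% of the equilibrium response is ln(100)/kobs, so low concentrations (where kobs approaches kd) take far longer to equilibrate. The rate constants are illustrative assumptions:

```python
import math

def t99_minutes(C, ka=1e5, kd=1e-4):
    """Time (min) for a 1:1 interaction to reach 99% of its equilibrium
    response: R(t)/Req = 1 - exp(-kobs*t), so t99 = ln(100)/kobs."""
    kobs = ka * C + kd
    return math.log(100.0) / kobs / 60.0

for C in (100e-9, 10e-9, 1e-9):
    print(f"{C * 1e9:5.1f} nM -> ~{t99_minutes(C):7.1f} min to 99% of equilibrium")
```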

0:12:34.3 Dinuka: Then, as the concentration goes to zero because we are injecting buffer, as in this dissociation phase, the entire association parameter essentially drops out of the equation, and we have a simple exponential decay based on the dissociation rate constant and the amount of complex formed at the time you start measuring from, as shown here. The binding affinity of an interaction can also be determined using an equilibrium, or steady-state, binding approach. If you have an interaction that reaches rapid equilibrium, like in this example, where all these concentrations are essentially at steady state, you can simply collect report-point values from the steady-state binding curves and fit those to an isotherm using this equation in order to determine the KD of the interaction. The analysis software also has a module for this; we use this approach frequently for transient interaction analysis, for example, Fc-gamma receptor binding studies. Now, in order to extract the kinetic parameters of a binding interaction, we need to fit the binding data to a pseudo-first-order integrated rate equation model.
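As a sketch of that steady-state approach (not the vendor’s actual fitting code), report-point responses can be fit to the isotherm Req = C·Rmax/(C + KD). Here a simple grid search recovers an illustrative KD from noise-free simulated points, with Rmax held fixed for brevity; a real fit floats it too:

```python
# Simulate steady-state report points from a known (illustrative) interaction,
# then recover KD by least squares over a log-spaced grid of candidate values.
true_KD, Rmax = 5e-9, 80.0
concs = [100e-9 / 3**i for i in range(6)]
req = [C * Rmax / (C + true_KD) for C in concs]

def sse(KD):
    """Sum of squared errors between the isotherm at KD and the data."""
    return sum((C * Rmax / (C + KD) - R) ** 2 for C, R in zip(concs, req))

grid = [10 ** (e / 20.0) * 1e-12 for e in range(100)]  # ~1 pM .. ~0.1 uM
fit_KD = min(grid, key=sse)
print(f"fit KD = {fit_KD:.2e} M (true {true_KD:.1e} M)")
```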

0:14:03.2 Dinuka: So this is a composite of a simplified version, where we have our equilibrium parameter, which I showed on the previous slide and which is also shown on the right-hand side here, with the concentration of the analyte, the Rmax, and the KD parameter. You can then substitute the KD value, or the affinity, with the ratio of off-rate divided by on-rate, and end up with the complete rate equation shown here. During a fitting routine, there are three parameters that will typically float: the off-rate, the on-rate, and the Rmax parameter. Those are all global parameters in fitting the data across all sensorgrams, and the known parameters in this case are the time and the analyte concentration that we extract from the sensorgrams. During dissociation, when the concentration of A goes to zero, the equation simplifies to just the response at a given time and the off-rate parameter with the delta t. The interesting thing about the off-rate is that you can actually measure it from essentially any point in the dissociation curve; the Rmax, or the saturating level of the surface, is no longer part of that equation. That is often why people conventionally do something called off-rate screening: from any point during the dissociation phase, as long as you are getting enough dissociation and you have a good sense of where your baseline is, you can fit the off-rate using this equation.
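The dissociation-phase simplification can be sketched in a couple of lines: with analyte concentration at zero, R(t) = R0·e^(−kd·Δt), so kd falls out of any two points on the decay. The values below are simulated, illustrative numbers:

```python
import math

kd_true, R0 = 2e-3, 150.0          # illustrative off-rate (1/s) and response (RU)
t1, t2 = 30.0, 330.0               # two time points in the dissociation phase (s)
R1 = R0 * math.exp(-kd_true * t1)  # simulated decay values
R2 = R0 * math.exp(-kd_true * t2)

# R1/R2 = exp(kd * (t2 - t1))  =>  kd = ln(R1/R2) / (t2 - t1)
kd_est = math.log(R1 / R2) / (t2 - t1)
print(f"estimated kd = {kd_est:.2e} 1/s")  # recovers the simulated 2e-3 1/s
```

Notice that Rmax never appears, which is the basis of conventional off-rate screening.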

How fast the analyte dissociates from the surface

0:16:01.5 Dinuka: So again, the off-rate is this property of how fast the analyte dissociates from the surface. Although it is measured in both the association and the dissociation phases, it is best described in the dissociation phase, when the antigen concentration is zero, and it is reported in units of one over seconds, as described in this slide. The fastest off-rate you are probably ever going to see for an antibody interaction is about one per second, as shown on the left-hand side; you can see the dissociation happens almost immediately from the surface. On the other hand, a very slow dissociation example would be the curve shown on the right-hand side, where we have a one times ten to the minus five off-rate. Antibodies with dissociation properties slower than this actually become challenging to measure with this technology; as you can see, there is almost no visible dissociation in the 10-minute dissociation time shown here. So one could use one times ten to the minus five as a default setting in the analysis software, but it can obviously be changed if you really want to rigorously design an experiment that will elucidate more dissociation.
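A quick back-of-the-envelope illustration of why 1×10⁻⁵ s⁻¹ is a practical floor: the fraction of complex lost over a 10-minute dissociation window is 1 − e^(−kd·t), which for that off-rate is well under one percent:

```python
import math

def pct_lost(kd, t=600.0):
    """Percent of bound complex that dissociates within t seconds."""
    return 100.0 * (1.0 - math.exp(-kd * t))

for kd in (1.0, 1e-3, 1e-5):
    print(f"kd = {kd:.0e} 1/s -> {pct_lost(kd):6.2f}% dissociated in 10 min")
```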

0:17:28.8 Dinuka: But it is probably a good limit for most SPR applications, because obviously, if you are measuring almost no discernible change, it would hardly impact your estimated rate parameter. When you design your experiment, you want to see decay in all your data sets, but at the same time, you do not want to make every assay take forever, so we usually set it to something fairly reasonable. In high throughput characterization, for example, we collect between 10 and 30 minutes of dissociation data, and that is good enough to get you down close to the ten to the minus five range in terms of observable dissociation. Very often, if you are not seeing dissociation in that phase when it was set to, say, 35 to 40 minutes or even longer, then it is unlikely that you would all of a sudden get enough decay by running it even longer. So typically, if you are going to try to measure off-rates that slow, we recommend performing a robust experiment designed specifically to interrogate that parameter, with lots of replicates.

0:18:51.0 Dinuka: When we are measuring the association rate constant, again, the shape is determined by the concentration, the off-rate, and the on-rate. This parameter is measured while the sample is being injected and cycled over the surface, and it is reported as the ka parameter, in units of one over molar seconds. So these are the same off-rates, shown at the same concentrations, with a very fast on-rate; this is about as fast as you are typically going to see for antibodies, about one times ten to the seventh. On the other hand, we can have very slow association, approaching the range where it becomes hard to distinguish from nonspecific interaction, and you can see how the responses at these concentrations diminish as the on-rate goes down. This is an example where we compare the same on-rate, five times ten to the fifth, and the same concentrations while varying the off-rate, stepping it an order of magnitude slower with each plot. At ten to the minus two you are starting to get a longer tail, but the injections are still largely coming to equilibrium in about five minutes. As we go to ten to the minus three, getting more into single-digit nanomolar, a typical affinity for antibodies, you are still getting easily measurable dissociation, but your low concentrations are no longer coming to equilibrium.

0:20:33.0 Dinuka: And then as you go higher, you will notice you are further from equilibrium in your low concentrations, but at the same time you are seeing more binding from the low concentrations. This is a good example to compare the ten to the minus four and ten to the minus five parameters: how much absolute difference is there in the dissociation phase during a 10-minute window, for example? It is clearly measurable, and the software would give you a very good estimate of this if this were real data, but you can get a sense of the scale; even relatively subtle changes start to make significant impacts on your estimated dissociation when you get into that lower range. On the other hand, one could look at a fixed off-rate of ten to the minus three while varying the on-rate. As we step down, you see the dose response of the different concentrations begin to change as the on-rate slows. It may or may not be obvious, but it is much harder, even for really experienced SPR users, to look at the sensorgrams and estimate the on-rate, because it is affected by three parameters, as I mentioned earlier. You must have a sort of inherent knowledge of what concentrations are being injected and how that affects the shape to be able to intuitively see it, because it is inherently complex in nature.

What makes the LSA ideally suited for high throughput SPR?

0:22:15.1 Dinuka: All right, now I’m going to switch gears and talk about the LSA platform. LSA stands for the Lodestar Array, and it is a high throughput SPR array system. So what makes the LSA ideally suited for high throughput SPR? It has two relatively independent fluidic systems that address the same sensor chip surface. There is a multi-channel mode, which enables Continuous Flow Microspotting technology, and we also refer to this as the print head; it is with this multi-channel mode that it can do up to four nested 96-spot prints in order to create a 384-spot array. And then there is also a single-channel flow mode that does what we call one-on-many, where one sample is injected over the entire surface of the array. So let us dive a little deeper into the hardware components of the LSA. This is a visualization of the multi-channel side of the LSA, where it is accessing samples from a 384-well plate, and the samples are then flowed over a chip surface, as shown here.

0:23:37.3 Dinuka: So this is the Continuous Flow Microspotting technology as it happens, and it is actually a big differentiator between the LSA and other array-based platforms, in which array creation is a deposition-based process, where samples are simply added to the surface. On the LSA, as you just saw, the sample loading occurs under fluidic flow; it goes from running buffer to sample, back to running buffer. Therefore, you can do all of the chemistries on the LSA that you would typically do on an SPR-based biosensor. You can also do charge-based pre-concentration during immobilization, for example with low-concentration samples. That means one could make an affinity capture surface and then flow low-concentration crude samples for an extended period of time in order to concentrate your ligands on the chip surface. The interesting aspect of this technology during capture from a supernatant or a bacterial extract is that it flows the sample back and forth, so there is no flow rate to volume trade-off. From a 200-microliter sample, you can capture at full flow rate as long as you want, and the sample is returned to the microplate after it cycles through, so you get the majority of your sample back in the plate.

0:25:19.6 Dinuka: The multi-channel manifold can undock from the surface in an automated fashion and move out of the way, which enables the single flow cell to dock on the same chip surface. This flows one solution over the entire chip surface, so once the array is printed, the single flow cell can dock and inject one sample over the entire array surface to achieve what we call the one-on-many analysis format. If this was an antigen injection, we would be getting kinetics information for that concentration of antigen; for example, a 250-microliter volume of antigen could be used across all the antibodies immobilized on the array. If this was an epitope binning type experiment, we would be getting competition information for all of those 384 immobilized antibodies from that one volume of sample. In addition, you can use this single flow cell to immobilize what we call a capture lawn; say you have an anti-Fc antibody lawn, you want to put that on the entire surface of the chip so that you can then capture your antibodies from a crude solution using the 96-channel side of the instrument, and then come back to the single-channel side to do your kinetics analysis.

0:26:57.1 Dinuka: So let me explain it further using this illustration from another angle, so to speak. The pink vertical rectangles are the individual sample locations generated by the multi-channel manifold, and we call those the regions of interest, or ROIs, whereas the blue rectangles are the interspot references used for real-time referencing by the LSA. Those reference ROIs are not printed; they are arbitrary locations on the chip surface, or on a lawn surface for that matter. If you were to perform four nested prints, you end up with the array layout at the bottom, in which there are four rows of 96, and then there are 48 interspot references, as shown at the bottom. For each reference spot, there are four sample spots above and four sample spots below, and those are the ones used by default by the analysis software for reference subtraction. And again, as I mentioned earlier, this is yet another illustration of how the single-channel mode analyzes a minimum-volume single injection against all 384 ligands simultaneously; we refer to this as the one-on-many assay format. The architecture of the LSA makes it ideally suited for some of the core applications of antibody characterization, such as binding kinetics or affinity analysis, competition-based epitope binning, peptide or mutant mapping, as well as quantitation.

Having a hardware architecture that makes these assays faster

0:28:47.5 Dinuka: In addition to having a hardware architecture that makes these assays faster and more efficient, we have also put a huge amount of effort into designing and implementing analysis software that really does an excellent job of making the analysis fast and powerful, while also providing scientific visualization tools that make the data experience rich and easy to communicate. In the epitope module, for example, visualization tools such as the competition matrix, or heat map, and the network plots are used as competition profiles to describe the diversity of an antibody panel. When designing a high throughput kinetics experiment, there are a lot of factors to consider. Obviously, your selection of the chip type and how you prepare your capture surface, especially if you are doing a capture kinetics experiment, is fundamental. I am not going to go into too much detail about capture surfaces today; however, we will talk about certain chip chemistries that we currently recommend. And obviously, how you design your antigen concentration series is quite important on the LSA, because everything is done in parallel and in a high throughput fashion.

0:30:17.2 Dinuka: We typically like to do a broad concentration series, such as an eight-point, three-fold series, which gives us a broad dynamic range in the kinetics experiments. In terms of assay design, one huge advantage of having a 384-spot array, even if you don’t have 384 antibodies, is that you can spot your antibodies in a dilution series, or capture them in a dilution series, so you can be assured that you will have ideal densities for your kinetic characterization. You also have the opportunity on the LSA to spot antibodies as replicates, which allows you to power the analysis, or perform kinetic analysis with an n; you can calculate the mean and the standard deviation of your rate constants across true replicates instead of simply relying on goodness-of-fit parameters to build your confidence in those values. It is also important to note that while the LSA really brings a new scale to these types of kinetic characterizations, we are not changing the fundamentals of SPR at all. We often refer to a paper by Dr. David Myszka in the Journal of Molecular Recognition from 1999, where he outlines many of the key parameters of how to properly design and execute SPR experiments, as well as how to report the data. None of those have changed, and we are trying to be consistent with fundamentals that have been out there for quite some time. So this is an actual LSA prism in the middle, popped out of its cassette.
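A broad concentration series of that kind is easy to sketch; the 300 nM top concentration here is an assumption for illustration, not a recommendation from the talk:

```python
# Eight-point, three-fold antigen dilution series from an assumed top
# concentration, giving a broad dynamic range for kinetics.
top_nM = 300.0  # assumed top concentration, nM
series_nM = [top_nM / 3**i for i in range(8)]
print([round(c, 2) for c in series_nM])
# eight points spanning >2000-fold, from 300 nM down to ~0.14 nM
```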

0:32:07.7 Dinuka: This glass prism is coated with gold, and it has a hydrogel layer on top. What we are going to do now is go into kind of a cartoon mode to talk about our chip surfaces and how they work. We have a gold film on the prism surface, and it produces plasmon waves, as we discussed earlier; then the hydrogel layer provides a three-dimensional surface for the proteins to bind. It is flexible, it is hydrophilic, your injected molecules have good access to the surface, and it also prevents interaction of these proteins directly with the gold layer. Once we have a matrix on the chip, it can be loaded with carboxymethyl groups, which provide negatively charged termini for electrostatic pre-concentration, a phenomenon we use to get proteins to stick to the matrix so they can eventually be covalently coupled. The first chip type we are going to look at is the CMD-200M. This means it has a carboxymethyl dextran hydrogel, which is 200 nanometers thick, and it is highly cross-linked. It is important to remember when working with this type of chip that it is not just little strands of dextran coming off the surface; it is really like a gel layer, and that can create some degree of a diffusion barrier for proteins and analytes getting into the matrix.

0:33:49.5 Dinuka: Nevertheless, this is our highest-capacity surface, and we recommend it typically for interactions with relatively small analytes. Another sensor surface to go over is our linear polycarboxylate surface, known as the HC200M. Again, this is a 200-nanometer matrix, so it is approximately the same depth as the CMD-200M we just talked about. However, the major difference is that instead of a highly cross-linked gel layer, we now have linear strands of polycarboxylate, so there is much less of a diffusion barrier for molecules getting in and out of the surface. It has roughly half the protein capacity of the CMD-200M, but since it has somewhat better transport dynamics, it is a bit easier to regenerate, and it can be used with low-concentration samples for kinetics screening of moderately small proteins. A very similar chip type is the HC30M; this is again the linear polycarboxylate, but instead of 200 nanometers, this one has a thickness of 30 nanometers. It has a medium protein capacity, it is probably our most recommended general chip type, and a vast majority of kinetics or binding assays run well on it; it has great transport dynamics, quite similar to a planar chip surface.

Final chip type is the planar dextran surface

0:35:35.8 Dinuka: The final chip type is the planar dextran surface; this is the same chemistry as the CMD-200M shown earlier, but instead of being a 3D hydrogel, it is essentially a film on the surface, so it probably has the best transport dynamics of any surface we typically recommend. Although it has a relatively low protein binding capacity, it is sufficient for a lot of assays; especially if you have a reasonably sized antigen, for example 30 kilodaltons or more, you are definitely going to be able to get sufficient antibody on the surface to see significant binding responses with this particular chip type. In addition to the carboxyl chip chemistries I described earlier, one could also use streptavidin, NTA, protein A/G, as well as protein A sensor surfaces for various assay development, high throughput screening and quantitation, and diversity assessment type applications. Now, a property of all laminar flow cells is that there is bulk flow, and then as we approach the surface there is actually no fluid movement, so we end up with what is called an unstirred layer that molecules need to diffuse through in order to actually access the surface. The rate at which molecules move through that unstirred layer is set by diffusion, and it can be described as a flux, which is governed by Fick’s law.

0:37:21.0 Dinuka: The flux is defined by a diffusion coefficient, a concentration gradient, and a distance, which together determine the rate. As the liquid gets more viscous, or the protein gets larger, or the hydrodynamic radius of the protein increases, the rate at which diffusion occurs goes down. Therefore, if you have a very large protein, it is easier to create what we call a mass-transport-limited condition, where the binding rate observed at the surface is governed, or influenced, by the rate of diffusion from the bulk flow to the surface rather than just the kinetics of the interaction itself. This is why we recommend performing kinetics assays at a relatively low density. For example, if there is a mass action of analyte diffusing through this unstirred layer but we have a low density of antibody tethered to the surface, then it is unlikely that you are going to deplete that flux of analyte from the bulk flow to the surface. But if you have a high density of a high-affinity, fast on-rate antibody, then those antibodies will grab all the antigen that comes through, and you will likely end up in a limiting-analyte scenario. As a result, under these circumstances, we end up measuring the rate of antigen flux rather than the rate of the interaction itself.
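Fick’s first law can be sketched numerically; the values below are rough order-of-magnitude assumptions (not instrument specifications), just to show how the flux scales:

```python
# Fick's first law: flux J = D * dC / delta
D = 5e-11           # m^2/s, rough diffusion coefficient for a mid-size protein
dC = 100e-9 * 1e3   # 100 nM concentration difference, converted to mol/m^3
delta = 10e-6       # assumed 10 um unstirred-layer thickness

J = D * dC / delta  # mol m^-2 s^-1
print(f"flux ~ {J:.1e} mol m^-2 s^-1")

# A larger hydrodynamic radius lowers D (Stokes-Einstein), which lowers J in
# proportion -- large analytes hit the mass-transport limit sooner.
```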

0:39:04.0 Dinuka: There are two ways one could compensate for mass transport limitation. You can immobilize antibodies at low density, or as a titration of antibodies with low-density spots, so that you do not observe this effect in the first place. Alternatively, we can add a km term to the kinetics model, making it essentially a mass transport model. That works well if the effect of mass transport is relatively modest; you must keep in mind, though, that if you are having to compensate for a massive change in the kinetics, it adds quite a bit of flexibility to the model, and the parameters will be a bit less constrained. So this is a simulated example showing the same concentration injection over an antibody with an affinity of one nanomolar. It is a 25-nanomolar injection, where we put in a significant mass transport term so we can observe the differences in the binding profiles. The sensorgrams look like different interactions altogether, as you can see. Listed in the table are some of the properties, such as the dimensions of the flow cell and the viscosity and the diffusion coefficient of the protein, that would be taken into account by these fitting algorithms. Now, here is an actual example of the phenomenon we just talked about.
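A minimal sketch of that two-compartment idea (a simplification of the full mass transport model, with made-up parameter values): the surface-proximal analyte concentration Cs is replenished by a transport coefficient kt and depleted by binding, so a high ligand density (large Rmax) slows the apparent kinetics. The kt term here lumps transport into response units, and the 1 nM affinity and 25 nM injection mirror the simulated example above:

```python
# Quasi-steady-state two-compartment transport model:
#   Cs    = (kt*C + kd*R) / (kt + ka*(Rmax - R))
#   dR/dt = ka*Cs*(Rmax - R) - kd*R
# Simple Euler integration; all parameter values are illustrative assumptions.
def bound_fraction(Rmax, C=25e-9, ka=1e6, kd=1e-3, kt=1e8, T=300.0, dt=0.01):
    R = 0.0
    for _ in range(int(T / dt)):
        Cs = (kt * C + kd * R) / (kt + ka * (Rmax - R))
        R += (ka * Cs * (Rmax - R) - kd * R) * dt
    return R / Rmax

low = bound_fraction(Rmax=20.0)    # low-density spot
high = bound_fraction(Rmax=200.0)  # high-density spot
print(f"fraction of Rmax bound at 300 s: low density {low:.2f}, high density {high:.2f}")
```

The same interaction appears slower on the dense surface, which is why titrating the ligand density down is the preferred fix.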

63-kilodalton antigen titration series binding to a clone

0:40:39.7 Dhanuka: This was a 63-kilodalton antigen titration series binding to a clone spotted at different densities on the same chip surface. So this is not a dilution error or anything like that; this is the same antibody at different densities on the same chip surface binding to the same antigen. We spotted this particular clone, C21, onto the array at three different densities, and if we zoom in, the fits to the kinetics model show a 10-fold difference in the apparent on-rate going from the low-density to the high-density surface. Here we can see how differently the low concentration injections behave at these different densities: we reach essentially saturation within five minutes at the lowest concentration on the low-density surface.

0:41:37.9 Dhanuka: Whereas on the high-density surface, we are not even at 50% bound, so the density makes a dramatic difference. Obviously, this was an extremely mass transport limited system. However, the take-home message from this example is that by titrating down to a moderately lower level of antibody density, we can generally overcome this phenomenon. In the same experiment, we had another clone that was spotted at three different densities, and we can see that the different densities had almost no effect on the measured on-rate for this particular clone. It also has an order of magnitude slower on-rate, which simply puts it outside the range where transport limitation typically occurs.

0:42:33.2 Dhanuka: We can also see that the data fit better at the low density, which is also common under these circumstances. One could probably get some sort of crowding effect at high density, but the actual kinetic estimates were not that different between those conditions. Therefore, it is both an antigen-size and an on-rate dependent phenomenon, in addition to being influenced by the chip matrix, as we discussed earlier. Here is a peer-reviewed example from a recent publication. This was a collaboration between Carterra, Adimab, and Amgen, in which a panel of published, patented PD-1 antibodies were synthesized and analyzed on the LSA, the Biacore K, and two solution-phase affinity measurement tools, namely the KinExA and MST assay formats.

LSA run from that particular investigation

0:43:36.7 Dhanuka: This is an LSA run from that particular investigation. There were about 40 antibodies, but we made maximum use of the 384 array by analyzing everything in 8 to 12 replicates per clone. This particular run was set up in the afternoon; the concentrations of the PD-1 antigen started at one micromolar, and a three-fold serial dilution series of injections from low to high generated all the data in one evening. Here is a summary of the results in which we compared similar chip types. The kinetics data from the LSA and the Biacore K were essentially the same and produced kinetic values over a broad affinity range, as you can see here.
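The concentration series described here is straightforward to reproduce; a quick sketch, assuming a seven-point series (the exact number of points was not stated in the talk):

```python
def serial_dilution(top_molar, factor, n_points):
    """Return an n-point serial dilution series (molar), sorted low to
    high so it can be injected in the low-to-high order described above."""
    return sorted(top_molar / factor**i for i in range(n_points))

# 1 uM top concentration with three-fold dilutions, as in the run
# described; the 7-point count is an illustrative assumption.
concs = serial_dilution(1e-6, 3, 7)
```

Injecting low to high, as mentioned above, keeps any carry-over from one injection small relative to the next concentration.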

0:44:29.5 Dhanuka: However, the LSA was about 50 times faster and used about 1% of the amount of sample. Also note the error bars in the LSA direction, which reflect the multiple replicates we generated for each data point, whereas the Biacore only generated a single data series per clone. In that same investigation, we also compared chip types, and we found some very interesting differences. In this comparison, the X-axis shows the SC-30M chip type, and we are comparing the on-rate, off-rate, and affinity on CMDP, which is the planar dextran, and on the CMD-200M. As you can see, the SC-30M and CMDP give very similar kinetic results, but the CMD-200M, with its thicker, cross-linked hydrogel surface, showed a systematic decrease in measured on-rate. The off-rates were much more similar, and the affinities reflect the on-rate differences, as you can see here. This was a really interesting observation, because we now know that if we are interested in precise kinetic values, especially for large antigens, then we really need to focus on these less dense chip surfaces to get a more reliable kinetic analysis.

0:46:03.3 Dhanuka: By the way, this is not the first time this effect has been reported: a group at Pfizer in South San Francisco, California did a similar study on the ProteOn looking at different chip types, and they saw a very similar result, published back in 2011. The LSA kinetics analysis software is built to handle thousands of interactions, as you can imagine. It has automated batch processing tools that take you from raw data to processed data in just a few clicks, and it has several automated data QC flags to prevent common errors. For example, if you are familiar with the SPR literature, you may have come across ambiguous results, either because the software spat out a result and people simply reported it, or because of some misunderstanding of the limitations of the software that people should have known about before reporting the analysis.

0:47:09.6 Dhanuka: The software also facilitates multiplexed analysis. For example, in a capture kinetics application, if you have antibodies from multiple projects, or multiple antigens from the same project, that you want to analyze, it allows you to easily segregate each data set based on the categorization of interest. It has user-friendly visualization tools, and I'll show you a couple of examples in the next few slides, and the export feature enables transfer of the entire analysis to an Excel file with multiple tabs. This is great for notebooks or for sharing data, because you get all of the figures and tables generated in the analysis software in just one document. Here is an example that highlights many of the features in the kinetics analysis software. The clones highlighted in gray are below a threshold value for binding signal. And if the residuals indicate poor fits, the software flags those in yellow, suggesting that you should assess whether you believe the data or whether you should re-investigate the characterization with a different set of assays.

0:48:45.6 Dhanuka: It also flags affinities that are tighter than typical thresholds set by the software. For example, if the off-rate is below 10 to the minus 5 per second, the software will flag those affinities so that you immediately know they need to be re-investigated in order to confirm the tight affinity. On the other hand, if the software believes the estimated KD value is greater than the highest concentration used in the assay, it will flag those clones as well, so you will know that you may need to broaden the concentration range in order to get reliable kinetics for those weak-affinity binders. Another great visualization tool in the software is the automatically generated iso-affinity plot, which is an excellent way of viewing the kinetic diversity of your panel. We have the off-rate on the X-axis and the on-rate on the Y-axis, and the diagonal lines are what we call iso-affinities, or lines of equal affinity.
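The threshold-based flags just described can be sketched as simple checks on the fitted rate constants; the threshold value, the example clone, and the flag wording below are illustrative assumptions, not the software's actual implementation:

```python
def qc_flags(ka, kd, top_conc, kd_floor=1e-5):
    """Return (KD, flags) for a fitted 1:1 interaction.

    ka, kd   - fitted on-rate (M^-1 s^-1) and off-rate (s^-1)
    top_conc - highest analyte concentration injected (M)
    kd_floor - off-rates below this are too slow to resolve in a
               typical dissociation window and need confirmation
               (1e-5 s^-1 is the example threshold from the talk).
    """
    KD = kd / ka  # equilibrium dissociation constant (M)
    flags = []
    if kd < kd_floor:
        flags.append("off-rate below resolution; confirm tight affinity")
    if KD > top_conc:
        flags.append("KD above top concentration; broaden the titration range")
    return KD, flags

# A hypothetical weak binder: ka = 1e5 M^-1 s^-1, kd = 3e-2 s^-1,
# titrated with a top concentration of 100 nM -> KD = 300 nM gets flagged.
KD, flags = qc_flags(1e5, 3e-2, 100e-9)
```

The same KD = kd/ka relation is what generates the diagonal iso-affinity lines on the plot described above: every point on a diagonal shares one KD while ka and kd vary together.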

0:50:11.5 Dhanuka: The diagonal in the middle-left is 1 times 10 to the minus 9, or 1 nanomolar; then we have 100 picomolar going up to the left-hand side and 10 picomolar up toward the corner, so the high-affinity clones are to the upper left, whereas the lower-affinity clones are to the right. In summary, the LSA provides a unique approach to high throughput screening and characterization by combining these concepts: instead of doing low-resolution kinetic analysis on your small-scale crude samples, you can now get full kinetic profiles and characterization for a large number of clones with a very limited amount of material. Also, because the LSA uses SPR as its underlying technology, the same rules of good practice apply, and you need a lot of attention to detail in the way you set up these assays and the way you report the results in order to get good, valuable data from the LSA. Thank you all for your attention. I hope you found this seminar useful, and I hope you'll stay safe.