Posted by Noah Ditto
Data-rich technologies are a critical part of drug discovery, particularly platforms utilizing AI/ML to drive commercial success. High-Throughput Surface Plasmon Resonance (HT-SPR) is an indispensable tool for these strategies given the enormous amounts of data needed to make these processes robust. With these quantities of data comes a requirement to handle it in a manner that can be meaningfully interpreted in real time and remain shareable amongst cross-functional teams. This talk will focus on the use of PerkinElmer Informatics’ Signals™ VitroVivo software and its ability to integrate and display results from HT-SPR experiments using the Carterra LSA. Highlighted will be ways in which Signals makes data integration more streamlined in a customizable way and ultimately facilitates greater data sharing to drive drug discovery and development.
0:00:02.2 Adam Loukeh: Welcome to our talk Exploring Carterra High-Throughput SPR Data Using PerkinElmer Informatics Signals VitroVivo. My name is Adam Loukeh, I'm the product manager here at Carterra. I'll be giving you a brief introduction here before turning it over to our speakers. As many of you know, data-rich technologies are a critical part of the drug discovery process, particularly platforms using artificial intelligence and machine learning. High-throughput surface plasmon resonance is clearly an indispensable tool given the enormous amounts of data needed to make these processes robust. But with the vast quantities of data comes a real need for meaningful interpretation in real time and the ability to share amongst cross-functional teams. This talk is going to focus on the use of PerkinElmer's Signals VitroVivo software and its ability to integrate and display results from high-throughput SPR experiments using the Carterra LSA. Our talk today will be given by Noah T. Ditto, who is our technical product manager here at Carterra. Prior to joining Carterra, Noah supported drug discovery and early clinical-stage development for nearly a decade at Bristol Myers Squibb in Princeton, New Jersey, with a focus on biophysical characterization of protein and peptide-based biomarkers, drug targets, as well as small and large molecule therapeutics.
0:01:31.1 AL: Noah has an MS from Pennsylvania State University, where he developed chromatography-based functional techniques to isolate disease-specific serum biomarkers in dengue infection, as well as an MBA from West Chester University studying business analytics. Our second speaker is going to be Dr. Christoph Gänzler, who gained his doctorate in molecular biology from the German Cancer Research Center in Heidelberg, working with human papillomavirus vaccines. Previously, he held positions at LION Bioscience as a scientific bioinformatics consultant before joining TIBCO Spotfire, where he led the Central European sales consultants team. Later, Christoph joined Zephyr Health, then came to PerkinElmer as an informatics manager for scientific analytics, and subsequently was responsible for strategic marketing and business development of Signals VitroVivo and Signals Image Artist. Today, he is the product marketing manager for biology as well. I'm going to turn it over now to Noah to give us a start.
0:02:53.0 Noah T. Ditto: Thank you for that introduction, Adam. I'll start things off by introducing the flagship technology of Carterra, which is the LSA. The LSA is a high-throughput surface plasmon resonance device, and really what makes it unique among a field of other similar biosensing technologies is the microfluidics that make it truly high-throughput. So, if we look on the left side of this slide, we can see where that uniqueness comes from. It's in part this multi-channel mode, where we can address the sensor chip surface with 96 microfluidic channels simultaneously, and we can do up to four rounds of that association step to create arrays of 96 up to 384 unique species on the surface. Then we switch our fluidic modes — with the chip surface remaining in the same location, but the fluidic device swapping out — to our single-channel mode, where we can run injections across that array rapidly. Really, what this gives us is arrays of 384 unique species on the surface that we can test quickly with single injections in single-channel mode, allowing a tremendous amount of throughput really unseen in the biosensing space, and really transforming what this instrument does and how SPR is used in early drug discovery.
0:04:10.2 ND: So, going down into the details here — the two microfluidic modes of the LSA. On the left is shown the multi-channel mode, where we're drawing up 96 samples and associating them with the surface in our bi-directional flow paradigm, and this can be done up to three more times to create a 384-spot array. It is worth noting that the bi-directional flow allows you to have very long capture times, so we can enrich very low-expressing supernatants, for example. This gives you tremendous assay sensitivity, so there's no difficulty in screening at the early stages, when we may have crude materials without a lot of sample on board: we can easily enrich them on the chip surface and get a really clean binding signal. To measure that binding event, we switch to single-channel mode, shown on the right, where we dock our flow cell on the surface and a single injection is passed across that surface back and forth — bi-directionally, just like the multi-channel mode — and that allows us to have extended contact times without consuming any more material. You can see that we have really high flexibility in terms of contact time, and we don't have to concern ourselves with the volume consumed per unit of flow rate, since sample volume is a fixed quantity.
0:05:26.3 ND: The bottom of the slide indicates the key attributes: we're getting 100 times more data in 10% of the time, and in particular using only 1% of the sample requirements — all really valuable features when we're looking at early-stage screening, where we need to find that needle in a haystack, if you will. And this is just a screenshot of the LSA software. Really, the great thing about this system is that despite the huge amount of throughput and the huge number of interactions it measures, even a really complex, week-long experiment only takes about five minutes to set up in the software. There are very straightforward graphics, and it's very simple to understand how the assay is structured. In the bottom half of the slide, you can see a mock sensorgram profile showing the different stages in the cycle that we'll be executing. So it's very easy to use and makes the setup very streamlined and rapid, despite the system's enormous complexity in terms of how much data it can create. And the LSA itself, obviously, does not function alone without key software. Shown in the center here is the Navigator control software, as well as the kinetics and epitope analysis software packages that all come with the system, drive it, and enable easy execution of assays and analysis of data from those assays.
0:06:49.9 ND: And all the biosensing chips and consumables are also offered to really enable a turnkey discovery process. In terms of applications, there are three main applications that really provide value to customers. One is kinetics or affinity screening, shown on the left: typically antibodies are arrayed on the surface, and we do a screen of a single antigen injected across that array. This is really powerful because we can screen up to 1152 species in a single run, and it can be done from both crude and purified sources — so whatever stage in the process and whatever processes are upstream of the LSA, the LSA is adaptable to those. The second technique is epitope characterization, where most typically we do competitive epitope binning: we're looking for clusters of antibodies that share unique epitopes on the antigen. There are also opportunities to interrogate epitope via peptide mapping or mutant mapping, and even blockade assays. All of these are critical to understanding mechanism of action and can be used in conjunction with other datasets to better understand how the candidate antibodies relate. And the last assay is quantitation, where we would take material, most commonly from crude sources, screen it, and understand titer levels.
0:08:05.2 ND: And this often is actually run as a front end to kinetics or epitope characterization: where there is crude material, we're just trying to understand what expression levels are like and how to optimize subsequent assays. And again, this can screen 1152 supernatants, for example, in a single run. So going back to those single-run throughput capabilities, you can see here that everything is in the hundreds, if not thousands, in terms of throughput. Kinetic affinity, epitope binning and mapping, quantitation, even the blocking and neutralization assays — all, at a minimum, can screen up to 384 samples at a time and often go much higher than that in a single run. So really we're seeing throughput levels that are extremely well aligned with early discovery and really just unparalleled — opening up doors in terms of what can be done with biosensors in early discovery. This is just an example of some of the data we're getting in a kinetic run: 384 unique affinities in a single run, only 7 micrograms of antigen consumed for this entire dataset, and a full, detailed titration of eight points measured.
0:09:16.2 ND: What you can also see in here — it may be a little hard depending on the size of your monitor — is that we have replicates actually built into this. And that's a good point to make: while many researchers may have 384 unique clones they want to screen, oftentimes you may want to screen a smaller number of clones but test under different conditions. In this case, we can see that when we vary the concentration of the material — antibody, in this instance — that we capture on the surface, you can get different binding responses. That's important for kinetic studies, to understand where the kinetic sweet spot is for measuring it, and that can all be done in a single run by varying the concentration captured onto the surface. Additionally, there's the ability to build in replicates as well. So again, if there aren't 384 unique clones but maybe a subset of that, you can build in replicates to get confidence in the measurements you're making in the assay.
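Kinetic screens like the one described above are typically fit to a simple 1:1 Langmuir binding model (an exponential association phase followed by an exponential dissociation). As a hedged sketch — this is not Carterra's actual fitting code, and the rate constants, analyte concentration, and Rmax below are purely illustrative — a single binding cycle can be simulated like this:

```python
import numpy as np

def langmuir_1to1(t, t_assoc, conc, ka, kd, rmax):
    """Simulate a 1:1 Langmuir sensorgram in response units (RU).

    Association runs for t <= t_assoc, dissociation after that.
    ka: association rate constant (1/(M*s)), kd: dissociation rate (1/s),
    conc: analyte concentration (M), rmax: surface binding capacity (RU).
    """
    kobs = ka * conc + kd                       # observed association rate
    req = rmax * ka * conc / kobs               # equilibrium response at conc
    assoc = req * (1.0 - np.exp(-kobs * t))     # association phase
    r_end = req * (1.0 - np.exp(-kobs * t_assoc))
    dissoc = r_end * np.exp(-kd * (t - t_assoc))  # dissociation phase
    return np.where(t <= t_assoc, assoc, dissoc)

# Illustrative cycle: 300 s association, 300 s dissociation, analyte at 100 nM
t = np.linspace(0, 600, 601)
ru = langmuir_1to1(t, t_assoc=300, conc=100e-9, ka=1e5, kd=1e-3, rmax=100)
kd_equilibrium = 1e-3 / 1e5   # affinity KD = kd/ka = 10 nM
```

Varying `conc`, or the captured-antibody level that sets `rmax`, mirrors the "kinetic sweet spot" exploration mentioned above, and the affinity KD falls out as kd/ka.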
0:10:08.8 ND: Same goes for epitope binning. This is 100,000-plus interactions measured in a single experiment, needing only five micrograms of antibody. So even for small-scale expression paradigms, this is an excellent fit, and the amount of data here is tremendous for really understanding the epitope diversity in an early panel of clones. Putting the picture together, this is a proof point that came out of Eli Lilly and AbCellera working during the pandemic to identify therapeutic antibodies that could actually mitigate infection in individuals. In this particular set of experiments, all shown here in a very elegant publication, all the data on the screen was generated on the LSA, looking at affinity, epitope recognition, domain binding, and ACE2 blocking for this panel of antibodies. What's significant is that Lilly took the process and was effectively able to get to first-in-human trials within 90 days, which is unprecedented. The LSA played such a critical role in that because within only six days they were able to generate all this data on their panel of candidates and really drive the project at an unprecedented level. So it's really exciting to see that throughput does make a very meaningful impact in drug discovery — and in this case was critical to saving lives.
0:11:26.2 ND: And so, as I transition here to the informatics part of our talk, I'll mention that for LSA data we generate two file types off the system — or one file type originally, which is the .SPR file, the raw data file. Then, using our kinetics or epitope packages, we have the ability to generate processed files with varying levels of processed data. Either of these is excellent, but unfortunately they exist in a vacuum. Drug discovery projects are sophisticated; there's data coming in from orthogonal sources. There really needs to be a way to pull all the great data coming off the LSA into a package that can view it all, perform additional analyses, and meld it with different datasets across the project to drive meaningful decisions. That's where PerkinElmer and their informatics platforms come in, as Christoph will discuss. They enable LSA data to speak amongst all the other pieces of data in a discovery project and allow researchers to make the most of that data. So with that, I'll transition over to Christoph.
0:12:36.7 Dr. Christoph Gänzler: Thanks, Noah. I will just outline here the components of our Signals Research Suite. The data really comes into Signals VitroVivo, which is here on the lower right-hand side. But of course, you're planning your experiments and capturing all of the metadata around them, and that is done in Signals Notebook. And in the end, what Noah talked about is really aggregating all of this data together from all of the different sources and all of the assay types and making a data-driven scientific decision — that's in Signals Inventa. So I will start with the next slide on Signals VitroVivo. What we have here is really a quick run-through of an example with two orthogonal experiments: one is an AlphaLISA kit from PerkinElmer, and the other is HT-SPR performed by Noah. The AlphaLISA data comes from my colleague, Jen Karlström, and Noah and Jen presented this at this year's SLAS. So the focus here is the simultaneous data capture. We're doing two parallel workflows — one for analyzing the AlphaLISA data and one for analyzing the HT-SPR data — and then we combine these results and analyze them together. And in order to build these two workflows, what we have created is an application framework which sits on top of Spotfire.
0:14:36.0 DG: And the TIBCO Spotfire software allows us, with its APIs, to really automate and build workflows from these modular apps. So we have the possibility to create a workflow, share it, and let our colleagues — or yourself — run through it every time you're doing the same thing, using the same steps, the same calculations, and so on. With the next slide, we will start the workflow. I just have some screenshots here and will not take you through all of it. This is the workflow for the AlphaLISA, and this is what it looks like inside TIBCO Spotfire. A normal user just clicks on the workflow and loads the information from the instrument — in this case, our plate reader for the AlphaLISA — and it is joined with the metadata. I won't go through these details, but what we really do is the first calculation: in this case, we're normalizing the data. This already takes all of the data that comes out of the raw files together with the metadata, like the plate maps — where the replicates are, where the positive and negative controls are, and so on.
0:16:12.2 DG: So all of this information is now used to normalize the data. In this case, it's a percent of negative controls, but we have already built in 20 different normalizations, and you can add your own. So whatever you want to calculate here — a Z prime, for example — all of these calculations are possible inside this app. And in the end, what we would like to see in this assay is an IC50. This is also calculated by another app in our workflow, called the Calculation Explorer, which really does the stats, with over 70 built-in regression curves or curve fits. And again, you can add your own if you want to. Every time you run this workflow, it will use the exact same curve fit, so over the course of many experiments you know you're using the same way to load the data, the same way to normalize the data, and the same way to do curve fits. This is all really good for reproducibility in the data analysis, not only in the wet lab. And this was the one assay that we have run here; the other is, of course, the HT-SPR. And just to sum it up before I show you how it works: of course, that one also ends with a curve fit and a fitted potency value.
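As a rough outside-of-Signals illustration of the two calculations Christoph describes — percent-of-negative-control normalization and a dose-response fit that yields an IC50 — here is a minimal Python sketch using a standard four-parameter logistic (4PL) model. The function names and the dose-response data are hypothetical; inside VitroVivo these are built-in apps (the ~20 normalizations and 70+ curve fits mentioned above), not this code.

```python
import numpy as np
from scipy.optimize import curve_fit

def percent_of_negative(raw, neg_controls):
    """Express each well as a percentage of the mean negative control."""
    return 100.0 * raw / np.mean(neg_controls)

def four_pl(x, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve (inhibition form)."""
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)

# Hypothetical dose-response data (concentrations in nM, true IC50 = 8 nM)
conc = np.array([0.1, 0.3, 1, 3, 10, 30, 100, 300])
resp = four_pl(conc, bottom=2, top=98, ic50=8.0, hill=1.2)
resp += np.random.default_rng(0).normal(0, 1.5, conc.size)  # simulated noise

# Fit with positivity bounds so the optimizer stays in a sensible region
params, _ = curve_fit(four_pl, conc, resp, p0=[0, 100, 10, 1],
                      bounds=(0, np.inf))
bottom_fit, top_fit, ic50_fit, hill_fit = params
```

Because the workflow pins the same normalization and the same fit every run, repeats of the assay stay directly comparable — which is the reproducibility point made above.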
0:18:13.9 DG: So the resulting data of the AlphaLISA that I just showed you and the data of the HT-SPR can actually be published into the same database, which is also part of the VitroVivo package, and the same data structure, created using the VitroVivo workflow, actually makes them comparable in the end. When we start the SPR workflow, in this case we're importing the SPR data file from Carterra, and you see immediately that this is the data that has been loaded. Then we go through several steps to actually get to the data and QC it. In the first step, you can see that you can enter a start and an end time, which means you're zeroing your data. On the left-hand side here, you also have the possibility to exclude or include curves or single data points: because Spotfire is very interactive, you can just mark these lines and then include or exclude them. After the zeroing is done, cropping is performed, again with start and end times, and again with the possibility to do QA/QC on your data. So this is, again, a repeatable workflow, with the idea of reaching an endpoint that can be compared with other assay types or with a repeat assay of the same type.
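The zeroing and cropping steps described here reduce to simple array operations on each sensorgram trace. A minimal, hypothetical Python sketch — the actual Carterra file structures and VitroVivo apps will differ — looks like this:

```python
import numpy as np

def zero_sensorgram(t, ru, t_start, t_end):
    """Subtract the mean baseline response measured between t_start and
    t_end (a window just before injection) so the curve starts near 0 RU."""
    baseline = ru[(t >= t_start) & (t <= t_end)].mean()
    return ru - baseline

def crop_sensorgram(t, ru, t_start, t_end):
    """Keep only the time window of interest (e.g. association + dissociation)."""
    mask = (t >= t_start) & (t <= t_end)
    return t[mask], ru[mask]

# Toy trace: 5 RU constant offset, then a 50 RU step at injection (t = 20 s)
t = np.linspace(0, 100, 101)
ru = 5.0 + np.where(t >= 20, 50.0, 0.0)

zeroed = zero_sensorgram(t, ru, 0, 15)        # baseline window before injection
t_c, ru_c = crop_sensorgram(t, zeroed, 10, 90)  # discard edges of the cycle
```

The include/exclude interaction in Spotfire is then just applying or dropping rows (curves or points) before these same operations run.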
0:20:17.4 DG: And, as I already mentioned several times, we have a curve fit here as the last step because we would like to have the EC50 values. These EC50 values, again, can be published alongside the AlphaLISA IC50s and then compared. So how do we compare the results of these two orthogonal assays? Well, they are in the same underlying Data Factory inside the Signals Research Suite, and the end user can now search and retrieve these datasets — these endpoints — without coding. I will show you this in a minute. This is a very easy example where we only have two assays lined up, but of course you can have many, many assays, and if you're dealing with bioinformatics or cheminformatics you can also add predicted values as endpoints, and then your SAR table, your SAR analysis, can become really big. But we're helping the end user, the scientist, to really get that data, assemble it the right way, and then analyze it.
0:21:48.8 DG: So in our example here we have the AlphaLISA and the HT-SPR results, and we are using them, of course, as orthogonal approaches, and we would like to combine them in one chart, one report. This is the so-called global search, and it is also inside TIBCO Spotfire, because in the end we would like to analyze the data, but we have to reach the data first. There is a query tool on the left-hand side that goes into the results, and it's very logical how to query them — you don't have to code anything. In this case it is already presenting the AlphaLISA and the SPR data, and of course you can search through them; there are many other layers or categories you can add to this. The interesting thing is that on the right-hand side, while assembling the query, you can already preview how many data points you will get back. This is very convenient for the scientists: they actually know what they will get. Then they click the button to download the results and receive a SAR table — and that's SAR because it could be, of course, structure-activity relationship but also sequence-activity relationship.
0:23:21.3 DG: So we are just calling it SAR. Here we have only two endpoints, as I said — the AlphaLISA and the Carterra results — and in this case four antibodies that we have tested. Of course, it is nice to have this as a table, but you can easily switch it to, for example, a scatter plot, and you see that three of the antibodies actually align very well across the assays. We have to look at the fourth, the outlier, but that's the interesting thing about visual data analysis: you can see it directly, click on it, look at the underlying data very easily, do root-cause analysis of what happened here, and eventually go back to your planning phase and so on, because everything is linked inside the system. So you can go back to the experiment in the notebook and see what other conditions there were, or whether different antibodies were used in the two tests. And since I was talking about sequences: we also used different subclasses of IgGs here to test, and just to illustrate this, the sequence itself can also be shown and aligned — in this case just by downloading the FASTA files, putting them into Spotfire, and letting Clustal Omega align them, which shows us this in context.
0:25:11.5 DG: So in this case, although the subclasses have different lengths, the assay results we have seen are not different because of those different lengths, so we are really seeing a good and stable result here and a real interaction of the different proteins. And I'm happy to hand back over to Noah for the summary.
0:25:53.9 ND: Yeah, thank you Christoph — a really great showcase there of how you can take LSA data and combine it, in this case with AlphaLISA data, very seamlessly in Signals VitroVivo, and really get meaningful interpretations from the two different technologies. So I'll just wrap things up here by hitting on some of the main points we've discussed today. Certainly the LSA generates lots and lots of data at unmatched levels, and, as you saw with the Eli Lilly example, it's changing the paradigm of drug discovery and pushing timelines to places people thought weren't even possible. The exciting part we see is that the PerkinElmer Signals VitroVivo software platform can take this data, pull it up in the context of other data, and really give investigators the big picture of everything, with the ability to drill down quickly. As programs progress so much faster, there aren't months and months of time to go through data analysis — it has to happen more rapidly and has to be able to talk across other platforms, datasets, and working teams, for example.
0:27:03.4 ND: So we feel this is really exciting: the integration of Signals VitroVivo with LSA data allows seamless data analysis, the ability, as I said, to do cross-platform comparisons, and the ability to recall that data whenever the need comes up to understand it at a later point. So we're very excited about what we've shown here, and we're really encouraged that our users of the LSA will have a new tool in Signals VitroVivo to really accelerate their research.
0:27:33.7 AL: Well, thank you to both of our speakers. At this point we want to open up the chat for questions, so please go ahead and submit any questions you have. The first question: our site does not currently have Signals VitroVivo — who should we contact to start that process? I assume that's for Christoph.
0:28:00.2 DG: Yes. You can either contact me or, of course, Carterra can forward you to us — our sales team is happy to contact you. The easiest way is probably to go to perkinelmerinformatics.com — so perkinelmerinformatics, that's one word, dot com — which is our web page. There you can easily fill out a small form and we will contact you.
0:28:32.6 AL: It looks like there's a follow-up: do I need a Windows PC to run VitroVivo analysis?
0:28:40.3 DG: Good question. Yes, Spotfire is originally a Windows tool, but it has a web component called Consumer, and all of the analysis that I have shown you can be run through a web browser. So it is not dependent on any platform — you can easily run this, and we have customers who run exclusively on Macs, for example, so this is not an issue.
0:29:10.3 AL: Okay. It looks like we have something here for Noah. Are there other applications for the LSA outside of antibody characterization?
0:29:20.7 ND: Yeah, the short answer is yes, there definitely are. Antibody discovery is just a hot area, so obviously we highlight that a lot in our sharing of the technology, but there's work being done in peptides, in aptamers, even in targeted protein degraders, aka PROTACs, as well as some very cutting-edge areas including DNA-encoded libraries. So yes, the short answer is: the LSA is a screening device for real-time binding interactions, and there's a huge breadth of different areas it can be applicable to outside of just antibody discovery.
0:30:02.5 AL: Excellent. What is the practical number of antibody affinities that can be screened per day on an LSA?
0:30:10.3 ND: Yeah. So in about a 24-hour period, under most typical assay setup conditions, you can expect to screen about 1152 affinities in a single go. That's basically just loading up plates in the instrument, and, as I said, it's a very fast setup — only a minute or two to write up the method — and then the instrument does all the rest in an automated fashion. 24 hours later, you have your 1152 affinities.
0:30:39.1 AL: Again, can I read plate reader files into VitroVivo?
0:30:44.6 DG: Oh yes. I only showed it briefly in the beginning, but yes, there is an entire plate reader file parser that, again, does not need any coding. You can use either our predefined parsers or, again, create your own, and this is doable with any kind of file, including all kinds of assay types that are not plate-based.
0:31:19.3 AL: Christoph, thank you. We have kind of a long one here. Let's see: if we collect data for kinetics from crude preparations and later in the project screen kinetics of purified monoclonal antibodies, is there a risk of the affinities not agreeing between the two approaches?
0:31:40.7 ND: Yeah, that is a good question and an obvious concern — particularly in early discovery, when selecting candidates and moving them forward, if affinities change, at least in terms of what they're measured as, that's problematic for decision-making. The short answer is no. There are several good publications out there comparing crude versus purified antibody kinetic screening, and in practice on the LSA, when we do crude screening it is purified in effect: we capture the antibody on the surface and enrich it out of the crude matrix, and all that crude matrix is then washed away, so we measure the binding of the antigen to the antibody in an effectively purified environment. So there would not really be any chance of the kinetics being perturbed by the matrix the antibody sample comes in.
0:32:34.9 AL: Let's see here, there's a couple, is it possible to combine multiple assay types into a single run and does that complicate data analysis?
0:32:46.1 ND: Yeah, for the first part of that question: yes, you can definitely combine things. There's a queuing functionality in the Navigator control software of the LSA that lets you take different methods and put them together, so you could run, say, kinetics and binning, or quantitation and kinetics — largely any combination you want — and you'll get everything done at once, which is great: you don't have to come back and run multiple small experiments. In terms of data analysis, the output is a single file, but it has subsections for analysis, so it is not difficult to perform whatever specific analysis needs to be done on the individual components of that file. It is very straightforward.
0:33:31.7 AL: Can we also analyze high-content screening data with Signals VitroVivo?
0:33:37.3 DG: I guess that's mine. Yes, you can. There is a platform that we use — it's our own Signals product, called Signals Image Artist — which directly connects to the high-content instruments and does the image analysis itself. The result is, again, a data table — for example, from a normal staining — and this is then transferred into VitroVivo the same way a Carterra file or a plate reader file would be read into VitroVivo. Then you follow a couple of steps that are unique to high-content analysis, and we have these apps as well, so it's all in one platform.
0:34:41.5 AL: Thank you. How is data collected and moved from the LSA into a location that can be accessed by VitroVivo? That's a good question.
0:34:51.6 ND: Yeah, that depends a little bit on the setup at the particular site — there are different IT rules and things like that — but at a high level, the LSA collects data and stores it locally on the instrument PC. From there it can be moved, manually or in an automated fashion, to additional locations, but it does collect directly to the instrument PC. We do know of several LSA users that have automated processes where those files, once the run is completed, are pushed up to network locations, and from there they can proceed with more sophisticated analysis like Christoph was demonstrating here, for example. So that's generally how data is collected on the LSA and then moves out of the LSA environment.
0:35:43.7 AL: Thanks. We'll pause for a moment for any more questions coming through. Any others? I think we may be done, but I want to say thank you again to our speakers and to all of our guests. Excellent questions — I really appreciate you sending those through. I guess with that we'll close up. Once again, thank you on behalf of Carterra, and we hope to see you all in the future.