Accelex is THE tool helping big LPs get on top of all the inbound reports and statements. It extracts the data and transforms it for any downstream system. A lifesaver. Thanks to Michael Aldridge and Boris Lavrov for the friendly test ride!
Watch this video for a hidden gem featuring Batman and the Joker, for real: YouTube.
Transcript by AI
Hello and welcome to StackGenius, the podcast for data-driven investment professionals. My name is Silvan, and today I'm joined by Boris and Michael. I think, Boris, you're in London and Michael is in Connecticut, is that right?

Correct. That's right.

That's fabulous. So thanks for joining me today. I was trying to summarize what you guys do in my head. Effectively, you help LPs to manage all the disjointed data reports and investor portals that are out there. But I'm probably oversimplifying a lot. How would you describe what you guys do?

We definitely do that, Silvan. And by the way, thank you for inviting us on today; it's a real pleasure to be having this conversation with you. What we've built at Accelex is really an end-to-end solution that helps the investor community in private markets to manage complex and highly unstructured data workflows: from collecting documents from various places, to accessing deep and complex data inside those documents, and then being able to visualize that in terms of understanding portfolio insights from a performance, exposure, and risk point of view. So that's really the genesis of the Accelex business and how we're helping our clients today.

And your clients would be big funds or also small funds? How can I imagine that?

Exactly. LPs is a very general term. We think of them as asset owners and asset allocators. In regular parlance, they are pension funds, insurance companies, sovereign wealth funds, funds of funds, secondaries players, all of whom are deploying capital into fund-based investments through GP relationships. They typically have this problem at scale. For an industry that's clearly been growing significantly year on year for some time now, there's a lot of capital deployed, which means a lot of relationships and ultimately a lot of documents to deal with to really get detailed insights into your overall portfolio. Some of our bigger clients are in not just hundreds but thousands of funds. So they have this problem at scale, and the solution we've been developing over the last few years is built to address those types of clients.

That's super helpful. And we will look at the tool in a second together. But is there maybe a critical-mass threshold? I'm imagining that if you're invested in only 10 funds, it probably doesn't make sense to use your solution. What would you think is the lower threshold?

If I had to pull a number out of the air, 50 would be a good starting point. We do have a couple of clients at a smaller level than that, and some that are far, far larger in terms of the size of their investment portfolio. But right around 50 funds is when our technology can become very, very powerful. Below that, it's certainly something people can and do address using manual resources.

That makes sense. And so what is the input to your system? Is it emails, or is it the LP portals? How do I have to imagine that?

Boris will show you in a moment how that works, but essentially there are a number of ways to automate the harvesting of documents. There are some major industry data vaults that are well known to everybody in this industry, that we have commercial relationships with and can access on a very regular basis, downloading the documents and understanding what type they are, what manager they relate to, the category of document, the period, the date, and so on.
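If you like to think in code, here is a minimal sketch of what a harvest-and-tag step like this could look like. The field names, keyword rules, and file name below are illustrative assumptions for the sketch, not Accelex's actual pipeline.

```python
# Minimal sketch: tag a downloaded document with basic metadata and give it a
# naive document type before routing. Purely illustrative, not Accelex's code.
from dataclasses import dataclass
from datetime import date

@dataclass
class DocumentMeta:
    manager: str      # GP / fund manager the document relates to
    doc_type: str     # e.g. "capital_account_statement"
    period_end: date  # reporting period the document covers
    source: str       # portal API, email inbox, shared folder, ...

# Toy keyword rules; a production system would use trained classifiers.
DOC_TYPE_KEYWORDS = {
    "capital account statement": "capital_account_statement",
    "capital call": "cash_flow_notice",
    "distribution notice": "cash_flow_notice",
    "financial statements": "fund_financial_statements",
    "quarterly report": "performance_report",
}

def classify(filename: str, first_page_text: str) -> str:
    """Naive keyword-based document typing."""
    haystack = f"{filename} {first_page_text}".lower()
    for keyword, doc_type in DOC_TYPE_KEYWORDS.items():
        if keyword in haystack:
            return doc_type
    return "other"  # still harvested, tagged and archived, just not extracted

# Hypothetical usage with made-up values.
meta = DocumentMeta(
    manager="Wayne Enterprise Capital Partners",
    doc_type=classify("WECP_Q1_capital_account_statement.pdf", ""),
    period_end=date(2024, 3, 31),
    source="portal_api",
)
```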
So those are significant levels of automation we're delivering through relationships with those portals, but also email and internal folder systems where clients may be doing some of this themselves. So there are a number of technological capabilities to automate a significant chunk of the document acquisition process. But as anyone in the private markets will tell you, none of this stuff is 100% automatable. There's always some human in the loop, whether it's with the document acquisition or the data extraction, and our modular approach does allow for that. Whether we're supporting the document acquisition process or our clients are doing it, there's a significant chunk of it which is automatable, and then a tail of specific manager portals or other places that require multi-factor authentication and are very hard to automate.

That's helpful. Maybe to make it more visual, let's look at it, and then it becomes clear to the audience as well. So I am putting that on right now. And it's not working. See, technology is not working. So now it is. Now you know we're live.

Okay, good. Very good. So you're live, Boris. Brilliant.

Okay, excellent. Well, hopefully our technology works today. Let's see. I'll try to quickly show you the overall journey for a typical client of ours. So imagine you're a large LP. The first thing we do, which we already touched upon, is that we actually help our clients acquire, aggregate, categorize, and kind of tag and sort their documentation. There's famously no data in private markets, so typically everything between a limited partner and a general partner gets exchanged in the form of these documents, and some of our larger clients receive tens of thousands of documents on a quarterly basis. So the first thing we do is aggregate them: we actually go and download them, in many cases using APIs. For example, for something like SunGard or Intralinks we will use APIs to download all of that documentation in a secure and reliable fashion, and first of all we will simply store it. But what we will also start to do, as you can see in this document feed, is categorize this documentation using various metadata. We can, of course, lift that out of portals, but we can also do it ourselves using our own classification and data extraction technology. So we will try to find what's the fund, what's the document type, what's the period of the report, allow users to review all of this, and allow people to say, okay, we'll archive this document or actually move it on to data extraction. That's the first piece of operational burden that we really take away from our users.

We automatically categorize around 25 or more different types of documents. So even if we are not extracting the data from them automatically, we can still harvest those documents, wrap them with metadata, tag them, and store them away, whether they're tax forms or legal agreements. So there's a very wide range of document collection and management we can do on behalf of our clients.

Exactly. So anything from ESG reporting, financial statements, general information, legal documents, all of those kinds of things we can categorize. But the data that we are then interested in, and most of our clients are interested in, is really performance data.
There are four main types of documents that we then route to our extraction engines, which contain that performance data: performance reports, cash flow notices, fund financial statements, and capital account statements. All of that then feeds our clients' and our own analytical environment, and in this environment we also allow users to go ahead and validate the data that has been extracted from those documents.

So what I'll show you here is an example of a performance report for Wayne Enterprise Capital Partners. The first thing you'll see, on the left-hand side, is a data grid that represents, in this validation step, the assets in that document.

So I take it that Wayne Enterprises has invested in Joker Incorporated, right?

Yes, these are all highly real assets, of course. And you can see here that what I've done is click on this locate button. You can do that for every single data point that we extract from every single document, and it will bring us to the particular place in the document where we're locating this information. Of course, I can also inspect the rest of the document, and every time I see a green box, it will bring me to and highlight the actual data point on the left-hand side of the screen as well. So it works both ways, which makes it very easy to navigate and validate any of this information.

Here we don't have any exceptions. As you can see, we have this kind of exception-based ribbon where maybe we have a new company which we don't know about because the fund has made some investments, or maybe we found some duplicates which we need to resolve. In this case everything's clean, so I'll proceed to the next step. And the next step is, of course, an interesting one: okay, now show me the data that we've extracted for this fund and for these companies. At the top you can see a ribbon of the different data types that we support. This is governed by the library of metrics and data points that we know, understand, and are able to standardize across the different GPs, and we do that automatically using our data extraction and standardization data science stack.

Here we're looking at asset investment metrics, which really come from the statement of investments; that's where these metrics are typically reported. And this is exactly the page we're picking those data points up from. A few points to note here: we are not just pulling this data off a page. As you can see, we have an unrealized value for this company of 2.3, but of course it's stated in millions, it's stated in US dollars, and it's attached to a particular asset that we're tracking for this fund. All of that rich context is captured by our extraction engine, and that's what makes our solution so powerful: we can bring all of this intelligence and understanding of the investment network together. You can see that we're extracting all of these different data points from this table. I can highlight all of these as metrics, visually check that everything seems to be captured, spot-check a couple of them, and move on. Of course, there are a number of other data points that we're capturing.
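To make the idea of a context-rich data point concrete, here is a minimal sketch of the kind of record an extraction step might emit, including the source location a locate-style audit trail needs. All field names and values are hypothetical, not Accelex's data model.

```python
# Minimal sketch of an extracted metric carrying its full context: unit scale,
# currency, owning fund and asset, and where on the page it was found.
# Illustrative only; the example values are made up for the demo names above.
from dataclasses import dataclass

@dataclass
class ExtractedMetric:
    fund: str
    asset: str
    metric: str          # standardized name, e.g. "unrealized_value"
    raw_value: float     # the number as printed on the page
    scale: float         # e.g. 1e6 when the report is stated in millions
    currency: str
    period: str          # reporting period label, e.g. "2024-Q1"
    page: int            # source page, used by the locate/audit workflow
    bbox: tuple[float, float, float, float]  # position on that page

    @property
    def value(self) -> float:
        """Absolute value after applying the reporting scale."""
        return self.raw_value * self.scale

point = ExtractedMetric(
    fund="Wayne Enterprise Capital Partners",
    asset="Joker Incorporated",
    metric="unrealized_value",
    raw_value=2.3, scale=1e6, currency="USD",
    period="2024-Q1", page=7, bbox=(120.0, 410.5, 210.0, 428.0),
)
print(point.value)  # 2300000.0
```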
Again, we're capturing everything in this report down to what we call asset performance metrics, which are really company financials. So if you want to start to think about valuations, and independent valuation of these assets as well, you can look at revenues, EBITDA, multiples, et cetera. If your GP reports this data, you can effectively take it out of this documentation and run your own proprietary valuations as well. Fund-level metrics and asset static metrics are also things we support today.

And of course, this is an exception-based workflow. So if there are specific fields that we're not extracting, you can go and review those as well. Maybe those are not stated in the document, or maybe there are fields where you see a discrepancy between the data you already have in the system and a new value that we're suggesting; maybe the industry of this company has changed. I can always review all of these changes and either confirm or ignore them. In this case, I can confirm this and move on with my workflow. I know, for example, that these other companies are not reporting detailed information in this report, so I can just move on, finish this document, extract it, and send that data downstream or submit it for approval if I have some sort of maker-checker, four-eyes workflow. This is particularly relevant for processes like cash flows and payments, where obviously this kind of check is very important to our clients.

I have to assume that the data then goes downstream. Do you have pre-built integrations, or is it a documented API? How does that work?

Yeah, that's a great question. It really depends. We have built integrations with a number of systems; typically we integrate with systems like Investran and SimCorp. And there's a variety of ways in which we can get data out of the system. The simplest one is just to take this data and generate a quick report or a custom report that feeds an Excel loader, which is what a lot of our clients still use today. To be honest, a lot of these systems don't necessarily have good APIs for loading data into them. So there's a variety of options to get data out. And of course, one way to consume that data is not to get it out of the system at all: you can use our own internal portfolio analytics to visualize that data and start to work with that information natively here in Accelex.

Understood. That's what you're opening right now, right?

Exactly. So let me maybe show you quickly how that works as well.

While we do that, I'm also trying to imagine: I now understand you are a super-specialized ETL sort of middleware that helps people sift through all the data, and I also understand what your sweet spot is. So how does pricing work generally? You don't have to say a number, but do people pay by seat, or throughput, or how does it work?

Yeah, sure. It's fairly simple. What we usually price on are really two things: the number of funds that our clients are invested in and the number of commitments that they have in their portfolio.
Depending on the use case, of course, they will be receiving thousands and thousands of capital account statements, cash flow notices, and performance reports, but the volume of those is driven by those two factors: number of commitments and number of funds.

That's helpful. And I sort of interrupted you. Let's spend 30 seconds also on the demo of the analytics part. That would be cool.

Great. Sure. Essentially what you see here is the home screen of analytics, and one of the easiest ways to start to look at what we offer is just to do a search. For example, we looked at Wayne Enterprises previously, so if I search for Wayne Enterprise, I'll get a range of answers on what kinds of entities I have in my portfolio connected to Wayne Enterprises. I'm going to pick one of them and say, okay, I want to look at Wayne Enterprises Capital Five. What I get is really the position of that entity in my investment network. I have an investment entity, maybe my pension fund or fund of funds, that's invested into Wayne Enterprise 5. I can navigate to that investment entity, or I can navigate to all the assets that Wayne Enterprise is invested in on the right-hand side, and I can also see my exposure to those assets and some very high-level performance indicators for that entity.

I can also click on this locate button again, and it will take me to the tear sheet page for this fund and tell me, okay, this is the performance of the fund and this is how it has changed in the period: how my IRR or TVPI or DPI has moved. You start to see summaries and time series around the performance evolution and the J-curve of that fund. So all of these analytics, and the data we're extracting from those documents, start to come into play and make sense in the context of your portfolio. I can also go and look at the underlyings for that fund and see, okay, these are my underlyings, the companies contained in the fund itself. If I go to, for example, a time series where we've actually extracted that data, I can start to see where that data is coming from. And finally, one thing I can always do is go back to the source of that data.

It's a really powerful audit capability that our clients love: being able to say, hey, this revenue number looks weird, and all of a sudden you have not just the document, but the page and the exact location. So it's a really powerful set of analytics, as well as full data lineage from the original unstructured source all the way through to your portfolio reporting and exposure, which is great.

Yeah, I can totally see that. And maybe for my clarity again: the associates or analysts of the LPs, would they be the users working with the interface, or is it a managed service that you provide? How does it work?

It can be both. This is probably a good point to touch on our go-to-market strategy, because what we're doing is engaging directly with large LPs that are licensing our platform into their organization for a combination of operational leverage, i.e. making their middle and back office teams more efficient and accessing better data at higher quality more regularly, and then, of course, exposing that data from an investment performance point of view through to their front office investment colleagues.
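For readers who want the fund metrics mentioned above spelled out, below is a minimal sketch of DPI, TVPI, and a dated IRR computed from a toy cash-flow series. The numbers and the simple bisection solver are illustrative only, not Accelex's analytics.

```python
# Minimal sketch of standard fund-level metrics from a commitment's cash flows
# plus the latest reported NAV. All figures below are made up for illustration.
from datetime import date

# (date, amount): negative = capital call paid in, positive = distribution
cash_flows = [
    (date(2021, 3, 31), -4_000_000.0),
    (date(2022, 6, 30), -3_000_000.0),
    (date(2023, 9, 30),  1_500_000.0),
]
nav = 8_200_000.0  # latest reported net asset value

paid_in = -sum(a for _, a in cash_flows if a < 0)
distributed = sum(a for _, a in cash_flows if a > 0)

dpi = distributed / paid_in            # cash returned per dollar paid in
tvpi = (distributed + nav) / paid_in   # total value per dollar paid in

def xirr(flows, tol=1e-8, max_iter=200):
    """Annualized IRR of dated cash flows, found by simple bisection."""
    t0 = flows[0][0]
    def npv(rate):
        return sum(a / (1 + rate) ** ((d - t0).days / 365.25) for d, a in flows)
    lo, hi = -0.99, 10.0  # assumes the IRR lies inside this bracket
    for _ in range(max_iter):
        mid = (lo + hi) / 2
        if npv(lo) * npv(mid) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

# Treat the current NAV as a terminal value to get an interim IRR.
irr = xirr(cash_flows + [(date(2024, 3, 31), nav)])
print(f"DPI={dpi:.2f}x TVPI={tvpi:.2f}x IRR={irr:.1%}")
```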
We're fortunate in that we're delivering operational efficiency and, hopefully, better investment outcomes through the same set of technology. However, the go-to-market model does not only rely on us securing clients directly, like big pension funds, funds of funds, and insurance companies. We also work with service providers of different shapes and sizes: fund administrators, custodians, large fintechs, BPO firms that are effectively leveraging our components as part of a potentially broader go-to-market offering. In that case, very often the partner is doing the work you've just seen Boris walk through on behalf of the LP client. So we go direct, and indirectly through our alliance partners.

Yeah, I can definitely see how this boosts the BPO margin for them. That makes a lot of sense. Really cool. So we're at the end of our planned time, and I thank you a lot for showing us Accelex. Really cool product. I like it, and I can see a lot of value in it. So the only thing left to do is probably make fun of the London rain, I guess. I had a colleague once, when I worked in Richmond with eBay, who said you can only tell the seasons by the temperature of the rain in London. So what is the temperature of the rain today, is the question.

Actually, today that's highly inappropriate, because the sun is shining for some reason.

What are you doing in the office then? There you go. The sun is shining in Connecticut as well?

It's raining in Connecticut. As you can clearly work out, as a business we've grown up as a COVID baby, so we have people everywhere. Boris is in London, I'm in Connecticut, we have a team in Toronto, another one in Paris, one in Tunisia, one in Serbia. We've definitely coalesced around a few key locations, London being our HQ, but we are fortunate to have colleagues all over the planet, along with our clients. So it's very helpful.

And what's your favorite restaurant in Connecticut? What do you eat in Connecticut?

Great question. There are some phenomenal restaurants in the town where I live. One of my favorites is a Mexican place called Don Memo. And actually, I had a couple of Boris's colleagues from London, my colleagues too, over last week, and I took them there. I think they've gone away thinking that Westport, Connecticut is the best place to eat Mexican food. Absolutely outstanding.

So an Englishman living in Connecticut goes to a Mexican restaurant. That totally makes sense. It's global.

It's global. Fabulous. This has been fun. So thanks for the enjoyable conversation and for showing us the product, and Godspeed to you guys.

Thank you. Appreciate the time. Take care. Bye now.

Bye now.
StackGenius’ founder Silvan worked for Silicon Valley corporates for 10 years. Afterwards he spent another 10 years founding machine learning companies in Europe. When his last company was sold in an asset deal in May 2024, he thought about building a “data-native” micro VC, but realized that he didn’t know enough about investing. He did, however, know enough about building coherent tech stacks and applying machine learning. This is how StackGenius came to life: a hyper-specialized system integrator that helps investment teams of all shapes and sizes to build alpha with technology.