Published on the Value Lab 12/3/21
This article is about GSI Technology (NASDAQ: GSIT), but it is also just as much about what Google (NASDAQ: GOOG) is doing with its search algorithms. I am a data scientist and not a computer scientist, so not an expert on hardware, but as I understand it, Google's move from BERT to multimodal methods in search would make the Gemini APU relevant if GSI could commercialize it soon enough. With the Gemini APU APIs being built out now, hopefully they'll be in time to catch this wind. GSIT is currently valued at only about $100 million in market cap, but if it became a major hardware provider to Google, there is probably at least a 5x opportunity here, if not 50x considering the breadth of applications and the growth in those applications. With the Gemini APU well suited to search as well as the recommendation applications discussed in previous articles, we think it could be a revolutionary play for yet another reason.
What is Google Doing That GSIT Could Help With?
Google is moving from using BERT to using a multi-modal system for understanding the information contained in a query.
What is BERT though?
My explanation for the layman goes something like this. If you represented each word by a fixed, unique array of numbers, that representation would not depend on the word's context. Of course, context matters a lot: think 'man bites dog' versus 'dog bites man'. BERT trains itself by guessing which words are missing from sentences where words have been randomly masked out. It can then understand future queries because it has learned to pay attention to every element of a sentence with reference to every other element. Each pass through the model requires a series of matrix operations, and many of these operations for every word in a sentence, because it is weighing the relationships between all pairs of words. On top of the word representations themselves being arrays of numbers, this results in a great many matrix multiplications, happening across many attention heads in parallel and through many layers. So the operations are complex on their own, and there are layers upon layers of them, which means a lot of reaching into memory to fetch data before computing. All this results in a model that can generate language so convincingly that you would not know it is an AI, meaning it also has an almost human understanding of the intention behind a query. This is what makes Google a good search engine: you don't need to phrase a query perfectly, because the engine infers your meaning to give you the results you want.
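To make the matrix-heavy nature of this concrete, here is a minimal sketch of the scaled dot-product attention at the heart of models like BERT. This is an illustration only, with random numbers standing in for learned weights, not Google's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8          # a toy 4-word sentence, 8 numbers per word

X = rng.normal(size=(seq_len, d_model))   # word representations

# Projection matrices (learned in a real model, random here for illustration)
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
Q, K, V = X @ Wq, X @ Wk, X @ Wv

# Every word scores its relationship to every other word: a 4x4 matrix
scores = Q @ K.T / np.sqrt(d_model)

# Softmax turns the scores into attention weights that sum to 1 per word
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)

# Each word's output is a weighted mix of all the words' values
out = weights @ V
print(out.shape)   # (4, 8)
```

Even this toy version runs several matrix multiplications for a four-word sentence; a real model repeats the whole thing per attention head and per layer, which is where all the memory traffic comes from.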
Now imagine that, in addition to all these operations, you add a whole other dimension. Imagine combining the meaning of words with the appearance of images. Much like words, images have numerical representations, and theirs are three-dimensional: two dimensions for space, and a third for the color channels (RGB). The size of the image determines the number of pixels, each with an intensity value for red, green and blue. That is a lot of numbers and potentially very big matrices. MUM, the multimodal model Google now wants to use instead of BERT, combines an understanding of the meaning of a sentence with the pixel intensities in each channel that make up the appearance of an image. So on top of all those matrix operations over words, we massively increase the complexity of learning by combining them with image matrices as well, which contain a lot of values! In the future, even audio could be given numerical representations and folded into these multimodal systems, combining data of various types to enhance Google's understanding of queries.
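To give a sense of scale, a single ordinary image already carries millions of numbers. A sketch, using a hypothetical 1080p frame:

```python
import numpy as np

# A 1080p RGB image: height x width x 3 color channels (red, green, blue)
height, width, channels = 1080, 1920, 3
img = np.zeros((height, width, channels), dtype=np.uint8)

print(img.shape)  # (1080, 1920, 3)
print(img.size)   # 6220800 intensity values in just one image
```

Over six million values per image, before any of the attention machinery described above even touches it.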
GPUs and even TPUs are built on the Von Neumann architecture, which has an inherent limitation: data must be repeatedly fetched from memory before any calculation can be performed, creating a bottleneck. The patent GSIT acquired from MikaMonu addresses this problem by making in-place operations within memory possible. Because Google's models already perform so many computations against data held in memory, and because multimodal systems will likely increase the complexity of those calculations more or less exponentially, the Gemini APU, which does not rely on the Von Neumann architecture, could change the game. The company remains self-financing, and we hope that it can start delivering a commercial product soon. As long as its patent stays protected and it creates a product that can be commercially shipped, our understanding is that the Gemini will be useful for some of the most valuable applications in the world.
Of course, we don't know what will happen, and we are not experts in hardware. But we know enough to understand what this APU might be able to do for these companies. With massively faster and less energy-intensive calculations, Google, Netflix (NASDAQ: NFLX) and Amazon (NASDAQ: AMZN) should be lining up at GSI's doorstep once the Gemini is ready to be sold.
It's hard to say exactly how big the opportunity is. The Gemini should be orders of magnitude faster at certain applications, and consequently likely to reduce energy usage by 60-70%. Supposing the obtainable market can be built bottom-up starting from Netflix's cloud computing costs, consider their AWS expense of around $30 million per month, or $360 million per year. Netflix is about 50% of the streaming market, so for the needs of streaming companies, which are substantially recommendation engines, the market for the Gemini might be around $720 million in streaming alone. Of course, streaming recommendation engines are just a small subset of the recommendation engines running on servers across all of ecommerce, and then there's also search, so looking at streaming alone understates things. But taking that $720 million as a very conservative figure, and supposing a 10% operating margin, which is what GSIT had in its legacy SRAM business, you get $72 million of EBIT. GSI Technology is a $100 million company, so that is a 1.4x multiple on this EBIT forecast. It's very low: semiconductor companies easily trade at a 15x EBIT multiple, which would value GSIT at about $1 billion, suggesting a potential 10x opportunity. It could end up much higher than this, considering we only counted server costs with respect to streaming.
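The back-of-envelope math can be laid out explicitly. All inputs below are this article's own estimates and assumptions, not reported figures:

```python
netflix_aws_monthly = 30e6                       # assumed ~$30M/month AWS spend
netflix_aws_annual = netflix_aws_monthly * 12    # ~$360M/year
netflix_share = 0.50                             # Netflix ~half of streaming
streaming_market = netflix_aws_annual / netflix_share   # ~$720M obtainable

op_margin = 0.10                                 # legacy SRAM operating margin
ebit = streaming_market * op_margin              # ~$72M

market_cap = 100e6                               # GSIT market cap today
current_multiple = market_cap / ebit             # ~1.4x EBIT
sector_multiple = 15                             # typical semiconductor EBIT multiple
implied_value = ebit * sector_multiple           # ~$1.08B
upside = implied_value / market_cap              # ~10.8x

print(f"EBIT ${ebit/1e6:.0f}M, implied value ${implied_value/1e9:.2f}B, upside {upside:.1f}x")
```

Changing any single input, say, Netflix's share of streaming or the operating margin, moves the whole chain proportionally, which is why we treat the 10x figure as an order-of-magnitude guide rather than a target.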
In any case, while GSIT gets the Gemini ready for production in foundries and for shipping to the servers of hopefully marquee customers like Amazon and Google, its legacy businesses are still keeping it somewhat above water. Without R&D, its operating income would be about $3 million, but with R&D spending above $20 million at this point while it develops APIs and libraries for the Gemini, the company is in cash-burn territory. So equity raises are in the cards, with additional paid-in capital growing by 20% since last year. Dilution is certainly non-negligible here, which is a risk. But the APIs are being worked on as we speak, and the company hopes to get its product out to first customers in Q1 2022. After that, it shouldn't be more than a couple of years of dilution before the product is fully launched, hopefully as semiconductor shortages ease. With 10x, or perhaps 5x after two years of dilution, being the highly conservative estimate of upside, and with markets available beyond streaming like search and Google's more complex multimodal algorithms, the opportunity remains very compelling as a small, speculative exposure.