
Cadence Design Systems, Inc. (NASDAQ:CDNS) Q1 2024 Earnings Call Transcript April 22, 2024

Cadence Design Systems, Inc. beats earnings expectations. Reported non-GAAP EPS is $1.17, expectations were $1.13.

Operator: Good afternoon. My name is Regina and I will be your conference operator today. At this time, I would like to welcome everyone to the Cadence First Quarter 2024 Earnings Conference Call. All lines have been placed on mute to prevent any background noise. After the speakers’ remarks, there will be a question-and-answer session. [Operator Instructions] Thank you. I will now turn the call over to Richard Gu, Vice President of Investor Relations for Cadence. Please go ahead.

Richard Gu: Thank you, operator. I'd like to welcome everyone to our First Quarter of 2024 Earnings Conference Call. I'm joined today by Anirudh Devgan, President and Chief Executive Officer, and John Wall, Senior Vice President and Chief Financial Officer. The webcast of this call and a copy of today's prepared remarks will be available on our website, cadence.com. Today's discussion will contain forward-looking statements, including our outlook on future business and operating results. Due to risks and uncertainties, actual results may differ materially from those projected or implied in today's discussion. For information on factors that could cause actual results to differ, please refer to our SEC filings, including our most recent Forms 10-K and 10-Q, CFO Commentary, and today's earnings release.

All forward-looking statements during this call are based on estimates and information available to us as of today, and we disclaim any obligation to update them. In addition, we'll present certain non-GAAP measures, which should not be considered in isolation from or as a substitute for GAAP results. Reconciliations of GAAP to non-GAAP measures are included in today's earnings release. For the Q&A session today, we would ask that you observe a limit of one question and one follow-up. Now I'll turn the call over to Anirudh.

Anirudh Devgan: Thank you, Richard. Good afternoon, everyone, and thank you for joining us today. I'm pleased to report that Cadence had a strong start to the year, delivering solid results for the first quarter of 2024. We came in at the upper end of our guidance range on all key financial metrics and are raising our financial outlook for the year. We exited Q1 with a better-than-expected record backlog of $6 billion, which sets us up nicely for the year and beyond. John will provide more details in a moment. Long-term trends of hyperscale computing, autonomous driving, and 5G, all turbocharged by the AI super-cycle, are fueling strong, broad-based design activity. We continue to execute our long-standing Intelligent System Design strategy as we systematically build out our portfolio to deliver differentiated end-to-end solutions to our growing customer base.

Technology leadership is foundational to Cadence, and we are excited by the momentum of our product advancements over the last few years and the promise of our newly unveiled products. Generative AI is reshaping the entire chip and system development process, and our Cadence.AI portfolio provides customers with the most comprehensive and impactful solutions for chip-to-systems intelligent design acceleration. Built upon AI-enhanced core design engines, our GenAI solutions, boosted by foundational LLM copilots, are delivering unparalleled productivity, quality of results, and time-to-market benefits for our customers. Last week at CadenceLIVE Silicon Valley, several customers, including Intel, Broadcom, Qualcomm, Juniper, and Arm, shared their remarkable successes with solutions in our Cadence.AI portfolio.

Last week, we launched our third-generation dynamic duo, the Palladium Z3 emulation and Protium X3 prototyping platforms, to address the insatiable demand for higher-performance and increased-capacity hardware-accelerated verification solutions. Building upon the successes of the industry-leading Z2 and X2 systems, these new platforms set a new standard of excellence, delivering more than twice the capacity and 50% higher performance per rack than the previous generation. Palladium Z3 is powered by our next-generation custom processor and was designed with Cadence AI tools and IP. The Z3 system is future-proof with its massive 48 billion gate capacity, enabling emulation of the industry's largest designs for the next several generations. The Z3 and X3 systems have been deployed at select customers and were endorsed by Nvidia, Arm, and AMD at launch.

We also introduced the Cadence Reality Digital Twin Platform, which virtualizes the entire data center and uses AI, high-performance computing, and physics-based simulation to improve data center energy efficiency by up to 30%. Additionally, Cadence's cloud-native molecular design platform Orion will be supercharged with Nvidia's BioNeMo and Nvidia microservices for drug discovery to broaden therapeutic design capabilities and shorten time to trusted results. In Q1, we expanded our footprint at several top-tier customers and furthered our relationships with key ecosystem partners. We deepened our partnership with IBM across our core EDA and systems portfolio, including a broad proliferation of our digital, analog, and verification software and an expansion of our 3D-IC packaging and system analysis solutions.

We strengthened our collaboration with GlobalFoundries through a significant expansion of our EDA and system solutions that will enable GF to develop key digital, analog, RF/mmWave, and silicon photonics designs for the aerospace and defense, IoT, and automotive end markets. We announced a collaboration with Arm to develop a chiplet-based reference design and software development platform to accelerate software-defined vehicle innovation. We also further extended our strategic partnership with Dassault Systèmes, integrating our AI-driven PCB solution with Dassault's 3DEXPERIENCE Works portfolio, enabling up to a 5x reduction in design turnaround time for SOLIDWORKS customers. Now let's talk about our key highlights for Q1. Increasing system complexity and growing hyperconvergence between the electrical, mechanical, and physical domains are driving the need for tightly integrated co-design and analysis solutions.

Our System Design and Analysis business delivered steady growth as our AI-driven design optimization platforms, integrated with our physics-based analysis solutions, continued delivering superior results across multiple end markets. Over the past six years, we have methodically built out our system analysis portfolio, and with the signing of the definitive agreement to acquire BETA CAE, we are now extending it to structural analysis, thereby unlocking a multi-billion dollar TAM opportunity. BETA CAE's leading solutions have a particularly strong footprint in the automotive and aerospace verticals, including at customers such as Stellantis, General Motors, Renault, and Lockheed Martin. Our Millennium supercomputing platform, delivering phenomenal performance and scalability for high-fidelity simulation, is ramping up nicely.


In Q1, a leading automaker expanded its production deployment of Millennium to multiple groups after a successful early access program in which it realized tremendous performance benefits. Allegro X continued its momentum and is now deployed at well over 300 customers, while Allegro X AI, the industry's first fully automated PCB design engine, is enabling customers to realize significant 4x to 10x productivity gains. Samsung used Celsius Studio to uncover early design and analysis insights through precise and rapid thermal simulation for 2.5D and 3D packages, attaining up to a 30% improvement in product development time. And a leading Asian mobile chip company used Optimality Intelligent System Explorer AI technology and the Clarity 3D Solver, obtaining a more than 20x design productivity improvement.

Ever-increasing complexity in system verification and software bring-up continues to propel demand for our functional verification products, with hardware-accelerated verification now a must-have part of the customer design flow. On the heels of a record year, our hardware products continued to proliferate at existing customers while also gaining some notable competitive wins, including at a leading networking company and at a major automotive semiconductor supplier. Demand for hardware was broad-based, with particular strength seen at hyperscalers, and over 85% of the orders during the quarter included both platforms. Our Verisium platform, which leverages big data and AI to optimize verification workloads, boost coverage, and accelerate root cause analysis of bugs, saw accelerating customer adoption.

At CadenceLIVE Silicon Valley, Qualcomm said that they used Verisium [Stem AI] (ph) to increase total design coverage automatically while getting up to a 20x reduction in verification workload runtime. Our Digital IC business had another solid quarter as our digital full flow continued to proliferate at the most advanced nodes. We had strong growth at hyperscalers, and over 50 customers have deployed our digital solutions on three-nanometer and below designs. Cadence Cerebrus, which leverages GenAI to intelligently optimize the digital full flow in a fully automatic manner, has now been used in well over 350 tapeouts. Delivering best-in-class PPA and productivity benefits, it's fast becoming an integral part of the design flow at marquee customers, as well as in DTCO flows for new process nodes at multiple foundries.

In our custom IC business, Virtuoso Studio, delivering AI-powered layout automation and optimization, continued ramping strongly, and 18 of the top 20 semiconductor companies have migrated to this new release in its first year. Our IP business continued to benefit from market opportunities offered by AI and multi-chiplet-based architectures. We are seeing strong momentum in interface IP that is essential to AI use cases, especially HBM, DDR, UCIe, and PCIe at leading-edge nodes. In Q1, we partnered with Intel Foundry to provide design software and leading IP solutions at multiple Intel advanced nodes. Our Tensilica business reached a major milestone of 200 software partners in the HiFi ecosystem, the de facto standard for automotive infotainment and home entertainment.

And we extended our partnership with one of the top hyperscalers on its custom silicon SoC design with our Xtensa NX controller. In summary, I'm pleased with our Q1 results and the continuing momentum of our business. [Piling] (ph) chip and system design complexity and the tremendous potential of AI-driven automation offer massive opportunities for our computational software to help customers realize these benefits. In addition to our strong business results, I'm proud of our high-performance, inclusive culture and thrilled that Cadence was named by Fortune and Great Place to Work as one of 2024's 100 Best Companies to Work For, ranking number 9. Now I will turn it over to John to provide more details on the Q1 results and our updated 2024 outlook.

John Wall: Thanks, Anirudh, and good afternoon, everyone. I am pleased to report that Cadence delivered strong results for the first quarter of 2024. First quarter bookings were a record for Q1, and we achieved a record Q1 backlog of approximately $6 billion. A good start to the year, coupled with some impressive new product launches, sets us up for strong growth momentum in the second half of 2024. Here are some of the financial highlights from the first quarter, starting with the P&L. Total revenue was $1.009 billion. GAAP operating margin was 24.8% and non-GAAP operating margin was 37.8%. GAAP EPS was $0.91 and non-GAAP EPS was $1.17. Next, turning to the balance sheet and cash flow, the cash balance at quarter end was [$1.012 billion] (ph), while the principal value of debt outstanding was $650 million.

Operating cash flow was $253 million. DSOs were 36 days, and we used $125 million to repurchase Cadence shares in Q1. Before I provide our updated outlook, I'd like to share some assumptions that are embedded in our outlook. Given the recent launch of our new hardware systems, we expect the shape of hardware revenue in 2024 to be weighted more toward the second half, as our team works to build inventory of the new systems. Our updated outlook does not include the impact of our [pending] (ph) BETA CAE acquisition, and it contains the usual assumption that export control regulations that exist today remain substantially similar for the remainder of the year. Our updated outlook for fiscal 2024 is revenue in the range of $4.56 billion to $4.62 billion.

GAAP operating margin in the range of 31% to 32%. Non-GAAP operating margin in the range of 42% to 43%. GAAP EPS in the range of $4.04 to $4.14. Non-GAAP EPS in the range of $5.88 to $5.98. Operating cash flow in the range of $1.35 billion to $1.45 billion. And we expect to use at least 50% of our annual free cash flow to repurchase Cadence shares. With that in mind, for Q2 we expect revenue in the range of $1,030 million to $1,050 million. GAAP operating margin in the range of 26.5% to 27.5%. Non-GAAP operating margin in the range of 38.5% to 39.5%. GAAP EPS in the range of $0.73 to $0.77. Non-GAAP EPS in the range of $1.20 to $1.24. And as usual, we’ve published a CFO commentary document on our investor relations website, which includes our outlook for additional items, as well as further analysis and GAAP to non-GAAP reconciliations.
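For readers following along with the numbers, here is a minimal arithmetic sketch in Python of the midpoints implied by the guidance ranges John quotes above. It uses only figures stated on the call; the implied second-half revenue number is an inference from those ranges and the reported Q1 revenue, not company guidance.

```python
# Midpoints implied by the FY2024 and Q2 guidance quoted above (illustrative only).
def midpoint(low, high):
    return (low + high) / 2

fy24_revenue_mid = midpoint(4.56, 4.62)       # $B -> 4.59
fy24_gaap_eps_mid = midpoint(4.04, 4.14)      # -> 4.09
fy24_non_gaap_eps_mid = midpoint(5.88, 5.98)  # -> 5.93
fy24_ocf_mid = midpoint(1.35, 1.45)           # $B -> 1.40

q1_revenue = 1.009                            # $B, reported in the prepared remarks
q2_revenue_mid = midpoint(1.030, 1.050)       # $B -> 1.04

# Implied second-half revenue if Q2 lands at its midpoint (an inference, not guidance).
implied_h2_revenue = fy24_revenue_mid - q1_revenue - q2_revenue_mid  # ~2.54

print(f"FY2024 revenue midpoint:         ${fy24_revenue_mid:.2f}B")
print(f"FY2024 GAAP EPS midpoint:        ${fy24_gaap_eps_mid:.2f}")
print(f"FY2024 non-GAAP EPS midpoint:    ${fy24_non_gaap_eps_mid:.2f}")
print(f"FY2024 op. cash flow midpoint:   ${fy24_ocf_mid:.2f}B")
print(f"Implied H2 revenue at midpoints: ${implied_h2_revenue:.2f}B")
```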

In summary, Cadence continues to lead with innovation and is on track for a strong 2024 as we execute our Intelligent System Design strategy. I'd like to close by thanking our customers, partners, and employees for their continued support. And with that, operator, we will now take questions.

Operator: [Operator Instructions] Your first question comes from the line of Joe Vruwink with Baird. Please go ahead.


Q&A Session


Joe Vruwink: Great. Hi, everyone. Thanks for taking my questions. Maybe just to start with your outlook for the year. Can you perhaps compare your second-half assumption before this quarter versus where it stands today, in terms of just recalibrating around delivery schedules? Maybe a good way to frame it: I think in the past you gave a share of this year's revenue that was going to come from upfront products. Is that still the right range? Because if it is the right range, you can obviously see more is going to end up landing in the second half. So how does that compare to your original views, or how is that, I guess, skewed relative to what might have been the expectation a quarter ago?

John Wall: That's a great question, Joe, and I think you've hit on the main point there, that upfront revenue is driving a lot of the quarter-over-quarter trends this year. When I look at last year, you'll recall that we had a large backlog of hardware orders and we dedicated 100% of hardware production in Q1 to deliver that hardware in Q1 2023. As a result, 20% of our Q1 2023 revenue was from upfront revenue sources. In contrast, this year only 10% of total Q1 revenue is coming from upfront revenue. But to reflect on where we thought we were this time last quarter, we still expect that upfront revenue will probably be 15% to 20%; around the midpoint, that's a 17.5% expectation for upfront revenue this year.

And a midpoint of, say, 82.5% for recurring revenue. That's still the same as what we thought this time last quarter. That contrasts with last year, when I think 16% of our revenue was upfront. To put dollar terms on it, last year $650 million of our revenue was upfront. This year, we're expecting roughly $800 million to be upfront. But first half versus first half: last year we had $350 million in the first half and $300 million in the second half, because we had prioritized all those hardware shipments and it skewed the numbers toward the first half last year. So $350 million and $300 million, ending with the $650 million of upfront revenue last year. This year it looks more like $250 million in the first half and $550 million at the back end.

But that's largely a result of our record backlog and our record bookings quarter in Q1. We've got a substantial backlog in IP that we're scaling up to deliver, and a lot of that revenue falls into the second half. And also, we launched these new hardware systems last week. Hardware revenue is expected to be more second-half weighted now because, based on what we've heard, and I'll let Anirudh chime in here on the technical aspects of the new hardware systems, we expect them to be so popular that a lot of demand will shift to those new hardware systems, and we'll have to ramp up production to be able to deliver that demand. So it shifts some of the upfront revenue to the second half. So I think upfront revenue is really driving a lot of the skewed metrics.
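To make the mix arithmetic in John Wall's answer easier to follow, here is a minimal sketch in Python that reproduces the upfront-versus-recurring split he describes. The dollar figures are the approximate amounts quoted on the call, and the full-year revenue midpoint used for the percentage check comes from the guidance above; none of the derived percentages are company disclosures.

```python
# Upfront vs. recurring revenue mix as described above (approximate figures from the call; illustrative only).
upfront_2023 = {"H1": 350, "H2": 300}   # $M; skewed to H1 by the prioritized hardware shipments
upfront_2024 = {"H1": 250, "H2": 550}   # $M expected; skewed to H2 by the new hardware ramp

total_upfront_2023 = sum(upfront_2023.values())   # 650
total_upfront_2024 = sum(upfront_2024.values())   # 800

fy24_revenue_mid_musd = 4590                      # midpoint of the $4.56B-$4.62B guidance, in $M
upfront_share_2024 = total_upfront_2024 / fy24_revenue_mid_musd   # ~17.4%, near the ~17.5% midpoint cited
recurring_share_2024 = 1 - upfront_share_2024                     # ~82.6%, near the ~82.5% midpoint cited

print(f"2023 upfront: ${total_upfront_2023}M; 2024E upfront: ${total_upfront_2024}M")
print(f"2024E upfront share: {upfront_share_2024:.1%}; recurring share: {recurring_share_2024:.1%}")
```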

Anirudh, do you want to talk about Z3?

Anirudh Devgan: Yeah, absolutely. So we are very proud of the new systems we launched. As you know, we are a leader in hardware-based emulation with Z2 and X2, and the last time we launched them was in 2021, so that was like a six-year cycle. You know, Z1 and X1 were in 2015, and then Z2 and X2 in 2021. So what I'm particularly pleased about is that we have a major, major refresh, you know, it's a game-changing product, but it was also developed in only three years. So in 2024, we have a new refresh, and it's a significant leap in terms of capacity. And even last week at our CadenceLIVE conference, Nvidia and Jensen talked about how they use Z2 to design their latest chips like Blackwell. And it's also used by all the major silicon companies and system companies to design their chips.

But what is truly exciting about Z3 and X3 is that this is a big leap; Z3 has like 4 times or 5 times more capacity than Z2, and much higher performance. So it sets us up nicely for the next several years to be able to design the next several generations of the world's largest chips. So that's the right thing to do. And the reason we can do it in three years versus six years is that we did our own design internally at Cadence on a TSMC advanced node. So we're using all our latest tools, all the latest AI tools, and we are using all our IP. It's a very good validation of our own capabilities that we can accelerate our design process, and it really sets up hardware verification and the overall verification flow for using the new systems. Now, as a result, normally there is a transition period when you have a new system, and we went through that twice already in the last 10 years.

The customers naturally will go to the new systems, and then we build them over the next one or two quarters. But that is the right thing to do for the business long-term. It's good to accelerate this, because these AI chips are getting bigger and bigger, right? So the demand for emulation is getting bigger and bigger, and I can give you more stats later. So we felt that it was important to accelerate the development of the next-generation system, to get ready for this coming AI wave for the next several years, and we are very well positioned. As a result, it does have some impact on quarter-to-quarter, but that's well worth it in the long run.

Joe Vruwink: That's all very helpful. Thank you. Second question, I wanted to ask how some of the things you just spoke of, but also AI, start to change the frequency of customers engaging with you and how they approach renewals. So you just brought up how the [Harbor] (ph) platforms, the velocity there, has improved: the first generation to the next took six years, and now we're down to a three-year new product cycle. When I listened to your customers last week talk about AI, they're not just generating ML models that can be reused; each run becomes better if you're incorporating prior feedback. So it would just seem like AI itself not only creates stickiness, but there would be an incentive to deploy it maybe more broadly than a customer traditionally would think about deploying new products. Does that mean the average run rate of a renewal ends up becoming much bigger and we'll start to see that flow into the backlog?

Anirudh Devgan: Yeah, that's the correct observation. You know, as we have said before, AI has a lot of profound impact on Cadence and a lot of benefit to our customers. There are three main areas. One is, you know, the build-out of the AI infrastructure, whether it's Nvidia or AMD or all the hyperscalers, and we are fortunate to be working with all the leading AI companies. So that's the first part. And in that part, as they design bigger and bigger chips, because the big thing in AI systems is they are parallel, so they need bigger and bigger chips, the tools have to be more efficient and the hardware platforms have to support that. And that's why the new systems. Now, the second part of AI is applying AI to our own products, which is the Cadence.AI portfolio.

And like you mentioned, last week we had several customers talking about success with that portfolio, including, like I mentioned, Intel, Broadcom, Qualcomm, Juniper, and Arm, and the results are significant. So we are no longer in kind of a trial phase of whether these things will work; now we're getting pretty significant improvements. Like we mentioned, MediaTek got like 6% power improvement, and one of the hyperscale companies got an 8% to 10% power improvement. These are significant numbers. So it is leading to deployment of our AI portfolio. And I think we mentioned the AI run rate on a trailing 12-month basis is up 3x. And I think the design process already was well automated; EDA has a history of automating design over the last 30 years.

So AI is in a unique position, because you need the base process to be somewhat automated to apply AI. So we were already well automated, and now AI can take it to the next level of automation. So that's the second part of AI, which I'm pretty pleased about, applying it to our own products. And then the third part of AI proliferation is the new markets that open up, things like data center design with Reality that we announced, or Millennium, which is designing systems with acceleration, or digital biology. Those take a little longer to ramp up, but we have these three kinds of impact of AI: the first being direct design of AI chips and systems, the second applying AI to our own products, and the third being new applications of AI.

Joe Vruwink: That’s great. Thank you very much.

Operator: Your next question will come from the line of Charles Shi with Needham & Company. Please go ahead.

Charles Shi: Thanks. Good afternoon. I just wanted to ask about the China revenue in Q1. It looks pretty light. I just wonder whether that's part of the reason that's weighing on your Q2. I understand you mentioned that you're going through that second-gen to third-gen hardware transition right now, so maybe that's another factor, but from a geographical standpoint, what's the outlook for China for the rest of the year, and specifically Q2? Thanks.

John Wall: Hi Charles, that's a great observation. If you recall, this time last year we were talking about a very strong Q1 for China, for functional verification, and for upfront revenue. I think those three things are often linked. You contrast that with this year: China is down at 12%, upfront revenue is lower at 10% compared to 20%, and functional verification, of course, is lapping those really tough comps from when we dedicated 100% of production to deliveries. I think when you look at China, we're blessed that we have the geographical diversification that we have across our business. But what we're seeing in China is strong design activity. And while the percentage of revenue dropped to 12%, that pretty much goes in line with the fact that a lower hardware, lower functional verification, lower upfront revenue quarter would generally lead to a lower China percentage quarter.

But we have good diversification. While China is coming down, we can see other Asia increasing and our customer base is really mobile. That geographical mix of revenue is based on consumption and where the products are used. But as we do more upfront revenue in the second half, we’d expect the China percentage to increase.

Charles Shi: Thanks. I want to ask another question about the upcoming ramp of the third-generation hardware. What exactly is the nature of the demand? Is it replacement demand, like your customers replacing Z2 X2 with Z3 X3, or do you expect a great deal more customers adopting Z3 X3? And more importantly, I think you mentioned about a 4 times to 5 times capacity increase, so they can design much larger chips with a lot more transistors. How much of an ASP uplift are you expecting from the Z3 X3 versus Z2 X2?

Anirudh Devgan: Charles, all good observations. So let me try to answer that one by one. In terms of your last point, normally if the system has more capacity, like this one has, it can do more. So it gives more value to our customers, and we are able to get more value back. So typically newer systems are better that way for us and better for the customer. And to give you an example, I mean, these things are pretty complicated, so we'll just take Z3. For Z3 itself, we designed this advanced TSMC chip by ourselves, and this is one of the biggest chips that TSMC makes. One rack will have, like, more than a hundred of these chips, and then we can connect up to 16 racks together. So if you do that, you have thousands of these full-reticle chips emulating, and these are all liquid-cooled, connected by optical and InfiniBand interconnect.

So this is truly a multi-rack supercomputer, and what it can do is emulate very, very large systems very, very efficiently. Even with Z2, like Nvidia talked about last week, even Blackwell, which is the biggest chip in the world right now with 200 billion transistors, was emulated on a few racks of Z2. Okay, so now with 16 racks of Z3, we can emulate chips which are like 5 times bigger than Blackwell, which is already the biggest chip in the world, right? So that gives a lot of runway for our customers, because with AI, the key thing is that the capacity of the chip needs to keep going up, and not just a single chip. Look at Blackwell, they have two full-reticle chips on a package. So, as you know, you will see more and more not just big chips on a single node, but multiple chips in a package for these AI workloads, and also 3D stacking of those chips.

So what this allows is emulating not just a single large chip, but multiple chips, which is super critical for AI. So I feel this puts us in a very good position for all this AI boom that is happening, not just with our partners like Nvidia and AMD, but also all the hyperscaler companies. And so the primary demand will be that more-capacity chips require more hardware. And then X3 will go after that with software prototyping, which is FPGA-based. And then, apart from the size of these big systems, the capacity and performance being much better, we have some unique workload capabilities; there are new features for low power and for analog emulation that help in the mobile market. So we talked about Samsung working with us, especially on this four-state emulation, which is a new capability in emulation over the last 10 years.

So I think it's a combination of new customers, a combination of competitive wins, but also continuing to lead in terms of the biggest chips in the world, which are required for AI processing now and, you know, years from now. I think the size of these chips, as you know, is only going to get bigger in the next few years, and we feel that Z3 X3 is already set up for that.
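As a rough illustration of the scale Anirudh describes above, here is a short back-of-the-envelope sketch in Python using only figures quoted on the call; the resulting transistor and chip counts are inferences for illustration, not Cadence specifications.

```python
# Back-of-the-envelope capacity math from the figures quoted on the call (illustrative only).
blackwell_transistors = 200e9   # "the biggest chip in the world right now with 200 billion transistors"
z3_design_multiple = 5          # Z3 at full scale can emulate designs "like 5 times bigger than Blackwell"
chips_per_rack = 100            # "one rack will have, like, more than a hundred of these chips" (lower bound)
max_racks = 16                  # "we can connect up to 16 racks together"

implied_max_design_transistors = blackwell_transistors * z3_design_multiple  # ~1e12
implied_emulation_chips = chips_per_rack * max_racks                          # 1,600+

print(f"Implied largest emulatable design: ~{implied_max_design_transistors / 1e12:.0f} trillion transistors")
print(f"Implied full-reticle emulation chips at 16 racks: {implied_emulation_chips:,}+")
```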

Charles Shi: Thanks.

Operator: Your next question will come from the line of Lee Simpson with Morgan Stanley. Please go ahead.

Lee Simpson: Great, thanks. And thanks very much for squeezing me in. Just wanted to go back to what you said last quarter, if I could. It did seem as though you were saying that there was an element of exclusivity around your partnership with Arm, your EDA partnership around Arm Total Design. I wondered how that was developing, if indeed you're collaborating to accelerate the development of custom SoCs using Neoverse. It looks as though it's pulled in quite a lot of work, or continues to pull in quite a lot of work, around functional verification. And I guess as we look now at third-generation tool sets for Palladium and Protium, leaving aside some of the rack-scale development that we're seeing out there, whether or not Arm Total Design development work is pulling in, or is likely to pull in, some of that second-half business. That means not just hyperscalers, but perhaps in AI PCs and beyond. Thanks.

Anirudh Devgan: Yeah, thank you for the question. I mean, we are proud to have a very strong partnership with Arm and with our joint customers, Arm and Cadence customers. I think we have had a very strong partnership over the last 10 years, I would like to say, and it's getting better and better. And yes, we talked about our new partnership on Total Compute. Also, I think this quarter we talked about our partnership with HARMAN Automotive. Because what is interesting to see, which of course you know already, is that Arm continues to do well in mobile, but also now in the HPC, server, and automotive end markets. So we are pleased with that partnership, and they are also doing more subsystems and higher-order development, and that requires more partnership with Cadence in terms of the back end, Innovus and the digital flow, and also verification with hardware platforms and other verification tools.

Lee Simpson: Great, maybe just a quick follow-up. You know, we've seen quite a bit of M&A activity from yourselves of late, including the IP house acquisition of Invecas. You've bought the Rambus IP assets, and you've now acquired BETA in the computer-aided engineering space for the car. There's been quite a lot of speculation in the market about the possibility of a transformative deal being done. I guess, given that we have you on the mic here, maybe we can get a sense from yourself of what would be the sort of thing that a business like Cadence could look for. Would you look for a high-value vertical contiguous to what you've already addressed, let's say in automotive, or would it be something more waterfront, a business that spans several verticals, maybe being more relevant across the industrial software space? Could that be the sort of ambition that Cadence would have, given the silicon-to-systems opportunities that are emerging? Thanks.

Anirudh Devgan: Well, thank you for the question. A lot of times there are a lot of reports, and we normally don't comment on these reports; people get very creative with this reporting. But what I would like to say is that our strategy hasn't changed. It's the same strategy from 2018. First of all, I want to make sure that we are focused on our core business, which is EDA and IP. And, yes, I launched this whole initiative on systems and it's super critical, you know, silicon to systems. But one thing that I mentioned even last time, what is different from 2018 to now, is that EDA and IP are much more valuable to the industry. You know, our core business itself has become much more valuable because of AI.

So our first focus is on our core business. We are leading in our core business, and our first focus is on organic development. That's what we like; we always say that's the best way forward. Now, along with that, we have done, like you mentioned, some opportunistic M&A, which has usually been, I would like to say, tuck-in M&A. And that adds to our portfolio; it helped us in system analysis. We also did it in IP, because I'm very optimistic about IP growth this year. We talked about our new partnership with Intel Foundry in Q1, and we also acquired the Rambus IP assets, which include HBM. HBM is of course a critical technology in AI, and we are seeing a lot of growth in HBM this year. Now, we have booked that business, but the deliveries will happen toward the second half of the year, as John was saying earlier.

So that's the thing. Now, in terms of BETA, it made sense because it is a very good technology and it's the right size for us. We are focused on finishing that acquisition and also integrating it; that will take some time. So that's our primary focus in terms of M&A. It's a very good technology, and they have a very good footprint in the automotive and aerospace verticals. So, just to clarify, we have the same strategy from '18, and that's working well: it's primarily organic with very synergistic computational software, mostly tuck-in acquisitions.

Lee Simpson: That’s great. Thank you.

Operator: Your next question comes from the line of Ruben Roy with Stifel. Please go ahead.

Ruben Roy: Thank you. Anirudh, I had a follow-up on the Z3 X3 commentary. One of the things I was thinking about, especially as you talked about the InfiniBand low-latency network across the multiple racks of Z3: you had mentioned an up to 85% attach rate of both systems with the Z2 X2. I would imagine that would continue to go up. Can you comment on whether the new systems incorporate InfiniBand across Z3 and X3, and if so, do you expect that to be sort of a selling point for your customers that are designing these big chips, which in many cases these days have software development attached to the design process? Do you think the attach rates continue to move higher for both systems?

Anirudh Devgan: Yes, absolutely. I think we started this in, I forget now, 2015 or 2016 with the Dynamic Duo, where we have a custom processor for Palladium and we use FPGAs for Protium. This is what we call the dynamic duo, because Palladium is best-in-class for chip verification and RTL design, and Protium is best-in-class for software bring-up, with a common front end. So as a result, over the years, this has become the right approach, and our customers are fully embracing both these systems as they invariably do both chip development and software development. A perfect example is, of course, our long-term development partner, Nvidia. I mean, Nvidia is no longer doing just chip development.
