Update: SiFive just announced today that it has partnered with NVIDIA for NVLink Fusion on RISC-V. It’s a big step, and we’ll be covering that separately.
When we cover processors today, we have three main architectures to talk about. The big one is x86, and there are roughly four companies that make x86 processors (special kudos if you can name them all). But that’s a limited license offering, and only those companies can do it. Then we have Arm, which licenses both its architecture and its designs, and there are over 200 companies that make those chips. The third option is RISC-V.
RISC-V has been in the ecosystem now for almost 15 years, with the promise of offering open source CPU goodness to everyone who wants to use it. Joining me today is Dr. Krste Asanović. He is the co-founder of SiFive and part of the original team that developed the RISC-V ISA. I got the chance to speak to him about what he thinks of his creation and what it has become, but also about the ecosystem in general.
The following is a transcript of the video interview embedded above. Phrases have been adjusted slightly to make them readable!
Ian: You’ve been dealing with computer architectures for perhaps longer than I’ve been alive! You’ve seen it evolve, are you excited about where it’s going?
Krste: I just came back from Hot Chips and it’s incredible to see. I was telling people that I’ve been going to Hot Chips now for 35 years and there’s always something new. There’s always something happening in computer architecture, it’s a really exciting field. It’s been fun to be at it for this long.
Ian: Let’s go back to when you and your team created RISC-V. There were a variety of options at the time, and this concept of being open source, and open sourcing the software too, was still part of the milieu, but nobody had really coalesced around something. So what was the spark that drove you to go down that route?
Krste: Something to realize about my various groups: when I was a grad student at Berkeley, when I was a professor at MIT, and coming back to Berkeley as a professor – across all these projects, I always liked building silicon prototypes. One thing that was a bit different from some of the other groups was that my groups always built research prototypes. So we needed a real instruction set, because we were building a real chip that we had to map software to. Previously we used a lot of industry ISAs for the actual chips – often versions of MIPS, because that was a relatively straightforward RISC design. It was pretty simple to implement.
Ian: And of course MIPS is now a RISC-V IP house.
Krste: Yes, an interesting development. The advantage of using a standard ISA that existed already was all the software was kind of there, and you could just use it. The problem was, as soon as you started doing research, you wanted to modify the ISA. That was kind of what you were doing, and at that point with a lot of the standard software, you lost that advantage. So immediately, we said “Well, we’re building this standard thing, but as soon as we modify it, we lose the reason we’re doing the standard thing”. So you’re jumping through these hoops to make it be compatible with MIPS, but then you lose all of that as soon as you change it.
So after a few generations and many projects, around 2010 at Berkeley, we were running the Parallel Computing Lab, looking at parallel computing, and we needed to build a whole lot of new simulation infrastructure, because there wasn’t very much and we couldn’t get parallel hardware. When Intel came to us, they said, “Okay, we’ll give you the parallel machine. It has two cores or four cores.” That’s not very exciting. We also needed to simulate much bigger systems.
We were also interested in specialized extensions and architecture extensions, so we looked around and tried using existing ISAs. The grad students were too smart! They didn’t want to do a new one – they were like, “we just want to save the effort and do something that exists already!”
Ian: Something about “time-to-market” in there, I think!
Krste: Yeah and time to graduate, so they wanted to graduate. But as we looked at all the existing designs, we looked at all the major RISCs. There was a problem with all of them – architecturally, technically, but also most of them are not open. The problem is if we’d like to build infrastructure and share it with our colleagues in academia, and we couldn’t do that with x86 or with Arm for example. It’s just not possible with the licenses.
Now, there were a few open source ISAs previously. SPARC V8, the 32-bit version, had become an IEEE standard, but v9, the 64-bit version, was not open, and we wanted a 64-bit architecture for our work. There was also the OpenRISC effort. We looked at that very carefully. At the time it was still only 32-bit, and we saw a lot of technical challenges with the design. The group there had really focused on a single core as an artifact – they were open-sourcing a single implementation, they hadn’t really thought about making an architecture. So a combination of looking at all these things and realizing what we needed to do, we ended up deciding we’d just do a clean-slate ISA – it wouldn’t be that hard. That was the lie I managed to convince the grad students of! I said, “you know, it’ll just be a summer project, it’ll take a few months!”
And actually, it was May 18th, 2010 when we finally said we’d do our own ISA. That fall, starting in August/September, we were teaching our class using the first prototype.
So in some sense we did do an initial version, and it did only take the summer. The poor undergrads were subjected to this, and we taught the classes. So that was the start of the whole process: using it in teaching and in research, and having enough students.
I think the thing to realize is that we were lucky at Berkeley – we have the critical mass. There are enough students that we can do the RTL implementations, the Linux port, the GCC port; we can do this multiple times, iterate, and get it to a very usable state pretty quickly. It comes down to having that critical mass of really great students.
Ian: There are two or three questions there. One, roughly what size of class are we talking about of undergrads here? 300?
Krste: No, this is pre “the explosion”. So this is more like 30 or 40 in a class.
Ian: Second is that I think OpenPOWER would’ve been around at that time?
Krste: That was not as open as we wanted.
Ian: Then it sounds like you designed this initially just for academic use within your own group. How much appetite was there at that point to go beyond the boundary of your small area of research to a wider area, or maybe commercial use?
Krste: The other thing you realize if you do a lot of projects over time – I’d realize in a specific year, “I’m interested in this, and this other stuff I’m not interested in”, and then it’s five years later and I really want to work on that other stuff. So you never know what you’re gonna do down the road. Also, having done many of these projects, there was always the same stumbling block in doing a new processor. The number one thing is you need a C compiler. Forget about random ideas in fancy architecture – if you don’t have a C compiler, you cannot do anything. A lot of projects fall into this trap: you can’t write assembly code for everything, you need a system. What I saw was that we could build a very slim base that was just enough to run a C compiler, and that didn’t really encumber the rest of the design space very much at all. Having this base in place meant you could get your software environment up and running, but it still left you a lot of freedom to optimize with extensions. So that was where the idea of this very slim base came from: minimal imposition on the overall model of the machine, but maximally fast in getting something going, which is critical.
So that was part of the genesis of this modular ISA design. In terms of audience, it was primarily for us – but not just us now, us 5-10 years in the future. I didn’t want to redo this; I wanted to think about how this was gonna be something lasting for a long time. And you know, to this day it is still widely used in research, right?
Ian: Take me through the steps here. You’ve now got this, you’re using it in teaching. You’ve got grad students who are now developing on it. Using the base, using the utilities that you’ve been building. What’s the moment where there was a spark of an application beyond?
Krste: Well, some of the participants were telling us, “You know, this is gonna be really big, because you’re doing this at Berkeley. You have the name, you have the megaphone, everybody’s gonna listen to you if you do these things.” But really it was when we started getting these random emails – these stories are true! We’d get an email from a random Indian engineer saying, “Why did you change the spec?” And we’re like, “Well, who are you, and why do you care? We just did it for a class project. Here’s a problem assignment – imagine we had this instruction, how would your pipeline change?”
Ian: Oh, so somebody who’s maybe seen the class online or got hold of a set of lecture notes?
Krste: Well, these are people building products and projects elsewhere who were relying on a spec, and we were just changing it willy-nilly, for problem sets or class assignments, and they were wondering why we were changing it. We responded, “well, A, we didn’t realize they were looking at it, and B, why would they care that we were changing it?”
So we started in 2010, and a few years later, in the 2013 timeframe, it became clear there were a lot of people really interested in it outside academia – a lot of pull from outside. We weren’t really actively pushing at that point, but there were a lot of people interested. Because we made everything open and put it on the web, as is the Berkeley style. So everything was there, anybody could use it, and people were using it.
Ian: So you’ve got this pull from outside, is this purely other academics? Or was this more corporate at that time?
Krste: We weren’t quite sure. We got some pulls from some corporate people who were interested. We had industrial sponsors in the lab who worked with us, but also just random emails from outside.
So in August of 2014, we went to Hot Chips. By that time we’d been working on it for four years and it had mostly stabilized. We felt this was kind of the right design. We’d iterated many times: multiple chips, multiple compiler ports, multiple ports. So with Dave Patterson, I wrote this position paper, “Instruction Sets Want To Be Free”. We kind of realized there was this angle about “why isn’t there an open standard ISA?”
In hindsight, it was kind of blindingly obvious that we should be asking this question, but at the time people were asking, “why would you want an open standard ISA?” We asked back, “why wouldn’t you?” Just thinking it through, well, there are no good reasons. We wrote this position paper saying, “why shouldn’t this be open like every other standard in computing?”
We did have The Tech Report pick it up, and then Microprocessor Report picked it up as well. Then Arm wrote a rebuttal – a pretty flaccid rebuttal, to be honest! We read it and there weren’t really any good arguments in there.
Ian: I guess at that time they were going through the start of the smartphone boom as well.
Krste: The arguments were all around having one company that can centralize and build the ecosystem. You pay them, they’ll build the ecosystem, but you can reuse the IP. That was the essence of their rebuttal – that a distributed open thing could never build a stable standard that everybody could thrive on. Despite the fact that in every other piece of computing, that’s exactly the model we all use. So we said, why should an open ISA standard not be a thing?
We then went to Hot Chips and made a conscious effort to promote it to industry, and the reception there was wildly above what we expected. Many people in the industry said, “yes, this is what we want”, and the reasons were different than we expected. People were thinking, “oh, it’s all about low cost”, but that really wasn’t the primary complaint. The primary complaint people had was flexibility. When it came to licenses [from others], just negotiating the contract took months and months, and even years in some cases. They just wanted to move fast and do things their own way. The existing architectures weren’t doing that for them. Cost was a factor too, but beyond cost, we were surprised by all this positive feedback at Hot Chips 2014.
Ian: That was 2014. We’re right here now at the SiFive offices. SiFive was founded in 2015. How did investors convince you to create a company that builds IP around the ISA?
Krste: It was a chain of events. We had the first RISC-V workshop in January of 2015, and we were kind of surprised – you can still see the videos up on the web. It sold out very quickly: 140 people showed up, and more than 40 different companies came, which was a big shocker. The other big shocker was that Rumble had already shipped a commercial RISC-V chip inside a game camera, as in a wildlife game tracker. So that was one sign of really strong interest. Berkeley has a lot of industrial events – we have open houses, we invite industry over. One of the folks from Sutter Hill had come by, and we’d been talking about Firebox. Firebox was a separate research project, looking ahead and saying, “look, these hyperscalers are gonna build their own custom SoCs”. At the time, this was not a thing, but we were saying, “look, this is clearly gonna happen.”
Ian: This is also kind of pre-AI as well. AlexNet was only 2012?
Krste: Yes – RISC-V itself we started before this current wave of AI. But we’d been looking at advances in photonic interconnects and non-volatile memories, and at how these big data center clusters were going to get built. We started proposing this idea for Firebox, which is a very big data center architecture. One component of it was that we were going to build custom silicon. We saw that the hyperscalers were going to be moving to custom silicon.
At that time, that was a bit controversial, with people asking if they were really going to do that. So the people from Sutter Hill were interested in this, and later on Stefan Dyckerhoff, who’s still our lead investor from Sutter Hill, came by, and we had a great afternoon talking through the need for specialized computing.
So this was really the foundation of SiFive: a lot of companies are going to need specialized chips for various aspects, and RISC-V seemed like a great substrate to do all this work on.
That was really the genesis of SiFive. When we started the company, we thought we were not going to do IP, because we’d made a very simple ISA which was easy to implement, and we had already open sourced a bunch of the cores from Berkeley – how could you make a business doing IP? So we thought we’d focus on doing custom silicon. But very shortly after we started up, we realised we didn’t have the luxury of being a stealth startup. Everybody knew “these RISC-V guys are going to do something”, and all these very large companies came asking for RISC-V cores. At first, we thought, “go away, leave us alone! We’re busy doing this other thing!” But after a while we realised we were a small company with all these big companies interested in RISC-V. So we made the strategic shift to do IP. Our goal was to propagate RISC-V everywhere, to understand how these companies were building SoCs, and to get more into the market.
For the first 18 months or so, we had three customers as we were learning how to do IP. It’s very different from doing chips! So I worry about companies who think they can transition from having a chip idea to being an IP company – it’s very different. Building a core for your own design flow is much easier than building a core that many customers can put in their own flows and harden up. We learned that lesson. It took us a while.
So that first year and a half we had a few customers, but then we went to over a hundred design wins in the next year or two, as we figured out what we were doing and as demand grew for RISC-V.
Ian: So I pulled this off the wall because I saw it as I entered. These are some of the chips that you guys have done over the years and I think I have some of these boards at home.
Krste: So one thing to realize: when we started this, there was no RISC-V hardware. So we did the very first commercial RISC-V chip, the FE310 – we did it to kickstart the ecosystem. Hardware is really needed! In that first generation, it was really all about the tool providers, building the tools that our embedded customers needed.
Then we did the first Linux SoC, the FU540, and that really took off. When we gave that to the Debian folks, you could just see the chart – they went from having very few packages ported to like 80 or 90% of packages ported within a few weeks of having this board. So you can see the importance of having these dev boards in the ecosystem.
Now since then, we’ve tried to work with our customers through silicon. A lot of customer products are not a good target for a dev board, because they have I/O interfaces built for some other application. It’s hard to find somebody who has a product that is suitable for dev boards and who is willing to make it visible to folks doing development. But luckily we have some great partners there, like ESWIN.
Ian: If I remember correctly, for one of your boards you had 2000 units and you thought that would be enough for the developer community. Then they sold out in roughly two weeks?
Krste: Yes! I think it’s still a challenge in RISC-V land, getting enough development hardware to satisfy demand. Particularly right now: if we fast forward to the RVA23 profile that we ratified last year, everybody understands that this is a major milestone for RISC-V, but there’s no silicon available yet. And it’s all coming. I think a lot of the hobbyists are wondering why we don’t have cheap Raspberry Pi-style boards, and I think they don’t realize that all these boards are loss leaders – everybody’s losing a lot of money on every one of these boards we ship. But we need to do it to seed the ecosystem and get developers going. It will be coming, in many different flavors and shapes of RVA23, across many vendors – not just SiFive cores but others. Everybody’s agreed on the standard now to drive the ecosystem forward.
Ian: So you spoke about Profiles – I think we’ll get there in a bit – but the other angle to all of this is RISC-V International. This is the body that was formed to drive the standards forward, so it’s not just you and the team at Berkeley making adjustments. Can you talk through the initial premise of RISC-V International and how it has evolved? You’ve obviously been a key part of it since its inception.
Krste: I mentioned we had this very first RISC-V workshop in January of 2015. Like I said, many companies showed up. What became clear was that companies were really keen for this to happen, but they didn’t trust a university to manage the standard.
Ian: Is that because academics work on academic time?
Krste: It’s more that academics get distracted! You see a shiny new toy. Like I said, research topics change often, and they say “I’m going to work on this thing now!”.
Industry needed a long-lived, stable organization to manage the standard. Rick O’Connor was at that first workshop, and I worked with him to create the RISC-V Foundation in the beginning. What we said was that for the first year we’d allow the founding members to join and develop the bylaws. The idea was that by the end of the first year, we would draft the membership agreement and everybody would work together. We anticipated having six or seven companies in this thing. At the very end of that first year, we had 42 companies. So now you have to imagine drafting a membership agreement with 42 legal departments redlining it, each of them taking a week.
But the model I had in mind was that I wanted to make sure we created a big tent. I’d rather have all the companies in the tent with minimal obligations on them initially, rather than being very stringent and having everybody outside the tent, where we’d have no protection from these big companies. And the response was very positive. We created the foundation, and it grew very rapidly in terms of membership. One thing that’s very important to me is that the ISA spec is free in terms of licensing. Anybody can download it. I get very tired of going to some website and having to register to get a PDF – we don’t want that. The specs have to be publicly available, along with the golden model and the compliance suite, so anybody can download and use them. That’s always been part of the charter of the foundation.
Ian: So the foundation is just to build the spec. If we look at RISC-V International as a larger organization, you’ve got bodies looking at high performance cores and vector cores. You’ve got memory and automotive and all these different divisions. The way I see these bodies work is they’re driven by the members. They do what the members want – is that what you’re seeing?
Krste: Yeah, there’s a small staff, but all the work is done mainly by the volunteer members participating. The other thing to realize is that RISC-V International is only about the standards – there’s no IP there. I think people get confused and say RISC-V is like the Linux of microprocessors, and that’s really not true, because Linux is an artifact you can download. You can go to the Linux Foundation and download the Linux kernel source code. At RISC-V International, there are no RISC-V cores, and that was a deliberate decision for two reasons. One is that we didn’t want to endorse any one particular open source implementation. The other is export control – it’s very important that we have only standards and no IP there. So it’s purely a standards organization, and that was a deliberate decision when we founded the foundation.
Ian: I remember speaking to the former head of RISC-V International, four or five years ago, and she was asking me about barriers to RISC-V adoption. At the time we’d seen NVIDIA have some success putting RISC-V cores in their designs, and Western Digital put them into hard drives. I said it felt like, even though it’s this open ISA, there was no standardization layer. She told me it was a work in progress, and they were calling it Profiles. RVA23, the first major Profile, got across the line in late 2024. How much of a milestone was that for you?
Krste: It was huge. For me personally, huge, because I actually wrote the thing, but getting the members to agree on it was a big deal. There were some very well-publicized battles about what should and should not be in it. We were very keen to have Android supported, and the Android team were very good at working with us on what features they viewed as essential, also drawing on their experience with the other ISAs. For example, the Android SDK still targets Armv8.0. So the first one you come out with is basically the one you use from now until infinity. So we wanted to make sure we had a very rich ISA. That was important – hearing their input, and adding some of the security features they wanted into the RVA23 release.
Ian: I think I spoke to Balaji (Ventana Micro) and his team as this was going on because he was involved in that as well.
Krste: I think that’s the thing to realize – it’s all the vendors agreeing. There’s this tension between wanting to provide a rich set of features, so implementations can run really fast and software can rely on those features being there, and the hardware vendors having to sign on to deliver all of it without being left behind. So: “is this feature too hard for us to do in a reasonable timeline? We may want to push back against it.” But we got that agreement, and that’s the way the foundation works. We’re very happy that RVA23 happened, and it’s a pretty complete ISA.
Ian: Did machine learning affect development?
Krste: Not dramatically, not for RVA23. It really is for general purpose. The main part that AI affected is in the vector instruction set. RISC-V vector ISAs are a particular thing of mine, I’ve been working on them for many years!
Ian: Do you have positive or negative experiences with them?
Krste: Very positive! I mean, Cray is one of my heroes. When I was in grad school I was reading about the Cray machines, and then I built a vector microprocessor for AI back in the early nineties – that was my PhD project. A lot of people building accelerators don’t realize they should just be building vector machines instead. A lot of accelerator projects are really badly done vector machines – that’s my slightly biased view on this.
Ian: Hot take!
Krste: Hot take indeed! So the vector extension was a big part – in fact we joke that in RISC-V, the V was actually for “vectors”; that was the pun. So vectors are an important part of it. Some people now complain that the vector extension is too complicated, but I think they don’t have quite the right mindset. It was designed to be something that would scale from very small to very large without changing the ISA, and also to very cleanly support mixed-width operations, which are key for AI. You have these narrow data types accumulating into wider data types, and that was built into the vector instruction set. The other architectures handle that in very clumsy ways: in both of the other major architectures, you have to do sort of upper/lower or odd/even halves. You end up doubling the static code size, doubling the dynamic code size, and it’s just a really inefficient use of the hardware.
So in that respect, the vector part was designed with AI in mind, but generally, RVA23 is a very general ISA. But as in all the RISC-V there’s also a lot of space left over where you can have the standard software stack running on the standard ISA. You have a lot of space if you want your custom extensions on the side that don’t interfere with the standard software stack. That was an important part of development and now we’re developing the matrix extensions for AI, and that’s a very fast moving activity right now at RVIA.
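The mixed-width pattern Krste describes – narrow inputs accumulating into wider results – is the inner loop of most AI kernels. As a rough illustration (plain scalar C, not actual vector code), this is the operation shape that RVV’s widening instructions encode directly, where other ISAs force upper/lower half splits:

```c
#include <stddef.h>
#include <stdint.h>

/* Dot product of two int8 arrays accumulating into int32 -- the
 * narrow-multiply, wide-accumulate shape common in AI inference.
 * RVV provides widening multiply-accumulate instructions (e.g.
 * vwmacc, which doubles the element width), so this shape maps onto
 * vector hardware without any upper/lower half register shuffling. */
int32_t dot_i8_i32(const int8_t *a, const int8_t *b, size_t n) {
    int32_t acc = 0;
    for (size_t i = 0; i < n; i++) {
        /* each 8-bit x 8-bit product widens into the 32-bit accumulator */
        acc += (int32_t)a[i] * (int32_t)b[i];
    }
    return acc;
}
```

The point of encoding the widening in the ISA is that the doubled code size Krste mentions disappears: one instruction expresses the width change that otherwise takes separate operations on each half of the data.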
Ian: That covers the compute happening inside the core, rather than a directly attached accelerator. The control flow from one to the other, as I understand it, isn’t part of the spec yet.
Krste: Well, the important thing to realize is that RISC-V is a very general ISA that people use both to build the main application core and to build accelerators. Unlike, say, x86 or GPU land – where x86 is over here, and then you have a GPU over there with a completely different hardware architecture and ISA – in RISC-V land, the model is: we can have the application processors optimized to run a rich OS, hypervisor, or whatever, and then have an AI accelerator built on RISC-V vectors again, but now embedded as part of the device. The nice thing is that they have the same performance model, the same memory model, the same synchronization primitives – so a lot of the headaches you get transitioning between two architectures, you just don’t get with RISC-V if you’re using it for both sides. Using it for both host and device makes it very seamless.
Ian: When you were developing the standard, and then people decided to use it, did you expect it to scale from the microwatt to the kilowatt range back then?
Krste: Yeah! Technically there’s no reason it shouldn’t. I think people who think otherwise are just not thinking hard enough. You can make an ISA such that it cannot scale down, but it’s very difficult to build an ISA that cannot scale up – just look at x86, which went from basically a printing calculator to servers. It just ends up a little less efficient at the high end if you hadn’t thought about that at the beginning.
But I also know the x86 folks had a really hard time scaling down to embedded – they just couldn’t get it down because the ISA is too rich. And as soon as you start throwing things out – like Motorola’s 68000 when they went to ColdFire – it’s a different ISA, right? Or like Arm did.
Arm has multiple ISAs, which is another thing – they have internal fragmentation. How many different ISAs does Arm have, targeting different markets and scenarios? So RISC-V having this modular form – a base that everybody has, then you add on – is part of the story of scaling from the very small to the very large. But it’s also that the base ISA is fundamentally sound: even if we’d kept just the base, it would be pretty good, even at the high end, and with enough hardware you can make it run pretty fast. We have the other instructions to help it go a little bit faster too.
Ian: So that’s one of the criticisms that’s been leveled at RISC-V – the fragmentation aspect. If you have everything open and you can support custom instructions, anybody can do anything, and interoperability is just a dream.
Krste: This to me is one of the most nonsensical things, because, first of all, people in glass houses shouldn’t throw stones. Let’s look at x86: at least two major vendors, two different hypervisor instruction sets, two different IOMMU standards. They managed to fragment with just two vendors! Never mind historically things like 3DNow! and the AVX512 differences. So fragmentation is a bad thing, but these guys only recently got together, maybe to try and save dwindling x86 market share. x86 is quite bad even within one vendor – look at the rants people have about AVX512. Support for it isn’t even monotonic across generations.
Ian: I’ve got a great picture from our great friend InstLatx64 – you probably know the one I’m talking about. It goes around social media every now and again, showing the fragmented state of AVX512. But surely having only two major fragmentations is easier than a thousand?
Krste: Well, the point is, they’re competing. So they didn’t think – at least not until recently – that they had to work together. Now look at Arm, a single vendor, and how many different ISAs they effectively have. There’s a big step, obviously, between 32-bit and 64-bit, AArch32 and AArch64. But even within AArch32 there was the original Arm encoding, then Thumb, then Thumb-2, and then they went to AArch64 and its different variants. It’s very hard to target one.
So they’ve developed different ISAs for different use cases. By going to RISC-V, everybody works through RVA. There’s one organization whose job it is to do the common standard. Theoretically, Arm as a company could do that, and until recently, Intel and AMD did not. So I’d say we were set up to avoid fragmentation by setting up a standards body to do this.
Now, the thing about different people putting in different subsets of extensions – that’s the flexibility people wanted. Where you control the software, you don’t care about interoperability. If I’m building a toothbrush, I’m gonna optimize for that toothbrush, and I’m gonna write the software for the toothbrush. I’m not downloading Ubuntu onto this toothbrush. Well, maybe eventually, but right now I’m not running Doom on my toothbrush. So this is an advantage of RISC-V: it’s modular, you can build the thing you want with just the pieces you want, and add the new stuff you want to add. But you can also use this modularity with Profiles: fix everything down and say everybody has to have this set of things.
So the underlying RISC-V architecture is based on modules that you can put together. You can use it, you can be flexible, or you can be rigid. And market by market, you’ll decide what the right thing to do is. So where you need the interoperability, use a Profile. RVA is obviously the thing we’ve done for application processors, but now in the embedded space there’s a move to do an automotive profile, which would be for automotive MCUs. And if people see the need, we could build a standard profile for microcontrollers as well – we’re calling it RVM. So the whole point is multiple vendors see the value in not fragmenting and they’re actively working to avoid it, versus the other ISA providers who didn’t do this, and so were competing by fragmenting.
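The modularity-versus-Profiles idea can be sketched in a few lines of code. This is purely an illustrative model, not an official tool: the extension names are real, but the profile contents shown here are a simplified slice, not the full RVA23 mandate list.

```python
# Illustrative sketch: a Profile pins down a mandatory set of extensions,
# while a custom embedded design can implement an arbitrary subset.
# RVA23U64_MANDATORY below is a simplified subset of the real mandate list.

RVA23U64_MANDATORY = {"I", "M", "A", "F", "D", "C", "V", "Zicsr", "Zifencei"}

def missing_for_profile(implemented: set[str], profile: set[str]) -> set[str]:
    """Return the mandatory extensions a design lacks (empty set = compliant)."""
    return profile - implemented

# An application processor targeting the profile, plus some optional extras:
app_core = {"I", "M", "A", "F", "D", "C", "V", "Zicsr", "Zifencei", "Zba", "Zbb"}
# A toothbrush-class microcontroller: just the subset its firmware needs.
mcu_core = {"I", "M", "C", "Zicsr"}

print(missing_for_profile(app_core, RVA23U64_MANDATORY))  # set() -> compliant
print(missing_for_profile(mcu_core, RVA23U64_MANDATORY))  # lacks FP, vector, ...
```

The MCU "fails" the profile check, and that is fine: it never claims RVA compliance, which is exactly the flexible-or-rigid choice described above.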
Ian: Most of my time I spend now is in the data center, between host and device. Would you say if you are targeting that, then the application profile pivots to data center, or does the data center need something more specific?
Krste: I think a lot of the basic stuff for the data center is already in RVA23. There are always more features you can add, and so the model in RISC-V is that we do major releases of the Profiles – RVA23 is a major release. There are point releases coming, which add options but no new mandatory features. Sometimes it’s what we call an expansion option, like matrix. You don’t always need it, and it’s very contained, so you provide it as an expansion option. Others we call development options: optional for now, but we’re anticipating they’ll be mandatory in a future major release. The next major release is a few years away, but we need to get working on the tools and get everybody aligned on it, which is where the development options come in. So for the high-end data center, there are requests for new things all the time. There are also new AI data types, and other things that evolve over time will make it into the next major release. But there are a lot of vendors going after the data center right now with RVA23, and they view it as adequate for a lot of the needs there.
Ian: One of the complaints that was leveled at me this week is that with other industry standards, say PCIe, they develop the spec, it has mandatory and optional parts, people go build it, and then the compliance testing suite gets put together to enable interoperability. We are not yet there with the compliance for RVA23. That’s still to come.
Krste: There’s a lot of activity happening there, I just spoke to people this morning about it! So a lot of effort is going to pushing that forward. Everybody understands it needs to be done, and there’s a lot of energy going into that right now.
Ian: I guess the main complaint that I heard from people was just the time to get the compliance test suite ready. Is that just a function of having a standards body with so many people in it?
Krste: It’s a function of “more people need to do some work”!
I think part of it is being organized. I think we are organized, and there’s a concrete effort making progress, so I’m pretty confident we’ll get this done in a timely way. The other thing to realize is that while it is important to have the compliance suite, all the vendors are carefully checking their designs against the specs as they go. And the other thing people should realize about fragmentation – the real worry I have is not somebody doing feature A versus feature B and fragmenting. It’s more people misinterpreting the spec and doing something wrong. They intended to be compliant, but they were not in some detail. Those are the worst kind of bugs. But you know, the vendors are responsible for looking at the spec and making sure they’re compliant.
And the other thing – the other forcing function is the upstream software. If you download GCC and something goes wrong on your machine, you have to figure out whether it was GCC or the machine at fault. So there’s this sort of hidden hand forcing compatibility. Everybody is using all the upstream software, and if it doesn’t work correctly on their machine, they go figure out why.
But that’s not enough. I’m not claiming that’s sufficient. I’m just saying that it’s not as bad as you might think just because there’s this forcing function, which is everybody uses the same upstream to drive their machine.
Ian: So from the SiFive point of view, you have a range of cores – everything from embedded to your intelligence side, going into vector. What decides what markets to go after with your new designs?
Krste: Well, we look for markets with customers!
The thing to understand is that we work with almost all of the major semiconductor manufacturers, and most of them have multiple markets they’re in. There’s a level of working with one group within a company, then we find there’s an opportunity, then another group in that company, and we start talking to them and that gives us ideas for future products. So for some, it’s just “land and expand” inside a big company and find other opportunities.
The other way is seeing where the competitors are leaving the market, or whether they’re stressing their customers, and looking for an opportunity. Like I said, I’ve been surprised – doing this company from the very beginning, we have so many inbounds we’ve been fortunate. All the big guys are reaching out to us from the very beginning, from day one, and we still continue to work with all the big folks, because they see us as the leaders in RISC-V. They’re all looking to move to RISC-V, so we get a lot of inbounds. But we have to do the internal analysis of if a market is worth doing, because we have to get a return on engineering investment.
Ian: You’ve got a new set of products coming out that are your RVA23-compliant hardware. Is this just a case of taking existing cores and making them RVA23 compliant, or building new ones? Do you have go-to-market customers lined up for those?
Krste: Customers have been in tape-out with our high-end P870D core, for example, in the data center. That thing scales to 256-core nodes!
But the thing to understand is this is our third generation of out-of-order core. We had the original P550, then we had the P670, and now we’re on this generation with the P870D – the earlier ones are already in silicon with customers. So we’ve had several generations of high-end out-of-order cores. Yes, it’s an iteration, but we are using the experience of the previous generations. When you build these cores, you realize that with high-performance out-of-order cores there is a lot of potential for bugs, and unless you’ve actually done a bunch of silicon iterations and trillions of cycles of FPGA emulation – that level of verification – it’s unlikely to work properly. So that’s the level of maturity of our cores, and it’s being propagated to the next generation designs. We’re adding the new features for RVA23 as we go.
Ian: When a company announces a part, a lot of people, unless they’re heavily involved in the industry, will ask when it is available. If it’s something like an AMD or an Intel chip, it’s immediately. If it’s an IP company, it’s a bit further down the line, because they have to wait for their customers. Some of the feedback for SiFive has been that when the P550 was announced, it was a good three years between announcement and when we saw silicon. Is that something that you hope gets sped up in future or is that customer dependent?
Krste: One decision is when we decide to announce the product! There are different factors in there. Some products need more lead time than others. But then when we have the production ready RTL going to our customer, the question is how complex is the system they’re building? Their own path to getting to tape-out and to board design and qualifying everything. So it’s unlikely to be less than two years. Sooner would be better.
But you know, one example is some of our intelligence cores – we just learned that they’ll be in production cars on the road next year, and the next generation will be on the road the year after. Some parts of the roadmap, like those car developments, are moving very fast, and that’s gratifying to see.
Ian: What gets you up in the morning?
Krste: The alarm clock!
Ian: That’s a smart answer.
Krste: We’re a worldwide organization, so there are a lot of early calls. But I like building machines. I like solving problems – building big hardware, building small hardware. Any kind of design problem I find interesting: learning about new domains and figuring out how to solve them in a general programmable way. Also taking any of the random acceleration ideas out there and figuring out how to put those into a general programmable platform that everybody can use and leverage in the future.
Ian: Do you miss academia?
Krste: I still go to campus once a week. I still have a few students just graduating. I do miss the grad students, that’s the biggest one. So I do [miss it], but I do still hang out with the folks on campus.
