Seems like a tough call for operating systems to make when things are moving so fast. With RISC-V it's probably better to be forward-looking given current limitations, but if a lower-spec RISC-V chip exploded in popularity you'd miss out.
Debian decided, probably very sensibly at the time, to set the minimum target for their 32-bit ARM hard-float distro to armv7. I guess hardly anyone used armv6 with hardware floating point apart from some obscure Broadcom chip. Then the original Raspberry Pi was released, moved an insane number of units, and Debian users would have been stuck with no hardware floating point. Fortunately Mike Thompson recompiled Debian for armv6 with hard float, and that Debian fork (Raspbian) ended up becoming the basis for the official Raspberry Pi OS.
The original two generations of iPhone were armv6 with hardware floating point, so that always felt to me like the sane baseline. Android wasn't using hardware floating point on armv6, but I think that was only because the compilers they had sucked (an issue that didn't apply to Apple), and many/most of the devices in fact shipped with the same hardware. I dunno... like, I don't know exactly what went into Debian's decision there, but I could see it having been made for the wrong reasons: looking at what software had been deployed rather than what hardware was common?
I was there when people were building a cross-distro consensus, and the discussion was, as I recall, basically about hardware. By definition the software deployed had been built against the previous set of distro baselines, and this being Linux the assumption is that you just recompile from source. (There was also ongoing work in parallel to add inline NEON asm implementations where needed for feature/performance parity with x86.)
Android and iOS were not relevant at all, since for Android targets Google were free to pick whatever compiler config they liked and Apple is its own thing, and neither group of phones was on the table as targets for Linux distros.
The driver behind picking armv7 was:
- clearly we need some new baseline that isn't the lowest common denominator, so that we can take advantage of the FPU
- distros don't have the resources to build for lots of targets at once
- armv7 will work for new hardware, and there's not that much armv6 stuff out there, so it can live with continuing to use the armv5 builds
- there do seem to be deployed chips with only VFPv3-D16 and no NEON (notably the Tegra chips), so we will not require NEON, and they can also use the new baseline
It's just really unfortunate that the Raspberry Pi chose a trailing-edge CPU for essentially "we happened to have this" reasons, and then it blew up into a super popular board because they got the price point and the ecosystem support right.
You can look at Debian's reasoning here: https://wiki.debian.org/ArmHardFloatPort. As I understand, the decision was mostly based on hardware.
I might be missing it, but, after going through that entire page, the only things I am seeing that are relevant are the following four sentences, and none of them provide a rationale?
> Currently the Debian armhf port requires at least an Armv7 CPU with Thumb-2 and VFP3D16.
> It might make sense for such a new port -- which would essentially target newer hardware -- to target newer CPUs. For instance, it could target Armv6 or Armv7 SoCs, and VFPv2, VFPv3-D16 or NEON.
> In practice armel will be used for older CPUs (armv4t, armv5, armv6), and armhf for newer CPUs (armv7+VFP).
> Some concern for fast-enough, pretty awesome (600MHz+) Armv6 + VFPv2 processors here - i.MX37 etc. - which will not be supported by armhf default flavour, but.. we will have to live with that
I just read it; it seems like an unfortunate chain of events. They tried to look forward a bit by looking at the generation of hardware that was current at the time, and didn't anticipate that an older chip would become that massively popular.
RVA23 is actually a decent ISA for Linux machines for the long term; RVA20 was not.
Presumably there are going to be some hardware releases later this year that Ubuntu has early knowledge of.
Does this line up with what RISC-V Android will also require?
> RVA23 is actually a decent ISA for Linux machines for the long term; RVA20 was not.
This is setting it all up to happen again with whatever is found to be wrong with RVA23.
RVA20 was missing generally expected features; RVA23 isn't.
RVA30 is N+1; presumably we won't see shipping devices for that until the early 2030s.
>Does this line up with what RISC-V Android will also require?
AIUI both Google and Microsoft selected RVA23 as baseline.
Google quote from https://riscv.org/riscv-news/2024/10/risc-v-announces-ratifi...
> "Google is delighted to see the ratification of the RVA23 Profile," said Lars Bergstrom, Director of Engineering, Google. "This profile has been the result of a broad industry collaboration, and is now the baseline requirement for the Android RISC-V Application Binary Interface (ABI)."
Seems unlikely.
> that Ubuntu has early knowledge of.
They aren’t big enough to get advance notice of hardware from any serious SoC makers. So I bet not.
This will keep happening as the omissions in the RISC-V application-processor standard are filled in.
I am really hoping there is some unannounced hardware that Ubuntu is aware of.
Can you write a kernel patch / driver to trap the unsupported instructions and provide software implementations?
The profile includes not just additional instructions but also architectural requirements that can't be emulated. The size of cache lines and reservation sets must be 64 bytes (there is no instruction to query it, like there is on ARM). Data-independent execution latency is important for protecting cryptography against timing attacks.
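As an aside on how software actually learns the cache-line size when there's no instruction for it: on Linux the usual route is to ask the kernel rather than the hardware. A rough sketch in C (assuming the platform populates the sysfs cacheinfo nodes; index0 is typically one of the L1 caches, and a more careful version would also check its "type" and "level" files):

    #include <stdio.h>

    int main(void)
    {
        /* The kernel exposes per-cache geometry under sysfs when it has
         * cacheinfo for the platform; this node holds the line size in bytes. */
        const char *path =
            "/sys/devices/system/cpu/cpu0/cache/index0/coherency_line_size";
        FILE *f = fopen(path, "r");
        if (!f) {
            perror(path);   /* not populated on every platform/kernel config */
            return 1;
        }

        unsigned line_size = 0;
        if (fscanf(f, "%u", &line_size) == 1)
            printf("cache line size: %u bytes\n", line_size);
        fclose(f);
        return 0;
    }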
Those were already in RVA22, and the difference from that to RVA23 could probably be emulated with traps though.
However, I think that some of the new instructions in RVA23 may become very common in some binaries later on, and could trap so often that they would slow those programs down considerably.
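To make the mechanism behind that concrete: trap-and-emulate works the same way in a kernel handler as in user space: catch the illegal-instruction trap, decode the faulting instruction, compute its result in software, patch the saved register state, advance the PC, and resume. Below is a minimal user-space sketch of the idea, not a real kernel patch and not RISC-V-specific; it runs on x86-64 Linux and treats the always-trapping UD2 instruction as a stand-in "unsupported" instruction whose made-up semantics are "load 42 into RAX". The instruction choice and its behaviour are purely illustrative assumptions.

    #define _GNU_SOURCE
    #include <signal.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <ucontext.h>
    #include <unistd.h>

    /* SIGILL handler: the user-space analogue of a kernel illegal-instruction
     * trap handler. Inspect the faulting instruction, compute its result in
     * software, patch the saved registers, skip the instruction, resume. */
    static void sigill_handler(int sig, siginfo_t *info, void *ctx_void)
    {
        (void)sig;
        (void)info;
        ucontext_t *ctx = ctx_void;
        uint8_t *pc = (uint8_t *)(uintptr_t)ctx->uc_mcontext.gregs[REG_RIP];

        if (pc[0] == 0x0F && pc[1] == 0x0B) {        /* UD2? */
            ctx->uc_mcontext.gregs[REG_RAX] = 42;    /* "emulated" result */
            ctx->uc_mcontext.gregs[REG_RIP] += 2;    /* skip the 2-byte insn */
            return;
        }
        _exit(1);                                    /* genuinely illegal */
    }

    int main(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_sigaction = sigill_handler;
        sa.sa_flags = SA_SIGINFO;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGILL, &sa, NULL);

        long result;
        /* Execute the "unsupported" instruction; the handler emulates it. */
        __asm__ volatile("ud2" : "=a"(result));
        printf("emulated result: %ld\n", result);    /* prints 42 */
        return 0;
    }

Each emulated instruction pays a full trap round-trip plus decode, which is exactly why anything that lands in a hot loop would be painfully slow to handle this way.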
RVA20 lacks vector support and hypervisor instructions, among other things.
You’re welcome to put a ton of effort in for dogshit performance on a bunch of $35 SBCs, but the rest of us will just upgrade.
And don’t worry, some vendor won’t come in and magically save you - Fedora is eyeing RVA22 as their baseline.
The Linux kernel has math-coprocessor emulation (mainly floating-point stuff) that can be enabled if your CPU doesn't have one. Lacking an FPU was common with consumer CPUs in the 1990s and still is for some embedded CPUs today.
Link here, although I'm sure it existed well before 2.6.12
https://www.kernelconfig.io/config_math_emulation
Can you rephrase your answer in a way that isn't brutally and unnecessarily hostile?
Honestly, it's because of the "can you do a ton of unpaid work to support my niche, non-commercial application" attitude of the OP, which I find to be extremely distasteful.
It's something I deal with frequently. I should not have taken it out on OP and I agree I could have communicated that much better.
Unfortunately, I can't edit my post or I would rephrase it significantly.
Sorry to user "Levitating", I was being a dick.
> Honestly, it's because of the "can you do a ton of unpaid work to support my niche, non-commercial application" attitude of the OP, which I find to be extremely distasteful.
I understood their "Can you" as "Can one [theoretically]", more on the curiosity side than on the entitled side.
That's the problem with open source: a bunch of people who, once in their life, want to "do it right" ("right" never comes). No adults in the room to say "this is what you got".
From a billion python packages in distribution package managers to broken screen sharing in Wayland, "right" isn't even what anyone wants.
Still, no consumer-based RVA23 mini-ITX, micro-ATX, or ATX form-factor devices.
And Orange PI 2 has a GFX blob issue.
It's worse than that -- there is not a single piece of hardware that implements RVA23 available to be bought on the market today.
There are SoCs on the market that implement RVV (Vector extensions), and SoCs on the market that implement H (Hypervisor extensions).
There are no SoCs on the market that implement both at the same time. And both are mandatory for RVA23.
I'd love to be proven wrong on the hardware availability. If there's hardware to be bought in western countries that implements both RVV and H, please let me know.
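For anyone who does have a board to check: one rough heuristic (an assumption about the /proc/cpuinfo format, not an authoritative method; recent kernels also expose extension discovery via the riscv_hwprobe syscall) is to look for the single-letter 'v' and 'h' extensions in the "isa" line, in the part before the first underscore where the multi-letter extensions start:

    #include <ctype.h>
    #include <stdio.h>
    #include <string.h>

    /* Look for the single-letter V and H extensions in the "isa" line of
     * /proc/cpuinfo, e.g. "isa : rv64imafdcvh_zicsr_zifencei...". */
    int main(void)
    {
        FILE *f = fopen("/proc/cpuinfo", "r");
        if (!f) {
            perror("/proc/cpuinfo");
            return 1;
        }

        char line[512];
        while (fgets(line, sizeof line, f)) {
            if (strncmp(line, "isa", 3) != 0)
                continue;

            char *isa = strchr(line, ':');
            if (!isa)
                continue;
            isa++;
            while (isspace((unsigned char)*isa))
                isa++;

            /* Skip the "rv64"/"rv32" prefix so its 'v' isn't miscounted. */
            if (strncmp(isa, "rv", 2) == 0) {
                isa += 2;
                while (isdigit((unsigned char)*isa))
                    isa++;
            }

            /* Single-letter extensions end at the first '_' (or end of line). */
            size_t single = strcspn(isa, "_\n");
            int has_v = memchr(isa, 'v', single) != NULL;
            int has_h = memchr(isa, 'h', single) != NULL;

            printf("V (vector): %s, H (hypervisor): %s\n",
                   has_v ? "yes" : "no", has_h ? "yes" : "no");
            fclose(f);
            return (has_v && has_h) ? 0 : 1;
        }

        fclose(f);
        fprintf(stderr, "no \"isa\" line found (not a RISC-V kernel?)\n");
        return 1;
    }

On boards available today you'd expect at most one of the two to come back "yes", which is exactly the gap described above.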
> It's worse than that -- there is not a single piece of hardware that implements RVA23 available to be bought on the market today.
I think that's fine. As an outsider without any RISC-V board around, alignment on the future seems better than having a board out today, given that performance is AFAIK still awfully subpar.
As a potential consumer, all I want is that by the time RISC-V really hits the market, people don't start hitting edge cases (like toes on furniture) with missing extensions that turn out to be critical for running the software they need. I don't want another shitshow like USB-C fast charging, where consumers can't easily tell whether a cable will work fine or fall back to slow charging.
I'd rather see RISC-V for the general public come out later than start off on the wrong foot.
>Still, no consumer-based RVA23 mini-ITX, micro-ATX, or ATX form-factor devices.
Sure. But there are such RVA22+V devices, and RVA23 will eventually succeed them.
Many IP vendors have announced RVA23 cores, but understand that the process from having a core design available for licensing to having a chip is very long, measured in years.
Among the designs that are further along in development, a highlight is Tenstorrent's Ascalon. According to them, tapeout is "imminent" (this was said at the RISC-V Summit EU a few weeks ago). That'd mean RVA23 chips competitive with Zen5 in early 2026.
> That'd mean RVA23 chips competitive with Zen5 in early 2026
Allegedly competitive, according to the vendor, which is not impartial, and with no actual benchmarks in existence to prove anything.
None of them are competitive with Zen5 on a per core basis, if you compare the published SPEC results.
Veyron V2 has comparable perf per GHz to Zen4/5, but at a lower clock frequency (3.25 GHz on N4, 3.85 GHz on N3): https://www.ventanamicro.com/technology/risc-v-cpu-ip/
Ascalon is about half as fast as Veyron V2, partially due to a lower clock frequency (~2.6 GHz): https://riscv.or.jp/wp-content/uploads/Japan_RISC-V_day_Spri... It's really designed more as a "we need a decently fast and efficient CPU for our AI accelerator" than a "let's build the fastest CPU possible".