SD cards have the very great advantage that you can have multiple ones with different OSes or versions and swap them in seconds.
It's fine for a normal user doing a few builds a day. Most files never get written to the physical device, being in tmpfs or disk cache and deleted before being flushed to disk.
Because of wear-levelling it's not how many times the busiest file is rewritten, but how many GB are written in total.
512GB cards currently cost $32, $35, or $40 depending on whether you want 100, 150, or 200 MB/s read speeds.
A GCC src tree is 2.5GB, the build tree is 11GB. It's going to take ~40 clean builds to put one write cycle on the whole card, so you'll probably get 100,000 clean builds off a card before you wear it out.
That's around 30 builds per cent.
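(Quick sanity check on that arithmetic; the ~2500 write-cycle endurance figure below is my assumption for typical TLC flash, not a quoted spec.)

    /* back-of-the-envelope on the build-wear numbers above */
    #include <stdio.h>

    int main(void) {
        double card_gb   = 512.0;
        double build_gb  = 2.5 + 11.0;      /* src tree + build tree */
        double pe_cycles = 2500.0;          /* assumed flash endurance */
        double price_usd = 35.0;

        double builds_per_pass = card_gb / build_gb;            /* ~38  */
        double total_builds    = builds_per_pass * pe_cycles;   /* ~95k */
        double builds_per_cent = total_builds / (price_usd * 100.0);

        printf("builds per full write pass: %.0f\n", builds_per_pass);
        printf("total clean builds:         %.0f\n", total_builds);
        printf("builds per cent:            %.0f\n", builds_per_cent);
        return 0;
    }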
Feels like a step towards strong-arming them into shipping products that can be supported more easily/not being left to rot in a drawer.
You either pull the future forward, or drag the past. Because of the small market, they decided to forgo generating legacy concerns before they even started seeing mainstream adoption.
I like the decision (they are choosing a better foundation) but I can see the merits either way.
They recommend 24.04.3 LTS for current hardware. Maybe they just don't want (then) old hardware to be stuck on a non-LTS release.
Leaving RVA23 support until 28.04 LTS would be FAR too long.
It would be nice to see both RVA20 and RVA23 supported in the same OS but the problem is that it's not actually practical to do runtime selection of alternative functions or libraries for all extensions in RVA23. It is possible and sensible for things such as V, perhaps, but extensions such as Zba and Zbb (not in RVA20, but supported by VisionFive 2) and Zicond, Zimop, Zcmop, Zcb have instructions which want to be sprinkled all through just about every function.
You'd have to either deny your main program code the use of all those extensions, or else completely duplicate every program binary.
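To make that trade-off concrete, here's a minimal sketch (mine, not from any article or toolchain) of the kind of per-function runtime dispatch that's reasonable for a big extension like V; glibc-style ifunc resolvers are the same idea done at load time. has_zbb() is a hypothetical probe -- a real one would use something like the kernel's hwprobe interface or /proc/cpuinfo:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static bool has_zbb(void) {
        /* hypothetical feature check; hard-coded "no" keeps the sketch portable */
        return false;
    }

    static uint64_t popcount_generic(uint64_t x) {
        uint64_t n = 0;
        while (x) { x &= x - 1; n++; }      /* clear lowest set bit */
        return n;
    }

    static uint64_t popcount_zbb(uint64_t x) {
        /* on a Zbb core this would boil down to a single cpop instruction */
        return (uint64_t)__builtin_popcountll(x);
    }

    /* one indirection per dispatched routine: fine for a handful of hot
       loops, unworkable for instructions sprinkled through every function */
    static uint64_t (*popcount_impl)(uint64_t);

    uint64_t popcount(uint64_t x) {
        if (!popcount_impl)
            popcount_impl = has_zbb() ? popcount_zbb : popcount_generic;
        return popcount_impl(x);
    }

    int main(void) {
        printf("%llu\n", (unsigned long long)popcount(0xF0F0F0F0F0F0F0F0ull));
        return 0;
    }

The indirection is cheap for something like a memcpy or a vectorised loop, but you can't pay it on every add-and-shift, which is why extensions in the Zba/Zbb/Zicond style effectively force a whole-binary decision.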
Also, not in the kernel but in SBI -- in Machine mode not Supervisor mode.
[1] estimate based on how long it takes to trap and emulate misaligned accesses on VisionFive 2.
Had to close the site without reading the article; does anyone have alternate links?
I've decided to treat these as signs that the organization running them is either dishonest or incompetent.
Better alternative posts that don't coerce you into agreeing to tracking/etc.:
https://liliputing.com/starfive-visionfive-2-lite-is-a-cheap...
https://www.cnx-software.com/2025/08/07/visionfive-2-lite-lo...
For those who don't, here's a version of the page with no full-screen banner: https://archive.is/bTEse
Not having a reject option that is as convenient as the accept option is not compliant with the GDPR.
No, this is an example of malicious compliance. There are so many bad GDPR banners because the people creating them want you to be annoyed by them. They want to have the easy path being the one that lets them collect as much data as they can and the most private path is as annoying as they believe they can get away with under the law. They want people complaining that the GDPR did nothing but cause all these annoying banners.
It'd be possible for many if not most web sites to not have such banners at all by simply defaulting to privacy-friendly behaviors, but there's too much money to be had in the behaviors the GDPR seeks to reduce.
This site is not using a loophole. It is clearly in violation.
I know ARM chip makers can just rely on the smartphone, tablet and Roku markets, but since there is no such market for RISC-V they sort of have to be good at SBCs.
My hope is that the situation for RISC-V SBCs would be an improvement over ARM SBCs given that chipmakers wouldn't be able to rely on the smartphone market for customers.
I don't think Raspberry Pi would have been started outside the margins of the smartphone market economies of scale. Sure RPi are pretty big now but the smartphone market created a world where low power CPUs and a lot of other components are available at all. My recollection is that as RPi got further away from standard chips, they struggled to balance retail availability with servicing their commercial contracts.
RISC-V, to me, seems more of an IP hedge for chipmakers who may find themselves constrained in designs or distribution in the future because the IP is controlled by potentially unfriendly companies or jurisdictions. Sure, there are some licensing fees/certifications that are friction, but the goal is independence even at the cost of redundant effort in chip and compiler design.
Even if there are builds or container images for riscv64, they are probably often not tested at all. Sometimes different architectures have weird quirks (unexpected performance issues for some code, unexpected crashes). I guess only very few maintainers will investigate and fix those.
It took quite some time until all software worked perfectly on arm/arm64. When the first Raspberry Pi was released, this was quite a different experience.
My only gripe is that the OpenWRT image (still) doesn't have Wi-Fi support for some reason.
or maybe not. who knows? would be nice if that was front and center in any review, but it never is. which leads me to believe it's chock full of binary garbage.
The RPi4/5 have a flashable bootrom now, so they don’t qualify any longer. The 1/2/3 load their second stage bootloader from the micro-SD; their first stage is burned at the factory and cannot be modified. If you remove the SD and physically destroy it, they cannot persist state or exfiltrate data.
People generally don't buy instruction sets, they buy solutions.
‘Cheaper’ only comes into view if you’re selling millions of devices, and even then there have been other designs that are similarly open for which you can’t buy really competitive cores.
Waterman, and probably his advisor Patterson, might disagree. The focus of the RISC-V design is avoiding aspects of legacy ISAs that make them harder to implement.
Secondly, for a high performance core, the consensus seems to be that the ISA mostly doesn't matter. The things that make a high performance core are mostly things that happen downstream of the instruction decoders. Heck, even the x86 ISA allows producing some pretty amazingly good cores. Conversely, for a simple in-order cheap core, the ISA matters much more.
Once you've decoded the crazy x86 instructions into µops in the µop cache then, yes, you're fine: x86 avoided the worst CISC mistakes of multiple memory accesses (and potential page faults) in one instruction by having only one memory operand per instruction and no indirect addressing.
Temporally, because (knock on wood) RISC-V is going to take over the RadHard space market between Microchip/NASA’s High Performance Space Computer [1] and the Gaisler chips [2].
In a non-proprietary sense, because much of NVDA's silicon is alleged to be RISC-V.
[1] https://www.microchip.com/en-us/products/microprocessors/64-...
[2] https://www.gaisler.com/secondary-product-category/rad-hard-... see GR765 & GR801
1. It is non-trivial additional work to add since these are high frequency signals.
2. It is non-trivial additional work to validate.
3. The hardware PCI-E support is likely buggy because it is not well tested and few want to volunteer to spend time working with the SoC supplier on the bugs.
And there it is. Yes, PCI-E 3.0 from 2010, 15 years ago, involves 4 GHz wire-level signals. A 4x PCI-E connector has four lanes of this, each a differential pair in each direction, not cross-talking, not violating EMC limits, etc. This requires excellent layout and high quality PCBs with enough layers.
Never mind 4.0, 5.0...
People just do not appreciate what their expectations entail. A recent discussion about "soldered" RAM in the Framework Desktop thread illustrates this, where someone just can't accept that there are reasons for the things board designers do. After you get done routing the display connector, multiple ethernet, USB, DRAM and all the other high frequency stuff on a couple square inches of low cost PCB, there isn't much room for the stretch goal of PCI-E good enough to get through EMI testing.
It is possible. Raspberry Pi did it. But it's a question of cost, talent and time-to-market.
Though, it looks like on the lite version mentioned here, there is but one PCIe lane available on the slot.
Edit: adding that size is likely the reason a full regular PCIe slot is not there. The PCIe card would likely be as big as the board itself. :)
The number of PCIe lanes available is typically defined by the CPU in an SoC context, or the lowest common denominator between the CPU and chipset in a traditional motherboard architecture. M.2 defines a physical connector that may connect to different things depending on its intended use. An example is the difference between those intended for SATA or NVMe. Additionally, it is common for lower bandwidth peripherals like wifi cards to use an M.2 connector while only being wired into a subset of the board's possible PCIe lanes.
https://www.crucial.com/articles/about-ssd/m2-with-pcie-or-s...
First, SBC processors are just a step up from microcontrollers; they're designed to talk to GPIO devices and servos and UARTs and sensors, not GPUs and network cards. The JH7110 only has two lanes of PCIe 2.0, one of which is used for USB 3.0 and the other mostly (I assume) to provide an M.2 interface. However, it also has 6 UARTs (Universal Asynchronous Receiver-Transmitter, think RS232 serial), 7 channels of SPI (not counting the QSPI Flash controller directly integrated), 8 channels of PWM, 7 channels of I2C, 64 channels of general-purpose IO, I2S, SPDIF, two channels of CANbus, a directly integrated Ethernet MAC, USB 2.0 host, a MIPI-CSI camera interface and a MIPI-DSI video display block with either HDMI output or direct drive of a 24-bit parallel LCD.
Embedded system designers don't want to add a PCIe-to-RS232 card to their industrial robot, NAS, or video camera, heck, they don't want to add an external GPU. They don't even want to add a separate northbridge/southbridge or PCH, they want a single-chip SOM. Going up and down those layers between PCIe and SATA or USB or Ethernet is expensive in terms of chip count and power.
Second, I don't think they want to deal with the drivers. If you want to plug in your choice of PCIe device - be it a GPU, RAID card, sound card, who knows what - that's a level of compatibility and modularity that SBCs are bad at.
PCI Express x1 requires only 11 data signals plus Vcc and Gnd across 18 pins, but even the v1 spec from 2003 requires a data rate of 2.5 GT/s (as opposed to PCI's data rate of only 33 MT/s). This is a much higher rate than most other data signals usually found on these boards, and rates this high have their own challenges in terms of signal routing.
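For scale, a rough re-derivation of those link rates:

    /* PCIe v1 per-lane rate vs classic PCI, and the bit time the board
       layout has to preserve */
    #include <stdio.h>

    int main(void) {
        double pcie_v1_ts = 2.5e9;   /* 2.5 GT/s per lane (2003 spec) */
        double pci_ts     = 33e6;    /* classic PCI: 33 MT/s */

        printf("rate ratio: %.0fx\n", pcie_v1_ts / pci_ts);   /* ~76x   */
        printf("bit period: %.2f ns\n", 1e9 / pcie_v1_ts);    /* 0.40 ns */
        return 0;
    }

A 0.4 ns bit time is what pushes you into controlled-impedance differential routing rather than the relaxed layout a 33 MHz parallel bus tolerates.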
Advertiser consent for profiling purposes is required to read this page. 185 advertising companies.
WTF.
For normal program code it is closer to a Pi 4 than to a Pi 3, similar to all the very popular A55 boards that have come out more recently than the Pi 4.
The only way it is slower than a Pi 3 is if the Pi 3 program is using SIMD (Neon), which the VisionFive 2 lacks.
The worst part of this Pi 3 comparison is that the Pi 3 has only 512MB or 1GB of RAM, which is extremely limiting in the modern world. This RISC-V board comes with a *minimum* of 2GB and is available with 8GB for $37.
The RAM difference alone makes many things possible that are impossible on a Pi 3, and many other things much faster, regardless of the raw CPU speed.
And then you have the M.2 NVMe SSD, something that neither the Pi 3 nor Pi 4 support, which again makes a whole raft of things much faster, even if the single lane means it can "only" do something near 400 MB/s (vs SD cards at 40 or 80 MB/s).
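(Rough sanity check on that figure, assuming the JH7110's single PCIe 2.0 lane at 5 GT/s with 8b/10b encoding; real NVMe throughput lands a bit below the raw ceiling after protocol overhead.)

    /* theoretical ceiling of a PCIe 2.0 x1 link */
    #include <stdio.h>

    int main(void) {
        double gts        = 5.0;           /* PCIe 2.0: 5 GT/s per lane */
        double efficiency = 8.0 / 10.0;    /* 8b/10b line encoding */
        double lane_mb_s  = gts * 1000.0 * efficiency / 8.0;

        printf("PCIe 2.0 x1 ceiling: %.0f MB/s\n", lane_mb_s);   /* 500 */
        printf("vs SD card:          ~40-80 MB/s\n");
        return 0;
    }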