This sounded really interesting... till I read this:
> It’s an AI-native operating system. Artificial neural networks are built in and run locally. The OS understands what applications can do, what they expose, and how they fit together. It can integrate features automatically, without extra code. AI is used to extend your ability, help you understand the system and be your creative aid.
(From https://radiant.computer/system/os/)
That's... kind of a weird thing to have? Other than that, it actually looks nice.
I don't think anyone denies the current utility of AI. A big problem of the current OSes is that AI features are clumsily bolted on without proper context. If the entire system is designed from the ground up for AI and the model runs locally, perhaps many of the current issues will be diminished.
I do. "AI" is not trustworthy enough to be anything but "clumsily bolted on without proper context."
I also understand that the old BBS way of communicating isn't perfect, but looking into web browsers seems to just be straight up insanity. Surely we can come up with something different now that takes the lessons learned over the past few decades combined with more modern hardware. I don't pretend to know what that would look like, but the idea of being able to fully understand the overall software stack (at least conceptually) is pretty tempting.
I was thinking the same thing. Out of curiosity I pasted it into one of those detection sites and it said 0% AI written, but the tone of vague transcendence certainly raised my eyebrow.
What clashes for me is that I don't see how that has anything to do with the mission statement about getting away from social media and legacy hardware support. In fact it seems kind of diametrically opposite, suggesting intentionally hand-crafted, opinionated architecture and software principles. Nothing about the statement would have led me to believe that AI is the culmination of the idea.
And again, the statement itself I am fine with! In fact I am against the culture of reflex backlash to vision statements and new ventures. But I did not take the upshot of this particular statement to be that AI was the culmination of the vision.
They haven't done their due diligence: there's already a well-known language named R: https://www.r-project.org/. The prime isn't sufficient disambiguation.
Edit: where did you see it's called "R"? It looks like they call the system language "Radiance" : https://radiant.computer/system/radiance/
I've programmed for a long time, but always struggled with Assembly and C, so take my views with a grain of salt.
An admirable goal. However putting that next to a bunch of AI slop artwork and this statement...
> One of our goals is to explore how an A.I.-native computer system can enhance the creative process, all while keeping data private.
...is comically out of touch.
The intersection between "I want simple and understandable computing systems" and "I want AI" is basically zero. (Yes, I'm sure some of you exist, my point is that you're combining a slim segment of users who want this approach to tech with another slim segment of users who want AI.)
They want to implement custom hardware with support for audio, video, everything, a completely new language, a ground-up OS, and also include AI.
Sounds easy enough.
The image on this page is wild: https://radiant.computer/principles/
Of course, I am intrigued by open architecture. Will they be able to solve graphic card issues though?
If your question is about the general intricacies in graphics that usually have bugs, then I'd say they have a much better chance at solving those issues than other projects that try to support 3rd party graphics hardware.
And as with the text, the art feels AI generated. In fact I even think it's quite beautiful for what it is, but it reminds me of "dark fantasy" AI-generated art on TikTok.
I have nothing against an aesthetic vision being the kernel of inspiration for a computing paradigm (I actually think the concept art process is a fantastic way to ignite your visionary mojo, and I'm flashing back to amazing soviet computing design art).
But I worry about the capacity and expertise to be able to follow through given the vagueness of the text and the, at least, strongly-suggestive-of-AI text and art, which might reflect the limited capacity and effort even to generate the website let alone build out any technology.
Somehow this makes me immediately not care about the project; I expect it to be incomplete vibe-coded filler somehow.
Odd what a strong reaction it invokes already. Like: if the author couldn’t be bothered to write this, why waste time reading it? Not sure I support that, but that’s the feeling.
This sounds a lot like a Smalltalk running as the OS until they started talking about implementing a systems language.
I don't see that in this project. This isn't defined by a clean slate. It is defined by properties that it does not want to be.
Off the top of my head I can think of a bunch of hardware architectures that would require all-new software. There would be amazing opportunities for discovery writing software for these things. The core principles of the software for such a machine could be based upon a solid philosophical consideration of what a computer should be. Not just "One that doesn't have social media" but what are truly the needs of the user. This is not a simple problem. If it should facilitate but also protect, when should it say no?
If software can run other software, should there be an independent notion of how that software should be facilitated?
What should happen when the user directs two pieces of software to perform contradictory things? What gets facilitated, and what gets disallowed?
I'd love to see some truly radical designs. Perhaps a model where processing and memory are one: a very simple core per 1 KB of SRAM per 64 KB of DRAM per megabyte of flash; machines with 2^n cores where each core has a direct data channel to every core whose n-bit core ID differs in exactly one bit (plus one link to the core with all bits different).
An n=32 system would have four billion cores and 4 terabytes of RAM and nearly enough persistent storage, but it would take talking through up to 15 intermediaries to communicate between any two arbitrary cores.
You could probably start with a much lower n. Then consider how to write software for it that meets the principles and the criteria of how it should behave.
Different, clean slate, not easy.
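A quick back-of-the-envelope check of the routing claim above (my own illustration, not from the project): in a hypercube where core IDs are linked when they differ in one bit, plus the extra all-bits-different link, the shortest path either flips the differing bits one at a time or takes the complement link first.

```python
# Hop counts in the proposed topology: 2^n cores, links between IDs
# differing in exactly one bit, plus a link to the bitwise complement.

def hops(a: int, b: int, n: int) -> int:
    """Minimum hops between cores a and b in this folded n-cube."""
    differing = bin(a ^ b).count("1")  # Hamming distance between IDs
    # Either flip the differing bits one link at a time, or take the
    # all-bits-different link first and then flip the remaining bits.
    return min(differing, n - differing + 1)

n = 32
# Worst case over all possible Hamming distances from core 0:
worst = max(hops(0, (1 << k) - 1, n) for k in range(n + 1))
print(worst - 1)  # intermediaries on the worst-case path -> 15
```

The worst case lands at a Hamming distance of 16 or 17, giving 16 hops, which matches the "up to 15 intermediaries" figure.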
There are reasons that current architectures are mostly similar to each other, having evolved over decades of learning and research.
> Perhaps a model where processing and memory are one: a very simple core per 1 KB of SRAM per 64 KB of DRAM per megabyte of flash,
To serve what goal? Such a design certainly wouldn’t be useful for general purpose computing and it wouldn’t even serve current GPU workloads well.
Any architecture that requires extreme overhauls of how software is designed and can only benefit unique workloads is destined to fail. See Itanium for a much milder example that still couldn’t work.
> machines with 2^n cores where each core has a direct data channel to every core whose n-bit core ID differs in exactly one bit (plus one link to the core with all bits different).
Software isn’t the only place where big-O scaling is relevant.
Fully connected graph topologies are great on paper, but the number of connections scales quadratically. For a 64-core fully connected CPU topology you would need 2,016 separate data buses.
Those data buses take up valuable space. Worse, the majority of them are going to be idle most of the time. It’s extremely wasteful. The die area would be better used for anything else.
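To put numbers on this (a sketch of mine, not the commenter's): a full mesh needs one bus per pair of cores, which grows quadratically, while the hypercube-style fabric proposed earlier in the thread needs only n links per core.

```python
# Link counts: full mesh vs. hypercube, for the same number of cores.

def full_mesh_links(cores: int) -> int:
    """Every pair of cores gets its own bus: C(cores, 2)."""
    return cores * (cores - 1) // 2

def hypercube_links(n: int) -> int:
    """2^n cores, n links each, every link shared by two cores."""
    return n * 2 ** (n - 1)

print(full_mesh_links(64))   # 2016 buses for 64 fully connected cores
print(hypercube_links(6))    # 192 links for the same 64 cores as a 6-cube
```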
> An n=32 system would have four billion cores
A four billion core system would be the poster child for Amdahl’s law and a great example of how not to scale compute.
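A rough sketch of why (illustrative numbers, not measurements): Amdahl's law bounds the speedup by the serial fraction of the work, so even a tiny serial portion caps what billions of cores can deliver.

```python
# Amdahl's law: speedup on n cores when a fraction p of the work
# can be parallelized and the rest (1 - p) stays serial.

def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Even with 99% of the work parallelizable, four billion cores
# buy less than a 100x speedup -- the serial 1% dominates.
print(round(amdahl_speedup(0.99, 2 ** 32)))  # ~100
```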
Let’s not be so critical of companies trying to make practical designs.
I think you're missing the point, and I don't think OP is "being critical of companies making practical designs."
Also, I think OP was imagining some kind of tree-based topology, not a fully connected graph, since they said:
> ...but it would take talking through up to 15 intermediaries to communicate between any two arbitrary cores.
I'm interested to hear about the plans or capabilities in R' or Radiance for things like concurrent programming, asynchronous/scheduling, futures, and invisible or implied networking.
AI is here and will be a big part of future personal computing. I wonder what type of open source accelerator for neural networks is available as a starting point. Or if such a thing exists.
One of the opportunities for AI is in compression codecs that could provide for very low latency low bandwidth standards for communication and media browsing.
For users, the expectation will shortly be that you can talk to your computer verbally or send it natural language requests to accomplish tasks. It is very interesting to think how this could be integrated into the OS for example as a metadata or interface standard. Something like a very lightweight version of MCP or just a convention for an SDK filename (since software is distributed as source) could allow for agents to be able to use any installed software by default. Built in embeddings or vector index could also be very useful, maybe to filter relevant SDKs for example.
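A purely hypothetical sketch of the convention floated above: each installed program ships a machine-readable manifest at a well-known filename, and an agent enumerates them to learn what it can invoke. Every name here (`sdk.json`, its fields, the directory layout) is my invention, not anything Radiant has specified.

```python
# Hypothetical "SDK filename" convention: agents discover what installed
# software can do by scanning for a manifest at a well-known path.
import json
import pathlib

def discover_sdks(root: pathlib.Path):
    """Yield (program_name, manifest_dict) for every installed sdk.json."""
    for manifest in sorted(root.glob("*/sdk.json")):
        yield manifest.parent.name, json.loads(manifest.read_text())

# A manifest might declare actions an agent can invoke on a user's behalf:
example_manifest = {
    "program": "notes",
    "actions": [{"name": "create_note", "args": ["title", "body"]}],
}
```

An embeddings index over these manifests could then do the "filter relevant SDKs" step the comment mentions.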
If content centric data is an assumption and so is AI, maybe we can ditch Google and ChatGPT and create a distributed hash embedding table or something for finding or querying content.
It's really fun to dream about idealized or future computers. Congratulations for getting so far into the details of a real system.
One of my more fantasy style ideas for a desktop uses a curved continuous touch screen. The keyboard/touchpad area is a pair of ergonomic concave curves that meet in the middle and level out to horizontal workspaces on the sides. The surface has a SOTA haptic feedback mechanism.
It’s every engineer’s dream - to reinvent the entire stack, and fix society while they’re at it (a world without social media, sign me up!).
Love the retro future vibes, complete with Robert Tinney-like artwork! (He did the famous Byte Magazine covers in the late 70s and early 80s).
https://tinney.net/article-this-1981-computer-magazine-cover...
Why does it always need to be so difficult? We already have the tools. Our methods, constantly changing and translblahbicatin' unto the falnords, snk snk... this kind of contrafabulation needs to cease.
Just sayin'.
IPFS+Lua. It's all we really need.
Yes yes, new languages are best languages, no no, we don't need it to be amazing, just great.
It'll be great.
> Hardware and software must be designed as one
Here, they describe one issue with computers as being their layers of abstraction, which hide complexity. But...
> Computers should feel like magic
I'm not sure how the authors think "magic" happens, but it's not through simplicity. Early computers were quite simple, but I can guarantee most modern users would not think they were magical to use. Of course, this also conflicts with the idea that...
> Systems must be tractable
Why would a user need to know how every aspect of a computer works if they're "magic" and "just work"?
Anyway, I'm really trying not to be cynical here. This just feels like a list written by someone who doesn't really understand how computers or software came to work the way they do.
Yes, but I think it can also have a kind of liminal impression of an internal logic.
I wonder why the Unix standard doesn't start dropping old syscalls and standards? Does it have to be strictly backwards compatible?
Also the website is very low contrast (brighten up that background gray a bit!)
The Prime Radiant featured in Foundation.
The task that has been set is gigantic. Despite that, they've decided to make it even harder by designing a new programming language on top of it (this seems to be all the work done to date).
The hardware challenge alone is quite difficult. I don't know why that isn't the focus at this stage. It is as if the author is suggesting that only the software is a problem, when some of the biggest issues are actually closed hardware. Sure, Linux is not ideal, but it's hardly relevant in comparison.
I think this project suffers from doing too much abstract thinking without studying existing concrete realities.
I would suggest tackling one small piece of the problem in the hardware space, or building directly on some of the work others have done.
I don't disagree with the thesis of the project, but I think it's a MUCH bigger project than the author suggests and would/will require a concentrated effort from many groups of people working on many sub-projects.
mwcampbell•1h ago
nicksergeant•1h ago
debo_•1h ago
d-us-vb•1h ago
What is a screen reader but something that can read the screen? It needs metadata from the GUI, which ought to be available if the system is correctly architected. It needs navigation order, which ought to be something that can be added later with a separate metadata channel (since navigation order should be completely decoupled from the implementation of the GUI).
The other topic of accessibility a la Steve Yegge: the entire system should be approachable to non-experts. That's already in their mission statement.
I think that the systems of the past have trained us to expect a lack of dynamism and configurability. There is some value to supporting existing screen-readers, like ORCA, since power users have scripts and whatnot. But my take is that if you provide a good mechanism that supports the primitive functionality and support generalized extensibility, then new and better systems can emerge organically. I don't use accessibility software, but I can't imagine it's perfect. It's probably ripe for its own reformation as well.
throwup238•36m ago
Good screen readers track GUI state which makes it hard to tack on accessibility after the fact. They depend on the identity of the elements on the screen so they can detect relevant changes.
Lerc•39m ago
I think those principles would embody the notion that the same thing cannot serve all people equally. Simultaneously, for people to interact, interoperability is required. For example, I don't think everyone should use the same word processor. It is likely that blind people would be served best by a word processor designed by blind people. Interoperable systems would aim to neither penalise nor favour users for using a different program for the same task.
glenstein•28m ago
I'd like to think that prioritizing early phase momentum of computing projects leads to more flowers blooming, and ultimately more accessibility-enabled projects in the long run.