I am perfectly fine with it remaining a closed alpha while Jonathan irons out the design and enacts his vision, but I hope its source gets released or forked as free software eventually.
What I am curious about, which is how I evaluate any systems programming language, is how easy it is to write a kernel with Jai. Do I have access to an asm keyword, or can I easily link assembly files? Do I have access to the linker phase to customize the layout of the ELF file? Does it need a runtime to work? Can I disable the standard library?
Tbh, I think a lot of open source projects should consider following a similar strategy --- as soon as something's open sourced, you're now dealing with a lot of community management work which is onerous.
This is a common misconception. You can release the source code of your software without accepting contributions.
When you're a somewhat famous programmer releasing a long anticipated project, there's going to be a lot of eyes on that project. That's just going to come with hassle.
Well, it is the public internet, people are free to discuss whatever they come across. Just like you're free to ignore all of them, and release your software Bellard-style (just dump the release at your website, see https://bellard.org/) without any bug tracker or place for people to send patches to.
You have the legal right to use, share, modify, and compile SQLite's source. If it were Source Available, you'd have the right to look at it, but do none of those things.
I doubt we're going to come to an agreement here, though, so I'll leave it at that.
Emphasis on even. It can have such rights or not; the term may still apply regardless.
Open source is not a philosophy, it is a license.
It's not even contributions; it's that other people might start asking for features or discussing direction independently (which is fine, but jblow has been on record saying that he doesn't want even the distraction of that).
The current idea of keeping jai closed source is to control the type of people who get to alpha test it: people who are capable of overlooking the jank but have feedback on fundamental issues that aren't related to polish. They would also be capable of accepting the alpha-level completeness of the libraries, and of distinguishing a compiler bug from their own bug or misuse of a feature, etc.

You can't get this level of control if the source is open.
> In order to keep SQLite in the public domain and ensure that the code does not become contaminated with proprietary or licensed content, the project does not accept patches from people who have not submitted an affidavit dedicating their contribution into the public domain.
But it used to read
> In order to keep SQLite in the public domain and ensure that the code does not become contaminated with proprietary or licensed content, the project does not accept patches from unknown persons.
(I randomly picked a date and found https://web.archive.org/web/20200111071813/https://sqlite.or... )
What chii is suggesting is open sourcing Jai now may cause nothing but distractions for the creator with 0 upside. People will write articles about its current state, ask why it's not like their favorite language or doesn't have such-and-such library. They will even suggest the creator is trying to "monopolize" some domain space because that's what programmers do to small open source projects.
That's a completely different situation from SQLite and Linux, two massively funded projects so mature and battle-tested that low-effort suggestions for the projects are not taken seriously. If I write an article asking SQLite to be completely event-sourcing focused in 5 years, I would be rightfully dunked on. Yet look at all the articles asking Zig to be "Rust but better."
I think you can look at any budding language over the past 20 years and see that people are not kind to a single maintainer with an open inbox.
There are positives and negatives to it; I'm not naive to the way the world works. People have free speech and the right to criticise the language, with or without access to the compiler and toolchain itself; you will never stop the tide of crazy.
I personally believe that you can do open source with strong stewardship even in the face of lunacy; the SQLite contributions policy is a very good example of handling this.
Closed or open, Blow will do what he wants. Waiting for a time when jai is in a "good enough state" will not change any of the insanity that you've mentioned above.
You say this now, but between 2013 and around 2023 the prevailing view was that if you don't engage with the community and don't accept PRs, it is not open source. And people would start bad-mouthing the project around the internet.
Working on a project is hard enough as it is.
Again, not between 2015 and ~2023. And after what happened I don't blame people who don't want to do it.

And in case you somehow think I am against you: I am merely pointing out what happened between 2013 and 2023. I believe you were also one of the few on HN who fought against it.
You'd be really hard pressed to find somebody who doesn't consider SQLite to be open source.
Releasing it when you're not ready to collect any upside from that decision ("simply ignore them") but will incur all the downside from a confused and muddled understanding of what the project is at any given time sounds like a really bad idea.
A lot of projects use open source as a marketing ploy. I'm somewhat glad that jai is being developed this way: it's as opinionated as it can be, and with the promise to open source it after completion, I feel that is sufficient.
Keeping things closed source is one way of indicating that. Another is to use a license that contains "THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED [...]" and then let people make their own choices. Just because something is open source doesn't mean it's ready for widespread adoption.
One can be put off by whatever one is put off by. I've gotten to the point where I realized that I don't need to listen to everyone's opinion. Everyone's got one. If an opinion is important, it will likely be shared by more than one person. From that it follows that there's no need to subject oneself to specific people one is put off by. Or put another way: if there's an actionable critique, and two people are stating it, and one is a dick and the other isn't, I'll pay attention to the one who isn't a dick. Life's too short to waste it with abrasive people, regardless of whether that is "what is in their heart" or a constructed persona. The worst effect of the "asshole genius" trope is that it makes a lot of assholes think they are geniuses.
Sometimes nobody else shares the opinion and the “abrasive person” is both good-hearted and right in their belief: https://en.m.wikipedia.org/wiki/Ignaz_Semmelweis
Personally, I’d rather be the kind of person who could have evaluated Semmelweis’s claims dispassionately rather than one who reflexively wrote him off because he was strident in his opinions. Doctors of the second type tragically shortened the lives of those under their care!
I don’t see what is on topic or constructive about your outburst.
In a recent interview he mentioned they are aiming for a release later this year: https://youtu.be/jamU6SQBtxk?si=nMTKbJjZ20YFwmaC
> Do I have access to an asm keyword,
Yes, D has a builtin assembler
> or can I easily link assembly files?
Yes
> Do I have access to the linker phase to customize the layout of the ELF file?
D uses standard linkers.
> Does it need a runtime to work?
With the -betterC switch, it only relies on the C runtime
> Can I disable the standard library?
You don't need the C runtime if you don't call any of the functions in it.
That exists; it's called garbage collection.
If you don't want the performance characteristics of garbage collection, something has to give. Either you sacrifice memory safety or you accept a more restrictive paradigm than GC'd languages give you. For some reason, programming language enthusiasts think that if you think really hard, every issue has some solution out there without any drawbacks at all just waiting to be found. But in fact, creating a system that has zero runtime overhead and unlimited aliasing with a mutable heap is as impossible as finding two even numbers whose sum is odd.
There is a prominent contributor to HN whose profile says they dream of a world where all languages offer automatic memory management and I think about that a lot, as a low-level backend engineer. Unless I find myself writing an HFT bot or a kernel, I have zero need to care about memory allocation, cycles, and who owns what.
Productivity >> worrying about memory.
In games you have 16ms to draw billion+ triangles (etc.).
In web, you have 100ms to round-trip a request under arbitrarily high load (etc.).
Cases where you cannot "stop the world" at random and just "clean up garbage" are quite common in programming. And when they happen in GC'd languages, you're much worse off.
(As with any low-pause collector, the rest of your code is uniformly slower by some percentage because it has to make sure not to step on the toes of the concurrently-running collector.)
In practice it's actually closer to 10ms for large heaps. Large being around 220 GB.
Maybe my definition is bad though.
Which games are these? Are you referring to games written in Unity where the game logic is scripted in C#? Or are you referring to Minecraft Java Edition?
I seriously doubt you would get close to the same performance in a modern AAA title running in a Java/C# based engine.
https://dev.epicgames.com/documentation/en-us/unreal-engine/...
You're right that there is a difference between "engine written largely in C++ and some parts are GC'd" vs "game written in Java/C#", but it's certainly not unheard of to use a GC in games, pervasively in simpler ones (Heck, Balatro is written in Lua!) and sparingly in even more advanced titles.
You can recognize them by their poor performance.
With (1) you get the benefits of GC with, in many cases, a single line of code. This handles a lot of use cases. For those it doesn't, `defer` is that "other single line".
I think the issue being raised is the "convenience payoff for the syntax/semantics burden". The payoff for temp-alloc and defer is enormous: you make the memory management explicit so you can easily see-and-reason-about the code; and it's a trivial amount of code.
There is something that feels deeply wrong with RAII-style languages: you carry the burden of reasoning about implicit behaviour, and all the while this behaviour saves you nothing. It's the worst of both worlds: hiddenness and burden.
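To make the tradeoff concrete, here is a minimal sketch (in Rust, since Jai code can't be shown here) of the kind of temporary/per-frame arena the comments above describe; the `TempArena` name and API are invented for illustration, and there is no bounds handling. The point is that the memory management is explicit and visible: one call hands out scratch memory, one call releases it all.

    // A minimal per-frame bump arena (illustrative sketch, no bounds handling):
    // everything allocated from it is released by one explicit reset call.
    struct TempArena {
        buf: Vec<u8>,
        used: usize,
    }

    impl TempArena {
        fn with_capacity(bytes: usize) -> Self {
            TempArena { buf: vec![0; bytes], used: 0 }
        }

        // Hand out scratch memory; no individual frees, no destructors.
        fn alloc(&mut self, bytes: usize) -> &mut [u8] {
            let start = self.used;
            self.used += bytes;
            &mut self.buf[start..self.used]
        }

        // The single, visible line that releases everything from this frame.
        fn reset(&mut self) {
            self.used = 0;
        }
    }

    fn frame(arena: &mut TempArena) {
        let scratch = arena.alloc(1024); // short-lived working memory
        scratch[0] = 42;
        arena.reset(); // explicit and easy to see; nothing hidden
    }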
Not that I'm such a Rust hater, but this is also a simplification of the reality. The term "fighting the borrow checker" is these days a pretty normal saying, and it implies that the borrow checker may be automatic, but 90% of its work is telling you: no, try again. That is hardly "without needing to do much extra at all".
That's what you're missing.
1. The borrow checker is indeed a free lunch.
2. Your domain lends itself well to Rust; other domains don't.
3. Your code is more complicated than it would be in other languages to please the borrow checker, but you are unaware of it because it's just the natural process of writing code in Rust.
There's probably more things that could be going on, but I think this is clear.
I certainly doubt it's #1, given the high volume of very intelligent people that have negative experiences with the borrow checker.
Just like any programming paradigm, it takes time to get used to, and that time varies between people. And just like any programming paradigm, some people end up not liking it.
That doesn't mean it's a "free lunch."
Because this phrase existed, it became the thing people latch onto as a complaint, often even when there is no borrowck problem with what they were writing.
Yes, of course, when you make lifetime mistakes the borrow checker means you have to fix them. It's true that, in a sense, in a GC language you don't have to fix them (although the consequences can be pretty nasty if you don't) because the GC will handle it, and that in a language like Jai you can just endure the weird crashes (but remember this article: the weird crashes aren't "Undefined Behaviour" apparently, even though that's exactly what they are).
As a Rust programmer I'm comfortable with the statement that it's "without needing to do much extra at all".
NLL's final implementation (Polonius) hasn't landed yet, and many of the original cases that NLL were meant to allow still don't compile. This doesn't come up very often in practice, but it sure sounds like a hole in your argument.
What does come up in practice is partial borrowing errors. It's one of the most common complaints among Rust programmers, and it definitely qualifies as having to fight/refactor to get obviously correct code to compile.
For some people. For example, I personally have never had a partial borrowing error.
> it definitely qualifies as having to fight/refactor to get obviously correct code to compile.
That's not a given. That is, while it's code that could work, it's not obviously correct. Rust cares a lot about the contract of function signatures, and partial borrows violate the signature; that's why they're not allowed. Some people want to relax that restriction. I personally think it's a bad idea.
People want to be able to specify partial borrowing in the signatures. There have been several proposals for this. But so far nothing has made it into the language.
Just to give an example of where I've run into countless partial borrowing problems: writing a Vulkan program. The usual pattern in C++ etc. is to just have a giant "GraphicsState" struct that contains all the data you need. Then you just pass a reference to that to any function that needs any state. (Of course, this is not safe, because you could have accidental mutable aliasing.)
But in Rust, that just doesn't work. You get countless errors like "Can't call self.resize_framebuffer() because you've already borrowed self.grass_texture" (even though resize_framebuffer would never touch the grass texture), "Can't call self.upload_geometry() because you've already borrowed self.window.width", and so on.
So instead you end up with 30 functions that each take 20 parameters and return 5 values, and most of the code is shuffling around function arguments.
It would be so much nicer if you could instead annotate that resize_framebuffer only borrows self.framebuffer, and no other part of self.
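A minimal Rust sketch of the error being described; the field and method names are taken from the comment, the bodies are invented, and this version intentionally does not compile:

    struct GraphicsState {
        framebuffer: Vec<u8>,
        grass_texture: Vec<u8>,
    }

    impl GraphicsState {
        fn resize_framebuffer(&mut self) {
            self.framebuffer.clear(); // never touches grass_texture
        }

        fn draw(&mut self) {
            let grass = &self.grass_texture;
            // error[E0502]: cannot borrow `*self` as mutable because it is
            // also borrowed as immutable; the method call borrows all of `self`,
            // even though it only ever uses `framebuffer`.
            self.resize_framebuffer();
            let _ = grass.len();
        }
    }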
That's correct. That's why I said "Some people want to relax that restriction. I personally think it's a bad idea."
> The usual pattern in C++ etc. is to just have a giant "GraphicsState" struct that contains all the data you need. Then you just pass a reference to that to any function that needs any state.
Yes, I think that this style of programming is not good, because it creates giant balls of aliasing state. I understand that if the library you use requires you to do this, you're sorta SOL, but in the programs I write, I've never been required to do this.
> So instead you end up with 30 functions that each take 20 parameters and return 5 values, and most of the code is shuffling around function arguments
Yes, this is the downstream effects of designing APIs this way. Breaking them up into smaller chunks of state makes it significantly more pleasant.
I am not sure that it's a good idea to change the language to make using poorly designed APIs easier. I also understand that reasonable people differ on this issue.
What they're describing is the downstream effect of not designing APIs that way. If you could have a single giant GraphicsState and define everything as a method on it, you would have to pass around barely any arguments at all: everything would be reachable from the &mut self reference. And either with some annotations or with just a tiny bit of non-local analysis, the compiler would still be able to ensure non-aliasing usage.
"functions that each take 20 parameters and return 5 values" is what you're forced to write in alternative to that, to avoid partial borrowing errors: for example, instead of a self.resize_framebuffer() method, a free function resize_framebuffer(&mut self.framebuffer, &mut self.size, &mut self.several_other_pieces_of_self, &mut self.borrowed_one_by_one).
I agree that the severity of this issue is highly dependent on what you're building, but sometimes you really do have a big ball of mutable state and there's not much you can do about it.
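Continuing the hypothetical GraphicsState sketch from above, this is roughly what the "pass the fields one by one" workaround looks like: the free function borrows only the field it needs, so the borrows are visibly disjoint and the code compiles.

    struct GraphicsState {
        framebuffer: Vec<u8>,
        grass_texture: Vec<u8>,
    }

    // Borrows only the framebuffer, so it can coexist with borrows of other fields.
    fn resize_framebuffer(framebuffer: &mut Vec<u8>) {
        framebuffer.clear();
    }

    impl GraphicsState {
        fn draw(&mut self) {
            let grass = &self.grass_texture;           // shared borrow of one field
            resize_framebuffer(&mut self.framebuffer); // mutable borrow of another: fine
            let _ = grass.len();
        }
    }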
This being said, yes Rust is useful to verify those scenarios because it _does_ specify them, and despite his brash takes on Rust, Jon admits its utility in this regard from time to time.
Nah, it's going to be Undefined. What's going on here is that there's an optimising compiler, and the way compiler optimisation works is you Define some but not all behaviour in your language and the optimiser is allowed to make any transformations which keep the behaviour you Defined.
Jai uses LLVM so in many cases the UB is exactly the same as you'd see in Clang since that's also using LLVM. For example Jai can explicitly choose not to initialize a variable (unlike C++ 23 and earlier this isn't the default for the primitive types, but it is still possible) - in LLVM I believe this means the uninitialized variable is poison. Exactly the same awful surprises result.
1. because it is the kind of optimizing compiler you say it is
2. because it uses LLVM
… there will be undefined behavior.
Unless you worked on Jai, you can’t support point 1. I’m not even sure if you’re right under that presumption, either.
For the rest you need more granular manual memory management, and defer is just a convenience in that case compared to C.
I can have graphs with pointers all over the place during the phase, I don't have to explain anything to a borrow checker, and it's safe as long as you are careful at the phase boundaries.
Note that I almost never have things that need to survive a phase boundary, so in practice the borrow checker is just a nuisance in my work.

There are other use cases where this doesn't apply, so I'm not "anti borrow checker", but it's a tool, and I don't need it most of the time.
(To be clear I agree that this is an easy pattern to write correctly without a borrow checker as well. It's just not a good example of something that's any harder to do in Rust, either.)
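For what it's worth, a hedged sketch of that phase/arena pattern in Rust, using the third-party typed-arena crate: within a phase you can wire up arbitrary, even cyclic, pointer graphs without per-node ownership bookkeeping, and everything is freed together when the arena drops at the phase boundary.

    use std::cell::Cell;
    use typed_arena::Arena; // third-party crate: typed-arena

    struct Node<'a> {
        value: u32,
        next: Cell<Option<&'a Node<'a>>>,
    }

    fn run_phase() {
        let arena = Arena::new();
        let a: &Node = arena.alloc(Node { value: 1, next: Cell::new(None) });
        let b: &Node = arena.alloc(Node { value: 2, next: Cell::new(None) });
        a.next.set(Some(b));
        b.next.set(Some(a)); // cycles are fine; nothing to free individually
        let _ = (a.value, b.value);
    } // phase boundary: the whole graph is freed at once when `arena` drops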
Edit: reading wavemode comment above "Namely, in Rust it is undefined behavior for multiple mutable references to the same data to exist, ever. And it is also not enough for your program to not create multiple mut - the compiler also has to be able to prove that it can't." that I think was at least one of the problems I had.
> reading wavemode comment above
This is true for `&mut T` but that isn't directly related to arenas. Furthermore, you can have multiple mutable aliased references, but you need to not use `&mut T` while doing so: you can take advantage of some form of internal mutability and use `&T`, for example. What is needed depends on the circumstances.
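A small sketch of what that looks like in practice: with interior mutability (here a Cell), aliased `&T` references can both mutate, which is allowed; what is never allowed is two live `&mut T` to the same data.

    use std::cell::Cell;

    fn bump_both(a: &Cell<u32>, b: &Cell<u32>) {
        // `a` and `b` may alias; mutation through a shared `&Cell` is fine.
        a.set(a.get() + 1);
        b.set(b.get() + 1);
    }

    fn main() {
        let counter = Cell::new(0);
        bump_both(&counter, &counter); // two aliasing &T: allowed
        assert_eq!(counter.get(), 2);
    }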
Namely, in Rust it is undefined behavior for multiple mutable references to the same data to exist, ever. And it is also not enough for your program to not create multiple mut - the compiler also has to be able to prove that it can't.
That rule prevents memory corruption, but it outlaws many programs that break the rule yet actually are otherwise memory safe, and it also outlaws programs that follow the rule but wherein the compiler isn't smart enough to prove that the rule is being followed. That annoyance is the main thing people are talking about when they say they are "fighting the borrow checker" (when comparing Rust with languages like Odin/Zig/Jai).
pcw's comment was about tradeoffs programmers are willing to make -- and it paints the picture more black-and-white than reality, and more black-and-white than the OP does.
Feels like there is a beneficial property in there.
That is a great line worth remembering.
This is true but there is a middle ground. You use a reasonably fast (i.e. compiled) GC lang, and write your own allocator(s) inside of it for performance-critical stuff.
Ironically, this is usually the right pattern even in non-GC langs: you typically want to minimize unnecessary allocations during runtime, and leverage techniques like object pooling to do that.
IOW I don't think raw performance is a good argument for not using GC (e.g. gamedev or scientific computing).
Not being able to afford the GC runtime overhead is a good argument (e.g. embedded programs, HFT).
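As a sketch of the object-pooling point above (names are invented for illustration): buffers are taken from and returned to a pool instead of being allocated and freed per use, which keeps allocator or GC pressure low regardless of language.

    // A minimal object pool: reuse buffers instead of allocating per use.
    struct BufferPool {
        free: Vec<Vec<u8>>,
    }

    impl BufferPool {
        fn new() -> Self {
            BufferPool { free: Vec::new() }
        }

        fn take(&mut self, capacity: usize) -> Vec<u8> {
            let mut buf = self.free.pop().unwrap_or_default();
            buf.clear();           // reuse existing capacity if we got one
            buf.reserve(capacity);
            buf
        }

        fn give_back(&mut self, buf: Vec<u8>) {
            self.free.push(buf);   // keep it around for the next caller
        }
    }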
And how would that compiler work? Magic? Maybe clairvoyance?
To me this raises the question of whether this is a growing trend, or whether it's simply that languages staying closed source tends to be a death sentence for them in the long term.
"So, put simply, yes, you can shoot yourself in the foot, and the caliber is enormous. But you’re being treated like an adult the whole time"
That is, those of us who've noticed we make mistakes aren't adults, we're children, and this is a proper grown-up language -- pretty much the definition of condescending.
The least you can say is that he is _opinionated_. Even his friend Casey Muratori is "friendly" in comparison, at least trying to publish courses to elevate us masses of unworthy typescript coders to the higher planes of programming.
Jblow just wants you to feel dumb for not programming right. He's unforgiving, Socrates-style.

The worst thing is: he might be right, most of the time.

We would not know, because we find him infuriating, and, to be honest, we're just too dumb.
Also, a 19,000-line C++ program (this is tiny) does not take 45 minutes unless something is seriously broken; it should be a few seconds at most for a full rebuild, even with a decent amount of template usage. This makes me suspect the author doesn't have much C++ experience, as this should have been obvious to them.
I do like the build script being in the same language, CMake can just die.
The metaprogramming looks more confusing than C++. Why is "sin"/"cos" a string?
Based on this article I'm not sure what Jai's strength is, I would have assumed metaprogramming and SIMD prior, but these are hardly discussed, and the bit on metaprogramming didn't make much sense to me.
Agreed, 45 minutes is insane. In my experience, and this does depend on a lot of variables, 1 million lines of C++ ends up taking about 20 minutes. If we assume this scales linearly (I don't think it does, but let's imagine), 19k lines should take about 20 seconds. Maybe a little more with overhead, or a little less because of less burden on the linker.
There's a lot of assumptions in that back-of-the-envelope math, but if they're in the right ballpark it does mean that Jai has an order of magnitude faster builds.
I'm sure the big win is having a legit module system instead of plaintext header #include
As an industry we need to worry about this more. I get that in business, if you can be less efficient in order to put out more features faster, your dps[0] is higher. But as both a programmer and an end user, I care deeply about efficiency. Bad enough when just one application is sucking up resources unnecessarily, but now it's nearly every application, up to and including the OS itself if you are lucky enough to be a Microsoft customer.
The hardware I have sitting on my desk is vastly more powerful than what I was rocking 10-20 years ago, but the user experience seems about the same. No new features have really revolutionized how I use the computer, so from my perspective all we have done is make everything slower in lockstep with hardware advances.
[0] dollars per second
Not even.
It used to be that when you clicked a button, things happened immediately, instead of a few seconds later as everything freezes up. Text could be entered into fields without inputs getting dropped or playing catch-up. A mysterious unkillable service wouldn't randomly decide to peg your core several times a day. This was all the case even as late as Windows 7.
>Text could be entered into fields without inputs getting dropped or playing catch-up
This complaint has been around since the DOS days, in my experience. I'm pretty sure it's been industry standard since its inception that most large software providers make the software just fast enough that users don't give up, and that's it.

Take something like Notepad opening files: large files take forever. Yet I can pop open Notepad++, from some random small team, and it opens the same file quickly.
How does calling an anonymous function in JS cause memory allocations?
Yeah, that's what I figured. I don't know JS internals all too well, so I thought he might be hinting at some unexpected JS runtime quirk.
"... Much like how object oriented programs carry around a this pointer all over the place when working with objects, in Jai, each thread carries around a context stack, which keeps track of some cross-functional stuff, like which is the default memory allocator to ..."
It reminds me of Go's context, and something like it should exist in any language dealing with multi-threading, as a way of carrying info about the parent thread/process (and tokens) for trace propagation, etc.
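A hedged sketch of that idea in Rust (the field names and helpers are invented; Jai's actual context is richer and built into the language): a per-thread stack of settings that callees read implicitly instead of having it threaded through every call.

    use std::cell::RefCell;

    #[derive(Clone)]
    struct Context {
        allocator_name: &'static str, // stand-in for "which allocator to use"
        log_level: u8,
    }

    thread_local! {
        static CONTEXT: RefCell<Vec<Context>> = RefCell::new(vec![Context {
            allocator_name: "default_heap",
            log_level: 1,
        }]);
    }

    // Push a modified context for the duration of `f` (not panic-safe; sketch only).
    fn with_context<R>(ctx: Context, f: impl FnOnce() -> R) -> R {
        CONTEXT.with(|stack| stack.borrow_mut().push(ctx));
        let result = f();
        CONTEXT.with(|stack| {
            stack.borrow_mut().pop();
        });
        result
    }

    // Any callee can ask "what is the current context?" without a parameter for it.
    fn current_context() -> Context {
        CONTEXT.with(|stack| stack.borrow().last().unwrap().clone())
    }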
In the JS world, async/await was never about performance; it was always about having more readable code than Promise-chain spaghetti.
Cite?
This problem statement is also such a weird introduction to specifically this new programming language. Yes, compiled languages with no GC are faster than the alternatives. But the problem is and was not the alternatives. Those alternatives fill the vast majority of computing uses and work well enough.
The problem is that compiled languages with no GC were, before Rust, bug-prone and difficult to use safely.
So -- why are we talking about this? Because jblow won't stop catastrophizing. He has led a generation of impressionable programmers to believe that we are in some dark age of software, when that statement couldn't be further from the truth.
jblow's words are not the Gospel on high.
Have you actually used modern software?
There's a great rant about Visual Studio debugger which in recent versions cannot even update debugged values as you step through the program unlike its predecessors: https://youtu.be/GC-0tCy4P1U?si=t6BsHkHhoRF46mYM
And this is professional software. The state of personal software is worse. Most programs cannot show a page of text with a few images without consuming gigabytes of RAM and not-insignificant percentages of CPU.
Uh, yes. When was software better (like when was America great)? Do you remember what Windows and Linux and MacOS were like in the 90s? What exactly is the software we are comparing?
> There's a great rant about Visual Studio debugger
Yeah, I'm not sure these are "great rants" as you say. Most are about how software with different constraints than video games isn't made with the same constraints as video games. Can you believe it?
Modern software is indeed slow especially when you consider how fast modern hardware is.
If you want to feel the difference, try highly optimised software against a popular one. For example: Linux vs Windows, Windows Explorer vs File Pilot, Zed vs VS Code.
Not exactly a surprise? Microsoft made a choice to move to C# and the code was slower? Says precious little about software in general and much more about the constraints of modern development.
> If you want to feel the difference, try highly optimised software against a popular one. For example: Linux vs Windows, Windows Explorer vs File Pilot, Zed vs VS Code.
This reasoning is bonkers. Compare vastly different software with a vastly different design center to something only in the same vague class of systems?
If the question is "Is software getting worse or better?", doesn't it make more sense to compare newer software to the same old software? Again -- do you remember what Windows and Linux and MacOS were like in the 90s? Do you not believe they have improved?
I'm sure jblow is having the same fears, and I hope to be wrong.
Still, it's fun to remember the first few videos about "hey, I have those ideas for a language". Great that he could afford to work on it.
Sometimes, mandalas are what we need.
But, no, the hubris of the language creator, whose arrogance is probably close to a few nano-Dijkstras, makes it entirely possible that he prefers _not_ releasing a superior language, out of spite for the untermenschen that would "desecrate" it by writing web servers inside it.
So I'm pretty convinced now that he will just never release it except to a happy few, and then he will die of cardiovascular disease because he spent too much time sitting in a chair streaming nonsense, and the world will have missed an opportunity.
Then again, I'm just sad.
As Jon Stewart said: "on the bright side, I'm told that at some point the sun will turn supernova, and we'll all die."
They get a few "true believer" followers, give them special privileges like beta access (this case), special arcane knowledge (see Urbit), or even special standing within the community (also Urbit, although many other languages where the true believers are given authority over community spaces like discord/mailing list/irc etc.).
I don't associate in these spaces because I find the people especially toxic. Usually they are high drama because the focus isn't around technical matters but instead around the cult leader and the drama that surrounds him, defending/attacking his decisions, rationalizing his whims, and toeing the line.
Like this thread, where a large proportion is discussion about Blow as a personality rather than the technical merit of his work. He wants it that way; not to say that his work doesn't have technical merit, but that he'd rather we be talking about him.
Sadly, there exists a breed of developer that is manipulative, obnoxious, and loves to waste time and denigrate someone building something. Relatively few people are genuinely interested (like the OP) in helping to develop the thing, test builds, etc. Most just want to make contributions for their GitHub profile (assuming OSS) or exorcise their inner demons by projecting their insecurities onto someone else.
From all of the JB content I've seen/read, this is a rough approximation of his position. It's far less stressful to just work on the idea in relative isolation until it's ready (by whatever standard) than to deal with the random chaos of letting anyone and everyone in.
This [1] is worth listening to (suspending cynicism) to get at the "why" (my editorialization, not JB).
Personally, I wish more people working on stuff were like this. It makes me far more likely to adopt it when it is ready because I can trust that the appropriate time was put in to building it.
If you consider a small team working on this, developing the language seriously, earnestly, but as a means to an end on the side, I can totally see why they think it may be the best approach to develop the language fully internally. It's an iterative develop-as-you-go approach, you're writing a highly specific opinionated tool for your niche.
So maybe it's best to simply wait until engine + game are done, and they can (depending on the game's success) really devote focus and time on polishing language and compiler up, stabilizing a version 1.0 if you will, and "package" it in an appropriate manner.
Plus: they don't seem to be in the "promote a language for the language's sake" game; it doesn't seem to be about finding the perfect release date, with shiny mascot + discord server + full fledged stdlib + full documentation from day one, to then "hire" redditors and youtubers to spread the word and have an armada of newbie programmers use it to write games... they seem to much rather see it as creating a professional tool aimed at professional programmers, particularly in the domain of high performance compiled languages, particularly for games. People they are targeting will evaluate the language thoroughly when it's out, whether that's in 2019, 2025 or 2028. And whether they are top 10 in some popularity contest or not, I just don't think they're playing by such metrics. The right people will check it out once it's out, I'm sure. And whether such a language will be used or not, will probably, hopefully even, not depend on finding the most hyped point in time to release it.
What if you want to put a resource object (which needs cleanup on destruction) into a vector and then give up ownership of the vector to someone else?
I write code in Go now after moving from C++, and God do I miss destructors. Saying that defer eliminates the need for RAII triggers me so much.
I do not subscribe to that idea, because with RAII you can still have batched drops. The only difference between the two defaults is that with defer the failure mode is leaks, while with RAII the failure mode is more code than you otherwise would have.
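For the vector-of-resources case above, a small Rust sketch of what RAII buys: the cleanup is attached to the value, so handing the vector to a new owner moves the cleanup obligation along with it, with no defer to remember at each call site. The names here are illustrative only.

    struct Resource {
        name: String,
    }

    impl Drop for Resource {
        fn drop(&mut self) {
            println!("cleaning up {}", self.name); // runs whenever the owner drops it
        }
    }

    fn consume(resources: Vec<Resource>) {
        // New owner: every element is cleaned up when this Vec goes out of scope.
        let _ = resources;
    }

    fn main() {
        let v = vec![
            Resource { name: "a".into() },
            Resource { name: "b".into() },
        ];
        consume(v); // ownership of the vector (and the cleanup) moves with it
    }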
That being said, I do see an issue with globally scoped imports. It would be nice to know if imports can be locally scoped into a namespace or struct.
All in all, whether it's compete or coexist (I don't believe the compiler for Jon's language can handle other languages, so you might use Zig to compile any C or C++ or Zig), it will be nice to see another programming language garner some attention and hopefully quell the hype of others.