So far there's...
C# scripting (CSI.exe): https://learn.microsoft.com/en-us/archive/msdn-magazine/2016...
PowerShell Add-Type that allows one-file inline C# scripting: https://learn.microsoft.com/en-us/powershell/module/microsof...
The Roslyn .CSX script files (RCSI.exe): https://devblogs.microsoft.com/visualstudio/introducing-the-...
.NET Interactive: https://github.com/dotnet/interactive
... and now this.
It's just standard C#, no new dialect; it works like a proper 'shell' script and it's not a REPL. What am I missing?
You seem to be confused. There is nothing being invented in here. What they are announcing is basically an update to the dotnet command line app to support building and running plain vanilla C# programs.
The whole presentation at BUILD was done as if it were a great idea they had only thought of now.
It's similar to cmd.exe and conhost etc. It's all tied to decades old legacy baselines that Microsoft just won't or can't let go of.
PowerShell is the ultimate ChatGPT language. For better or worse. Usually worse, as most shops end up with "write only" PowerShell scripts running all the glue/infrastructure stuff.
It sounds like it can potentially replace way more than PowerShell. I mean, why would a .NET shop even bother with Python or any form of shell scripts if they can attach a shebang on top of an ad-hoc snippet? And does anyone need to crank out express.js for a test service if they can simply put together an ASP.NET minimal API in a script?
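As a sketch of what that could look like: a single-file minimal API, using the shebang support and the `#:sdk` file-level directive shown in the .NET 10 preview announcement (the exact directive syntax is still preview-era and may change):

```csharp
#!/usr/bin/dotnet run
// Sketch of a one-file ASP.NET minimal API as a file-based app,
// assuming the #:sdk directive from the .NET 10 preview.
#:sdk Microsoft.NET.Sdk.Web

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// One ad-hoc endpoint: no csproj, no scaffolding.
app.MapGet("/ping", () => "pong");

app.Run();
```

Mark it executable and `./service.cs` is your whole test service.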
That is why even the languages I dislike have a little place in my toolbox.
Ecosystem means nothing if you have comparable or even better alternatives in a framework of choice.
Also, it's not like the likes of Python don't have their warts. Anyone with even cursory Python experience is aware of the multiplatform gotchas it has with basic things like file handling.
I think it's popular with scientists despite that, because they don't have to care about portability, reproducibility, or whether their replacement can run it without ever speaking to them.
I use python infrequently enough that every time it's a pain point.
In the early days of npm a lot of install examples would do global installs, you'd often end up with a confusing mess in npm.
Nowadays people are much better at only doing project level installs and even correctly telling you whether to have it as a dev dependency or not.
Some of us have to take other devs into consideration.
Human collaboration is also part of the ecosystem.
I notice another poster said it's a bit slow but for many common use cases even half a second startup time is probably a small price to pay to be able to write in the language you're familiar with, use your standard libraries, etc.
.Net is usually a second tier target but python ALWAYS has first tier support (along with java and usually go).
I'm pleased to report it is usually not possible to do that. It would only create a huge mess. C# is more conducive for anything more than a few methods. And there is almost no barrier. PS is great for smaller ad-hoc stuff, and it is the "script that is on every Windows platform" component similar to what VBScript was a few years ago.
https://learn.microsoft.com/en-us/powershell/module/microsof...
I remember looking on in horror as a QA person who was responsible for the install/deploy of some banking software scrolled through the bash/perl script that installed this thing. I think it had to be 20k+ lines of the gnarliest code I've ever seen. I was the java/.net integration guy who worked with larger customers to integrate systems.
My group insisted we should do single sign-on in a Perl script, but I couldn't get the CPAN package working. I had a prototype in Java done in an afternoon. I never understood why people loved Perl so much. Same with PowerShell. Shell scripters are a different breed.
The reason being the lack of UNIX skills on the new team, and apparently it was easier to pay for the development effort.
Afterwards there were some questions about the performance decrease.
I had to patiently explain the difference between having a set of scripts orchestrating a workflow of native code applications written in C, and having the JVM do the same work, never long enough for the JIT C2 to kick in.
Half of the properties are XML attributes <tag ID=5> while the other half are child tags <tag><hello>5</hello></tag>
It's okay to read, but basically impossible to write by hand.
The tooling support is nothing like a JSON with JSON schema that gives you full intellisense with property autocomplete plus attribute description. (Imagine VSCode settings)
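For comparison, the JSON-schema experience described above: one line at the top of a settings file is enough for editors like VS Code to offer property autocomplete and inline descriptions (the schema URL and properties here are made up for illustration):

```json
{
  "$schema": "https://example.com/schemas/app-settings.schema.json",
  "logging": { "level": "warning" },
  "maxRetries": 3
}
```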
Is it, though? The guidelines seem to be pretty straightforward: unless you want to target a new language feature, you can just target netstandard or whatever target framework your code requires at the moment. This barely registers as a concern.
https://learn.microsoft.com/en-us/dotnet/standard/net-standa...
> It's okay to read, but basically impossible to write by hand.
Are you sure about that? I mean, what's the best example you can come up with?
This complaint is even more baffling considering a) the number of XML editor helpers out there, and b) the fact that most mainstream build systems out there are already XML-based.
Targeting a specific .NET version will break in a year? LTS versions specifically have three years of support. But also, .NET has very rarely broken backward compat, and you can still easily load libraries built targeting .NET 5 in .NET 9 today. You don't necessarily have to "keep up with the treadmill". It's a good idea (you get free performance boosts when you do), but it isn't required, and .NET 5 is still a better target from today's perspective than .NET Standard 2.0. (The biggest backwards-compatibility break I know of in .NET was the rescoping between .NET [Framework] 3.5.x and .NET [Framework] 4.0 over what was BCL and what was out/no longer supported, and even that was nothing like Python 2 versus 3. I know a lot of people would also count the .NET Framework 4.x / .NET Core 1.0 split, which is the reason for the whole mess of things like .NET Standard; but .NET Standard was the backward-compatibility guarantee, and .NET Standard 2.0 was its completion point, even though yes, there are versions > 2.0, which are even less something anyone needs to worry about today.)
I don't think there are any good project definition files. At least csproj is standardised XML, so your IDE can tell if you're allowed to do something or not before you try to hit build.
As for targeting frameworks and versions, I think that's only a problem on Windows (where you have the built in one and the one(s) you download to run applications) and even then you can just target the latest version of whatever framework you need and compile to a standard executable if you don't want to deal with framework stuff. The frameworks themselves don't have an equivalent in most languages, but that's a feature, not a bug. It's not even C# exclusive, I've had to download specific JREs to run Java code because the standard JRE was missing a few DLLs for instance.
Except this is also worse because this is the same Microsoft commitment to backwards compatibility of "dead languages" that leads to things like the VB6 runtime still being included in Windows 11 despite the real security support for the language itself and writing new applications in it having ended entirely in the Windows XP era. (Or the approximately millions of side-by-side "Visual C++ Redistributables" in every Windows install. Or keeping the Windows Scripting Host and support for terribly old dialects of VBScript and JScript around all these decades later, even after being known mostly as a security vulnerability and malware vector for most of those same decades.)
What do you think happens when you try to use Python 3.9 to run a script that depends on features added in 3.10? This is inherent to anything that requires an interpreter or runtime. Most scripting tools just default to "your stuff breaks if you get it wrong", whereas .NET requires you to explicitly declare the dependency.
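To make the contrast concrete, here's a small illustration of the Python side: a 3.10-only construct isn't flagged until the interpreter actually tries to parse it, whereas a .NET project states its minimum runtime up front in `<TargetFramework>`.

```python
# A `match` statement is only valid syntax on Python 3.10+.
# On 3.9 this source fails at parse time -- there is nothing
# like a declared target framework to check beforehand.
source = "match command:\n    case 'run':\n        pass\n"

try:
    compile(source, "<demo>", "exec")
    print("parsed: running on 3.10+")
except SyntaxError:
    print("SyntaxError: running on pre-3.10")
```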
It's gotten real simple in the last few years, and is basically the exact same flowchart as Node.js:
Do you need LTS support? --> Yes --> target the most recent even number (.NET 8.0 today)
^--> No --> target the most recent version (.NET 9.0 today)
Like Node, releases are on a fixed cadence: a new .NET version every year (.NET 10, the next LTS, is in Preview [alpha/beta testing] today; the feature being discussed is part of this preview), and LTS versions add an extra year and a half of security-support safety net to upgrade to the next LTS. Everything else can be forgotten. It's no longer needed. It's no longer a thing. It's a dead version for people who need deep legacy support in dark brownfields, and nothing more than that.
There's also some issues regarding restoring other projects, but that doesn't appear to be the fault of .csproj files.
P.S.: Having a project be referenced (i.e., marker attributes) at both the analyzer step _and_ later in the final artifact is something I never got working reliably so far. From what I've read, a NuGet package could make this easier, but I don't want to do that and would expect there to be a way without using the package-management system.
From your complaint, it doesn't seem you're pointing out anything wrong with .csproj files. You struggled with a use case that's far from normal and might not even be right for you. Without details, it's hard to tell if you're missing something or blaming the tool instead of focusing on getting things to work.
If a tool (dotnet build) tells me something is wrong, I am fine with it. If the same tool works after I added something to the project description and then fails at a random time later without the referenced stuff having changed, then I will happily blame the tool. Especially when commenting out the reference, recompiling until error, and then uncommenting it fixes the issue. While this behavior doesn't necessarily indicate an issue with the files per se, there is only one entity consuming them, so to me there is no distinction.
Just compare a .csproj to something modern like a Cargo.toml and you'll see why someone might think .csproj is awful. It is immediately obvious what each section of a Cargo.toml does, and intuitive how you might edit or extend them.
Also, just talk to a C# developer, I bet over half of them have never even edited a .csproj and only use visual studio as a GUI to configure their projects.
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<TargetFramework>net8.0</TargetFramework>
</PropertyGroup>
</Project>
(It's named "SDK-style" because that Sdk="Sdk.Name" attribute on the Project tag does a ton of heavy lifting and sets a lot of smart defaults.) In the "SDK-style" format, files are included by default via wildcards; you no longer need to list every file in the csproj. Adding NuGet references by hand is now as easy as adding a <PackageReference Include="Package.Name" Version="1.0.0" /> under an <ItemGroup>. (For a while NuGet references were in their own file, and there was also a dance of "assembly binding redirects" that NuGet sometimes needed to include in a csproj. All of that is gone today, simplified to a single easy-to-write-by-hand tag.) Other previously "advanced" things you'd only trust the UI to do are more easily done by hand now, too.
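Putting those pieces together, a complete hand-written SDK-style csproj with one NuGet reference looks like this (Newtonsoft.Json is just a stand-in for any package):

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net8.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Newtonsoft.Json" Version="13.0.3" />
  </ItemGroup>
</Project>
```

That's the whole file; every .cs file next to it is picked up automatically.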
> Also, just talk to a C# developer, I bet over half of them have never even edited a .csproj and only use visual studio as a GUI to configure their projects.
Depends on the era, but in the worst days of the csproj verbosity where every file had to be mentioned (included as an Item) in the csproj I didn't know a single C# developer that hadn't needed to do some XML surgery in a csproj file at some point, though most of that was to fix merge conflicts because back then it was a common source of merge conflicts. Fixing merge conflicts in a csproj used to be a rite of passage for any reasonably sized team. (I do not miss those days and am very happy with the "SDK-Style csproj" today.)
make it work
I really want to like C# but there is a reason why it has no ecosystem outside of enterprise and gaming slop
And it is expressive instead of "mathematical" (like ML languages), which makes productivity in code reviews a thing.
It is exactly where it should be.
I have no idea what you are talking about. Care to present a single example?
You don't need to. C# introduced top-level statements a few years ago, in 2020 I think.
https://learn.microsoft.com/en-us/dotnet/csharp/fundamentals...
System.Console.WriteLine("Hello world!");
That's it. No namespaces (which were never required anyway, not even in C# 1.0), no classes, no functions even. If you want to define a function, you can just do so in global scope, and if it's a single expression you don't even need the braces: int Fib(int n) => (n <= 1) ? n : Fib(n - 1) + Fib(n - 2);
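Put together as a single runnable file (top-level statements, C# 9+; note that local functions may be declared after the statements that use them):

```csharp
// The whole program: no namespace, no class, no Main.
Console.WriteLine(Fib(10)); // prints 55

// An expression-bodied local function in the global scope.
int Fib(int n) => (n <= 1) ? n : Fib(n - 1) + Fib(n - 2);
```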
OOP is a scam, a useful scam, but still a scam; it is in no way easier or better to force _everything_ to be a class.
P.S. not a functional programmer at all - except in dotnet (F#) because there's less stuff to get in my way.
You're in luck - C# doesn't force everything to be a class and has many functional programming features. Hell, even C++ has had functions as data for, like, 10 years now.
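A few of those features in one sketch: delegates, lambdas, and LINQ treat functions as plain values, with no class ceremony beyond what top-level statements hide (assumes the default implicit usings of a modern .NET template):

```csharp
// A function stored as data in a variable...
Func<int, int> square = x => x * x;

// ...passed to higher-order LINQ operators.
var result = Enumerable.Range(1, 5)
                       .Where(n => n % 2 == 1)
                       .Select(square)
                       .Sum(); // 1 + 9 + 25 = 35

Console.WriteLine(result); // prints 35
```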
Systems programming, AI, and startup CRUD apps?
As much as I like PowerShell and Bash, there are some tasks that my brain is wired to solve more efficiently in a C-like syntax language, and this fills that gap for me.
Errm, so how is this different from a folder with a project file?
It requires neither a folder nor a project file. Just pass the source file as the argument and you have everything up and running.
>By default, file-based apps use the Microsoft.NET.Sdk SDK.
This default behavior is provided by the SDK to keep project files small.
It's excellent for writing small scripts/prototyping where you need access to some Kotlin/JVM feature.
Ruby is still my preferred language for small scripts though - the backticks for running external programs is remarkably ergonomic
https://blog.jetbrains.com/kotlin/2024/11/state-of-kotlin-sc...
"In upcoming .NET 10 previews we're aiming to improve the experience of working with file-based apps in VS Code, with enhanced IntelliSense for the new file-based directives, improved performance, and support for debugging."
It’s also been unable to keep up with notebook tech for many potential use cases. I guess it’s a one man show and it shows.
Still, massive hat tip - I use Linqpad every day because it’s super useful for playing with your SQL data.
I.e.: LINQPad has a great way to visualize results. If dotnet run only outputs text, or otherwise requires a lot of plugins to visualize an object graph, there will still be quite a niche for LINQPad.
In contrast, if all you're using LINQPad for is to double-check syntax, then dotnet run might be a better option. (Sometimes if I'm "in the zone" and unsure about syntax that I use infrequently, I'll write a test LINQPad script.)
I see this as more of a complement to that. However I have worked at places that HATED powershell and we used linqpad for almost all scripting. It worked ok.
Used it 10+ years in production, but usually no one I talk to in .net world has ever heard about it.
Swift does a much better job at this: it interprets by default, and a compiled version starts instantaneously. I made a transparent caching layer for your Swift CLI apps. Result: instant native tools in one of the best languages out there.
Swift Script Caching Compiler (https://github.com/jrz/tools)
dotnet run doesn't need it, as it already caches the compiled version (you can disable with --no-build or inspect the binaries with --artifacts-path)
My "Swift Script Caching Compiler" compiles and caches, but will stay in interpreted mode the first three runs when you're in an interactive terminal. This allows for a faster dev-run cycle.
Which applications are those? I mean, one example they showcase is launching a basic web service.
Cli scripts/apps should simply be responsive, just like websites and apps shouldn’t be slow to open.
Slower scripts result in distraction, which ultimately leads to hackernews being in front of your nose again.
A few of my examples:
Resizing/positioning windows using shortcut keys. Unbearable if it’s not instantaneous.
Open a terminal based on some criteria. Annoying if I have to wait.
I don't think you're presenting a valid scenario. I mean, the dotnet run file.cs workflow is suited for one-off runs, but even if you're somehow assuming that these hypothetical cold start times are impossible to optimize (big, unrealistic if) then it seems you're missing the fact that this feature also allows you to build the app and generate a stand-alone binary.
So exactly what's the problem?
I recommend you read the article. They explicitly address the use case of "When your file-based app grows in complexity, or you simply want the extra capabilities afforded in project-based apps".
Imagine cat, ls, cd, grep, mkdir, etc. would all take 500ms.
It’s the same as the electron shit. It’s simply not necessary
Those are all compiled C programs. If you were to run a C compiler before you ran them, they would take 500 milliseconds. But you don't, you compile them ahead of time and store the binaries on disk.
The equivalent is compiling a C# program, which you can, of course, do.
This dismissive “startup time doesn’t matter” outlook is why software written in C# and Java feels awful to use. PowerShell ISE was a laughingstock until Microsoft devoted thousands of man-hours over many years to make the experience less awful.
Long standing complaint about .NET / .NET Core
2017 Github issue: https://github.com/dotnet/core/issues/1060
2018 Github issue: https://github.com/dotnet/core/issues/1968
Regular people complaining, asking, and writing about it for years: https://duckduckgo.com/?t=ffab&q=cold+start+NET.&ia=web
Right up to this thread, today.
Why are you denying that this exists?
neuecc ran benchmark on CLI libs overhead, none reach half a second: https://neuecc.medium.com/consoleappframework-v5-zero-overhe...
> Swift does a much better job at this as interprets by default
The .NET JIT is a tiered JIT; it doesn't emit fully optimized code immediately either.
$ time dotnet run hello-world.cs > /dev/null
real 0m1.161s
user 0m0.849s
sys 0m0.122s
$ time dotnet run hello-world.cs > /dev/null
real 0m0.465s
user 0m0.401s
sys 0m0.065s
I had some Windows command-line apps written in C# that always took at least 0.5s to run. It was an annoying distraction. After Microsoft's improvements the same code was running in 0.2s. Still perceptible, but a great improvement. This was on a cheap laptop bought in 2009.
I'm aware that .Net is using a different runtime now, but I'm amazed that it so slow on a high-end modern laptop.
It can easily add hundreds of milliseconds in various situations you can't easily control.
thanks Jared and team, keep up the great work.
It’s good that it allows scripts to run, and does packages. Simple is good
I was just curious and then surprised that it already caches compiled binaries, but that the time remained the same.
time "/Users/bouke/Library/Application Support/dotnet/runfile/hello-world-fc604c4e7d71b490ccde5271268569273873cc7ab51f5ef7dee6fb34372e89a2/bin/debug/hello-world" > /dev/null
real 0m0.051s
user 0m0.029s
sys 0m0.017s
So yeah, the overhead of dotnet run is pretty high in this preview version. I'll try to compare with explicitly compiling to a binary later today.
But that’s the thing. It’s a JIT, running a VM. Swift emits native code. Big difference.
Maybe I'll add AOT compilation for dotnet then... Strange they didn't incorporate that, though.
> But that’s the thing. It’s a JIT, running a VM. Swift emits native code. Big difference.
It's not only a JIT: you can pre-JIT with R2R if you need, precompile with NativeAOT, or, I think, fully interpret with Mono.
Edit: it looks like the issue is with the dotnet CLI itself, which until now was not on a 'hot path'. `dotnet help` also takes half a second to show up. When running a DLL directly, I think it doesn't load the CLI app and just runs the code necessary to start the DLL.
dotnet run
> Measure-Command { dotnet run run.cs }
Days : 0
Hours : 0
Minutes : 0
Seconds : 2
Milliseconds : 451
Ticks : 24511653
TotalDays : 2,836996875E-05
TotalHours : 0,00068087925
TotalMinutes : 0,040852755
TotalSeconds : 2,4511653
TotalMilliseconds : 2451,1653
First run was around 2500ms, consecutive runs around 1300ms. Task manager doesn't show it, but Process Explorer shows kernel processes and the story is quite clear.
508ms with caching, 1090ms with `--no-cache`
But as others already mentioned, optimizing this seems to be pretty high priority...
I ran Norton Utilities on my PC yesterday and noticed a new service: the .NET runtime. Note that I am a developer, so this may just be there to help launch the tools.
So why is Python this popular in this domain?
Python caches compiled versions by default. (__pycache__) and simply starts faster.
Python is more a scripting language, similar to Ruby. Swift was late to the game, and is quite strict.
*performance is a feature*
And in particular perceived performance and startup performance (“lag”)
It’s one of the reasons for the success of Chrome, MySQL, Mongodb, and many others.
Because that's what dotnet run does
Anyway, lots of Python scripting can be done with the standard library, without installing any dependencies. I rarely use third-party dependencies in my Python scripts.
2. The same can be said about C#, which has a really strong and well-designed standard library API.
https://ttu.github.io/dotnet-script/
Or that, in the wisdom of the C# language team, they decided on an approach incompatible with how F# references dependencies in its scripts.
https://learn.microsoft.com/en-us/dotnet/fsharp/tools/fsharp...
Unless you plan to also support the same approach on VB and F# scripts.
I dread the moment C# introduces sum types (long overdue) and then have them incompatible with F#. On a general note, the total disregard towards F# is very much undeserved. It is far superior to the likes of Python, and would easily be the best in the areas of machine learning and scripting, but apparently nobody inside MS is that visionary.
The fact that Python is the language of choice for ML makes it clear that technical superiority was not necessary or relevant. Python is the worst language around. The lack of better languages was never the problem. Making F# technically better wasn't going to make it the choice for ML.
This is why I always mention, when .NET team complains about adoption among new generations not going as desired they should look into their own employer first.
> they should look into their own employer first.
Indeed, it is a pattern making for some really bad optics. Why people still beat the dead horse of Python (performance) is beyond me.
You say that importing other files is not how C# works, but I think that's not entirely true. If you now treat a file as a project, then an import is pretty much a project reference, just to a .cs file instead of a .csproj file.
So I'd love to see something like that working: #:reference ./Tools.cs
Which should be the same as a ProjectReference in csproj or for a "real" project something like this: #:reference ./Tools/Tools.csproj
That would also enable things like this in a csproj file: <ProjectReference Include="..\Tools\Helpers.cs" />
E.g. let's say I have a compiled app and I just want to reach inside of it and invoke some method.
https://github.com/dotnet/interactive/blob/main/docs/nuget-o...
Now you've created a new dialect.
Is #:package <pkg> really so much nicer than #r "nuget: <pkg>" as to deserve deviating from .NET Interactive, F# and the existing NuGet picker? (It could be, if properly argued!) Has there been any effort to have the rest of .NET adopt this new syntax or share concerns?
On that note, is any language other than C# supported for `dotnet run`? Is the machinery at least documented for other MSBuild SDKs to use? Considering how dotnet watch keeps breaking for F#, I suspect not.
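For reference, the two directive styles side by side (the `#:package` form with Humanizer is the example from the announcement; the `#r` form is the existing F#-script / .NET Interactive convention):

```csharp
// New C# file-based app directive (.NET 10 preview):
#:package Humanizer@2.14.1

// Existing F# script / .NET Interactive equivalent:
//   #r "nuget: Humanizer, 2.14.1"
```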
They do acknowledge it and other efforts:
https://devblogs.microsoft.com/dotnet/announcing-dotnet-run-...
It works if I run `dotnet run <file>`.
Update: It is working now, the file was using CRLF instead of LF.
#!/usr/bin/dotnet run
Console.WriteLine("Hello from a C# script!");
Which just looks like a crime against nature and the right order of things. It's really hard to explain to anyone who wasn't born in a monoculture, but I come from an eastern european country where in my childhood there may have been maybe a few hundred truly foreign people living there at any one time, mostly diplomats and their families. Decades later I visited and saw a chinese immigrant speaking my native tongue. You can't imagine how... disconcerting that is. Not bad, I'm not against it, it's just... weird.
This feels exactly like that.
https://learn.microsoft.com/en-us/dotnet/csharp/language-ref...
They've been supported for decades at this point, they're not new. Your suggestion would require more changes to the language than what's actually implemented, afaiu.
I believe your Java annotations cannot change the build parameters of the package currently being compiled, which is why you wouldn't be able to do that in Java.
btw, anything that is present in the AST could be used for that, but I think the preprocessor directive is the most sensible choice.
I find C# to be remarkably better than javascript, for example, of having a 'right' way to do something.
Even when there are multiple ways the community tends to coalesce around one way.
I never understood why it has to be so hard on Windows to enable users to do just a little bit of scripting like it's not 80's anymore.
It is insane to me how long it takes people to realize that low barriers for execution and experimentation are important.
Imagine if the Commodore 64 or Microsoft Basic required you to scaffold your program with helper files containing very specific and correct information in order to run those programs.
Go prior to modules worked really well this way and I believe Ubuntu was using it like this, but the Go authors came out against using it as a scripting language like this.
Maybe this will make it seem like a more viable approach.
Dotnet is a pig with its dependencies by comparison.
If you are a dotnet dev shop, it is quite likely that dotnet is also a tool that is already there the places you need automation.
Plus, it's also the tool that is already there in your team's skillset.
Pros:
Syntax + access to familiar shell commands
Cons:
Bash scripts are not easy to maintain.
The argument about bash being always there breaks down quickly. Even if you limit yourself to bash+awk+grep, they don't work consistently across different bash flavors or across platforms (mac/win/linux).
My approach now is to have my programs, in the language most convenient to me, compiled to a bunch of archs and have that run across systems. One time pain, reduces over time.
IMO this is why Perl was invented and you should just use Perl. Bash isn't portable and isn't very safe either. If you're only going to use a couple of commands, there's really no reason to use bash. The usecase, in my head, for bash is using a variety of unix utils. But then those utils aren't portable. Perl is really great here because it's available on pretty much every computer on Earth and is consistent.
A lot of people focus on Perl's bad reputation as one of the first web languages instead of its actual purpose, a better sh/awk/sed. If you're writing shell scripts Perl's a godsend for anything complex.
This can be managed but you have to manage it.
(A fanatical commitment to backwards compatibility can make this a lot easier, but it doesn't seem to me that dotnet has that.)
Besides, the trick is knowing what is POSIX compliant and portable, and what isn't. A lot of things will almost work.
Powershell on UNIX is like Perl on Windows. It works, but it's weird and alien. But the same can be said for .NET, really.
How long does it take to get dotnet with all its dependencies? Maybe six minutes, including all the preparations? So the difference between "already there" and dotnet is six minutes. It's hard to imagine a case where this difference matters.
Bash has a huge number of footguns, and a rather unusual syntax. It's not a language where you can safely let a junior developer tweak something in a script and expect it to go well. Using a linter like ShellCheck is essentially a hard requirement.
If you're working in a team where 99.9% of the code you're maintaining is C#, having a handful of load-bearing Bash scripts lying around is probably a Really Bad Idea. Just convert it to C# as well and save everyone a lot of trouble.
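One classic example of those footguns, the kind of thing ShellCheck exists to catch: an unquoted variable expansion silently splits on whitespace.

```shell
# A filename containing a space...
f="my file.txt"
touch "$f"

# Unquoted: $f splits into two words, so ls looks for
# "my" and "file.txt" and fails.
ls $f >/dev/null 2>&1 || echo "unquoted: not found"

# Quoted: passed as one argument, works as intended.
ls "$f" >/dev/null 2>&1 && echo "quoted: found"

rm -f "$f"
```

A junior developer can stare at the unquoted line for a long time without seeing anything wrong with it.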
I'd say, as someone who started with shell/bash around 2000 to cater to Linux systems, it's quite a usual syntax, and I believe that's true for many sysadmins.
No way I'd like to deal with opaque .NET or even Go stuff, incapable of doing "bash -x script.sh" while debugging production systems at 3 AM. And non-production as well: just losing my time (and team time) on unusual syntax, getting familiar with NuGet, ensuring internet access to those repos, and pinging ITSec guys to open access to those repos.
> let a junior developer tweak something in a script and expect it to go well
Let developers do their job; writing bash scripts is something extraordinary for a dev team to do, just because: where are they expected to apply it? I can only imagine "lonely dev startup" situations where it may be reasonably needed.
There's a fair argument that complex scripts require a complex scripting language, but you have to have a good reason to pick PowerShell.
Typically, on non-windows, there is not one.
It's the same "tier" as bash: the thing you use because it's there, and then reach past for hard things. The same reason as bash.
There's no realistic situation where I would (and I've written a lot of PowerShell) go, "gee, this bash script/makefile is too big and complex, let's rewrite it in PowerShell!"
I think PowerShell was totally right to call this out and do it better, even though I don't particularly love the try-catch style of exception handling. False is not an error condition, exceptions are typed, exceptions have messages, etc.
The problem with PowerShell coming from bash etc. is that the authors took one look at Unix shells and ran away screaming. So features that were in the Bourne shell since the late 1970s are missing and the syntax is drastically different for anything non-trivial. Other problems like treating everything as UTF-16 and otherwise mishandling non-PowerShell commands have gotten better though.
[1]: https://learn.microsoft.com/en-us/powershell/module/powershe...
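As an illustration of the typed-exception side of that tradeoff, which bash has no real analogue for (a minimal sketch; `-ErrorAction Stop` converts the non-terminating error into a catchable one):

```powershell
try {
    Get-Item 'C:\does\not\exist' -ErrorAction Stop
}
catch [System.Management.Automation.ItemNotFoundException] {
    # A typed exception carrying a structured message, not just "exit 1".
    Write-Host "Caught: $($_.Exception.Message)"
}
```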
Although it's old and its dependencies are old, it doesn't have a dependency on cargo (it spawns cargo as a subprocess instead of via cargo API directly), so it might still work fine with latest toolchain. I haven't tried myself.
The joke didn’t land, sadly.
I don't know if they still pronounce it that way or not.
https://en.wikipedia.org/wiki/Number_sign#Names
It still is, to this day: if you call an automated system such as voicemail, you may be prompted to "press 'pound'". This is really standardized, AFAICT, and no telephone system has told me to "press hash" or "press the 'number' key" [because that's ambiguous]
cks has a history of #! but not an etymology: https://utcc.utoronto.ca/~cks/space/blog/unix/ExecAndShebang...
He links to Wikipedia which documents a good history: https://en.wikipedia.org/wiki/Shebang_(Unix)#History
I was trying to recall what we called it. I used SVR3, so I would've been using "#!/bin/sh" as early as 1990, and even more on SunOS 4 and other Unix servers.
I can't recall having a name for it until "hash-bang" gained currency later. We knew it was activated by the magic(5) kernel interpretations. I often called "#" as "pound" from the telephone usage, and I recall being tempted to verbalize it in C64 BASIC programming, or shell comment characters, but knowing it was not the same.
"The whole shebang" is a Civil-War-era American idiom that survived with my grandparents, so I was familiar with that meaning. And not really paying attention to Ricky Martin's discography.
Wikipedia says that Larry Wall used it in 1989. I was a fervent follower of Larry Wall in the mid-90s and Perl was my #1 scripting language. If anyone would coin and/or popularize a term like that, it's Just Another Perl Hacker,
Likewise, "bang" came from the "bang path" of UUCP email addresses, or it stood for "not" in C programming, and so "#!/bin/sh" was ambiguously nameless for me, perhaps for a decade.
Come to think of it, vi and vim have a command "!" where you can filter your text through a shell command, or "shell out" from other programs. This is the semantic that makes sense for hash-bangs, but which came first?
"Bang" was in common use by computer users around 1970 when I was working at Tymshare. On the SDS/XDS Sigma 7, there was a command you could use from a Teletype to send a message to the system operator on their Teletype in the computer room. I may have this detail wrong, but I seem to recall that it included your username as a prefix, maybe like this:
GEARY: CAN YOU LOAD TAPE XYZ FOR ME?
What I do remember clearly is that there were also messages originated by the OS itself, and those began with "!!", which we pronounced "bang bang". Because who would ever want to say "exclamation point exclamation point"?

The reason this is vivid in my mind is that I eventually found the low-level system call to let me send "system" messages myself. So I used it to prank the operator once in a while with this message:
!! UNDETECTABLE ERROR
I was proud of calling it an "undetectable" error. If it was undetectable, how did the OS detect it?

Remember, it was originally released 20+ years ago (goddamn, I feel old now); recorded video or even audio over the internet was much, much, MUCH rarer then, when "high-speed" internet for a lot of people meant a 56K modem.
Back then, most developers' first exposure to C# was likely in print form (books or maybe MSDN magazine).
Also got a few of those magazine CDs, in some box.
It's "sharp" (i.e. higher tone) because it's a higher-level language compared to C and C++.
California, 40yo fwiw
Lame joke aside, I only heard "shebang" prior to around that time, then "hashbang" and now I get a mix of it. Google trends indicates "shebang" always dominated.
.NET Interactive had already added a directive for C# NuGet references, compatible with F#'s, and NuGet's picker had already labeled that syntax "Script & Interactive". But no, they had to invent a new directive.
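For reference, the two spellings side by side; the `#r "nuget:..."` form is the existing .NET Interactive/F# script directive the parent is referring to, while `#:package` is the new file-based-app directive (the package name and version here are just an example):

```csharp
// Existing .csx / .NET Interactive style NuGet reference:
//   #r "nuget: Humanizer, 2.14.1"

// New "dotnet run app.cs" file-based app style:
#:package Humanizer@2.14.1

using Humanizer;

Console.WriteLine("week".Pluralize()); // prints "weeks"
```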
Having used Turbo Pascal and Turbo C prior to Visual Basic and Visual C, Microsoft's "Visual" IDEs and other Windows based IDEs of the 90's were a step up in ease of use even if they did require more files to build a project.
It is basically:

gcc test.c -o test.exe && ./test.exe

and

a "#!" line that does the compile-and-run for you, i.e. gcc test.c -o test.exe && ./test.exe behind the scenes.
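With the new feature, that compile-and-run step collapses into a single self-executing file; a minimal sketch, assuming .NET 10's file-based apps (the shebang path depends on where dotnet is installed on your system):

```csharp
#!/usr/bin/dotnet run
// Mark executable and run directly:
//   chmod +x hello.cs && ./hello.cs
Console.WriteLine("Hello, World!");
```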
Sounds like argument for not sharpening knives because criminals can stab people.
//usr/bin/env rustc "$0" && ./config "$@"; exit
Or //usr/bin/env rustc --edition 2024 "$0" && ./$(basename $0 .rs); rm $(basename $0 .rs); exit
Shell scripts are ugly :)
(If you didn't mean to use them in a shebang but rather as the script body, that's fine but that wouldn't be possible without the polyglot syntax and #allow abuse I posted.)
I would have liked to use `exec -a` to set the executable name (as mentioned in the comments on the article) but alas, that too is not portable.
and now shebang C# scripts,
is everything converging into one meta language?
Since C# single file is new, there is not a ton of code for Copilot to reference so it's probably confused.
void main() {
System.out.println("Hello, World!");
}
This is currently (as of JDK 24) under preview, so such a file needs to be run with "--enable-preview". E.g. $ java --enable-preview HelloWorld.java
Hello, World!
[0] https://openjdk.org/jeps/445

void main() {
IO.println("Hello, World!");
}
JEP 445 has been followed by JEP 512, which contains minor improvements and finalizes the feature [2]. If you want to use 3rd-party libraries, I highly recommend trying JBang [3].
[1] https://jdk.java.net/25/release-notes
There is another thing I'm really missing for .NET projects: the possibility to easily define project-specific commands, something like "npm run <command>".
I am using https://github.com/dotnet-script/dotnet-script without any issues. Skipping an extra step would be cool though.
But then each script has an individual list of dependencies so there should be no need for further scoping like in npm (as in, the compilation of the script is always scoped behind the scenes). In this regard, both should be similar to https://docs.astral.sh/uv/guides/scripts/#declaring-script-d... which I absolutely love.
https://learn.microsoft.com/en-us/dotnet/core/tools/global-t...
I'm sure the tooling is better now, though. I seem to recall visual studio and/or Rider also supporting something like this natively at the time.
Why is it being re-announced now?
Was it only in a beta version and removed to re-appear now? It used the Roslyn compiler to compile on demand and then execute it, and used pragma directives that were compatible with dotnet-script's. I cannot remember what the shebang was set to, but it wasn't the same one as dotnet-script.
dotnetc app.cs
./app
I wish Microsoft would just provide a LSP server for C#. Not just a half proprietary extension for VS Code.
Either way, this is a dream come true. I always hated PowerShell and its strange fixation with OOP. Modern C# seems to acknowledge that sometimes all we want are functions, scripts, Linq and records.
This feature is likely added to compete with Python, Ruby, etc.: the fact that you can just create a file, write some code and run it.
However, I don't see C# being a competitor to said languages even for simple command-line tools. If anything, it could be a viable replacement for PowerShell or maybe F#, especially if you need to link it to other .NET DLLs and 'do things'.
I am also interested in the performance difference compared to other languages. I mean, even Dlang has had a script-like feature for some time now: rdmd. Not sure of its status, but it still compiles and runs the program. Just seems overkill for something rather simple.
Performance right now is horrible (500ms). They promised to improve it, but let us be honest: they are scaffolding a project in memory, running an MSBuild restore to reassess/reuse dependencies, then compiling a DLL and loading it into a JIT. That is so many more steps than an interpreter will do.
Absolutely...100% !!
adzm•1d ago
motorest•1d ago
oaiey•1d ago
pjmlp•22h ago
The problem with UNIX culture shops not picking up .NET has everything to do with the Microsoft stigma, and everything the management keeps doing against .NET team efforts, like VSCode vs VS tooling features, C# DevKit license, what frameworks get to be on GNU/Linux, and the current ongoing issues with FOSS on .NET and the role of .NET Foundation.
Minimal APIs and now scripting, which already existed as third party solutions (search for csx), won't sort out those issues.
They can even start by going into Azure and check why there are so many projects now choosing other languages instead of .NET, when working in the open.
This would already be the first place to promote .NET adoption.
tester756•21h ago
Jesus christ, it sounds like a religion
pjmlp•20h ago
The only change through the times is where they take place and how hard, or low, they might get.
HideousKojima•13h ago
GoblinSlayer•1d ago
Digit-Al•1d ago
Incipient•1d ago
If python hadn't (nearly) caught up to c# in typing support, I'd seriously consider moving or at least running it...but as it stands, python has established itself too well for me.
motorest•1d ago
What's the difference between installing .NET or, say, Python or Node?
noworriesnate•22h ago
GoblinSlayer•22h ago