No, same concept ... you're making a mistake that some call "implementation on the brain". That they're the same concept is why you're able to specify a common operation, SelectNotNull. That you had to provide an explicit type constraint that a compiler should be able to infer doesn't change that.
That said, we already have value types like System.Int32 which inherit from System.ValueType (an abstract type) which inherits from System.Object (a non-abstract reference type), so things are already a bit weird.
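You can see that chain directly (a quick sketch):

```
using System;

Console.WriteLine(typeof(int).BaseType);         // System.ValueType
Console.WriteLine(typeof(ValueType).BaseType);   // System.Object
Console.WriteLine(typeof(ValueType).IsAbstract); // True
```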
But it all works, right? The runtime can do anything it wants to with the IL. The handling of Vector<T> is a good example of this: locating specific types/namespaces and emitting special instructions based on the current machine's capabilities. Normalizing value vs. reference semantics would be a tiny drop in this bucket.
The blog post doesn't mention this, but if I say my function only takes a 32-bit signed integer value type, a VB.NET caller can't hand it null, because null isn't a 32-bit signed integer. If instead I say it takes string (not the nullable string?), too bad: the VB.NET caller can just pass null anyway, because in the CLR there's no distinction.
You're actually expected (if you provide a public surface) to write a null test in your C# code, even though the function signature explicitly says this mustn't be null; otherwise it might blow up at runtime, because the CLR doesn't care.
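Something along these lines (a sketch; the Greeter/Greet names are made up):

```
using System;

public static class Greeter
{
    // The signature promises a non-null string, but the CLR won't enforce it,
    // so a public method still needs an explicit runtime guard.
    public static string Greet(string name)
    {
        if (name is null)
            throw new ArgumentNullException(nameof(name));

        return $"Hello, {name}!";
    }
}
```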
Hence why async/await is such a mess of IL bytecode: it was implemented in userspace, so to speak.
Only with .NET Core did they follow other languages in not tying the runtime to the OS, a lesson Google also had to learn (ART has been updatable via the Play Store since Android 12).
For example, in Eiffel they are references by default, but they can also be turned into value types if they are blessed ones like the numeric types, or if the developer tags them as expanded classes, either at the definition or at the declaration site.
Delphi makes the distinction between the classical object and record types from Object Pascal (both value-based, with explicit pointers required) and class types, which are heap-only.
Modula-3 classes and records follow a similar approach: OBJECT follows the same semantic model as REF RECORD.
For more modern examples, D and Swift also follow this approach.
And plenty of other examples for anyone wanting to dive into SIGPLAN.
Back in the Paleolithic, the only way to get a nullable value type was to explicitly wrap it in a Nullable<T>. The distinction between value and reference types was crystal clear and unmistakable. Making a value nullable required an active and deliberate decision.
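Roughly (a sketch, assuming pre-nullable-reference-types C#; newer compilers will warn about the last line):

```
int plain = 42;      // a plain value type can never hold null
int? maybe = null;   // shorthand for Nullable<int>: an explicit, deliberate opt-in

string s = null;     // a reference type, by contrast, was always implicitly nullable,
                     // and the old compiler didn't say a word about it
```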
Now I guess we just box everything because null checks are hard or something.
IMO, we should just do away with nullability and use the optional/maybe approach as wrappers for potential values. Null shouldn't really exist, ideally, especially in OO, because it is both a unit value and the parent value for all objects. I appreciate nullability being introduced, but in most cases it forks the type hierarchy into two branches at the top, and conceptually it just ties these optionals to a flawed concept.
It seems like an inheritance hierarchy built upon abstract classes that provide no usable interface just means "hey, we need this inheritance for... reasons... 'cause it's legacy!" on the one hand, and "we can rip all this useless ** out" on the other hand.
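Something like this minimal Option-style sketch (the type and member names are just for illustration):

```
using System;

public readonly struct Option<T>
{
    private readonly T _value;
    public bool HasValue { get; }

    private Option(T value) { _value = value; HasValue = true; }

    public static Option<T> Some(T value) => new Option<T>(value);
    public static Option<T> None => default;

    // Callers have to handle both cases explicitly; there is no null to forget about.
    public TR Match<TR>(Func<T, TR> some, Func<TR> none) =>
        HasValue ? some(_value) : none();
}
```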
Offhand, I'd guess that explicitly inheriting from Object would either do nothing or fail to compile depending on where in the type hierarchy you are.
Similarly, all structs implicitly inherit from ValueType. That's what structs are in C#.
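Concretely (a quick sketch of what the compiler accepts, as far as I know):

```
// Spelling out the implicit base does nothing extra for a class:
class ExplicitlyObject : object { }            // compiles; same as "class ExplicitlyObject { }"

// But naming ValueType yourself is rejected:
// class FromValueType : System.ValueType { }  // error: cannot derive from this special class
// struct AlsoNo : System.ValueType { }        // error: a struct's base list may only contain interfaces
```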
CS0111: Type 'Utils' already defines a member called 'Foo' with the same parameter types
That's because the "type constraints are part of the signature of the method and there is no ambiguity" statement is wrong. They are not.
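A minimal repro (using the Utils/Foo names from the error message above; the parameter shape is just for illustration):

```
public static class Utils
{
    public static void Foo<T>(T value) where T : class { }

    // CS0111: Type 'Utils' already defines a member called 'Foo' with the same parameter types
    public static void Foo<T>(T value) where T : struct { }
}
```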
In most situations, you should be able to filter on the source enumerable before mapping, making the whole thing more efficient.
Additionally, that `.Cast<TR>(..)` at the end should have been a dead giveaway that you are going down the wrong path here. You are incurring even more CPU and memory costs, as the `.Cast<TR>(..)` call will now iterate through all the items needlessly.[1]
Also, the design of this method doesn't seem to make much of a difference to me anyway:
``` var strs = source.SelectNotNull(it => it); ```
vs
``` var strs = source.Where(it => it != null); ```
A lot of other LINQ extension methods allow you to pass in a predicate expression that will be executed on the source enumerable:
``` var str = source.First(it => it != null); ```
[1] https://source.dot.net/#System.Linq/System/Linq/Cast.cs,152b...
``` var strs = source.SelectNotNull(it => it); ```
vs
``` var strs = source.Where(it => it != null); ```
Wouldn't the first be IEnumerable<TR> and the second be IEnumerable<TR?>?
I imagine that's the main driver for creating SelectNotNull: so that you get the non-nullable type out of the LINQ query.
Sure. And now we are fighting the compiler and in the process writing less efficient code.
The compiler gives us a way to deal with this situation. It is all about being absolutely clear with intentions. Yes, Where(..) in my example would return IEnumerable<TR?> but then in subsequent code I can tell the compiler that I know for a fact that TR? is actually TR by using the null forgiving operator (!).
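For example (a sketch with string as the element type):

```
using System.Collections.Generic;
using System.Linq;

IEnumerable<string?> source = new[] { "a", null, "b" };

// Where(..) still yields IEnumerable<string?>, but we know the nulls are gone,
// so the null-forgiving operator tells the compiler each remaining item is a string.
IEnumerable<string> strs = source
    .Where(it => it != null)
    .Select(it => it!);
```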
I guess that seems way less clear about intentions to me. If I have an array of potentially null values and I want to filter it down to the non-null ones, I'd much rather have an operation that returns a T[] than a T?[].
I should also note that I have an `IEnumerable<T> WhereNotNull(IEnumerable<T>?)` function in my codebase, but I implemented it using foreach/yield, which doesn't suffer from the extra Cast<>().
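Roughly like this (a sketch, not the exact code from my codebase; I've given the element type a class constraint here so the result comes out non-nullable):

```
using System.Collections.Generic;

public static class EnumerableExtensions
{
    // Yields only the non-null items, with no extra Cast<>()/OfType<>() pass.
    public static IEnumerable<T> WhereNotNull<T>(this IEnumerable<T?>? source)
        where T : class
    {
        if (source is null)
            yield break;

        foreach (var item in source)
        {
            if (item is not null)
                yield return item;
        }
    }
}
```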
This isn’t the case. It’s allowed because the question-mark syntax means two different things in value- and reference-type contexts. The signatures really look like this:
```
public static IEnumerable<TR> SelectNotNull<T, TR>(
    this IEnumerable<T> source,
    Func<T, TR> fn)
    where TR : class // …and the nullability of TR is tracked by the compiler

public static IEnumerable<TR> SelectNotNull<T, TR>(
    this IEnumerable<T> source,
    Func<T, Nullable<TR>> fn)
    where TR : struct
```
This is an allowable overload. Why? If you have a value, foo, that's declared "int?", then (foo is int) evaluates to true if there is a value present, and false if there is no value present. The same thing happens if foo is declared as string?.
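Concretely (a small sketch):

```
using System;

int? foo = 5;
Console.WriteLine(foo is int);     // True: a value is present
foo = null;
Console.WriteLine(foo is int);     // False: no value

string? bar = "hi";
Console.WriteLine(bar is string);  // True
bar = null;
Console.WriteLine(bar is string);  // False
```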
BTW: I checked if the overload can be avoided by using the "default" keyword. It can't.
manuc66•5mo ago
```
using System;
using System.Collections.Generic;
using System.Linq;

public static class EnumerableExtensions
{
    public static IEnumerable<TR> SelectNotNull<T, TR>(
        this IEnumerable<T> source, Func<T, TR?> fn) where TR : class
    {
        return source.Select(fn)
            .Where(it => it != null)
            .OfType<TR>(); // OfType<TR> drops the nulls and yields IEnumerable<TR>
    }

    public static IEnumerable<TR> SelectNotNull<T, TR>(
        this IEnumerable<T> source, Func<T, TR?> fn) where TR : struct
    {
        return source.Select(fn)
            .Where(it => it != null)
            .Select(item => item.Value);
    }
}
```