// Select all orders for users registered in the last year, and compute average earnings per user
SELECT ...
(129304 rows affected)
Are the MotherDuck editor features available on-prem? My understanding is that MotherDuck is a data warehouse SaaS.
- Customer software engineer at MotherDuck
SELECT json_serialize_sql('SELECT 2');
[
{
"json_serialize_sql('SELECT 2')": {
"error": false,
"statements": [
{
"node": {
"type": "SELECT_NODE",
"modifiers": [],
"cte_map": {
"map": []
},
"select_list": [
{
"class": "CONSTANT",
"type": "VALUE_CONSTANT",
"alias": "",
"query_location": 7,
"value": {
"type": {
"id": "INTEGER",
"type_info": null
},
"is_null": false,
"value": 2
}
}
],
"from_table": {
"type": "EMPTY",
"alias": "",
"sample": null,
"query_location": 18446744073709551615
},
"where_clause": null,
"group_expressions": [],
"group_sets": [],
"aggregate_handling": "STANDARD_HANDLING",
"having": null,
"sample": null,
"qualify": null
},
"named_param_map": []
}
]
}
}
]
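Since the serialized AST is plain JSON, it can be inspected programmatically. A minimal sketch using Python's standard json module on a trimmed copy of the output above (the field names come from the DuckDB output shown; the traversal code itself is illustrative):

```python
import json

# A trimmed copy of the json_serialize_sql('SELECT 2') output shown above.
ast = json.loads("""
[
  {
    "json_serialize_sql('SELECT 2')": {
      "error": false,
      "statements": [
        {
          "node": {
            "type": "SELECT_NODE",
            "select_list": [
              {
                "class": "CONSTANT",
                "type": "VALUE_CONSTANT",
                "value": {"type": {"id": "INTEGER"}, "is_null": false, "value": 2}
              }
            ],
            "from_table": {"type": "EMPTY"}
          }
        }
      ]
    }
  }
]
""")

# Walk the structure to pull out the constant in the select list.
result = ast[0]["json_serialize_sql('SELECT 2')"]
node = result["statements"][0]["node"]
constant = node["select_list"][0]["value"]["value"]
print(node["type"], constant)  # SELECT_NODE 2
```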
I'm assuming it's more of a user preference, like putting commas in front of a field instead of after it?
SQL is a declarative language. The ordering of the statements was carefully thought through.
I will say it's harmless, though: the clauses don't depend on each other semantically, so allowing them to be reordered doesn't change the meaning of the query. But that's true of lots and lots of things in programming, and just having a convention is usually better than allowing anything.
For example, you could totally allow this to be legal:
def
    for x in whatever:
        print(x)
print_whatever(whatever):
There's nothing ambiguous about it, but why allow it? If you're used to seeing it one way, the other just makes it more confusing to read; and if you aren't used to seeing it the normal way, you should at least somewhat master something before you try to improve it through cosmetic tweaks. I think you see this all the time: people try to impose their own comfort onto things for no actual improvement.
First, repeated data-analyst queries are a usage driver in SQL databases. Think iterating on the code and executing it again.
Another huge factor in the same vein is running dev pipelines with limited data to validate that a change works when modelling complex data.
This is currently a frontend feature, but underneath lies effective caching.
The underlying tech is driving down usage cost, which is a big deal for data practitioners.
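The comments above describe repeated queries benefiting from result caching. A minimal sketch of the idea, keying cached results on the normalized query text (purely illustrative, not MotherDuck's actual implementation; real systems would also invalidate entries when the underlying data changes):

```python
import hashlib

class QueryResultCache:
    """Toy result cache keyed on normalized query text (illustrative only)."""

    def __init__(self):
        self._cache = {}
        self.hits = 0
        self.misses = 0

    def _key(self, sql: str) -> str:
        # Normalize trivially by collapsing whitespace and lowercasing.
        normalized = " ".join(sql.split()).lower()
        return hashlib.sha256(normalized.encode()).hexdigest()

    def run(self, sql: str, execute):
        key = self._key(sql)
        if key in self._cache:
            self.hits += 1
            return self._cache[key]
        self.misses += 1
        result = execute(sql)
        self._cache[key] = result
        return result

# Simulated executor standing in for a real warehouse call.
cache = QueryResultCache()
execute = lambda sql: [("user_1", 42.0)]
cache.run("SELECT avg(earnings) FROM orders", execute)
cache.run("select avg(earnings)  FROM  orders", execute)  # served from cache
print(cache.hits, cache.misses)  # 1 1
```

The second call hits the cache despite differing whitespace and casing, which is the point: iterating on a query and re-running it shouldn't cost a full re-execution.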
ryguyrg:
Awesome video of the feature: https://youtu.be/aFDUlyeMBc8
Disclaimer: I’m a co-founder at MotherDuck.